Simple enough...
Unless I am really close to geometry I only see white. When I get really close I see the white-to-black gradient. I'm following this guide btw: http://www.gamedev.n...d-shadows-r2193
What am I doing wrong?
EDIT: Btw, near and far values are set to allow me to see far. If I use a default shader I can see everything normally. Near = 0.1, Far = 1000.0.
Well you definitely don't want to store just z. If you want to store exactly what's stored in a depth buffer, then you want to interpolate both z and w and then output z/w from your pixel shader. However that's not necessarily the best way to store depth to a texture, depending on what you're doing. What exactly are you going to use this depth texture for?
Your depth looks pure white because your display does not have sufficient bit depth to display the contents of the depth buffer. The depth buffer is usually 16-24 bits but that is black to white instead of rgb which means you can only display the first 8 bits in the buffer and everything above that is displayed as pure white. The depth buffer should be fine for reading though, if you really need to display it you could try dividing the depth value by 2 for 16 bits or 3 for 24 bits before you display them, that should compress it to the visible bit-range.
Your depth looks pure white because your display does not have sufficient bit depth to display the contents of the depth buffer. The depth buffer is usually 16-24 bits but that is black to white instead of rgb which means you can only display the first 8 bits in the buffer and everything above that is displayed as pure white.
That's not really at all how it works. A z/w depth buffer will appear white for most of the visible depth range because z/w is non-linear, and most of the depth range ends up getting mapped to values > 0.9 with a perspective projection. A common way to make it better for visualization is to just remap [0.9, 1.0] to [0.0, 1.0] when displaying it, which you can do with saturate((depth - 0.9) * 10).
If you want, you can also just store a depth value that's linear and it will display correctly. It will also have a more even distribution of precision throughout the visible depth range. A common value to use is ViewSpaceZ / FarClip, which you could do in that shader by setting fGrey = Input.Depth.y / FarClip.
Also, if you're using a floating-point depth buffer (DXGI_FORMAT_R32F/R16F and friends, assuming D3D10+) you can actually flip the near and far planes (the near plane gets what you set as the far plane distance, and vice versa) and switch the test direction (the depth test becomes GREATER_EQUAL instead of LESS_EQUAL, etc.) for some further precision improvements. This will *NOT* work for UNORM-based formats, as it exploits the mechanics of how floating-point numbers are represented-- some unneeded extra precision is shifted away from where the camera is and ends up 'spread out' over the remaining depth range.