It's not often that I have to resort to posting questions on here, but my debugging and googling skills have failed me this time.
I have a deferred renderer that I am integrating ray-traced shadows (in HLSL) into for my master's dissertation. For this I render a depth map and use it to reconstruct the position of each pixel in the ray-tracing and deferred lighting stage. This works fine and I get the correct position; everything is hunky dory until I actually move the camera.
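For context, the position reconstruction looks roughly like this (a sketch rather than my exact code; names like DepthSampler and InvViewProjection are just placeholders for the depth render target and the inverse view-projection matrix I pass in):

    // Rough sketch of the position reconstruction from the depth map
    sampler DepthSampler : register(s0);   // the 32-bit Single depth render target
    float4x4 InvViewProjection;            // inverse of view * projection, set from the app

    float3 ReconstructWorldPosition(float2 texCoord)
    {
        // depth was written as post-projection z/w in the G-buffer pass
        float depth = tex2D(DepthSampler, texCoord).r;

        // texture coords -> normalised device coords (y is flipped)
        float4 clipPos;
        clipPos.x = texCoord.x * 2.0f - 1.0f;
        clipPos.y = (1.0f - texCoord.y) * 2.0f - 1.0f;
        clipPos.z = depth;
        clipPos.w = 1.0f;

        // unproject and divide by w to get back to world space
        float4 worldPos = mul(clipPos, InvViewProjection);
        return worldPos.xyz / worldPos.w;
    }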
I think what is happening is that when the camera moves, the texture holding the depth (which is obviously changing every frame) reaches the ray-tracer effect at a much lower quality, as if it had been rendered in 8-bit instead of the 32-bit Single format it is actually rendered in. I don't know if this is some "optimisation" to lower bandwidth usage or something, but whatever it is, it is really screwing up the position calculation in the deferred stage.
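In case the sampling states matter, the depth map is bound in the ray-tracing effect with something along these lines (again a sketch, not my exact declaration; point filtering is what I intend, so the float depth values aren't interpolated between texels):

    // Sketch of the sampler the depth map goes through in the ray-tracing effect
    // (D3D9 effect-framework syntax; the states shown are what I intend)
    texture DepthMap;
    sampler DepthSampler = sampler_state
    {
        Texture   = <DepthMap>;
        MinFilter = Point;
        MagFilter = Point;
        MipFilter = None;
        AddressU  = Clamp;
        AddressV  = Clamp;
    };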
This is most obvious when I render the height of each pixel. I made a short video of it, which you can see here (view in HD):
I hadn't noticed this when just doing deferred lighting, but when it comes to ray-tracing the shadows I have to use the pixel position as the start point of the ray, so any error is very noticeable.
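To illustrate why the error shows up so strongly, the shadow ray is built directly from the reconstructed position, roughly like this (a sketch; Ray, LightPosition and the small bias value are my own placeholders):

    // Sketch of the shadow ray setup: any quantisation in the reconstructed
    // position moves the ray origin, which the ray-tracer then amplifies.
    struct Ray
    {
        float3 Origin;
        float3 Direction;
    };

    float3 LightPosition; // set from the application

    Ray MakeShadowRay(float3 worldPos) // worldPos comes from the depth reconstruction
    {
        float3 toLight = LightPosition - worldPos;

        Ray r;
        r.Direction = normalize(toLight);
        // tiny offset along the ray so the surface doesn't shadow itself
        r.Origin = worldPos + r.Direction * 0.001f;
        return r;
    }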
Does anyone know what exactly is causing this problem, and is there a way to stop it? I have an ATi 2900xt, so it might be a driver optimisation on that card.
Whenever I try to take a screenshot of it happening, the problem doesn't show up; presumably the screenshot capture uses the full-quality textures.