Deferred Rendering, Reconstructing Fragment Position From Depth

I wrote a deferred renderer, and it performs like crap [lol]. Main point being, I did it as a learning experience and ended up using 2 G-buffers, one 256-bit and the other 128-bit. Yes, I rendered to both G-buffers and then used them [embarrass]

Anyways, I felt like I might decide to actually reuse this deferred renderer in my actual project, although I'll need to make lots of optimizations. First one being: I want to dump the first 256-bit MRT and do everything in 1 pass, like it's usually done. Meaning, I've decided to stop storing the full-on position, and start reconstructing the position from a depth. So, questions:

[1] What kind of depth do I store? Post-projection Z, or linear eye-space Z? Something else?

[2] Now that I have the depth, how do I end up reconstructing the eye-space position if I have the screen coordinates (within [0, 1])? This is the biggest part I'm stuck at. I've seen people construct an eye-space ray, and then their depth is just the distance along the ray, but I'm confused as to how to do this. Any (pseudo-)code or explanations are appreciated, as the online references I've found only say "reconstruct view-space position from depth" and don't explain it.

While I'm at it, a 3rd question:

[3] I'd use a sphere for a point light, but what do I use for a spot light (when rendering a convex volume)? Will a cone do the trick?

Thanks very much in advance [smile]
There is a long, informative thread on this exact subject at this URL: http://www.gamedev.net/community/forums/topic.asp?topic_id=474166
Quote:Original post by agi_shi
I wrote a deferred renderer, and it performs like crap [lol]. Main point being, I did it as a learning experience and ended up using 2 G-buffers, one 256-bit and the other 128-bit. Yes, I rendered to both G-buffers and then used them [embarrass]

Anyways, I felt like I might decide to actually reuse this deferred renderer in my actual project, although I'll need to make lots of optimizations. First one being: I want to dump the first 256-bit MRT and do everything in 1 pass, like it's usually done. Meaning, I've decided to stop storing the full-on position, and start reconstructing the position from a depth.

Maybe looking at the source code / shaders / pipeline XML from Horde3D would be informative. Its default implementation of deferred rendering writes data to 3 RGBA16F (64 bits per pixel) G-buffers in a single pass: buffer 0 stores position + material ID, buffer 1 stores the normal, and buffer 2 stores albedo and a specular mask.
Its performance seems acceptable on newish hardware.
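
For illustration only, here's a rough HLSL-style sketch of what a single-pass MRT layout along those lines could look like. This is not Horde3D's actual shader code, and all of the names (GBufferOutput, GBufferPS, albedoMap, and so on) are made up:

// Sketch of a single-pass G-buffer write over three render targets, loosely
// following the layout described above (illustrative names, not Horde3D code).
struct GBufferOutput
{
    float4 target0 : COLOR0; // xyz = position,          w = material ID
    float4 target1 : COLOR1; // xyz = view-space normal, w = unused
    float4 target2 : COLOR2; // rgb = albedo,            a = specular mask
};

GBufferOutput GBufferPS(float3 viewPos    : TEXCOORD0,
                        float3 viewNormal : TEXCOORD1,
                        float2 uv         : TEXCOORD2,
                        uniform sampler2D albedoMap,
                        uniform float materialId,
                        uniform float specularMask)
{
    GBufferOutput o;
    o.target0 = float4(viewPos, materialId);
    o.target1 = float4(normalize(viewNormal), 0.0f);
    o.target2 = float4(tex2D(albedoMap, uv).rgb, specularMask);
    return o;
}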
Quote:Original post by agi_shi
I wrote a deferred renderer, and it performs like crap [lol]. Main point being, I did it as a learning experience and ended up using 2 G-buffers, one 256-bit and the other 128-bit. Yes, I rendered to both G-buffers and then used them [embarrass]


It's alright, most deferred renderers start out that way. You should be able to cut that down pretty significantly... my old DR used 4 32bpp buffers.


Quote:Original post by agi_shi

[1] What kind of depth do I store? Post-projection Z, or linear eye-Z? Something else?



You can work with either. I prefer linear eye-space Z.
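
For what it's worth, storing linear eye-space Z is usually just a one-liner in the G-buffer pass. A minimal sketch, assuming the vertex shader passes down the view-space position and that farClip is the far plane distance (both names are illustrative):

// Sketch: writing normalized linear eye-space depth in the G-buffer pass.
// viewPos is the interpolated view-space position; dividing by the far plane
// distance keeps the stored value in [0, 1].
float4 LinearDepthPS(float3 viewPos : TEXCOORD0,
                     uniform float farClip) : COLOR0
{
    float depth = viewPos.z / farClip;
    return float4(depth, 0.0f, 0.0f, 1.0f);
}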

Quote:Original post by agi_shi

[2] Now that I have the depth, how do I end up reconstructing the eye-space position if I have the screen-coordinates (within [0, 1])? This is the biggest part I'm stuck at. I've seen people construct an eye-space ray, and then their depth is just the distance along the ray, but I'm confused as to how to do this. Any (pseudo-)code or explanations is appreciated, as online references I've found only say "reconstruct view-space position from depth," and don't explain it.



That thread rgoer linked to has plenty of discussion on this topic. You should be able to piece together how to reconstruct view-space or world-space position from either linear view-space Z or post-projection Z.

For post-projection Z, you reconstruct the full post-projection coordinate in the pixel shader by taking that pixel's current X and Y coordinates and combining them with the depth value from your buffer, then you apply the inverse of your projection matrix to get a view-space position.

For linear view-space Z, for each pixel you basically construct a ray that starts at the camera's position and goes "through" the pixel all the way back to the far clipping plane of the view frustum. You then scale this ray by your depth to get the position. If you're rendering a full-screen quad, you can get this ray very easily by giving each corner of the quad the position of the corresponding corner of the view frustum's far clipping plane, then passing it through to the pixel shader (so it's interpolated).
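
To make the second approach concrete, here's a hedged HLSL-style sketch of the frustum-corner ray method for a full-screen quad. It assumes the depth buffer stores viewSpaceZ / farClip, and every name (FullScreenVS, LightPS, depthBuffer, and so on) is illustrative:

// Vertex shader: each quad corner carries the view-space position of the
// matching corner of the far clipping plane; the rasterizer interpolates it.
void FullScreenVS(float4 position      : POSITION,
                  float2 uv            : TEXCOORD0,
                  float3 frustumCorner : TEXCOORD1,
                  out float4 oPosition : POSITION,
                  out float2 oUv       : TEXCOORD0,
                  out float3 oViewRay  : TEXCOORD1)
{
    oPosition = position;       // quad is assumed to already be in clip space
    oUv       = uv;
    oViewRay  = frustumCorner;  // interpolated ray out to the far plane
}

// Pixel shader: scale the interpolated far-plane ray by the stored depth.
float4 LightPS(float2 uv      : TEXCOORD0,
               float3 viewRay : TEXCOORD1,
               uniform sampler2D depthBuffer) : COLOR0
{
    float  depth   = tex2D(depthBuffer, uv).r;  // stored as viewSpaceZ / farClip
    float3 viewPos = viewRay * depth;           // reconstructed view-space position
    // ... do lighting with viewPos here ...
    return float4(viewPos, 1.0f);
}

The important thing is that the stored depth and the ray are paired consistently: an un-normalized ray to the far plane goes with viewSpaceZ / farClip, while a normalized ray needs the actual distance to the camera instead.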


Quote:Original post by agi_shi
While I'm at it, a 3rd question:
[3] I'd use a sphere for a point light, but what do I use for a spot light (when rendering a convex volume)? Will a cone do the trick?


Cones work just fine.

One thing I don't think has been mentioned: you want to reconstruct position from the depth buffer itself, since that saves both bandwidth and memory. As far as I know, the main PC graphics cards now give you access to the depth buffer.
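
As a rough sketch of that idea, here's what reconstructing view-space position from a post-projection (hardware-style) depth value might look like in HLSL. The names are illustrative, and it assumes D3D conventions (depth in [0, 1], uv.y increasing downward, row-vector mul):

// Sketch: unprojecting a depth-buffer value back to view space.
float3 ViewPosFromDepth(float2 uv,                    // screen uv in [0, 1]
                        float depth,                  // post-projection z/w
                        float4x4 inverseProjection)
{
    // Rebuild the post-projection position for this pixel.
    float2 clipXY  = float2(uv.x * 2.0f - 1.0f, (1.0f - uv.y) * 2.0f - 1.0f);
    float4 clipPos = float4(clipXY, depth, 1.0f);

    // Apply the inverse projection and undo the perspective divide.
    float4 viewPos = mul(clipPos, inverseProjection);
    return viewPos.xyz / viewPos.w;
}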
Thanks a bunch! [smile]

The ray-to-frustum-corners approach works very, very nicely. However, I ran into a bit of a pickle. At first, my view-space position looked correctly reconstructed (I outputted it) - the green, the red, you know - it looked just as it did when I dumped it to a gigantic texture. However, it seems like I did something (or I just didn't do something [lol]), because now the colours are purple (1, 1, 0), white (1, 1, 1), light blue (0, 1, 1), and I forgot the other colour (not on my development machine).

Any ideas as to what might cause this, other than me programming at 11 PM instead of going to bed [grin]? I outputted the ray that goes through the pixels, and it is indeed correct (that plain red/green/etc. combination). I outputted the depth, and from what I can tell, it's correct too. Hm. I'm just wondering if it's a common bug/mistake that I've made and that is very obvious.
Fixed! [smile] The problem was that I was using viewSpacePosition.z instead of the length of the view-space position (distance to the camera is what you want here). When I thought about it, that makes perfect sense, but it seems the other post about this threw me off.

Quote:Original post by agi_shi
However, seems like I did something (or I just didn't do something [lol]), because now the colours are purple (1, 1, 0), white (1, 1, 1), light blue (0, 1, 1), and I forgot the other colour (not on my development machine).

This is the expression I was (incorrectly) using, in case anyone runs into a similar issue:
viewSpacePosition.z

... isn't it amazing how much a good night's sleep and a good day at school can do for those darned bugs? :D

Oh, and about the part where it did once work right, but then it didn't: what happened was that I have multiple techniques in my materials - some for deferred shading, some for shadow mapping, some for forward shading, etc. Anyways, I had a syntax error in my deferred shading pass, so my renderer automatically fell back to the next supported technique, which was my shadow mapping technique. Guess what? My shadow mapping uses the same code to retrieve the scene depth (distanceToCamera / farClipDistance), so what ended up getting stored in my G-buffer was still correct as far as the depth goes [embarrass]
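
For anyone who hits the same thing, a tiny sketch of keeping the stored depth and the reconstruction consistent (illustrative names, assuming farClip is the far plane distance):

// G-buffer / shadow pass: store distance to the camera, normalized by farClip.
float StoreDepth(float3 viewPos, float farClip)
{
    return length(viewPos) / farClip;   // not viewPos.z / farClip
}

// Lighting pass: because a distance was stored, scale a *normalized* ray by it.
float3 ReconstructViewPos(float3 viewRay, float storedDepth, float farClip)
{
    return normalize(viewRay) * storedDepth * farClip;
}

If you store viewPos.z / farClip instead, then the un-normalized ray to the far plane is the one to scale, as described earlier in the thread.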

This topic is closed to new replies.
