Deferred shading screenspace to worldspace by ray

12 comments, last by B_old 14 years, 11 months ago
Hi, I read that you can construct the world-space coordinates of a pixel by multiplying the pixel's depth with a ray from the eye to a far corner of the frustum. I haven't quite managed to get it working and wonder whether that is because I use the normal depth buffer, which is not linear, as the source for the pixel depth (D3D10). Or should it work regardless?
You need the projected depth plus the u,v coordinates of the screen (assuming top-left is 0,0 and bottom-right is 1,1).

Make a vector (-1 + u * 2, 1 - v * 2, projZ, 1).

Transform that by the inverse projection matrix, which gives you the view-space position.

Then transform the result by the inverse view matrix to take it into world space.
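A minimal HLSL sketch of that approach, assuming row-vector style mul(), a point sampler, and hypothetical names (DepthTexture, InvProjection, InvView) for the resources and matrices, which are not from the original post:

```hlsl
// Sketch only: reconstruct world-space position from the stored (non-linear)
// depth via the inverse projection and inverse view matrices.
float3 WorldPositionFromDepth(float2 uv, Texture2D DepthTexture,
                              SamplerState PointSampler,
                              float4x4 InvProjection, float4x4 InvView)
{
    // Projected depth as written by the depth buffer.
    float projZ = DepthTexture.Sample(PointSampler, uv).r;

    // Clip-space position: x,y in [-1,1], y flipped because uv has its origin at the top-left.
    float4 clipPos = float4(-1.0f + uv.x * 2.0f, 1.0f - uv.y * 2.0f, projZ, 1.0f);

    // Inverse projection takes us back to view space; the divide undoes the projective w.
    float4 viewPos = mul(clipPos, InvProjection);
    viewPos /= viewPos.w;

    // Inverse view matrix takes view space to world space.
    return mul(float4(viewPos.xyz, 1.0f), InvView).xyz;
}
```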
Thanks for the answer.
The way you describe is how I am doing things right now. I was wondering about another method that involves rays from the eye to the far corners of the frustum which is supposed to be faster. I can't get it to work though.
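For reference, one common way to set up those rays is to unproject the far-plane corners in the full-screen quad's vertex shader and let the rasterizer interpolate the ray per pixel. A rough sketch, assuming the quad positions are already in clip space and using a hypothetical InvProjection constant:

```hlsl
// Sketch only: emit a view-space ray to the far plane per quad corner.
// 'InvProjection' is assumed to be the inverse of the camera's projection matrix.
cbuffer PerFrame
{
    float4x4 InvProjection;
};

struct QuadVSOutput
{
    float4 Position   : SV_Position;
    float2 TexCoord   : TEXCOORD0;
    float3 FrustumRay : TEXCOORD1;   // view-space ray to the far plane
};

QuadVSOutput FullScreenQuadVS(float4 position : POSITION, float2 texCoord : TEXCOORD0)
{
    QuadVSOutput output;
    output.Position = position;      // already in clip space for a full-screen quad
    output.TexCoord = texCoord;

    // Unproject this corner at the far plane (z = 1 in D3D clip space) to get
    // the view-space point on the far plane; the eye is at the origin in view
    // space, so this point is also the ray direction.
    float4 corner = mul(float4(position.xy, 1.0f, 1.0f), InvProjection);
    output.FrustumRay = corner.xyz / corner.w;

    return output;
}
```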
You might want to make an effort to search for the latest threads surrounding this topic. I believe the last one is two or three days old. MJP had good answers to your question.
http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/
So I take it this ray technique is not compatible with the non-linear depth buffer? What do people usually do when they have access to the z-buffer as a shader resource, as in D3D10 for example? Should I create my own depth map anyway? I probably wouldn't bother, except that I get small artifacts when I try to apply deferred shadow occlusion to a variance shadow map. They are small but really annoy me...

Thanks for the link btw, it was a good read.
No, you could still use that technique with a non-linear depth buffer; you would just have to convert it so that it's normalized to the camera->far clip plane range. However, it won't do anything for you precision-wise, since you're still storing it in the non-linear format.

It might be worth it for you just to manually lay out depth, especially if you're getting artifacts. Hard to say without profiling your app to figure out the additional cost, though. Also keep in mind that in D3D10 you can't sample a MSAA depth buffer (you can in D3D10.1), so if you want MSAA then you'll probably want to manually render depth anyway.
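To illustrate the conversion being described, here is a rough sketch that assumes a standard D3D perspective projection and a FrustumRay interpolated from the vertex shader so that its z component equals the far clip distance; NearClip and FarClip are hypothetical names for the clip plane distances:

```hlsl
// Sketch only: reconstruct view-space position from the hardware (non-linear)
// depth buffer and the interpolated frustum ray.
float3 ViewPositionFromHardwareDepth(float projZ, float3 FrustumRay,
                                     float NearClip, float FarClip)
{
    // Undo the projective depth encoding to recover view-space Z. For a
    // standard D3D perspective projection:
    //   projZ = FarClip * (viewZ - NearClip) / ((FarClip - NearClip) * viewZ)
    float viewZ = NearClip * FarClip / (FarClip - projZ * (FarClip - NearClip));

    // Normalize to the camera->far range and scale the ray, exactly as with a
    // linear depth buffer.
    return FrustumRay * (viewZ / FarClip);
}
```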
Hi!
I'm not sure I understand. Should I change the way I render to the depth buffer or the way the rays are computed? I'll give it another thought tomorrow as I'm getting really tired. Edit: I think I know what you mean. Something like this?
At least I found a clue to my artifact problem. I was using 8x anisotropic filtering on the variance map. This gave me small unshadowed polygon silhouettes where there should be shadow. They could even be there with forward shadowing, but were generally a lot less noticeable.
When I switch to trilinear filtering, it is impossible for me to see a difference between forward and deferred shadowing. Does anybody have an idea why the anisotropic filtering could produce those unshadowed silhouettes?

Another thing: I still don't understand how deferred shadowing can be faster (most of the time it is for me) when combined with a forward lighting solution. And does that imply that deferred lighting would be faster in such a scene too?

Well, if someone has an idea where those artifacts that get worse with anisotropic filtering could come from, that would be cool.

Thanks a lot for the help so far!

[Edited by - B_old on May 4, 2009 3:54:40 AM]
I tried rendering to a separate depth map with depth stored linearly in [0, 1], as suggested here. That does not get rid of the artifacts I am experiencing. I got the ray technique to work, though.
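For context, laying out depth manually along those lines might look roughly like this, assuming the pixel shader is given the view-space position and a hypothetical FarClip constant; names are illustrative:

```hlsl
// Sketch only: write view-space depth, normalized to [0,1] over camera->far,
// into a separate render target.
cbuffer PerFrame
{
    float FarClip;   // distance to the far clip plane
};

struct DepthVSOutput
{
    float4 Position : SV_Position;
    float3 ViewPos  : TEXCOORD0;     // view-space position passed from the vertex shader
};

float4 LinearDepthPS(DepthVSOutput input) : SV_Target
{
    return float4(input.ViewPos.z / FarClip, 0.0f, 0.0f, 1.0f);
}
```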

I still can't figure out why I get those artifacts with anisotropic filtering, which really is a shame.
Do you mind posting a screenshot of these artifacts?

