deferred frustration :(

Hi guys! I'm having problems with deferred shading. First the code, then the problem. Position reconstruction:
/* Reconstruct position from depth and screen coordinates (800x600), far plane at 512.0 */
vec3 posFromDepth(vec2 coord){
    float d = texture2D(gdepth, coord).r;
    // Build a view ray from the fragment coordinate, centred on the screen
    vec3 vray = vec3((gl_FragCoord.x-400.0)*2.0, (gl_FragCoord.y-300.0)*2.0, 512.0);
    return vray*d;
}

// actual lighting:
vec3 p = posFromDepth(gl_TexCoord[0].st);
float len = length(lightpos.xyz - p);
vec3 color2 = vec3(1.0, 1.0, 0.0) / (1.0 + len*len);   // simple distance falloff

(My depth buffer is linear, so 0 depth at the near plane and 1 at the far plane.)
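
For reference, the depth is written in the geometry pass roughly like this (a minimal sketch, assuming the far plane at 512.0 from above; the names and G-buffer layout are just illustrative):

// Vertex shader: output view-space depth normalized by the far plane.
varying float linDepth;
void main(){
    vec4 vpos = gl_ModelViewMatrix * gl_Vertex;
    linDepth = -vpos.z / 512.0;   // ~0 at the near plane, 1 at the far plane
    gl_Position = gl_ProjectionMatrix * vpos;
}

// Fragment shader: store it in the gdepth render target.
varying float linDepth;
void main(){
    gl_FragData[0] = vec4(linDepth);
}
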
Passing the light position to the shader:

shader->passVec4(getNode("Camera")->getWorldTransform()*maths::CVector4(0,0,0,1));

If I don't transform by the view matrix, the light shows up fine, right at the camera's position, and it moves with the camera, which is what is supposed to happen. But when I multiply by the camera's transform to get the coordinate in view space, the light disappears or fills the entire screen, depending on I don't know what. I can't use the frustum-corners method to get the view ray because I want to find the position for more than the current pixel, so I must use the frag coord. Any hints on what might be happening?

Hi ArKano,

First, I would suggest posting some screenshots; that will make it easier for everyone to help you out.

Second, sorry for being curious, but why do you want to find the position of more than one pixel? Is the formula you use to find vray accurate? Also, how do you create the camera's transform in your project?

Hope to see more information.

Well, screenshots won't help at all, because there is either just a light that moves with the camera or no light at all. There are no artifacts of any kind, just light on/off, depending on whether I use the camera transform or not.

I want to find the position for more than one pixel because I plan on doing some post-processing involving a form-factor calculation, for which I need to read the positions of some neighbouring pixels.
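
Roughly what I have in mind, as a sketch (texelSize is a hypothetical uniform holding 1.0/resolution, and posFromDepth is the reconstruction above, once it actually works):

uniform vec2 texelSize;   // hypothetical: vec2(1.0/800.0, 1.0/600.0)

void main(){
    // View-space positions of the current pixel and a neighbour one texel away.
    vec3 pCenter    = posFromDepth(gl_TexCoord[0].st);
    vec3 pNeighbour = posFromDepth(gl_TexCoord[0].st + vec2(texelSize.x, 0.0));
    float dist = length(pNeighbour - pCenter);   // real view-space distance
    // ... form-factor / post-processing math using 'dist' goes here ...
}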

Finally, my camera transform is created by just multiplying all the transforms applied to the camera. For other uses it yields correct transformations, so my error must be somewhere in the position reconstruction. :(

thanks

Well, I got it working by calculating the view ray from the frustum corners, so the problem was not the transform but the view-ray calculation.
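
For reference, the working corner version looks roughly like this (a sketch; cornerRay is an assumed per-vertex attribute holding the view-space direction to the matching far-plane corner):

// Vertex shader of the full-screen pass: forward the per-corner view ray.
attribute vec3 cornerRay;
varying vec3 viewRay;
void main(){
    viewRay = cornerRay;
    gl_Position = ftransform();
}

// Fragment shader: with linear depth in [0,1], the interpolated ray already
// reaches the far plane, so reconstruction is a single multiply per pixel.
varying vec3 viewRay;
uniform sampler2D gdepth;
vec3 posFromDepthCorner(vec2 coord){
    float d = texture2D(gdepth, coord).r;
    return viewRay * d;
}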

I still need help calculating the view ray from a full-screen quad's texture coordinates, though. I've tried several methods, but none of them worked. For example:

vec3 posFromDepth(vec2 coord){
    float d = texture2D(gdepth, coord).r;
    float rayy = tan(3.1416/360.0*30.0)*512.0;   // tan(half the vertical FOV) * far plane
    float rayx = rayy*(800.0/600.0);             // scaled by the aspect ratio
    vec3 eye = vec3(coord.x*rayx, coord.y*rayy, 512.0);
    return eye*d;
}

This one makes the light appear stretched out in the x direction, and it moves with the camera, but faster. I also tried the method from "Deferred lighting in Leadwerks", but it also produced incorrect results. Any help?

Well, in the end I figured it out myself. I'll write it down here in case anyone else runs into the same problem.

vec3 posFromDepth(vec2 coord){
    float d = texture2D(gdepth, coord).r;
    // Remap the UV from [0,1] to clip space [-1,1], run it through the
    // inverse projection to get a ray to the far plane, then scale by depth.
    vec3 tray = mat3x3(gl_ProjectionMatrixInverse)*vec3((coord.x-0.5)*2.0, (coord.y-0.5)*2.0, 1.0);
    return tray*d;
}

First you convert the UV (range [0,1]) to clip coordinates [-1,1]. Then you multiply by the inverse projection and by the linear depth to get the view-space position. There is no need to pass the frustum corners, and you can find the view-space position for as many screen positions as you want. Be warned that the corner way is MUCH faster, so only use this one if you really need it (I'm using it to implement SSDO).

thanks :)

Well, it's not real SSDO (for some reason I find the directional dependence of direct lighting less visually appealing :S). I'm just using the one-bounce GI as implemented in the paper, and reworking my SSAO to use proper form-factor calculations for occlusion and radiance. To calculate these form factors I needed the real distance between samples; that's why I needed the position reconstruction. :)
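
For anyone curious, the kind of term I mean is the differential form factor between two samples, something like this sketch (gnormals and its 0..1 normal encoding are assumptions about my G-buffer):

uniform sampler2D gnormals;   // assumed: view-space normals stored as 0..1

// Form-factor-style weight between two screen samples, built from the
// reconstructed view-space positions; the cosine terms are clamped so
// back-facing samples contribute nothing.
float formFactor(vec2 uv0, vec2 uv1){
    vec3 p0 = posFromDepth(uv0);
    vec3 p1 = posFromDepth(uv1);
    vec3 n0 = normalize(texture2D(gnormals, uv0).xyz * 2.0 - 1.0);
    vec3 n1 = normalize(texture2D(gnormals, uv1).xyz * 2.0 - 1.0);
    vec3 delta = p1 - p0;
    float r2 = dot(delta, delta);
    vec3 dir = delta * inversesqrt(r2);
    // cos(theta0) * cos(theta1) / (pi * r^2), with a small epsilon so
    // coincident samples do not divide by zero.
    return max(dot(n0, dir), 0.0) * max(dot(n1, -dir), 0.0) / (3.1416 * r2 + 1e-4);
}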
