Shadow maps flipped and offset in deferred renderer


Recommended Posts

I've been trying to implement shadow mapping into my deferred renderer, but I seem to have hit a brick wall.


My shadows are coming out mirrored with an offset on the X axis. Top left is the shadow map.




I can add 0.166, a seemingly arbitrary number, to the X axis and multiply it by -1, but testing against the shadow map's depth still doesn't work; I'm assuming that's another symptom of the same underlying problem. In that screenshot I'm just multiplying my light attenuation by the shadow map depth so I can see where the shadow map lands.


Either way it's not much of a solution as that doesn't work if I use anything other than a directional light.



I'm not sure whether the problem is in how I'm rendering the shadow map or in how I'm transforming coordinates into light space, but I suspect it's the latter.


I'm using a deferred renderer, so I'm doing my shadowing on a full-screen quad using data from the G-buffer. All my lighting is done in view space, and I'm reconstructing view-space positions from depth. That works perfectly for everything else, but I thought it might be incompatible with the way I was doing shadow maps, so I also tried storing full XYZ positions in the G-buffer, as well as applying the shadow map during the G-buffer pass. I got the same result every way.
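For reference, a common way to do that depth reconstruction looks like the sketch below. This is not the poster's code; `tDepth`, `uv`, and `inverseProjectionMatrix` are assumed names for the depth texture, the screen-space UV, and an inverse-projection uniform.

```glsl
// Rebuild a view-space position from the depth buffer:
// unproject the clip-space point and divide by w.
float depth = texture( tDepth, uv ).r;
vec4 clip = vec4( uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0 );
vec4 viewPos = inverseProjectionMatrix * clip;
vec3 positionVS = viewPos.xyz / viewPos.w;
```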



Here's how I'm making the matrices for and rendering the shadow map:

glm::mat4 shadowProjectionMatrix = glm::ortho(-15.f, 15.f, -15.f, 15.f, -10.f, 20.f);
glm::mat4 shadowViewMatrix = glm::lookAt( light->GetDirection(), glm::vec3(0,0,0), glm::vec3(0,1,0) );
glm::mat4 shadowMatrix = shadowViewMatrix * shadowProjectionMatrix;

glViewport( 0, 0, shadowMapSize, shadowMapSize );
world->Render( &shadowViewMatrix, &shadowProjectionMatrix );



This is how I'm applying shadows in my light shader:

vec4 PositionWS = inverseCameraViewMatrix * vec4( positionVS.xyz, 1.0 );
vec4 PositionLS = shadowMatrix * PositionWS;

PositionLS = PositionLS * 0.5 + 0.5;

// PositionLS.x = PositionLS.x * -1; // flip x axis
// PositionLS.x += 0.166; // seemingly arbitrary number

float shadowDepth = texture( tShadowMap, PositionLS.xy ).z;

// temporary: display the shadow map depth directly on the image
visibility = shadowDepth;
attenuation *= visibility;
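For comparison, the usual light-space lookup performs a perspective divide before the [-1,1] to [0,1] remap, then compares the fragment's light-space depth against the stored depth instead of displaying the map directly. A sketch, starting from the raw `PositionLS` produced by the `shadowMatrix` multiply (before the `* 0.5 + 0.5` remap above); the bias constant is an assumed value, and the divide is a no-op for an orthographic projection but matters for spot lights:

```glsl
vec3 projCoords = PositionLS.xyz / PositionLS.w; // perspective divide
projCoords = projCoords * 0.5 + 0.5;             // NDC [-1,1] -> texture space [0,1]

float shadowDepth = texture( tShadowMap, projCoords.xy ).r;
float bias = 0.005; // small depth offset to avoid shadow acne
float visibility = (projCoords.z - bias > shadowDepth) ? 0.0 : 1.0;
```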



This is really driving me insane. I've tried seemingly everything to get it to work, and this is the closest I've gotten. What really frustrates me is that this shouldn't be hard: once I bring my view-space position back into world space, it should just be a matter of multiplying it by the exact same matrices I use to render the shadow map in the first place.


Any help would be greatly appreciated. I'm really at the end of my rope here.

Edited by Neobim


Fixed it. I did have a bias problem as well; fixing it helped, but it wasn't the full cause of the issue. The underlying problem was really, really silly.


This line of code...


shadowMatrix = shadowViewMatrix * shadowProjectionMatrix;


had its operands reversed, and should have been:


shadowMatrix = shadowProjectionMatrix * shadowViewMatrix;


I didn't think that would make a difference, but it did.


Thanks anyway, I appreciate it!
