
Deferred rendering basics


2 replies to this topic

#1 keym   Members   -  Reputation: 229


Posted 17 October 2012 - 04:19 AM

Hello,
I'm trying to implement deferred rendering and I'm stuck on this problem. I do all calculations in view space and, for testing, I set the light position to (0,0,0) - the camera position. The problem is that when I look up or down, the lighting gets brighter or darker. It sounds like something is wrong with my matrices, but I've checked them many times and still have no luck. I'm not sure whether my reconstruction of position from depth is right, because it's new to me and I may be doing something wrong there. On the other hand, the lighting doesn't change when I move backward or forward while keeping the same angle (so I guess the position is correct?). Anyway, I'm posting some pictures - maybe someone can figure out where I went wrong. For now I'm computing lighting with the geometry normals and will move on to bump maps later. To me my normals look too bright, pinkish, dull etc. compared to the normals in the Killzone paper and other tutorials. Maybe that's the issue?

Thanks for watching

Edit: I'm considering the simplest scenario for now - only diffuse lighting, no attenuation, no specular and no ambient.

This is how I reconstruct the position from depth in the light pass:
float z = f1tex2D(depth_map, texCoord);
float x = texCoord.x * 2 - 1;
float y = (1 - texCoord.y) * 2 - 1;
float4 vProjectedPos = float4(x, y, z, 1.0f);
//unproject the position from clip space to view space using inv. proj. matrix
float4 vPositionVS = mul(invProj, vProjectedPos);
float3 vsPos = vPositionVS.xyz / vPositionVS.w;
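With vsPos reconstructed, the lighting itself is just N dot L in view space - roughly like this (a sketch; the sampler and output names here are simplified, not my exact code):

// light pass (sketch): diffuse only, point light at the view-space origin (the camera)
float3 packedNormal = f3tex2D(normal_map, texCoord);   // normals stored as color in the G-buffer
float3 N = normalize(packedNormal * 2 - 1);            // unpack from [0,1] back to [-1,1]
float3 L = normalize(float3(0, 0, 0) - vsPos);         // light sits at the camera position
float NdotL = max(dot(N, L), 0);                       // no attenuation, specular or ambient
float3 albedo = f3tex2D(albedo_map, texCoord);
float4 outColor = float4(albedo * NdotL, 1.0f);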

I either use the depth buffer to get the depth, or I use depth encoded as a color, which is:
in the vertex program:
outPosition = mul(ModelViewProj, position);
vDepthCS.xy = outPosition.zw;

and in the fragment program I do the z/w division and pack it as a color, but the result is the same as with the depth buffer.
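That is, something along these lines (a sketch; the output name is a placeholder):

// fragment program (sketch): store post-projection depth as a color
float depth = vDepthCS.x / vDepthCS.y;                 // z / w, the same value the depth buffer holds
float4 outColor = float4(depth, depth, depth, 1.0f);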

Attached Thumbnails

  • deferred.jpg

Edited by keym, 17 October 2012 - 06:06 AM.



#2 Ashaman73   Crossbones+   -  Reputation: 5793


Posted 18 October 2012 - 12:30 AM

First off, you can't display normals without a hack, because normals have negative components. You need to map them into a visible color space (color = normal * 0.5 + 0.5); I guess Killzone uses a different mapping (or none at all).
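Something like this on both ends (just a sketch):

// when writing the G-buffer: map the normal from [-1,1] into the visible [0,1] range
float3 normalColor = normal * 0.5f + 0.5f;

// when reading it back in the light pass: undo the mapping before lighting
float3 unpackedNormal = normalize(normalColor * 2.0f - 1.0f);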

Your normal screenshot seems to be OK (green pointing up, red pointing right, blue tint pointing toward the camera). How do the normals change when you look up/down? Maybe your inversion of the y-coordinate is not consistent.

Edited by Ashaman73, 18 October 2012 - 12:31 AM.


#3 keym   Members   -  Reputation: 229


Posted 18 October 2012 - 03:07 AM

Hi,
I solved my problem yesterday. The normals were OK (I packed and unpacked them from color with the hack you mention, I just didn't say so - that Killzone paper made me doubt it...). It turned out that my reconstruction of the position was wrong (as I said before, it was new to me and that was the bit I was least sure of). I used the code from http://www.opengl.or...l=1#post1234440 and it finally worked. Thanks for the interest though.
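For reference, a common form of that reconstruction for an OpenGL-style [0,1] depth buffer looks roughly like this (a sketch, not necessarily the exact code from the linked post):

// sketch: view-space position from an OpenGL-style [0,1] depth buffer
float z = f1tex2D(depth_map, texCoord);
// remap x, y and z from the [0,1] texture/depth range to [-1,1] NDC
// (whether texCoord.y needs flipping depends on how the G-buffer was rendered)
float4 ndcPos = float4(texCoord.x * 2 - 1, texCoord.y * 2 - 1, z * 2 - 1, 1.0f);
float4 viewPos = mul(invProj, ndcPos);                 // unproject with the inverse projection matrix
float3 vsPos = viewPos.xyz / viewPos.w;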

Cheers



