Deferred rendering diffuse


I have problems with point lights: when the light's position in x or y is negative, everything (smoothly) goes black. I attached a screenshot with the final buffer and, from the top: ambientBuffer, depthBuffer (the light blue part is a textured mesh, and I am only working with non-textured areas right now), and normalBuffer.

Here's the code for light calculation and reconstruction of position from depth:
texture normal;
texture depth;
texture ambient;

float3 EyeVec;
float3 lightPos;
float4x4 InvertedProjectionMat;

sampler normSamp : register(s1) = sampler_state { texture = <normal>; };
sampler depSamp  : register(s2) = sampler_state { texture = <depth>; };
sampler ambSamp  : register(s3) = sampler_state { texture = <ambient>; };

float3 getPosFromDepth(float2 texCoord)
{
    float z = length(tex2D(depSamp, texCoord).rgb);
    float x = texCoord.x * 2 - 1;
    float y = (1 - texCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    float4 vPositionVS = mul(vProjectedPos, InvertedProjectionMat);
    return vPositionVS.xyz / tex2D(depSamp, texCoord).w;
}

struct PixelShaderInput
{
    float2 texCoord : TEXCOORD0;
};

float4 PixelShaderFunction(PixelShaderInput input) : COLOR0
{
    float4 Color = tex2D(ambSamp, input.texCoord);
    if (length(Color) == 0)
        return float4(0.2f, 0.4f, 0.9f, 1.0f);

    float4 SpecularColor = float4(1.0f, 1.0f, 0.0f, 1.0f);
    float specularLVL;

    float3 Position = getPosFromDepth(input.texCoord);
    float3 Normal = normalize(tex2D(normSamp, input.texCoord)).rgb;
    float3 LightVector = normalize(lightPos.xyz - Position.xyz);
    float NdL = dot(Normal, LightVector);

    float3 Eye = normalize(EyeVec);
    float3 halfVec = reflect(LightVector, Normal);
    specularLVL = pow(dot(halfVec, Eye), 100);

    return Color * NdL; // + (0.7f * specularLVL * SpecularColor);
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
Some mess inside the code is caused by me losing my mind and typing random stuff.
Also, since I am posting on the forums (which I try to do only in an act of desperation)... what is a good way to handle lighting with deferred shading? I got kind of lost in all the documentation I found on the internet.
Anyway, I will appreciate any help, and thank you in advance.

Try to debug in this way. First return (from this shader) the normal variable, then the depth variable, and so on. You should see something logical in each test.

Question: What space are you working in? View space or world space?

By the way, EyeVec can't be a global. It needs the vertex/fragment position.
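For example, a throwaway debug version of the pixel shader above might return one buffer at a time; the remapping and scale factors here are assumptions, chosen only to make the values visible on screen:

```hlsl
// Hypothetical debug pass: return one intermediate result at a time.
// Comment lines in and out to inspect each buffer in isolation.
float4 DebugPS(PixelShaderInput input) : COLOR0
{
    float3 Normal   = tex2D(normSamp, input.texCoord).rgb;
    float3 Position = getPosFromDepth(input.texCoord);

    return float4(Normal * 0.5f + 0.5f, 1.0f);                  // normals remapped from [-1,1] to [0,1]
    // return float4(tex2D(depSamp, input.texCoord).rgb, 1.0f); // raw depth buffer contents
    // return float4(abs(Position) * 0.01f, 1.0f);              // reconstructed position, scaled to taste
}
```

Each test should show something plausible (smooth normals, a depth gradient, a position ramp) before you move on to the next.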

I agree with jischneider that you should output one intermediate result at a time to find the error.

However, from glancing at your code, I can see two things that you should check:

1. in getPosFromDepth you do:
return vPositionVS.xyz / tex2D(depSamp, texCoord).w;
but I'm pretty sure you need to do:
return vPositionVS.xyz / vPositionVS.w;

2. The pow of a negative base is NaN, which gets converted to black. If you are doing any postprocessing like bloom, such NaNs can spread over the entire image.
Right now the specular term is disabled, but once you enable it, make sure the base is never negative. Change
pow(dot(halfVec, Eye), 100)
to
pow(max(dot(halfVec, Eye), 0), 100)
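Putting suggestion 1 into the original function, a corrected sketch could look like this (assuming, as the original does, that length() of the stored rgb channels recovers the projected depth):

```hlsl
// Sketch of getPosFromDepth with the perspective divide fixed: the
// unprojected position is divided by its own w, not by the texture's w.
float3 getPosFromDepth(float2 texCoord)
{
    float z = length(tex2D(depSamp, texCoord).rgb);
    float x = texCoord.x * 2 - 1;
    float y = (1 - texCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    float4 vPositionVS = mul(vProjectedPos, InvertedProjectionMat);
    return vPositionVS.xyz / vPositionVS.w;  // the fix from point 1
}
```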


Try to debug in this way.

Question: What space are you working in? View space or world space?

By the way, EyeVec can't be a global. It needs the vertex/fragment position.

I will debug as soon as I get back home. As for the space, it's view space. If EyeVec can't be global, how do I deal with a movable camera here? I must admit I noticed that, but I had no clue how to handle it without making the whole process less efficient.

However, from glancing at your code, I can see two things, that you should check:

1. in getPosFromDepth you do:
return vPositionVS.xyz / tex2D(depSamp, texCoord).w;
but I'm pretty sure you need to do:
return vPositionVS.xyz / vPositionVS.w;

2. The pow of a negative base is NaN, which gets converted to black. If you are doing any postprocessing like bloom, such NaNs can spread over the entire image.
Right now the specular term is disabled, but once you enable it, make sure the base is never negative. Change
pow(dot(halfVec, Eye), 100)
to
pow(max(dot(halfVec, Eye), 0), 100)

1. Good catch, but it didn't change a thing.
2. Thanks for the tip

The eye vector is the direction from the vertex/fragment to the camera. If you are working in view space, the camera is at position (0, 0, 0); consequently the eye vector equals normalize(-Position.xyz);
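Sketched in the combine shader above, that suggestion might look like the following (assuming lightPos has also been transformed into view space; the half-vector form is used here instead of the original reflect() call):

```hlsl
// View-space lighting: the camera sits at the origin, so no EyeVec global is needed.
float3 Position    = getPosFromDepth(input.texCoord);
float3 Normal      = tex2D(normSamp, input.texCoord).rgb;
float3 Eye         = normalize(-Position);            // fragment -> camera
float3 LightVector = normalize(lightPos - Position);  // lightPos must be in view space too
float3 halfVec     = normalize(LightVector + Eye);
float  NdL         = max(dot(Normal, LightVector), 0);
float  specularLVL = pow(max(dot(Normal, halfVec), 0), 100);
```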


The eye vector is the direction from the vertex/fragment to the camera. If you are working in view space, the camera is at position (0, 0, 0); consequently the eye vector equals normalize(-Position.xyz);

Thank you, that's really helpful. I just realized this is the first time I've tried to implement Phong in view space, and I did it as if it were world space. I'll try that and tell you if I get it right.

Btw, I noticed just now that you are behind the final engine (I didn't look at the sig in the morning). Really impressive and good work; I'm waiting to see more, keep it up :)


[quote name='jischneider' timestamp='1334235333' post='4930554']
The eye vector is the direction from the vertex/fragment to the camera. If you are working in view space, the camera is at position (0, 0, 0); consequently the eye vector equals normalize(-Position.xyz);

Thank you, that's really helpful. I just realized this is the first time I've tried to implement Phong in view space, and I did it as if it were world space. I'll try that and tell you if I get it right.
[/quote]

That is the reason I asked which space you work in. Traditionally, OpenGL works in view space, so you can find examples of how to work in this space.
Be careful: you need to bring every position and direction into this space. For instance, the lightPosition has to be multiplied by the viewMatrix, and the lightDirection (if you have one) should be multiplied by the inverse transpose of the view matrix (i.e. a view matrix with only orientation information, similar to what you do with the normal).
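A sketch of what that could look like; ViewMatrix, NormalViewMatrix, and lightDir are hypothetical names for values the application would have to supply alongside the existing globals:

```hlsl
// New globals, set from the application:
float4x4 ViewMatrix;
float4x4 NormalViewMatrix;   // inverse transpose of the view matrix (orientation only)
float3   lightDir;           // hypothetical; only needed for directional/spot lights

// Positions get the full view transform (row-vector convention, matching the
// original mul calls):
float3 lightPosVS = mul(float4(lightPos, 1.0f), ViewMatrix).xyz;

// Directions get rotation only, via the inverse transpose, like the normals:
float3 lightDirVS = normalize(mul(lightDir, (float3x3)NormalViewMatrix));
```

In practice these transforms are usually done once on the CPU and the view-space results passed in as shader constants, rather than recomputed per pixel.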

Moreover, the normals should be stored normalized, to avoid unnecessary calculations. And this:
float3 Normal = normalize(tex2D(normSamp, input.texCoord)).rgb;
Becomes this:
float3 Normal = tex2D(normSamp, input.texCoord).rgb;
Besides, you are normalizing all four channels, which is incorrect.

Like Ohforf said, pow is undefined for bases below 0, and actually also for 0. But in this case the result is 0, so everything is fine.

Btw, I noticed just now that you are behind the final engine (I didn't look at the sig in the morning). Really impressive and good work; I'm waiting to see more, keep it up :)

Thanks!!


That is the reason I asked which space you work in. Traditionally, OpenGL works in view space, so you can find examples of how to work in this space.

float3 Normal = normalize(tex2D(normSamp, input.texCoord)).rgb;
Becomes this:
float3 Normal = tex2D(normSamp, input.texCoord).rgb;
Besides, you are normalizing all four channels, which is incorrect.

I migrated from OpenGL and GLSL to DX and HLSL (to be more specific, I use DX with XNA), and in OpenGL I had no problems... I made a little experiment: I switched to world space (oh boy, I 'see' much more that way), then used the 'free' channel in my G-buffer for storing the diffuse level calculated in the G-buffer shader, and... it works. The interesting thing for me is that I used the same data from the buffers in the Combine shader (the one I posted here, modified as you advised) and it didn't work. Does HLSL change negative values to 0 on output? That would make sense then...

"Does HLSL change negative values to 0 on output?"

It depends on the surface format of the render target. If it is a floating-point render target, the negatives will be there; if it is a color format, then no.

"The interesting thing for me is that I used the same data from the buffers in the Combine shader (the one I posted here, modified as you advised) and it didn't work."

Me no understand tu English. No really, what do you mean?
Did you try to render the G-buffer in the combine shader? I want to know if you are reading the information correctly.


Did you try to render the G-buffer in the combine shader? I want to know if you are reading the information correctly.

Hahaha, now I see how chaotic that sentence was :D As for the deferred rendering idea, I am doing it wrong, but only for "testing purposes". What I did was:
1. store each position component in its own channel (so I didn't use depth; r for x, g for y, and b for z)
2. in alpha I stored Normal dot LightVector, calculated inside the G-buffer shader
Then, with that done, I ran the Combine shader in 2 ways:
1. with NdL = tex2D(depSamp, texCoord).a; (the G-buffer-calculated diffuse: working)
2. with NdL = dot(Normal, normalize(LightPos - tex2D(depSamp, texCoord).rgb)); (I didn't do it exactly like this, I write it just to show what I did :) and it didn't work)

Right now I am 90% convinced that I set the wrong format, and I am going to correct that.
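For reference, the two combine-shader variants described above, sketched side by side (this assumes the position G-buffer uses a signed or floating-point surface format, so negative components survive the write):

```hlsl
// Variant 1: N dot L was already computed in the G-buffer pass and stored in alpha.
float NdL_stored = tex2D(depSamp, texCoord).a;

// Variant 2: recompute N dot L in the combine pass from the world-space position
// stored in the rgb channels. This fails silently if the render target format
// clamps negative position components to 0.
float3 PositionWS     = tex2D(depSamp, texCoord).rgb;
float3 LightVector    = normalize(lightPos - PositionWS);
float  NdL_recomputed = max(dot(Normal, LightVector), 0);
```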
