Deferred rendering diffuse


12 replies to this topic

#1 Tasaq   Members   -  Reputation: 1255


Posted 11 April 2012 - 12:31 PM

I have a problem with point lights: when the light's position in x or y is negative, everything goes black (smoothly). I attached a screenshot with the final buffer and, from the top, the ambientBuffer, depthBuffer (the light blue part is a textured mesh, and I am only working with the non-textured areas right now), and normalBuffer.

Here's the code for light calculation and reconstruction of position from depth:
texture normal;
texture depth;
texture ambient;

float3 EyeVec;
float3 lightPos;
float4x4 InvertedProjectionMat;

sampler normSamp : register(s1) = sampler_state
{
    texture = <normal>;
};
sampler depSamp : register(s2) = sampler_state
{
    texture = <depth>;
};
sampler ambSamp : register(s3) = sampler_state
{
    texture = <ambient>;
};

// Reconstruct the position from the stored depth and the screen-space texcoord.
float3 getPosFromDepth(float2 texCoord)
{
    float z = length(tex2D(depSamp, texCoord).rgb);
    float x = texCoord.x * 2 - 1;
    float y = (1 - texCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    float4 vPositionVS = mul(vProjectedPos, InvertedProjectionMat);
    return vPositionVS.xyz / tex2D(depSamp, texCoord).w;
}

struct PixelShaderInput
{
    float2 texCoord : TEXCOORD0;
};

float4 PixelShaderFunction(PixelShaderInput input) : COLOR0
{
    float4 Color;
    Color = tex2D(ambSamp, input.texCoord);

    // Nothing was written to the ambient buffer here: output a background color.
    if (length(Color) == 0)
        return float4(0.2f, 0.4f, 0.9f, 1.0f);

    float4 SpecularColor = float4(1.0f, 1.0f, 0.0f, 1.0f);
    float specularLVL;

    float3 Position = getPosFromDepth(input.texCoord);
    float3 Normal = normalize(tex2D(normSamp, input.texCoord)).rgb;
    float3 LightVector = normalize(lightPos.xyz - Position.xyz);
    float NdL = dot(Normal, LightVector);

    float3 Eye = normalize(EyeVec);
    float3 halfVec = reflect(LightVector, Normal);
    specularLVL = pow(dot(halfVec, Eye), 100);

    return Color * NdL; // + (0.7f * specularLVL * SpecularColor);
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
Some of the mess inside the code is caused by me losing my mind and typing random stuff.
Also, since I am posting on the forums (which I try to do only as an act of desperation)... what is a good way to handle lighting with deferred shading? I got a bit lost in all the documentation I found on the internet.
Anyway, I will appreciate any help, and thank you in advance.

Attached Thumbnails

  • screenshot.jpg



#2 jischneider   Members   -  Reputation: 252


Posted 11 April 2012 - 04:16 PM

Try to debug in this way: first return (from this shader) the normal variable, then the depth variable, and so on. You should see something logical in each test.
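For example, in the combine shader you could temporarily replace the final return with one of these (a rough sketch, using the samplers and function from your post):

// Return one intermediate at a time instead of the lit color:
return float4(tex2D(normSamp, input.texCoord).rgb, 1.0f);          // normals
//return float4(tex2D(depSamp, input.texCoord).rgb, 1.0f);         // stored depth
//return float4(getPosFromDepth(input.texCoord) * 0.1f, 1.0f);     // reconstructed position (scaled down)
//return float4(saturate(NdL).xxx, 1.0f);                          // diffuse term only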

Question: What space do you work in? View space or world space?

By the way, EyeVec can't be a global; it needs the vertex/fragment position.

Project page: < XNA FINAL Engine >


#3 Ohforf sake   Members   -  Reputation: 1832


Posted 11 April 2012 - 11:08 PM

I agree with jischneider that you should output one intermediate result at a time to find the error.

However, from glancing at your code, I can see two things that you should check:

1. in getPosFromDepth you do:
return vPositionVS.xyz / tex2D(depSamp, texCoord).w;
but I'm pretty sure you need to do:
return vPositionVS.xyz / vPositionVS.w;

2. The pow of a negative base is NaN, which gets converted to black. If you are doing any post-processing like bloom, such NaNs can spread over the entire image.
Right now the specular term is disabled, but once you enable it, make sure the base is never negative. Change
pow(dot(halfVec, Eye), 100)
to
pow(max(dot(halfVec, Eye), 0), 100)
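For reference, getPosFromDepth with the first fix applied would look roughly like this (just a sketch based on your original code; it still assumes the depth texture holds what you expect):

float3 getPosFromDepth(float2 texCoord)
{
    float z = length(tex2D(depSamp, texCoord).rgb);
    float x = texCoord.x * 2 - 1;
    float y = (1 - texCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);

    // Unproject, then divide by the result's own w, not by the depth sample.
    float4 vPositionVS = mul(vProjectedPos, InvertedProjectionMat);
    return vPositionVS.xyz / vPositionVS.w;
}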

#4 Tasaq   Members   -  Reputation: 1255


Posted 12 April 2012 - 02:23 AM

Try to debug in this way.

Question: What space do you work in? View space or world space?

By the way, EyeVec can't be a global; it needs the vertex/fragment position.

I will debug as soon as I get back home :) As for the space, it's view space. If EyeVec can't be a global, how do I deal with a movable camera here? I must admit I noticed that, but I had no clue how to handle it without making the whole process less efficient :)

However, from glancing at your code, I can see two things that you should check:

1. in getPosFromDepth you do:
return vPositionVS.xyz / tex2D(depSamp, texCoord).w;
but I'm pretty sure you need to do:
return vPositionVS.xyz / vPositionVS.w;

2. The pow of a negative base is NaN, which gets converted to black. If you are doing any post-processing like bloom, such NaNs can spread over the entire image.
Right now the specular term is disabled, but once you enable it, make sure the base is never negative. Change
pow(dot(halfVec, Eye), 100)
to
pow(max(dot(halfVec, Eye), 0), 100)

1. Good catch, but it didn't change a thing.
2. Thanks for the tip :)

#5 jischneider   Members   -  Reputation: 252


Posted 12 April 2012 - 06:55 AM

The eye vector is the direction from the vertex/fragment to the camera. If you are working in view space, the camera is at position (0, 0, 0); consequently the eye vector equals normalize(-Position.xyz);
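In your combine shader that would look roughly like this (a sketch using the names from your code, assuming Position and Normal are both in view space; note that standard Phong usually reflects the incoming light direction, i.e. -LightVector):

// The camera sits at the origin in view space, so the eye (view) vector
// for a fragment is just the direction from the fragment back to the origin.
float3 Eye = normalize(-Position.xyz);

// Phong specular with the clamped pow suggested earlier.
float3 reflVec = reflect(-LightVector, Normal);
float specularLVL = pow(max(dot(reflVec, Eye), 0), 100);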

Project page: < XNA FINAL Engine >


#6 Tasaq   Members   -  Reputation: 1255


Posted 12 April 2012 - 11:18 AM

The eye vector is the direction from the vertex/fragment to the camera. If you are working in view space, the camera is at position (0, 0, 0); consequently the eye vector equals normalize(-Position.xyz);

Thank you, that's really helpful. I just realized this is actually the first time I've tried to implement Phong in view space, and I did it as if it were world space. I will try that and report back whether I got it right.

By the way, I only just noticed that you are behind the XNA Final Engine (I didn't look at the sig this morning). Really impressive, good work. I am waiting to see more, keep it up :)

#7 jischneider   Members   -  Reputation: 252


Posted 12 April 2012 - 11:33 AM


The eye vector is the direction from the vertex/fragment to the camera. If you are working in view space, the camera is at position (0, 0, 0); consequently the eye vector equals normalize(-Position.xyz);

Thank you, that's really helpful. I just realized this is actually the first time I've tried to implement Phong in view space, and I did it as if it were world space. I will try that and report back whether I got it right.


That is the reason I asked what space you work in. Traditionally OpenGL works in view space, so you can find examples of how to work in this space.
Be careful: you need to transform every position and direction into this space. For instance, the lightPosition has to be multiplied by the viewMatrix, and the lightDirection (if you have one) should be multiplied by the inverse transpose of the view matrix (i.e. a view matrix with only orientation information, similar to what you do with the normal).
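If you prefer to do that transform in the shader instead of on the CPU, it could look roughly like this (a sketch; ViewMatrix, lightPosWorld and lightDirWorld are just illustrative parameter names you would have to pass in yourself):

float4x4 ViewMatrix;      // illustrative: the camera's view matrix
float3   lightPosWorld;   // illustrative: light position in world space

// Bring the light position into view space so it matches the reconstructed
// fragment position.
float3 lightPosVS = mul(float4(lightPosWorld, 1.0f), ViewMatrix).xyz;

// A light direction has no translation, so it only needs the rotation part
// (w = 0); for a view matrix without scaling this is the same as using the
// inverse transpose.
//float3 lightDirVS = normalize(mul(float4(lightDirWorld, 0.0f), ViewMatrix).xyz);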

Moreover, the normals should be stored already normalized, to avoid unnecessary calculations. So this:
float3 Normal = normalize(tex2D(normSamp, input.texCoord)).rgb;
becomes this:
float3 Normal = tex2D(normSamp, input.texCoord).rgb;
Besides, you are normalizing all four channels before taking .rgb, which is incorrect.

Like Ohforf said, pow is undefined for bases below 0, and actually also for 0. But in this case the result is 0, so everything is fine.

By the way, I only just noticed that you are behind the XNA Final Engine (I didn't look at the sig this morning). Really impressive, good work. I am waiting to see more, keep it up :)


Thanks!!

Project page: < XNA FINAL Engine >


#8 Tasaq   Members   -  Reputation: 1255


Posted 12 April 2012 - 01:08 PM

That is the reason I asked what space you work in. Traditionally OpenGL works in view space, so you can find examples of how to work in this space.

float3 Normal = normalize(tex2D(normSamp, input.texCoord)).rgb;
becomes this:
float3 Normal = tex2D(normSamp, input.texCoord).rgb;
Besides, you are normalizing all four channels before taking .rgb, which is incorrect.

I migrated from OpenGL and GLSL to DirectX and HLSL (to be more specific, I use DirectX through XNA), and in OpenGL I had no problems... I did a little experiment: I switched to world space (oh boy, I 'see' much more that way), then used the 'free' channel in my G-buffer to store the diffuse level calculated in the G-buffer shader, and... it works. The interesting thing for me is that I used the same data from the buffers in the combine shader (the one I posted here, modified as you advised) and it didn't work. Does HLSL clamp negative values to 0 on output? That would make sense then...

#9 jischneider   Members   -  Reputation: 252


Posted 12 April 2012 - 01:18 PM

"Does hlsl change negative values to 0 on the output?"

It depends of the surface format of the render target. If it is a floating point render target then negatives will be there, if it is a color format then no.
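If you ever do need to keep a signed value in a plain color target, the usual trick is to bias it into [0, 1] when writing and undo it when reading, for example (just a sketch; "output" here stands for whatever your G-buffer pixel shader returns):

// Writing pass: remap a value in [-1, 1] into the [0, 1] range an
// 8-bit color channel can hold.
output.a = NdL * 0.5f + 0.5f;

// Reading pass: undo the remap when sampling that buffer later.
float NdL = tex2D(depSamp, texCoord).a * 2.0f - 1.0f;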

"The interesting thing for me is that I used the same data from buffers in Combine shader(the one I posted here, modified as You aided me) and it didn't work."

Me no understand tu English. No, really, what do you mean?
Did you try rendering the G-buffer from the combine shader? I want to know whether you are reading the information correctly.

Project page: < XNA FINAL Engine >


#10 Tasaq   Members   -  Reputation: 1255


Posted 12 April 2012 - 01:32 PM

Did you try rendering the G-buffer from the combine shader? I want to know whether you are reading the information correctly.

Haha, now I see how chaotic that sentence was :D As far as the deferred rendering idea goes, I am doing it wrong, but only for testing purposes. What I did was:
1. store the position directly in the channels (so I didn't use depth: R for x, G for y and B for z)
2. in the alpha channel I stored Normal dot LightVector, calculated inside the G-buffer shader
Then, with that done, I ran the combine shader in two ways:
1. with NdL = tex2D(depSamp, texCoord).a; (the diffuse calculated in the G-buffer shader - working)
2. with NdL = dot(Normal, normalize(LightPos - tex2D(depSamp, texCoord).rgb)); (I didn't do it exactly like this, I'm just writing it out to show what I did :) and it didn't work)

Right now I am 90% convinced that I set the wrong format, and I am going to correct that :)

#11 jischneider   Members   -  Reputation: 252


Posted 12 April 2012 - 01:43 PM

And what surface formats do you use in this G-buffer? I predict a lack of precision: 8 bits per channel are not enough and, more importantly, they don't store negative values.

Use R32F to store the depth (only the depth), and be aware that Z is managed differently in OpenGL/XNA and DirectX. For now you should use a 1010102 format to store the normals.
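A rough sketch of what the G-buffer write could look like with those formats (COLOR0 bound to an R32F target, COLOR1 to a 1010102 target; the input semantics and FarPlane are just illustrative names):

float FarPlane;   // illustrative: camera far clip distance

struct GBufferOutput
{
    float4 Depth  : COLOR0;   // R32F target: only the red channel is kept
    float4 Normal : COLOR1;   // 1010102 target: unsigned, so the normal must be biased
};

GBufferOutput GBufferPS(float depthVS : TEXCOORD0, float3 normalVS : TEXCOORD1)
{
    GBufferOutput output;

    // One linear depth value (view-space z over the far plane) instead of a
    // full position; the combine shader reconstructs position from it.
    output.Depth = float4(depthVS / FarPlane, 0, 0, 0);

    // 1010102 holds only unsigned values, so remap the view-space normal
    // from [-1, 1] to [0, 1]; undo it with n * 2 - 1 when sampling.
    output.Normal = float4(normalize(normalVS) * 0.5f + 0.5f, 0.0f);

    return output;
}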

If you are serious about this you should do things correctly from the start. Deferred rendering is not an easy topic.

Project page: < XNA FINAL Engine >


#12 Tasaq   Members   -  Reputation: 1255


Posted 12 April 2012 - 02:11 PM

If you are serious about this you should do things correctly from the start. Deferred rendering is not an easy topic.

I am doing it mostly for fun, to educate myself, and out of passion. I know it's not easy, but I love challenges :D And finally, changing the format did the trick! :) I love you! xD Thanks for all the help :D I used HdrBlendable; it's the only format I can use right now because the others throw exceptions, but with that I should manage to do the rest. I also 'uncommented' the specular and it works like a charm :) Now I need to clean up the mess I made in the code and make everything more optimized ^^ You should be a tutor (if you are not one already), because you explain stuff better than the tutors at my university >_> Oh, and rep for you, thanks again for all the help :)

#13 jischneider   Members   -  Reputation: 252


Posted 12 April 2012 - 02:22 PM

I love you!


I'm sorry. We can't be together.

You should be a tutor (if you are not one already), because you explain stuff better than the tutors at my university


I was a teaching assistant in Computer Graphics for four years at the Universidad Nacional del Sur.

I'm glad I could help. Bye!!!

PS: HdrBlendable is 1010102 on Xbox and 16161616 (half format) on PC.

Project page: < XNA FINAL Engine >




