Specular Mapping in Deferred Rendering

24 comments, last by Paul C Skertich 9 years, 9 months ago

You need a way to get the world position (or possibly the view-space position, depending on how you're calculating lighting) of a pixel in the lighting phase. This isn't just necessary for the specular lighting components, but also for point lights (even without specular) and other things. It's a pretty fundamental thing to have working for deferred rendering.

The easy/wasteful way is just to store the position in the gbuffer. The way it's usually done is to reconstruct it just from the depth information. Do a search around and you'll find many ways to do this.
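For reference, one common approach is to unproject the pixel's depth with the inverse view-projection matrix in the lighting pass. A minimal sketch, assuming names like `depthTexture`, `pointSampler`, and `invViewProjection` for whatever your renderer actually binds:

```hlsl
// Lighting-pass fragment: reconstruct the world position from the depth buffer.
// 'uv' is the screen-space texcoord of the pixel, 'depthTexture' holds the
// hardware depth (0..1), and 'invViewProjection' is the inverse of the
// camera's combined view-projection matrix.
float depth = depthTexture.Sample(pointSampler, uv).r;

// Rebuild the clip-space position: x/y from the texcoord, z from the depth buffer.
float4 clipPos;
clipPos.x = uv.x * 2.0f - 1.0f;
clipPos.y = (1.0f - uv.y) * 2.0f - 1.0f; // D3D texcoords have y pointing down
clipPos.z = depth;
clipPos.w = 1.0f;

// Unproject and divide by w to get back to world space.
float4 worldPos = mul(clipPos, invViewProjection);
worldPos /= worldPos.w;
```

This costs one depth read per pixel instead of a full position render target, which is why it's the usual choice.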


float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixelshader that outputs to the render targets.

output.cameradata = input.cameraVector;

I don't really understand this. You're subtracting a position (worldPosition) from a vector (cameraDirection). That doesn't really make sense - what are you trying to achieve?

Inside the specular mapping tutorial on Rastertek I just took the part where he calculates the camera position in world space relative to the input vertex of the model. And yes, I store the position of the camera in world space in the gbuffer.

Game Engine's WIP Videos - http://www.youtube.com/sicgames88
SIC Games @ GitHub - https://github.com/SICGames?tab=repositories
Simple D2D1 Font Wrapper for D3D11 - https://github.com/SICGames/D2DFontX


Inside the specular mapping tutorial on Rastertek I just took the part where he calculates the camera position in world space relative to the input vertex of the model.

Are you talking about this tutorial? I don't see that anywhere. Maybe your terminology is wrong... A world space position is relative to (0, 0, 0), it doesn't make sense for it to be relative to an "input vertex".


And yes, I store the position of the camera in world space in the gbuffer.

Why would you do that? The position of the camera is constant for the whole scene. It makes no sense for that information to be in the gbuffer.
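Per-frame constants like the camera position normally go in a constant buffer that the lighting shader reads directly. A sketch, where the buffer name and register are just placeholders:

```hlsl
// Per-frame constants for the lighting pass. There is no need to store any of
// this in the gbuffer, since it is the same for every pixel in the frame.
cbuffer PerFrame : register(b0)
{
    float3 cameraPosition; // camera position in world space
    float  padding;        // keep the buffer 16-byte aligned
};
```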


Inside the specular mapping tutorial on Rastertek I just took the part where he calculates the camera position in world space relative to the input vertex of the model.

Since you have a deferred renderer this part is actually deferred to the lighting shader. For beginners, simply output


float4 worldPosition = mul(modelMatrix, input.position);

output.worldPosition = worldPosition;

to the gbuffer you currently write your camera-vector in. Then, inside the deferred lighting shader, you can read out that world-position from the gbuffer and do exactly that calculation


float4 cameraposition = normalize(cameraDirection - worldPosition);

there in the pixelshader.
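Put together, the lighting pixel shader might look something like this sketch. Texture and sampler names are placeholders, and `cameraPosition` (what the code above calls `cameraDirection`) is assumed to hold the camera's world-space position, passed in through a constant buffer:

```hlsl
// Deferred lighting pass: read the world position back out of the gbuffer
// and build the per-pixel view vector from it.
float4 worldPosition = worldPositionTexture.Sample(pointSampler, input.texcoord);

// Direction from the surface point toward the camera, used for specular lighting.
float3 viewDirection = normalize(cameraPosition - worldPosition.xyz);
```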



Wow - that's working ten times better than what I had last night! I had been outputting the camera position to the gbuffer instead of the world position. I signed in this morning and looked at what you were talking about. Now for once it's starting to have that specular feel to it! I compared my kitchen floor, moving around, against other YouTube videos. So bringing the world position over to the gbuffer was the right choice.

So, one thing I learned while working with deferred rendering is that any important per-pixel data, such as the world position, should be brought through the gbuffer.

So the code you specified works a whole lot better because the world position comes from multiplying the vertex position by the model's world matrix. Then, once it's read back from the gbuffer, the lighting shader normalizes cameraDirection - worldPosition.

If my camera is at (8, 10, -15) and the gbuffer world position is at (0, 0, 0), then (8, 10, -15) - (0, 0, 0) is (8, 10, -15); then, when doing the specular, the dot product of the reflection and the camera view direction gives me the correct specular mapping light.
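The specular term described here can be sketched like this (Phong-style). Names such as `lightDirection` (pointing from the surface toward the light), `worldNormal`, and `specularPower` are assumptions, not code from the thread:

```hlsl
// Phong specular: reflect the light direction about the surface normal,
// then compare the reflection with the direction toward the camera.
float3 viewDirection = normalize(cameraPosition - worldPosition.xyz);
float3 reflection = normalize(reflect(-lightDirection, worldNormal));

// The closer the reflection lines up with the view direction, the brighter
// the highlight; specularPower controls how tight the highlight is.
float specular = pow(saturate(dot(reflection, viewDirection)), specularPower);
```

Note that `reflect` expects the incident vector to point toward the surface, hence the negation of `lightDirection`.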

Thanks Juliean for clearing up some things. I'm glad you understood what I was going at. Thanks!



Thanks Juliean for clearing up some things. I'm glad you understood what I was going at. Thanks!

Glad it works now; still keep in mind the depth-reconstruction technique both phil_t and I mentioned, as it might be essential for good performance in the future ;)

Just noticed this thread, and if it'd be any use to you, about a year back I adapted one of the rastertek DX11 tutorials to do deferred shading (with specular) alongside the original forward shaded specular from the tutorial. It supports deferred directional and point lights, and uses the depth buffer to reconstruct world position. The gbuffer is only two textures (albedo and normals).

Anyhow, if you'd find it useful (as it follows the same coding style as the rastertek code, so it's easy to follow), let me know and I'll put it up on dropbox for you.

It does need VS2012 or 2013 though, since it uses the windows SDK libraries that come with the newer VS, instead of the older directx SDK libraries like the rastertek tutorials (e.g. uses DirectXMath instead of D3DX10Math).

One more question before this post gets wrapped up. When exporting normals to the gbuffer render targets, can I just export them as-is, or do I have to transform the normal map into tangent space first and then export it to the gbuffer? Everything was fine until I wanted to make sure I had the normal map and specular map running; then I ran into a weird abnormality, shown in the attached photo. The attached photo is the output of the normals that were calculated in tangent space before heading out to the gbuffer. I determined the issue was the calculation transforming the normal map into tangent texture space.

Inside the deferred.hlsl that sends everything to the GBUFFER I have this:



float4 normalmap = textureSampler[1].Sample(sampleState, input.texcoord);
normalmap = (2.0f * normalmap) - 1.0f; // unpack from [0, 1] to [-1, 1]

// Gram-Schmidt: re-orthogonalize the tangent against the vertex normal.
input.tangent = normalize(input.tangent - dot(input.tangent, input.normal) * input.normal);
float3 biTangent = cross(input.normal, input.tangent);

float3x3 texspace = float3x3(input.tangent, biTangent, input.normal);

// Transform the sampled normal out of tangent space.
input.normal = normalize(mul(normalmap.xyz, texspace));

//output.normal = float4(0.5f * input.normal + 0.5f, 1.0f); //-- Temporarily commented out for debugging the normal mapping issue.
output.normal = textureSampler[1].Sample(sampleState, input.texcoord);

Another screenshot was taken to show the result of just outputting the normal map without transforming it out of tangent space. However, what are those funky looking dark spots or shades?

So I was wondering: do I need to do the tangent-space calculation or not when it comes to deferred rendering?


Putting tangent-space normals into the g-buffer doesn't make sense - picture two triangle faces, facing different directions but having the same normal map. If you put the tangent-space normals into the g-buffer, they would have identical normals and thus would be lit identically by your lighting pass (despite facing different directions). You've essentially lost information because you have ignored the surface normal of the two triangles.

All of the normals in your g-buffer need to be in the same "space".

Okay, that makes a lot more sense as to why there was a weird abnormality in picture one. So what you're saying, Phil, is to just put the normal map into the GBUFFER like a diffuse texture or any other texture outputted to the GBUFFER?



So what you're saying, Phil, is to just put the normal map into the GBUFFER like a diffuse texture or any other texture outputted to the GBUFFER?

No, that's the opposite of what I'm saying. Presumably your normal maps are in tangent-space. You need to convert the normal map normal to world space before you store it in the g-buffer (using the matrix you construct from the surface's tangent, normal and binormal). Then your lighting pass performs the lighting in world space.

(Alternately, you could store them in viewspace... that's what I do, since I'm only storing two components of the normal, and reconstructing the third. If you're storing all 3 normal components in the g-buffer, then it's simplest to use worldspace).
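For what it's worth, the two-component trick mentioned above relies on the normal being unit length, so z can be rebuilt from x and y in the lighting pass. A rough sketch, with `normalTexture` and the packing scheme as assumptions:

```hlsl
// In the gbuffer pass, store only x and y of the view-space normal:
// output.normal.xy = viewNormal.xy * 0.5f + 0.5f;

// In the lighting pass, reconstruct z from the unit-length constraint.
float2 nxy = normalTexture.Sample(pointSampler, uv).xy * 2.0f - 1.0f;
float nz = sqrt(saturate(1.0f - dot(nxy, nxy)));
float3 viewNormal = float3(nxy, nz);
```

The sign of the reconstructed z depends on your view-space convention (which way "toward the camera" points), so this only works cleanly when visible normals all share one z sign.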

If you're getting sharp discontinuities between your triangles, then your triangles must have incorrect normals (or binormal or tangent vectors), and/or you're constructing the "texspace" matrix incorrectly.

My code to get the world space normal to store in the g-buffer looks something like this (ps_3_0):


	float3 normalMap = tex2D(NormalSampler, input.TexCoords).xyz * 2.0f - 1.0f; // unpack to [-1, 1]
	float3x3 TangentToWorld = float3x3(input.Tangent, input.Binormal, input.Normal);
	float3 worldNormal = mul(normalMap, TangentToWorld);
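If the gbuffer target is an unsigned format, the resulting world-space normal would then typically be remapped back into [0, 1] before being written out, along the lines of the commented-out line from the earlier deferred.hlsl snippet:

```hlsl
// Remap the world-space normal from [-1, 1] to [0, 1] so it survives
// an unsigned render-target format; the lighting pass undoes this.
output.normal = float4(worldNormal * 0.5f + 0.5f, 1.0f);
```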

This topic is closed to new replies.
