Paul C Skertich

Specular Mapping in Deferred Rendering


Hey everyone, I haven't posted in a long time, but I was hoping for some insight about deferred rendering. What I understand about deferred shading, at least, is that the diffuse texture, normal map texture, specular map texture, and depth map get sent to the G-Buffer. I have four render targets and then a 2D quad for the screen rendering, like in post-processing. The screen quad is hooked up to do the shading, and the rendering to the four render targets uses deferred.hlsl.

 

The screen quad uses the shader deferredlighting.hlsl. I ran into a problem when trying to do the specular mapping calculation inside deferredlighting.hlsl, so I did the specular mapping calculations inside deferred.hlsl, where I render the scene out to the four render targets. Isn't this defeating the whole purpose of deferred shading? If I'm drawing objects to the render targets and doing the specular lighting calculations at the same time, won't it hurt performance?

 

I thought you send the diffuse, normal, specular, and depth from the render targets through to deferredlighting.hlsl and do the lighting calculation there, not while rendering the scene to the render targets. I posted an image that shows the result, but there has to be a way to do the calculation inside deferredlighting.hlsl.

 

The problem I was having is that the screen quad has only four vertices and a world position of its own. If I tried to calculate the camera vector from the screen quad's vertex world positions, it wouldn't make sense, right? I don't want to bog you guys down with code, but if you'd really like to see some snippets to point out where I can improve, then I'll share them.

 

What I did notice is that forward rendering is much different from deferred rendering, so I believe that's what's tripping me up big time. My older engine used forward rendering until I decided to look more in depth at deferred rendering.

 

I watched other videos on deferred rendering and compared them with mine, and I don't think I'm hitting the nail on the head! For instance, for the normal mapping, the tangent space had to be calculated while rendering the scene to the render targets. That's not so bad, because no light calculations are needed for the normal map until it reaches deferredlighting.hlsl. It's just this specular mapping that's getting me mixed up.

 

If you look at the image, the specular highlight is calculated during the render-to-texture pass, not on the screen quad.

 


You have to store some means of position data alongside the other data for deferred rendering. You can either directly store position values in a 32-bit-per-component render target and sample from it to get the position. Since this wastes resources as well as bandwidth, you are better off storing the depth in a 32-bit single-component render target and using that value to reconstruct the position. This is a little complicated and I haven't done it myself yet, but from what I know you store a ray per screen corner that runs along the view frustum, use the texel position to interpolate between those rays to get the "XY" position, and then use the depth (in the 0.0 to 1.0 range) to reconstruct the "Z" position. I suggest you read up on a tutorial on how to do this, I doubt I'm able to explain it properly :D
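
Something like the following is the general idea, as far as I understand it. This is only a sketch, not code from my engine - all the names are made up, and it assumes the single-channel target holds linear depth (view-space Z divided by the far plane) and that each vertex of the full-screen quad carries the view-space ray to its far-plane corner:

Texture2D depthTexture : register(t0);
SamplerState pointSampler : register(s0);

struct QuadPixelInput
{
    float4 position   : SV_POSITION;
    float2 texcoord   : TEXCOORD0;
    float3 frustumRay : TEXCOORD1; // view-space far-plane corner, interpolated per pixel
};

float4 LightingPS(QuadPixelInput input) : SV_TARGET
{
    // linear depth written during the G-Buffer pass (viewZ / farPlane)
    float linearDepth = depthTexture.Sample(pointSampler, input.texcoord).r;

    // scale the interpolated corner ray by the depth to rebuild the view-space position
    float3 viewPosition = input.frustumRay * linearDepth;

    // ... lighting continues from here using viewPosition ...
    return float4(viewPosition, 1.0f);
}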

 

Also, since I already talked about wasting resources, you should further try to stuff things into as few render targets as possible. Put the specular component in the alpha channel of the diffuse RT. You can also compress normals to go into two channels instead of three, and I hope you are already storing normals in an 8-bit-per-component format using this conversion trick:

// compress the normal for output in the G-Buffer shader
output.vNormal = vNormal * 0.5f + 0.5f;

// decompress when you read the normal back in the lighting shader
float3 vNormal = NormalTex.Sample(LinearSampler, input.vTex0).xyz * 2.0f - 1.0f;

Now each normal component fits into an 8-bit unsigned range, and using that other compression trick I mentioned (again, you'd better read that up yourself) you get two additional 8-bit channels that are unused, where you can store other stuff like shininess, etc...
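
Just to illustrate the layout (made-up names again, and the two-channel normal encoding here is only a placeholder - swap in whichever real compression trick you end up using), a packed G-Buffer pixel shader output could look like this:

// placeholder two-channel encoding; replace with a proper spheremap-style transform
float2 EncodeNormalXY(float3 n)
{
    return n.xy * 0.5f + 0.5f;
}

struct GBufferOutput
{
    float4 albedoSpec : SV_TARGET0; // rgb = diffuse albedo, a = specular intensity
    float4 normalMisc : SV_TARGET1; // rg = encoded normal, b = shininess, a = free
};

GBufferOutput PackGBuffer(float3 albedo, float specularIntensity, float3 normal, float shininess)
{
    GBufferOutput output;
    output.albedoSpec = float4(albedo, specularIntensity);
    output.normalMisc = float4(EncodeNormalXY(normal), shininess, 0.0f);
    return output;
}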


Now each normal component fits into an 8-bit unsigned range, and using that other compression trick I mentioned (again, you'd better read that up yourself) you get two additional 8-bit channels that are unused, where you can store other stuff like shininess, etc...

Hm, wouldn't that increase the visual artifacts quite a bit?

 

I've seen the Killzone 2 slides where they say they use two 16-bit floats to store X and Y, then reconstruct Z. Or the Battlefield 3 slides, where they say they use RGB8 and just store XYZ in it. I've never seen a game use both methods at the same time.


Hm, wouldn't that increase the visual artifacts quite a bit?

 

I'm not entirely sure, I'm only using the RGB8 method for the time being. I remembered reading something about the second compression method on some blog, and I'm pretty sure they only used 8 bits, but I might well be mistaken. Okay, I just checked that blog, and it refers to the Killzone 2 slides you mentioned, so it probably does produce really heavy artifacts. It would be interesting to see how noticeable they actually are... I might try it out if I find the time.

 

EDIT: I find it interesting though: how is compressing the normals to X & Y going to help if you end up using even more bits (16*2 = 32 versus 8*3 = 24, which leaves you with 8 bits to put something else in)?


For normals, I'm using the spheremap transform described here: http://aras-p.info/texts/CompactNormalStorage.html to store the normals in two 8-bit channels. There are definitely some artifacts, but they're not too bad:

 

(attached image: SphermapTx.png)

It's not too bad; the artifacts are barely visible. To answer some people's question: the render targets are stored as R32G32B32A32_FLOAT and are multisampled, as is the depth stencil view.

 

Here are just a few quick snippets, nothing too crazy. The scene gets rendered to texture using deferred.hlsl and the quad gets rendered with deferredlighting.hlsl, following what I understood from Rastertek.com. Some may not like that website, and I could agree, but once I master something I can work on optimizing.

 

So, I'm guessing specular mapping works quite differently than in a forward rendering system?


So, what if I store these lines in deferred.hlsl as output data - would I be able to do this?

float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixelshader that outputs to the render targets.

output.cameradata = input.cameraVector;


That's the only issue I see that's messing up the specular mapping for me at this point. If I could send the camera data to where I do the specular lighting calculation, then the issue would be resolved, no?


You need a way to get the world position (or possibly the view-space position, depending on how you're calculating lighting) of a pixel in the lighting phase. This isn't just necessary for the specular lighting components, but also for point lights (even without specular) and other things. It's a pretty fundamental thing to have working for deferred rendering.

 

The easy/wasteful way is just to store the position in the gbuffer. The way it's usually done is to reconstruct it just from the depth information. Do a search around and you'll find many ways to do this.
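
As a starting point, here's a rough sketch of the "unproject with the inverse view-projection matrix" variant (illustrative names only - and it assumes row-vector mul order and a D3D-style top-left texcoord origin, so adjust for your conventions):

Texture2D depthTexture : register(t0);
SamplerState pointSampler : register(s0);

cbuffer UnprojectBuffer : register(b0)
{
    float4x4 inverseViewProjection;
};

float3 ReconstructWorldPosition(float2 texcoord)
{
    // hardware depth in [0, 1], as written by the depth buffer
    float depth = depthTexture.Sample(pointSampler, texcoord).r;

    // rebuild the clip-space position: x/y from the texcoord, z from the depth
    float4 clipPos;
    clipPos.x = texcoord.x * 2.0f - 1.0f;
    clipPos.y = (1.0f - texcoord.y) * 2.0f - 1.0f; // flip y: texture space runs top-down
    clipPos.z = depth;
    clipPos.w = 1.0f;

    // unproject and divide by w to get back to world space
    float4 worldPos = mul(clipPos, inverseViewProjection);
    return worldPos.xyz / worldPos.w;
}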

 

 


float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixelshader that outputs to the render targets.

output.cameradata = input.cameraVector;

 

I don't really understand this. You're subtracting a position (worldPosition) from a vector (cameraDirection). That doesn't really make sense - what are you trying to achieve? 


So, what if I store these lines in deferred.hlsl as output data - would I be able to do this?

float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixelshader that outputs to the render targets.

output.cameradata = input.cameraVector;


That's the only issue I see that's messing up the specular mapping for me at this point. If I could send the camera data to where I do the specular lighting calculation, then the issue would be resolved, no?

 

It works all right, but I had to create an additional render target to store the camera position for the lighting calculation. Just how much data can I stuff inside the G-Buffer anyway?


You need a way to get the world position (or possibly the view-space position, depending on how you're calculating lighting) of a pixel in the lighting phase. This isn't just necessary for the specular lighting components, but also for point lights (even without specular) and other things. It's a pretty fundamental thing to have working for deferred rendering.

 

The easy/wasteful way is just to store the position in the gbuffer. The way it's usually done is to reconstruct it just from the depth information. Do a search around and you'll find many ways to do this.

 

 

 

 


float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixelshader that outputs to the render targets.

output.cameradata = input.cameraVector;
 

 

I don't really understand this. You're subtracting a position (worldPosition) from a vector (cameraDirection). That doesn't really make sense - what are you trying to achieve? 

Inside the specular mapping tutorial on Rastertek, I just took the part where he calculates the camera position in world space relative to the input vertex of the model. And yes, I store the world-space position of the camera in the gbuffer.



Inside the specular mapping tutorial on Rastertek, I just took the part where he calculates the camera position in world space relative to the input vertex of the model.

 

Are you talking about this tutorial? I don't see that anywhere. Maybe your terminology is wrong... A world space position is relative to (0, 0, 0); it doesn't make sense for it to be relative to an "input vertex".

 


And yes, I store the world-space position of the camera in the gbuffer.

 

Why would you do that? The position of the camera is constant for the whole scene. It makes no sense for that information to be in the gbuffer.



Inside the specular mapping tutorial on Rastertek, I just took the part where he calculates the camera position in world space relative to the input vertex of the model.

 

Since you have a deferred renderer, this part is actually deferred to the lighting shader. For beginners, simply output

float4 worldPosition = mul(modelMatrix, input.position);

output.worldPosition = worldPosition;

to the gbuffer channel you currently write your camera vector into. Then, inside the deferred lighting shader, you can read that world position back from the gbuffer and do exactly that calculation

float4 cameraposition = normalize(cameraDirection - worldPosition);

there in the pixelshader.
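
Roughly like this, just as a sketch (the texture slots, the cameraPosition constant and the specularPower value are all made up - wire them to whatever you already have; I'm also assuming the normal render target holds world-space normals packed with * 0.5 + 0.5):

Texture2D normalTexture   : register(t1);
Texture2D positionTexture : register(t2);
SamplerState pointSampler : register(s0);

cbuffer LightBuffer : register(b0)
{
    float3 cameraPosition;
    float  specularPower;
    float3 lightDirection; // direction the light travels
    float  padding;
};

float4 DeferredLightingPS(float4 position : SV_POSITION, float2 texcoord : TEXCOORD0) : SV_TARGET
{
    float3 worldPosition = positionTexture.Sample(pointSampler, texcoord).xyz;
    float3 normal = normalize(normalTexture.Sample(pointSampler, texcoord).xyz * 2.0f - 1.0f);

    // view vector: from the surface point toward the camera
    float3 viewDirection = normalize(cameraPosition - worldPosition);

    // reflect the incoming light about the surface normal
    float3 reflection = reflect(lightDirection, normal);

    float specular = pow(saturate(dot(reflection, viewDirection)), specularPower);
    return float4(specular, specular, specular, 1.0f);
}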


 


Inside the specular mapping tutorial on Rastertek, I just took the part where he calculates the camera position in world space relative to the input vertex of the model.

 

Since you have a deferred renderer this part is actually deferred to the lighting shader. For beginners, simply output

float4 worldPosition = mul(modelMatrix, input.position);

output.worldPosition = worldPosition;

to the gbuffer you currently write your camera-vector in. Then, inside the deferred lighting shader, you can read out that world-position from the gbuffer and do exactly that calculation

float4 cameraposition = normalize(cameraDirection - worldPosition);

there in the pixelshader.

 

 

Wow - that's working ten times better than what I had last night! I had been outputting the camera position to the gbuffer instead of the world position. I signed in this morning and looked at what you were talking about. Now, for once, it's starting to have that specular feel to it! I was comparing my kitchen floor by moving around, and also against other YouTube videos. So bringing the world position over to the gbuffer was the right choice.

 

So, one thing I learned while working with deferred rendering is that any important per-pixel data, such as the world position, should be brought through the G-Buffer.

The code you specified works a whole lot better because the world position comes from multiplying the vertex position by the model's world matrix. Then, once it's brought over to the G-Buffer, the lighting shader normalizes cameraDirection - worldPosition.

 

If my camera is at (8, 10, -15) and the G-Buffer world position is at (0, 0, 0), then (8, 10, -15) - (0, 0, 0) gives (8, 10, -15); then, when doing the specular term, the dot product of the reflection vector and the camera view vector gives me the correct specular mapping light.

 

Thanks, Juliean, for clearing up some things. I'm glad you understood what I was getting at. Thanks!



Thanks, Juliean, for clearing up some things. I'm glad you understood what I was getting at. Thanks!

 

Glad it works now. Still, keep in mind the depth-reconstruction technique both phil_t and I mentioned, as it might be essential for good performance in the future ;)


Just noticed this thread, and if it'd be any use to you, about a year back I adapted one of the Rastertek DX11 tutorials to do deferred shading (with specular) alongside the original forward-shaded specular from the tutorial. It supports deferred directional and point lights, and uses the depth buffer to reconstruct world position. The gbuffer is only two textures (albedo and normals).

 

Anyhow, if you'd find it useful (it's easy to follow since it keeps the same coding style as the Rastertek code), let me know and I'll put it up on Dropbox for you.

 

It does need VS2012 or 2013 though, since it uses the Windows SDK libraries that come with the newer VS instead of the older DirectX SDK libraries the Rastertek tutorials use (e.g. DirectXMath instead of D3DX10Math).


One more question before this post gets wrapped up. When exporting normals to the G-Buffer render targets, can I just export them as-is, or do I have to calculate the normal map in tangent space and then export that to the G-Buffer? Everything was fine until I wanted to make sure both the normal map and specular map were running, and then I ran into a weird abnormality, shown in the attached photo. The attached photo is the output of the normals that were calculated in tangent space before heading out to the G-Buffer. I determined the issue was the calculation that transforms the normal map into tangent texture space.

 

Inside deferred.hlsl, which sends everything to the G-Buffer, I have this:


float4 normalmap = textureSampler[1].Sample(sampleState, input.texcoord);
normalmap = (2.0f * normalmap) - 1.0f;

input.tangent = normalize(input.tangent - dot(input.tangent, input.normal) * input.normal);
float3 biTangent = cross(input.normal, input.tangent);

float3x3 texspace = float3x3(input.tangent, biTangent, input.normal);

input.normal = normalize(mul(normalmap.xyz, texspace));

//output.normal = float4(0.5f * input.normal + 0.5f, 1.0f); //-- Temporarily commented out while debugging the normal mapping issue.
output.normal = textureSampler[1].Sample(sampleState, input.texcoord);

Another screenshot was taken to show the result of just outputting the normal map without transforming it to tangent space. However, what are those funky-looking dark spots or shades?

 

So, I was wondering: do I need to calculate the tangent space transform or not when it comes to deferred rendering?

 

 


Putting tangent-space normals into the g-buffer doesn't make sense - picture two triangle faces, facing different directions but having the same normal map. If you put the tangent-space normals into the g-buffer, they would have identical normals and thus would be lit identically by your lighting pass (despite facing different directions). You've essentially lost information because you have ignored the surface normal of the two triangles.

 

All of the normals in your g-buffer need to be in the same "space". 


Okay, that makes it a lot clearer why there was a weird abnormality in picture one. So what you're saying, Phil, is to just write the normal map to the G-Buffer like the diffuse texture or any other texture output to the G-Buffer?



So what you're saying, Phil, is to just write the normal map to the G-Buffer like the diffuse texture or any other texture output to the G-Buffer?

 

No, that's the opposite of what I'm saying. Presumably your normal maps are in tangent-space. You need to convert the normal map normal to world space before you store it in the g-buffer (using the matrix you construct from the surface's tangent, normal and binormal). Then your lighting pass performs the lighting in world space.

 

(Alternately, you could store them in viewspace... that's what I do, since I'm only storing two components of the normal, and reconstructing the third. If you're storing all 3 normal components in the g-buffer, then it's simplest to use worldspace).
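
In case it saves you a search, the spheremap transform from the page I linked boils down to roughly this pair of functions (my paraphrase; it expects view-space normals):

// pack a view-space normal into two channels
float2 EncodeNormal(float3 n)
{
    float p = sqrt(n.z * 8.0f + 8.0f);
    return n.xy / p + 0.5f;
}

// unpack two channels back into a view-space normal
float3 DecodeNormal(float2 enc)
{
    float2 fenc = enc * 4.0f - 2.0f;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0f - f / 4.0f);
    float3 n;
    n.xy = fenc * g;
    n.z = 1.0f - f / 2.0f;
    return n;
}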

 

If you're getting sharp discontinuities between your triangles, then your triangles must have incorrect normals (or binormal or tangent vectors), and/or you're constructing the "texspace" matrix incorrectly.

 

My code to get the world space normal to store in the g-buffer looks something like this (ps_3_0):

	// sample the tangent-space normal from the normal map
	float3 normalMap = tex2D(NormalSampler, input.TexCoords);
	// rows are the interpolated tangent, binormal and normal in world space
	float3x3 TangentToWorld = float3x3(input.Tangent, input.Binormal, input.Normal);
	// rotate the sampled normal into world space
	float3 worldNormal = mul(normalMap, TangentToWorld);


Inside the vertex shader, the tangent and normal are transformed and passed along for the gbuffer like this:


output.normal = mul((float3x3)modelWorld, input.normal);

output.tangent = mul((float3x3)modelWorld, input.tangent);

The Braynzarsoft bump mapping tutorial uses the model's normals for the tangents. The modelWorld matrix is the model's world matrix (currently just the identity). Maybe I'm just not understanding deferred lighting very well.


I would like to thank everyone for contributing their advice on this post! I finally have the issue resolved. The final image shows the result, which matches up with the Rastertek specular mapping tutorial for DirectX 11.

 

I solved the issue by just sending the normal map texture to the G-Buffer render target and then using this:

float4 normalmap = Texture[1].Sample(ss, input.texcoord);
normalmap = (normalmap - 0.5) * 2.0;

I saw something on a forum (or this forum) about that little calculation above. It's giving me results, and I looked at the site you referred to, Phil, about the spheremap encoding and decoding of the X, Y, Z values. However, that author was storing the render targets as R8G8B8A8, a single 32-bit render target, whereas I'm storing mine as R32G32B32A32. Anyway, here's the final result:

 

Thanks for the help guys!


Oops, I had to invert the light direction to get more accurate lighting. Here's the correct lighting with normal and specular mapping inside the deferred shading system.


I don't see how this could be correct. From what you've described, the surface normal of your geometry never figures into your calculation (Does the lighting change when you rotate the cube or the direction of the light source?).

 

It seems like you think you have it working successfully without actually achieving it, and without understanding why it's broken. You're trying random things without knowing why they do or don't work, and that's no way to learn. I'm not trying to be mean, just offering advice. Here's an example:

 


I saw something on a forum (or this forum) about that little calculation above.

 

It maps the values [0, 1] to [-1,1]. This is needed since your normal map is likely in a standard RGBA format where each channel has values between 0 and 1. Do you want to learn? Sometimes it doesn't seem like it.

 

I would suggest to take a step back and try some simpler lighting tutorials first. Start with something simple and make sure you know what every part of your shader is doing and why. Otherwise you're just stumbling around in the dark.


I get you - that makes sense, because when outputting just the regular normals, green (Y) points upward, red (Z) faces you, and blue is right-handed (X), so the calculation of the bitangents and tangents might not be right. I believe I'm having issues when converting from tangent space to view space. When I attempt to do it as suggested, it messes everything up. The calculation makes sense for mapping into [-1, 1] space: 2.0f * normalmap - 1.0f gives (2.0 * 0.0) - 1.0 = -1.0 at the low end and (2.0 * 1.0) - 1.0 = 1.0 at the high end, if you see where I'm going with this.

 

Here's the snippet - the problem has to be converting everything back to view space after I'm done with it. The G-Buffer is the set of output render targets, correct? So all this talk about the G-Buffer is about what gets output to the render targets?


input.normal = normalize(input.normal);

float4 normalmap = textureSampler[1].Sample(sampleState, input.texcoord);

float3 normalT = normalize(normalmap.xyz * 2.0f - 1.0f); //-- Expand the normal map from [0, 1] to [-1, 1]: 2 * 0 - 1 = -1, 2 * 1 - 1 = 1.
float3 N = input.normal;
float3 T = normalize(input.tangent - dot(input.tangent, N) * N); //-- Gram-Schmidt: remove the part of the tangent that lies along the normal.
float3 B = cross(N, T); //-- Build the bitangent from the cross product of normal and tangent.
float3x3 TBN = float3x3(T, B, N); //-- 3x3 matrix holding the tangent, bitangent and normal as rows.
float3 bumpmap = normalize(mul(normalT, TBN)); //-- Rotate the sampled normal out of tangent space and normalize the result.

float3 bumpVS = bumpmap * 0.5f + 0.5f; //-- Pack back into [0, 1] for the render target: -1 * 0.5 + 0.5 = 0, 1 * 0.5 + 0.5 = 1.
output.normal = float4(bumpVS, 1.0f);

This is what you guys suggested, which again messed everything up and puts me back at square one. I don't know how in the world the temporary fix solved everything. What I noticed is that sometimes it would have specular highlights and sometimes not, depending on how far away or where I move the camera. I calculated where the light direction should be pointing, and even then sometimes it won't light up, as shown in the photo. Interestingly enough, I switched my render targets to R8G8B8A8 unsigned integer and noticed little white artifacts.

