
Specular Mapping in Deferred Rendering



#1 SIC Games   Members   -  Reputation: 617


Posted 05 July 2014 - 09:48 AM

Hey everyone, I haven't posted in a long time, but I was hoping for some insight about deferred rendering. What I understand about deferred shading, at least, is that the diffuse texture, normal map, specular map and depth are sent to the G-Buffer. I have four render targets and then a 2D screen quad, like in post-processing. The screen quad is hooked up to do the shading, and the rendering to the four render targets uses deferred.hlsl.

 

The screen quad uses the shader deferredlighting.hlsl. I ran into a problem when trying to do the specular mapping calculation inside deferredlighting.hlsl, so I did the specular calculations inside deferred.hlsl instead, where I render the scene out to the four render targets. Isn't this defeating the whole purpose of deferred shading? If I'm drawing objects to the render targets and doing the specular lighting calculations at the same time - won't that hurt performance?

 

I thought you send the diffuse, normal, specular and depth from the render targets through to deferredlighting.hlsl and do the lighting calculation there - not while rendering the scene to the render targets. I posted an image that shows the result, but there has to be a way to do the calculation inside deferredlighting.hlsl.

 

The problem I was having is that the screen quad only has four vertices and a world position. If I tried to calculate the camera vector from the screen quad's vertex world positions, that wouldn't make sense, right? I don't want to bog you guys down with code, but if you'd like to see some snippets to spot where I can improve, I'll share them.

 

What I did notice is that forward rendering is much different from deferred rendering - so I believe that's what's tripping me up big time. My older engine used forward rendering until I decided to look more in depth at deferred rendering.

 

I watched other videos of deferred rendering and compared them with mine, and I don't think I'm hitting the nail on the head! For instance, for normal mapping the tangent space has to be calculated while rendering the scene to the render targets - that's not so bad, because no light calculations are needed for the normal map until it reaches deferredlighting.hlsl. It's just this specular mapping that's getting me mixed up.

 

If you look at the image, the specular highlight is calculated during the render-to-texture pass, not on the screen quad.

 

Attached Thumbnails

  • SpecularMapInGBufferCalculation.PNG




#2 SIC Games   Members   -  Reputation: 617


Posted 05 July 2014 - 10:28 AM

This is a screenshot with the specular lighting calculated inside the 2D screen quad.

Attached Thumbnails

  • specularMapin2DScreenQuad.PNG



#3 Juliean   GDNet+   -  Reputation: 2693


Posted 05 July 2014 - 10:53 AM

You have to store some means of position data alongside the other data for deferred rendering. You can either directly store position values in a 32-bit-per-component render target and sample from it to get the position. Since this wastes resources as well as bandwidth, you are better off storing the depth in a 32-bit single-component render target and using that value to reconstruct the position - this is a little more involved and I haven't done it myself yet, but from what I know you store a ray per screen corner that runs along the view frustum, and use the texel position to interpolate between those rays to get the "XY" position. You then use the depth (in the 0.0 to 1.0 range) to reconstruct the "Z" position. I suggest you read a tutorial on how to do this; I doubt I'm able to explain it properly :D
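
A rough sketch of what that frustum-ray reconstruction might look like (untested, and every name here - DepthTex, PointSampler, cameraPositionWS - is a placeholder rather than anything from the engine above; it also assumes a linear 0..1 depth written as view-space depth divided by the far plane distance):

// Untested sketch: position from depth via interpolated frustum-corner rays.
// The full-screen quad's vertex shader passes one camera-to-far-plane-corner
// ray per vertex (in world space); the rasterizer interpolates it per pixel.
Texture2D    DepthTex     : register(t0);
SamplerState PointSampler : register(s0);

cbuffer PerFrame
{
    float3 cameraPositionWS;   // world-space camera position
};

struct QuadPixel
{
    float4 position : SV_POSITION;
    float2 texcoord : TEXCOORD0;
    float3 viewRay  : TEXCOORD1;   // interpolated frustum-corner ray
};

float3 ReconstructWorldPos(QuadPixel input)
{
    // Linear depth in 0..1 (view-space depth / far plane distance).
    float linearDepth = DepthTex.Sample(PointSampler, input.texcoord).r;
    return cameraPositionWS + input.viewRay * linearDepth;
}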

 

Also, since I already talked about wasting resources, you should further try to stuff things into as few render targets as possible. Put the specular component in the alpha channel of the diffuse RT. You can also compress normals into two channels instead of three, and I hope you are already storing normals in an 8-bit-per-component format using this conversion trick:

// compress the normal from [-1, 1] to [0, 1] when writing it to the G-buffer
output.vNormal = vNormal * 0.5f + 0.5f;

// decompress back to [-1, 1] when reading it in the lighting shader
float3 vNormal = NormalTex.Sample(sampleState, input.vTex0).xyz * 2.0f - 1.0f;

Now each normal component fits into an 8-bit unsigned range, and with the other compression trick I mentioned (again, you'd better read that up yourself) you have two additional 8-bit channels left unused, where you can store other stuff like shininess, etc.
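
Just to illustrate the packing idea (a made-up layout and struct, not the four targets described above):

// Sketch: two 8-bit RGBA targets instead of four fat ones.
struct GBufferOutput
{
    float4 albedoSpec : SV_Target0;   // rgb = diffuse albedo, a = specular intensity
    float4 normalMisc : SV_Target1;   // rg = compressed normal, b = shininess, a = free
};

GBufferOutput PackGBuffer(float3 albedo, float specIntensity,
                          float2 encodedNormal, float shininess)
{
    GBufferOutput o;
    o.albedoSpec = float4(albedo, specIntensity);
    o.normalMisc = float4(encodedNormal, shininess, 0.0f);
    return o;
}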



#4 TheChubu   Crossbones+   -  Reputation: 4573


Posted 05 July 2014 - 11:13 AM


Now each normal component fits into an 8-bit unsigned range, and with the other compression trick I mentioned (again, you'd better read that up yourself) you have two additional 8-bit channels left unused, where you can store other stuff like shininess, etc.

Hm, wouldn't that increase the visual artifacts quite a bit?

 

I've seen Killzone 2 slides where they say they use two 16-bit floats to store X and Y, then reconstruct Z, and Battlefield 3 slides where they say they use RGB8 and just store XYZ in it. I've never seen a game use both methods at the same time.
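
For reference, the Z reconstruction in the first approach boils down to roughly this (a sketch; it only works when the sign of Z is known, e.g. for view-space normals of front-facing geometry):

// Rebuild Z from a stored X/Y pair (Killzone-style two-channel normals).
float3 UnpackNormalXY(float2 nxy)
{
    float z = sqrt(saturate(1.0f - dot(nxy, nxy)));
    return float3(nxy, z);
}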


Edited by TheChubu, 05 July 2014 - 11:14 AM.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#5 Juliean   GDNet+   -  Reputation: 2693


Posted 05 July 2014 - 11:28 AM


Hm, wouldn't that increase the visual artifacts quite a bit?

 

I'm not entirely sure; I'm only using the RGB8 method for the time being. I remembered reading about the second compression method on some blog, and I was pretty sure they only used 8 bits, but I may well be mistaken. Okay, I just checked that blog - it refers to the Killzone 2 slides you mentioned, so it probably would produce really heavy artifacts. It would be interesting to see how noticeable they actually are... I might try it out if I find the time.

 

EDIT: I find it interesting, though - how is compressing the normals to X and Y going to help if you end up using even more bits (16*2 = 32 vs. 8*3 = 24, which even leaves you 8 bits to put something else in)?


Edited by Juliean, 05 July 2014 - 12:05 PM.


#6 phil_t   Crossbones+   -  Reputation: 3947


Posted 05 July 2014 - 12:34 PM

For normals, I'm using the spheremap transform described here: http://aras-p.info/texts/CompactNormalStorage.html to store the normals in two 8 bit channels. There are definitely some artifacts, but they're not too bad:

 

SphermapTx.png
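
For anyone reading along, the encode/decode pair from that article comes down to roughly this (my own transcription for view-space normals, so double-check against the link):

// Spheremap-transform packing of a unit view-space normal into two channels.
float2 EncodeNormalSpheremap(float3 n)
{
    return n.xy / sqrt(n.z * 8.0f + 8.0f) + 0.5f;
}

float3 DecodeNormalSpheremap(float2 enc)
{
    float2 fenc = enc * 4.0f - 2.0f;
    float  f    = dot(fenc, fenc);
    float  g    = sqrt(1.0f - f / 4.0f);
    return float3(fenc * g, 1.0f - f * 0.5f);
}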

 

 



#7 SIC Games   Members   -  Reputation: 617


Posted 05 July 2014 - 03:43 PM

For normals, I'm using the spheremap transform described here: http://aras-p.info/texts/CompactNormalStorage.html to store the normals in two 8 bit channels. There are definitely some artifacts, but they're not too bad:

 

SphermapTx.png

It's not too bad - the artifacts are barely visible. To answer some people's questions: the render targets are stored as R32G32B32A32_FLOAT and are multisampled, as is the depth stencil view.

 

Here's just a quick snippet, nothing too crazy. The scene gets rendered to texture using deferred.hlsl and the quad gets rendered with deferredlighting.hlsl, following what I understood from Rastertek.com. Some may not like that website, and I could agree, but once I master something I can work on optimizing.

 

So I'm guessing specular mapping works quite differently than in a forward rendering system?




#8 SIC Games   Members   -  Reputation: 617


Posted 05 July 2014 - 03:54 PM

So, what if I store this in deferred.hlsl as part of the output data - would I be able to do this?

float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixel shader that outputs to the render targets.

output.cameradata = input.cameraVector;


That's the only issue I see that's messing up the specular mapping for me at this point. If I could send the camera data to where I do the specular lighting calculation, the issue would be resolved, no?




#9 phil_t   Crossbones+   -  Reputation: 3947


Posted 05 July 2014 - 04:10 PM

You need a way to get the world position (or possibly the view-space position, depending on how you're calculating lighting) of a pixel in the lighting phase. This isn't just necessary for the specular lighting components, but also for point lights (even without specular) and other things. It's a pretty fundamental thing to have working for deferred rendering.

 

The easy/wasteful way is just to store the position in the gbuffer. The way it's usually done is to reconstruct it just from the depth information. Do a search around and you'll find many ways to do this.
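
For the record, one common variant is to un-project the depth value with the inverse view-projection matrix - a sketch with placeholder names (DepthTex, InvViewProjection), assuming a vector-on-the-left mul convention; adjust if your matrices are set up the other way:

// Sketch: reconstruct the world-space position of a pixel from hardware depth.
Texture2D    DepthTex          : register(t3);   // placeholder register
SamplerState PointSampler      : register(s0);
float4x4     InvViewProjection;                  // inverse of (view * projection)

float3 WorldPosFromDepth(float2 texcoord)
{
    float depth = DepthTex.Sample(PointSampler, texcoord).r;   // 0..1 hardware depth

    // Texcoord (0..1, y down) -> normalized device coordinates (-1..1, y up).
    float4 ndc = float4(texcoord.x * 2.0f - 1.0f,
                        (1.0f - texcoord.y) * 2.0f - 1.0f,
                        depth,
                        1.0f);

    float4 worldPos = mul(ndc, InvViewProjection);
    return worldPos.xyz / worldPos.w;   // undo the perspective divide
}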

 

 


float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixel shader that outputs to the render targets.

output.cameradata = input.cameraVector;

 

I don't really understand this. You're subtracting a position (worldPosition) from a vector (cameraDirection). That doesn't really make sense - what are you trying to achieve? 



#10 SIC Games   Members   -  Reputation: 617


Posted 05 July 2014 - 04:12 PM

So, what if I store this in deferred.hlsl as part of the output data - would I be able to do this?

float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixel shader that outputs to the render targets.

output.cameradata = input.cameraVector;


That's the only issue I see that's messing up the specular mapping for me at this point. If I could send the camera data to where I do the specular lighting calculation, the issue would be resolved, no?

 

It works all right, but I had to create an additional render target to store the camera position for the lighting calculation. Just how much data can I stuff inside the G-Buffer anyway?




#11 SIC Games   Members   -  Reputation: 617


Posted 05 July 2014 - 04:15 PM

You need a way to get the world position (or possibly the view-space position, depending on how you're calculating lighting) of a pixel in the lighting phase. This isn't just necessary for the specular lighting components, but also for point lights (even without specular) and other things. It's a pretty fundamental thing to have working for deferred rendering.

 

The easy/wasteful way is just to store the position in the gbuffer. The way it's usually done is to reconstruct it just from the depth information. Do a search around and you'll find many ways to do this.

 

 

 

 


float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixel shader that outputs to the render targets.

output.cameradata = input.cameraVector;
 

 

I don't really understand this. You're subtracting a position (worldPosition) from a vector (cameraDirection). That doesn't really make sense - what are you trying to achieve? 

Inside the specular mapping tutorial on Rastertek I just took the part where he calculates the camera position in world space relative to the input vertex of the model. And yes, I store the position of the camera in world space in the gbuffer.




#12 phil_t   Crossbones+   -  Reputation: 3947


Posted 06 July 2014 - 02:01 AM


Inside the specular mapping tutorial on Rastertek I just took the part where he calculates the camera position in world space relative to the input vertex of the model.

 

Are you talking about this tutorial? I don't see that anywhere. Maybe your terminology is wrong... A world-space position is relative to (0, 0, 0); it doesn't make sense for it to be relative to an "input vertex".

 


And yes, I store the position of the camera in world space in the gbuffer.

 

Why would you do that? The position of the camera is constant for the whole scene. It makes no sense for that information to be in the gbuffer.



#13 Juliean   GDNet+   -  Reputation: 2693


Posted 06 July 2014 - 06:47 AM


Inside the specular mapping tutorial on Rastertek I just took the part where he calculates the camera position in world space relative to the input vertex of the model.

 

Since you have a deferred renderer this part is actually deferred to the lighting shader. For beginners, simply output

float4 worldPosition = mul(modelMatrix, input.position);

output.worldPosition = worldPosition;

to the gbuffer you currently write your camera-vector in. Then, inside the deferred lighting shader, you can read out that world-position from the gbuffer and do exactly that calculation

float4 cameraposition = normalize(cameraDirection - worldPosition);

there in the pixelshader.
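
Put together, the lighting-shader side might look roughly like this (a sketch with placeholder names; note that the constant really holds the camera position, so the result is the surface-to-camera view vector):

// Sketch: read the world position back from the G-buffer and build the view vector.
Texture2D    WorldPosTex  : register(t2);   // placeholder: the RT holding world position
SamplerState PointSampler : register(s0);
float3       cameraPositionWS;              // per-frame constant (assumed name)

float3 ComputeViewVector(float2 texcoord)
{
    float3 worldPosition = WorldPosTex.Sample(PointSampler, texcoord).xyz;
    return normalize(cameraPositionWS - worldPosition);   // from surface towards camera
}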



#14 SIC Games   Members   -  Reputation: 617


Posted 06 July 2014 - 08:56 AM

 


Inside the specular mapping tutorial on Rastertek I just took the part where he calculates the camera position in world space relative to the input vertex of the model.

 

Since you have a deferred renderer this part is actually deferred to the lighting shader. For beginners, simply output

float4 worldPosition = mul(modelMatrix, input.position);

output.worldPosition = worldPosition;

to the gbuffer you currently write your camera-vector in. Then, inside the deferred lighting shader, you can read out that world-position from the gbuffer and do exactly that calculation

float4 cameraposition = normalize(cameraDirection - worldPosition);

there in the pixelshader.

 

 

Wow - that's working ten times better than what I had last night! I had been outputting the camera position to the gbuffer instead of the world position. I signed in this morning and tried what you were talking about, and for once it's starting to have that specular feel to it! I was comparing my kitchen floor, moving around, against other YouTube videos. So bringing the world position over to the gbuffer was the right choice.

 

So, a couple of things I learned while working with deferred rendering: any important per-pixel data, such as the world position, should be brought through the G-Buffer.

With the code you specified, the reason it works a whole lot better is that the world position comes from multiplying the vertex position by the model's world matrix. Then, when it's read back from the G-Buffer, the lighting shader normalizes cameraDirection - worldPosition.

 

If my camera is at (8, 10, -15) and the G-Buffer world position is (0, 0, 0), then (8, 10, -15) - (0, 0, 0) gives (8, 10, -15); then, for the specular term, the dot product of the reflection vector and the camera view vector gives me the correct specular highlight.
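
In shader terms that is just the standard Phong specular term - a sketch with made-up parameter names, where the normal and world position would come from the G-Buffer:

// Sketch: Phong specular evaluated in the deferred lighting pass.
float3 PhongSpecular(float3 worldNormal,        // sampled from the G-buffer
                     float3 worldPosition,      // sampled or reconstructed from depth
                     float3 lightDirection,     // from the light towards the surface
                     float3 lightColor,
                     float3 cameraPositionWS,
                     float  specularPower,
                     float  specularIntensity)  // e.g. from the specular map
{
    float3 viewVector = normalize(cameraPositionWS - worldPosition);
    float3 reflection = reflect(lightDirection, worldNormal);
    float  specular   = pow(saturate(dot(reflection, viewVector)), specularPower);
    return lightColor * specular * specularIntensity;
}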

 

Thanks, Juliean, for clearing things up. I'm glad you understood what I was getting at. Thanks!




#15 Juliean   GDNet+   -  Reputation: 2693


Posted 06 July 2014 - 09:23 AM


Thanks, Juliean, for clearing things up. I'm glad you understood what I was getting at. Thanks!

 

Glad it works now. Still, keep in mind the depth-reconstruction technique both phil_t and I mentioned, as it might be essential for good performance in the future ;)



#16 backstep   Members   -  Reputation: 336


Posted 06 July 2014 - 04:44 PM

Just noticed this thread, and if it'd be any use to you, about a year back I adapted one of the rastertek DX11 tutorials to do deferred shading (with specular) alongside the original forward shaded specular from the tutorial.  It supports deferred directional and point lights, and uses the depth buffer to reconstruct world position.  The gbuffer is only two textures (albedo and normals).

 

Anyhow, if you'd find it useful (it follows the same coding style as the rastertek code, so it's easy to follow), let me know and I'll put it up on Dropbox for you.

 

It does need VS2012 or 2013 though, since it uses the windows SDK libraries that come with the newer VS, instead of the older directx SDK libraries like the rastertek tutorials (e.g. uses DirectXMath instead of D3DX10Math).



#17 SIC Games   Members   -  Reputation: 617


Posted 07 July 2014 - 01:17 PM

One more question before this thread gets wrapped up. When exporting normals to the G-Buffer render targets, can I just export them as-is, or do I have to calculate the normal map in tangent space and then export that to the G-Buffer? Everything was fine until I wanted to make sure I had both the normal map and the specular map running, and then I ran into a weird abnormality, shown in the attached photo. The attached photo is the output of the normals that were calculated in tangent space before heading out to the G-Buffer. I determined the issue was the calculation transforming the normal map into tangent space.

 

Inside deferred.hlsl, which sends everything to the G-Buffer, I have this:


// Sample the tangent-space normal map and expand it from [0, 1] to [-1, 1].
float4 normalmap = textureSampler[1].Sample(sampleState, input.texcoord);
normalmap = (2.0f * normalmap) - 1.0f;

// Gram-Schmidt orthogonalize the tangent against the normal, then build the bitangent.
input.tangent = normalize(input.tangent - dot(input.tangent, input.normal) * input.normal);
float3 biTangent = cross(input.normal, input.tangent);

float3x3 texspace = float3x3(input.tangent, biTangent, input.normal);

// Transform the sampled normal out of tangent space.
input.normal = normalize(mul(normalmap.xyz, texspace));

//output.normal = float4(0.5f * input.normal + 0.5f, 1.0f); //-- Temporarily commented out for debugging the normal mapping issue.
output.normal = textureSampler[1].Sample(sampleState, input.texcoord);

Another screenshot shows the result of just outputting the normal map without transforming it to tangent space. However, what are those funky-looking dark spots and shades?

 

So, I was wondering: do I need to do the tangent space calculation or not when it comes to deferred rendering?

 

 

Attached Thumbnails

  • MeshNormalsDeferredAreInccorect.PNG
  • DeferredMeshNormalMapPossiblyCorrect.PNG



#18 phil_t   Crossbones+   -  Reputation: 3947


Posted 07 July 2014 - 01:42 PM

Putting tangent-space normals into the g-buffer doesn't make sense - picture two triangle faces, facing different directions but having the same normal map. If you put the tangent-space normals into the g-buffer, they would have identical normals and thus would be lit identically by your lighting pass (despite facing different directions). You've essentially lost information because you have ignored the surface normal of the two triangles.

 

All of the normals in your g-buffer need to be in the same "space". 


Edited by phil_t, 07 July 2014 - 01:44 PM.


#19 SIC Games   Members   -  Reputation: 617


Posted 07 July 2014 - 02:08 PM

Okay, that makes a lot more sense, and explains why there was a weird abnormality in picture one. So what you're saying, Phil, is to just put the normal map into the G-Buffer like a diffuse texture or any other texture output to the G-Buffer?




#20 phil_t   Crossbones+   -  Reputation: 3947


Posted 07 July 2014 - 02:30 PM


So what you're saying, Phil, is to just put the normal map into the G-Buffer like a diffuse texture or any other texture output to the G-Buffer?

 

No, that's the opposite of what I'm saying. Presumably your normal maps are in tangent-space. You need to convert the normal map normal to world space before you store it in the g-buffer (using the matrix you construct from the surface's tangent, normal and binormal). Then your lighting pass performs the lighting in world space.

 

(Alternately, you could store them in viewspace... that's what I do, since I'm only storing two components of the normal, and reconstructing the third. If you're storing all 3 normal components in the g-buffer, then it's simplest to use worldspace).

 

If you're getting sharp discontinuities between your triangles, then your triangles must have incorrect normals (or binormal or tangent vectors), and/or you're constructing the "texspace" matrix incorrectly.

 

My code to get the world space normal to store in the g-buffer looks something like this (ps_3_0):

	float3 normalMap =  tex2D(NormalSampler, input.TexCoords);
	float3x3 TangentToWorld = float3x3(input.Tangent, input.Binormal, input.Normal);
	float3 worldNormal = mul(normalMap, TangentToWorld);





