Specular Mapping in Deferred Rendering


Hey everyone, I haven't posted in a long time, but I was hoping for some insight on deferred rendering. My understanding of deferred shading, at least, is that the diffuse texture, normal map, specular map, and depth are written to the G-buffer. I have four render targets, and then a 2D quad for the screen rendering, like in post-processing. The screen quad is hooked up to do the shading, and the rendering to the four render targets uses deferred.hlsl.

The screen quad uses the shader deferredlighting.hlsl. I ran into a problem when trying to do the specular mapping calculation inside deferredlighting.hlsl, so I did the specular calculations inside deferred.hlsl instead, where I render the scene out to the four render targets. Isn't this defeating the whole purpose of deferred shading? If I'm drawing objects to the render targets and doing the specular lighting calculations at the same time, won't it hurt performance?

I thought you send the diffuse, normal, specular, and depth from the render targets through to deferredlighting.hlsl and do the lighting calculation there, not while rendering the scene to the render targets. I posted an image that shows the result, but there has to be a way to do the calculation inside deferredlighting.hlsl.
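In other words, I pictured deferredlighting.hlsl looking something like this (a rough sketch on my part; the texture, sampler, and constant buffer names are all made up, and the world position bit is exactly what I'm missing):

// deferredlighting.hlsl (sketch) - everything the lighting needs comes
// from the G-buffer or a constant buffer; no per-object data here.
Texture2D DiffuseTex  : register(t0);
Texture2D NormalTex   : register(t1);
Texture2D SpecularTex : register(t2);
SamplerState PointSampler : register(s0);

cbuffer LightBuffer : register(b0)
{
    float3 lightDirection;  // world-space, normalized
    float  shininess;
    float3 cameraPosition;  // world-space camera position
    float  padding;
};

struct PSIn
{
    float4 position : SV_Position;
    float2 tex      : TEXCOORD0;
};

float4 PS(PSIn input) : SV_Target
{
    float3 albedo   = DiffuseTex.Sample(PointSampler, input.tex).rgb;
    float3 normal   = normalize(NormalTex.Sample(PointSampler, input.tex).xyz * 2.0f - 1.0f);
    float  specMask = SpecularTex.Sample(PointSampler, input.tex).r;

    // Diffuse term.
    float ndotl = saturate(dot(normal, -lightDirection));

    // Specular term (Blinn-Phong). The world position would have to be
    // reconstructed per pixel from the depth target - the missing piece.
    float3 worldPosition = float3(0.0f, 0.0f, 0.0f); // placeholder
    float3 viewDir  = normalize(cameraPosition - worldPosition);
    float3 halfVec  = normalize(viewDir - lightDirection);
    float  specTerm = pow(saturate(dot(normal, halfVec)), shininess) * specMask;

    return float4(albedo * ndotl + specTerm, 1.0f);
}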

The problem I was having is that the screen quad only has four vertices and a world position of its own. If I tried to calculate the camera vector from the screen quad's vertex world positions, it wouldn't make sense to do that, right? I don't want to bog you guys down with code, but if you'd really like some snippets to see where I can improve, then I'll share.

What I did notice is that the forward rendering technique is much different from deferred rendering, so I believe that's what's tripping me up big time. My older engine used forward rendering until I decided to look more in depth at deferred rendering.

I watched other videos on deferred rendering and compared them with mine, and I don't think I'm hitting the nail on the head! For instance, with normal mapping, the tangent space has to be calculated while rendering the scene to the render targets. That's not so bad, because no light calculations are needed until the normal map reaches deferredlighting.hlsl. It's just this specular mapping that's getting me mixed up.
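For the normal mapping, what I mean is that the G-buffer pass does roughly this (simplified sketch; the input names are made up, and the TBN vectors come from the vertex data):

// Inside the deferred.hlsl pixel shader: move the tangent-space normal
// map sample into world space before writing it out, so the lighting
// pass never needs the TBN matrix.
float3 normalTS = NormalMap.Sample(LinearSampler, input.tex).xyz * 2.0f - 1.0f;
float3x3 tbn    = float3x3(normalize(input.tangent),
                           normalize(input.binormal),
                           normalize(input.normal));
float3 normalWS = normalize(mul(normalTS, tbn));         // tangent -> world
output.normal   = float4(normalWS * 0.5f + 0.5f, 1.0f);  // pack to [0,1]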

If you look at the image, the specular highlight is calculated during the render-scene-to-texture pass, not on the screen quad.

Game Engine's WIP Videos - http://www.youtube.com/sicgames88
SIC Games @ GitHub - https://github.com/SICGames?tab=repositories
Simple D2D1 Font Wrapper for D3D11 - https://github.com/SICGames/D2DFontX

This is a screenshot from calculating the specular map lighting inside the screen 2D quad.

Game Engine's WIP Videos - http://www.youtube.com/sicgames88
SIC Games @ GitHub - https://github.com/SICGames?tab=repositories
Simple D2D1 Font Wrapper for D3D11 - https://github.com/SICGames/D2DFontX

You have to store some form of position data alongside the other data for deferred rendering. You can directly store position values in a 32-bit-per-component render target and sample from it to get the position. Since this wastes resources as well as bandwidth, though, you are better off storing the depth in a 32-bit single-component render target and using that value to reconstruct the position. This is a little complicated and I haven't done it myself yet, but from what I know you store a ray per screen corner that runs along the view frustum, and use the texel position to interpolate between those rays to get the XY position. You then use the depth (in the 0.0 to 1.0 screen-space range) to reconstruct the Z position. I suggest you read up on a tutorial on how to do this; I doubt I'm able to explain it properly :D
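To make that half-explanation a little more concrete, here's a rough, untested sketch of the frustum-ray idea (FrustumRays, DepthTex, and the assumption of a linear 0-to-1 depth value are all mine):

// Each quad corner gets the view-space ray from the eye through the
// matching far-plane corner; the rasterizer interpolates it per pixel.
float3 FrustumRays[4]; // filled in from the CPU once per frame

struct VSIn  { float3 position : POSITION; float2 tex : TEXCOORD0; uint corner : TEXCOORD1; };
struct VSOut { float4 position : SV_Position; float2 tex : TEXCOORD0; float3 viewRay : TEXCOORD1; };

VSOut VS(VSIn input)
{
    VSOut output;
    output.position = float4(input.position.xy, 0.0f, 1.0f);
    output.tex      = input.tex;
    output.viewRay  = FrustumRays[input.corner];
    return output;
}

// In the lighting pixel shader: scale the interpolated ray by the
// linear depth to get the view-space position at this pixel.
float3 ViewSpacePosition(VSOut input, Texture2D DepthTex, SamplerState PointSampler)
{
    float linearDepth = DepthTex.Sample(PointSampler, input.tex).r; // 0..1, linear
    return input.viewRay * linearDepth;
}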

Also, since I already talked about wasting resources: you should further try to stuff things into as few render targets as possible. Put the specular component in the alpha channel of the diffuse RT. You can then compress the normals into two channels instead of three; also, I hope you are already storing normals in an 8-bit-per-component format using this conversion trick:


// compress the normal from [-1,1] to [0,1] for output in the G-buffer pass
output.vNormal = vNormal * 0.5f + 0.5f;

// decompress back to [-1,1] when you read the normal in the lighting pass
float3 vNormal = NormalTex.Sample(LinearSampler, input.vTex0).xyz * 2.0f - 1.0f;

Now each normal fits into an 8-bit unsigned range, and using that other compression trick I mentioned (again, you'd better read up on this yourself), you have two additional 8-bit channels left unused, where you can store other stuff like shininess, etc...
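As a rough illustration of what such a packed layout could look like (entirely my own sketch; the struct, semantics, and the EncodeNormal helper are hypothetical):

// Hypothetical packed G-buffer: two 8-bit RGBA targets instead of four
// wide ones. EncodeNormal stands in for the two-channel normal trick.
struct GBufferOut
{
    float4 target0 : SV_Target0; // rgb = diffuse albedo, a = specular intensity
    float4 target1 : SV_Target1; // rg = compressed normal, b = shininess, a = free
};

GBufferOut PackGBuffer(float3 albedo, float specIntensity,
                       float3 normalWS, float shininess)
{
    GBufferOut output;
    output.target0 = float4(albedo, specIntensity);
    output.target1 = float4(EncodeNormal(normalWS), shininess, 0.0f);
    return output;
}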


Now each normal fits into an 8-bit unsigned range, and using that other compression trick I mentioned (again, you'd better read up on this yourself), you have two additional 8-bit channels left unused, where you can store other stuff like shininess, etc...

Hm, wouldn't that increase the visual artifacts quite a bit?

I've seen Killzone 2 slides where they say they use two 16-bit floats to store X and Y, then reconstruct Z. Or Battlefield 3 slides where they say they use RGB8 and just store XYZ in it. I never saw anything from a game using both methods at the same time.
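For reference, reconstructing Z from a stored X and Y just relies on the normal being unit length. A quick sketch (assuming the sign of Z is known up front, e.g. view-space normals facing the camera):

// Rebuild Z of a unit-length normal from its stored X and Y channels.
float3 DecodeNormalXY(float2 stored)
{
    float2 xy = stored * 2.0f - 1.0f;               // back to [-1,1]
    float  z  = sqrt(saturate(1.0f - dot(xy, xy))); // since |n| == 1
    return float3(xy, z);
}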

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


Hm, wouldn't that increase the visual artifacts quite a bit?

I'm not entirely sure; I'm only using the RGB8 method at the moment. I remembered reading something about the second compression method on some blog, and I'm pretty sure they only used 8 bits, but I might well be mistaken. Okay, I just checked that blog: it refers to the Killzone 2 slides you mentioned, so it might well produce really heavy artifacts. It would be interesting to see how noticeable they actually are... might try it out if I find the time.

EDIT: I find it interesting, though: how is compressing the normals to X and Y going to help if you end up using even more bits (16*2 = 32 versus 8*3 = 24, which leaves you with 8 bits to put something else in), mind you?

For normals, I'm using the spheremap transform described here: http://aras-p.info/texts/CompactNormalStorage.html to store the normals in two 8 bit channels. There are definitely some artifacts, but they're not too bad:

[attachment=22512:SphermapTx.png]
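For anyone curious, the spheremap transform from that page looks roughly like this (adapted from the article; it encodes a unit view-space normal into two [0,1] channels):

// Spheremap transform (method #4 on aras-p's Compact Normal Storage page).
float2 EncodeNormal(float3 n)
{
    float p = sqrt(n.z * 8.0f + 8.0f);
    return n.xy / p + 0.5f;
}

float3 DecodeNormal(float2 enc)
{
    float2 fenc = enc * 4.0f - 2.0f;
    float  f    = dot(fenc, fenc);
    float  g    = sqrt(1.0f - f / 4.0f);
    return float3(fenc * g, 1.0f - f / 2.0f);
}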

For normals, I'm using the spheremap transform described here: http://aras-p.info/texts/CompactNormalStorage.html to store the normals in two 8 bit channels. There are definitely some artifacts, but they're not too bad:

[attachment=22512:SphermapTx.png]

It's not too bad; the artifacts are barely visible. To answer some people's question, the render targets are stored as R32G32B32A32_FLOAT and are multisampled, as is the depth stencil view.

Here's the quick rundown, nothing too crazy: the scene gets rendered to texture using Deferred.hlsl, and the quad gets rendered with deferredlighting.hlsl, from what I understood from Rastertek.com. Some may not like that website, and I could agree, but once I've mastered something, then I can work on optimizing.

So, I'm guessing the specular map is handled way differently than in a forward rendering system?

Game Engine's WIP Videos - http://www.youtube.com/sicgames88
SIC Games @ GitHub - https://github.com/SICGames?tab=repositories
Simple D2D1 Font Wrapper for D3D11 - https://github.com/SICGames/D2DFontX

So, what if I store this in deferred.hlsl and pass it to the output as data? Would I be able to do this?


float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixel shader that outputs to the render targets.

output.cameradata = input.cameraVector;


That's the only issue I see that's messing up the specular mapping for me at this point. If I could send the camera data to where I do the specular lighting calculation, then the issue would be resolved, no?

Game Engine's WIP Videos - http://www.youtube.com/sicgames88
SIC Games @ GitHub - https://github.com/SICGames?tab=repositories
Simple D2D1 Font Wrapper for D3D11 - https://github.com/SICGames/D2DFontX

You need a way to get the world position (or possibly the view-space position, depending on how you're calculating lighting) of a pixel in the lighting phase. This isn't just necessary for the specular lighting components, but also for point lights (even without specular) and other things. It's a pretty fundamental thing to have working for deferred rendering.

The easy/wasteful way is just to store the position in the gbuffer. The way it's usually done is to reconstruct it just from the depth information. Do a search around and you'll find many ways to do this.
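For example, one straightforward (if not the fastest) way is to unproject the depth with the inverse view-projection matrix. A sketch, assuming a D3D-style 0-to-1 depth buffer and an invViewProj matrix you upload yourself:

// Reconstruct the world-space position of a pixel from the depth buffer.
// uv is the pixel's screen-space texcoord, depth the raw (non-linear)
// value sampled from the depth target.
float3 ReconstructWorldPos(float2 uv, float depth, float4x4 invViewProj)
{
    // Back to clip space: x,y in [-1,1] (y flipped for D3D), z in [0,1].
    float4 clipPos = float4(uv.x * 2.0f - 1.0f,
                            (1.0f - uv.y) * 2.0f - 1.0f,
                            depth,
                            1.0f);
    float4 worldPos = mul(clipPos, invViewProj); // row-vector convention
    return worldPos.xyz / worldPos.w;            // perspective divide
}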


float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixel shader that outputs to the render targets.

output.cameradata = input.cameraVector;

I don't really understand this. You're subtracting a position (worldPosition) from a vector (cameraDirection). That doesn't really make sense - what are you trying to achieve?
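If the goal was the view vector for specular, it presumably needs to be the camera's position minus the surface position (my guess at the intent; cameraPosition here would be a world-space position from a constant buffer, not a direction):

// View vector: from the surface point toward the camera.
float3 viewDir = normalize(cameraPosition.xyz - worldPosition.xyz);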

So, what if I store this in deferred.hlsl and pass it to the output as data? Would I be able to do this?


float4 worldPosition = mul(modelMatrix, input.position);
float4 cameraposition = normalize(cameraDirection - worldPosition);
output.cameraVector = cameraposition;

//-- the pixel shader that outputs to the render targets.

output.cameradata = input.cameraVector;


That's the only issue I see that's messing up the specular mapping for me at this point. If I could send the camera data to where I do the specular lighting calculation, then the issue would be resolved, no?

It works alright, but I had to create an additional render target to store the camera position for the lighting calculation. Just how much data can I stuff inside the G-buffer, anyway?
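For what it's worth, since the camera position is the same for every pixel, it could also be fed to deferredlighting.hlsl through a constant buffer instead of burning a whole render target on it. A sketch (the cbuffer name and layout are made up):

// Per-frame constants for the lighting pass - no G-buffer space needed,
// because the camera position doesn't vary per pixel.
cbuffer PerFrame : register(b0)
{
    float3 cameraPosition; // world-space camera position
    float  padding;        // keep 16-byte alignment
};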

Game Engine's WIP Videos - http://www.youtube.com/sicgames88
SIC Games @ GitHub - https://github.com/SICGames?tab=repositories
Simple D2D1 Font Wrapper for D3D11 - https://github.com/SICGames/D2DFontX
