DirectX11 and Deferred rendering

I am still rather stuck on this issue, I have tried debugging it and I can not find any issues in PIX or through the Watch window. Does anyone have any suggestions on what the issue may be?
Alright, as I said before, I really believe this has something to do with my blend state in combination with the point light. Here is why...

For rendering a directional light, this will "work":

Set render target
Turn on Blending
Render Directional light to light map
Turn off Blending
(Although blending does not even need to be turned on here since it is only one light).

For a point light, however, this gives me a light map like the one posted above. To generate the light map, I do the following for the point light only:

Set render target
Render a point light to the light map
(Notice there is nothing dealing with blending).

However, when I turn on blending like this, my final combined image is just all white:

Set render target
Turn on Blending
Render a point light to the light map
Turn off Blending

As a test, I tried rendering a directional light, then rendering a point light on top of it, and I was able to generate a light map that looks "correct" (a code sketch of this sequence follows the steps below).

Set render target
Turn on Blending
Render Directional light to light map
Render Point Light
Turn off Blending
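
In code, the working sequence is roughly this (a minimal sketch; the render-target and blend-state variable names are placeholders for my actual members):

// Sketch of the light accumulation pass (variable names are placeholders).
float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
m_pD3D11DeviceContext->OMSetRenderTargets(1, &m_pLightMapRTV, NULL);
m_pD3D11DeviceContext->OMSetBlendState(m_pAdditiveBlendState, blendFactor, 0xFFFFFFFF);
RenderDirectionalLight();   // first light writes the base values
RenderPointLight();         // second light is added on top (One/One blending)
m_pD3D11DeviceContext->OMSetBlendState(NULL, blendFactor, 0xFFFFFFFF);   // back to default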

I use the same blend states I posted two posts above, except for turning off blending, which I now do like so:

void TurnBlendingOff()
{
    // OMSetBlendState expects an array of four blend-factor floats.
    float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    m_pD3D11DeviceContext->OMSetBlendState(NULL, blendFactor, 0xFFFFFFFF);
}
Your blend states seem to be correct though.
I use this state to render my lights in my deferred renderer:

BlendStateDescription blendDesc = new BlendStateDescription();
blendDesc.AlphaToCoverageEnable = false;
blendDesc.IndependentBlendEnable = false;
blendDesc.RenderTargets[0].BlendEnable = true;
blendDesc.RenderTargets[0].BlendOperation = BlendOperation.Add;
blendDesc.RenderTargets[0].BlendOperationAlpha = BlendOperation.Add;
blendDesc.RenderTargets[0].SourceBlend = BlendOption.One;
blendDesc.RenderTargets[0].SourceBlendAlpha = BlendOption.One;
blendDesc.RenderTargets[0].DestinationBlend = BlendOption.One;
blendDesc.RenderTargets[0].DestinationBlendAlpha = BlendOption.One;
blendDesc.RenderTargets[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
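
(For the native C++ API, the same state translates to roughly the sketch below; this is an untested equivalent of the SlimDX description above, and device and m_pAdditiveBlendState are assumed names, not taken from this thread.)

// Additive (One/One) blend state, mirroring the SlimDX description above.
ID3D11BlendState* m_pAdditiveBlendState = NULL;
D3D11_BLEND_DESC blendDesc;
ZeroMemory(&blendDesc, sizeof(blendDesc));
blendDesc.AlphaToCoverageEnable = FALSE;
blendDesc.IndependentBlendEnable = FALSE;
blendDesc.RenderTarget[0].BlendEnable = TRUE;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
device->CreateBlendState(&blendDesc, &m_pAdditiveBlendState);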


What are you clearing your light buffer to?
Check if it's properly cleared to (0, 0, 0, 0) and not something like (0, 0, 0, 1)
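
Something like this, with m_pLightMapRTV standing in as a placeholder for your light buffer's render target view:

// Clear the light accumulation buffer to fully transparent black.
float clearColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
m_pD3D11DeviceContext->ClearRenderTargetView(m_pLightMapRTV, clearColor);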
Quote:Original post by Hyunkel
What are you clearing your light buffer to?
Check if it's properly cleared to (0, 0, 0, 0) and not something like (0, 0, 0, 1)


That was exactly the problem, I was clearing it to (0, 0, 0, 1).

The last thing that truly concerns me is the way my point light ends up looking.

When I have my ground plane and light at the same Y value (0.0), I will get something that looks like only the left side of the point light is being generated.



If I keep the ground plane at 0.0 but raise the point light's Y value (to 0.2, for example), I get a strange sphere artifact that moves around on the light as I move my camera.



Any clue what could be causing that?
Glad to hear you got your light buffer working. :)

I can't find any obvious mistakes.
It's hard to tell what could be causing this.
Since you say the artifact is affected by the camera, I suggest verifying that your invertViewProjection matrix is correct.

Quote:Original post by Hyunkel
Since you say the artifact is affected by the camera, I suggest verifying that your invertViewProjection matrix is correct.


What is the correct way to compute the invertViewProjection matrix?

While I know that a matrix must be transposed before being sent to a shader in DirectX11, in what order should I be inverting and transposing? Here is how I currently do it; should it be the other way around? (viewProjMatrix is NOT already transposed; it is the view-projection matrix straight from the camera.)

D3DXMATRIX* viewProjMatrix = scene->GetCamera()->GetViewProjMatrix();
D3DXMATRIX invertedViewProjMatrix;
D3DXMatrixInverse(&invertedViewProjMatrix, NULL, viewProjMatrix);
D3DXMatrixTranspose(&invertedViewProjMatrix, &invertedViewProjMatrix);
This should be correct. (The order actually doesn't matter here: inversion and transposition commute, so inverting then transposing gives the same matrix as transposing then inverting.)

I don't really know where else to look though. :/
You could work backwards, disabling things step by step (specular lighting, attenuation, etc.), or test whether very basic things work, such as drawing the bounding volume to see if it ends up in the correct position.

I'd compare it to my code, but I do all my calculations in view space, and my depth texture is linear. :(
Quote:Original post by Hyunkel
I'd compare it to my code, but I do all my calculations in view space, and my depth texture is linear. :(


Interesting, why do you set it up like that?
Mostly because it's cheaper to reconstruct the position that way, and it's not very hard to implement either.

If you reconstruct the world-space position from a depth buffer, you need a full matrix multiply by the inverseViewProjection (plus a divide by w).
Reconstructing the view-space position from a linear depth texture is cheaper:

float3 VSPositionFromDepth(float depth, float3 BVPositionVS)
{
    // Calculate the frustum ray using the view-space position.
    // farPlane is the distance to the camera's far clipping plane.
    // Negating the Z component is only necessary for right-handed coordinate systems.
    float3 FrustumRayVS = BVPositionVS.xyz * (farPlane / -BVPositionVS.z);
    return depth * FrustumRayVS;
}


You can find more information on this on MJP's blog:
http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/
http://mynameismjp.wordpress.com/2009/05/05/reconstructing-position-from-depth-continued/
http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/


The difference isn't phenomenal, but remember that this matrix multiply happens for every lit pixel, every frame.
I also have a little problem with deferred shading.

How do you convert, for example, m_ColorTexture to m_ColorTextureShaderResourceView?
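
(For what it's worth, in the native API the usual pattern is a single CreateShaderResourceView call. A minimal sketch, which assumes the texture was created with the D3D11_BIND_SHADER_RESOURCE flag and that device is your ID3D11Device:)

// Create a shader resource view over the existing color texture.
// Passing NULL for the view description gives a view matching the texture's format.
ID3D11ShaderResourceView* m_ColorTextureShaderResourceView = NULL;
HRESULT hr = device->CreateShaderResourceView(m_ColorTexture, NULL, &m_ColorTextureShaderResourceView);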



