Deferred rendering - Point light attenuation problem

Started by Lemmi
7 comments, last by Steve_Segreto 11 years, 4 months ago
*Update* I fixed it. It was not at all related to anything in my shader; my attenuation is perfectly fine. I had my depth stencil set up all wrong! I found the right solution on page 15 of https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/6800_Leagues_Deferred_Shading.pdf if anyone is having similar problems. Sorry for necroing. Thanks for all the help, you people!
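For anyone who lands here later: the key change was on the application side, not in the shader. Below is a minimal sketch of the kind of read-only depth-stencil state a light-volume pass typically uses (placeholder names like device/context, and not necessarily the exact setup from the linked paper, which also covers stencil tricks):

// Assumes #include <d3d11.h>, an ID3D11Device* device and ID3D11DeviceContext* context.
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable    = TRUE;                           // keep testing against the scene depth
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;    // but never write depth during the light pass
dsDesc.DepthFunc      = D3D11_COMPARISON_GREATER_EQUAL; // a common choice when drawing the volume's back faces
dsDesc.StencilEnable  = FALSE;

ID3D11DepthStencilState* lightVolumeDSS = nullptr;
device->CreateDepthStencilState(&dsDesc, &lightVolumeDSS);
context->OMSetDepthStencilState(lightVolumeDSS, 0);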


Hi, sorry for the vague title. I'm following Catalin Zima's deferred renderer pretty closely, so my code is very similar. My problem is that my point light lighting doesn't fall off based on the attenuation like it should. It's a little hard to explain exactly what's wrong, so I frapsed it:


Upper left is the color map, to the right of that is the normal map, and furthest to the right is the depth map. The light map is the bottom-left one, so that's the one to look at.




Basically, the lights color things that are outside of the light radius. I strongly suspect there's something wrong with the projected texture coordinates. I've double-checked that all the values I send into the shaders actually get assigned, and I've looked through everything in PIX and it seems to be fine. When I draw the sphere model that represents the point light, I scale its translation matrix by a (LightRadius, LightRadius, LightRadius) scaling matrix.
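In code, that looks roughly like this (a sketch with placeholder names, using the same D3DX math as later in the thread):

D3DXMATRIX scale, translation, world;
D3DXMatrixScaling(&scale, lightRadius, lightRadius, lightRadius);        // uniform scale by the light radius
D3DXMatrixTranslation(&translation, lightPos.x, lightPos.y, lightPos.z);
world = scale * translation;                                             // scale first, then translate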

I use an additive blending mode for my lighting phase, and I change the rasterizer state depending on whether the camera is inside the light volume or not. I use a separate render target for my depth; I haven't bothered trying to use my depth stencil buffer as a render target the way I've seen some people do.
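For reference, an additive blend state of the sort described can be set up roughly like this (a sketch with placeholder device/context names):

D3D11_BLEND_DESC blendDesc = {};
blendDesc.RenderTarget[0].BlendEnable           = TRUE;
blendDesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;    // accumulate each light's contribution
blendDesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState* additiveBlend = nullptr;
device->CreateBlendState(&blendDesc, &additiveBlend);
float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
context->OMSetBlendState(additiveBlend, blendFactor, 0xFFFFFFFF);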

Here's how the shader looks:

Vertex shader:


cbuffer MatrixVertexBuffer
{
    float4x4 World;
    float4x4 View;
    float4x4 Projection;
}

struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : SV_Position;
    float4 LightPosition : TEXCOORD0;
};

VertexShaderOutput LightVertexShader(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(float4(input.Position, 1.0f), World);
    output.Position = mul(output.Position, View);
    output.Position = mul(output.Position, Projection);
    output.LightPosition = output.Position;
    return output;
}


Pixel shader:


cbuffer LightBufferType
{
    float3 LightColor;
    float3 LightPosition;
    float LightRadius;
    float LightPower;
    float4 CameraPosition;
    float4 Padding;
}

cbuffer PixelMatrixBufferType
{
    float4x4 InvViewProjection;
}

//==Structs==
struct VertexShaderOutput
{
    float4 Position : SV_Position;
    float4 LightPosition : TEXCOORD0;
};

//==Variables==
Texture2D textures[3]; //Color, Normal, depth
SamplerState pointSampler;

//==Functions==
float2 postProjToScreen(float4 position)
{
    float2 screenPos = position.xy / position.w;
    return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
}

half4 LightPixelShader(VertexShaderOutput input) : SV_TARGET0
{
    float2 texCoord = postProjToScreen(input.LightPosition);
    float4 baseColor = textures[0].Sample(pointSampler, texCoord);
    if(baseColor.r + baseColor.g + baseColor.b < 0.0f) //I cull early if the pixel is completely black, meaning there really isn't anything to light here.
    {
        return half4(0.0f, 0.0f, 0.0f, 0.0f);
    }

    //get normal data from the normalMap
    float4 normalData = textures[1].Sample(pointSampler, texCoord);
    //transform normal back into [-1,1] range
    float3 normal = 2.0f * normalData.xyz - 1.0f;
    //read depth
    float depth = textures[2].Sample(pointSampler, texCoord);

    //compute screen-space position
    float4 position;
    position.x = texCoord.x;
    position.y = -(texCoord.x);
    position.z = depth;
    position.w = 1.0f;
    //transform to world space
    position = mul(position, InvViewProjection);
    position /= position.w;

    //surface-to-light vector
    float3 lightVector = position - input.LightPosition;
    //compute attenuation based on distance - linear attenuation
    float attenuation = saturate(1.0f - max(0.01f, lightVector)/(LightRadius/2)); //max(0.01f, lightVector) to avoid divide by zero!
    //normalize light vector
    lightVector = normalize(lightVector);
    //compute diffuse light
    float NdL = max(0, dot(normal, lightVector));
    float3 diffuseLight = NdL * LightColor.rgb;
    //reflection vector
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    //camera-to-surface vector
    float3 directionToCamera = normalize(CameraPosition - position);
    //compute specular light
    float specularLight = pow(saturate(dot(reflectionVector, directionToCamera)), 128.0f);

    //take into account attenuation and light intensity.
    return attenuation * half4(diffuseLight.rgb, specularLight);
}


Sorry if it's messy, I'm pretty loose about standards and commenting while experimenting.

Thank you for your time, it's really appreciated.
Isn't the intensity of a point light source inversely proportional to the square of the distance? In your code, the attenuation seems to fall off linearly with distance instead, but I could be wrong. Try this:
float attenuation = 1.0f / dot(lightVector, lightVector);
By the way, the 0.01f epsilon in your code doesn't protect against division by zero, since it sits in the numerator of the division.
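If you want a guard there, it belongs in the denominator, something like this (just a sketch):

float distSq = dot(lightVector, lightVector);     // squared surface-to-light distance
float attenuation = 1.0f / max(distSq, 0.0001f);  // the epsilon now guards the denominator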


I have to admit I don't really understand all of the math behind this, so please do assume that the math is wrong; I think that's for the best. I tried what you suggested and it resulted in virtually nothing being rendered whatsoever, but hey, that solved my first problem! ;)

It's, uh, hard to explain, but it did render if I had my camera at a very specific angle and looked at it with the edge of my screen.



You are of course right about the division by zero thing, that was silly of me.
Well, I just noticed this in your pixel shader:
position.y = -(texCoord.x);
Shouldn't that be y instead of x?

I don't know what's wrong beyond that. A matrix transformation problem, I guess. I haven't looked at the shader in detail, but hopefully someone else can take a look.
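For what it's worth, the usual mapping from a [0,1] screen texture coordinate back to clip space looks roughly like this (just a sketch, I haven't tried it against your code):

float4 position;
position.x = texCoord.x * 2.0f - 1.0f;        // [0,1] -> [-1,1]
position.y = -(texCoord.y * 2.0f - 1.0f);     // flip y, since texture space grows downward
position.z = depth;                           // depth stored as z/w can be used directly here
position.w = 1.0f;
position = mul(position, InvViewProjection);  // back to world space
position /= position.w;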


Yeah, I changed it to

position.xy = texCoord.xy;

a while back. It didn't change anything. The reason I had it that way was because I had seen it done like that in some other samples. Pretty much just changing values around and crossing my fingers at this point.

I don't have an answer to your problem, but your code seems complicated in some parts:

output.Position = mul(float4(input.Position, 1.0f), World);

could be written as:

output.Position = mul(input.Position, World);

if you define your input.Position as a float4. It isn't necessary to provide the 4th component from the application side.


float2 texCoord = postProjToScreen(input.LightPosition);
float4 baseColor = textures[0].Sample(pointSampler, texCoord);

Since you are using D3D10 or D3D11, that part of the code could be replaced with:

int3 Index = int3(input.Position.x, input.Position.y, 0);
float4 baseColor = textures[0].Load(Index);
float4 normalData = textures[1].Load(Index);


Cheers!

Hi! The input position needs to be cast to a float4 with 1.0f added as the last component, or else you get some really weird undefined behaviour, unless you rewrite the model class vertex struct, which I see no reason to do.

The .Load function was neat! It's fun to learn new things. Question: do you know if it's faster than using the sampler, or if it brings any other advantage?


Going back to the subject: I'm starting to suspect that it actually isn't the attenuation that's the problem, because I've scoured the entire net and tried so many different attenuation methods, and they all have the same problem. It's as if the depth value gets screwed up by my inverted view-projection matrix.

This is what I do:

viewProjection = viewMatrix*projectionMatrix;
D3DXMatrixInverse(&invertedViewProjection, NULL, &viewProjection);

Then, when I send it into the shader, I transpose it. I'm honestly not sure what transposing does, so I don't know whether it could cause this kind of problem, where it screws up my position's Z axis when I multiply with it.

Another oddity I found was that when I looked through the code in PIX, I'd get depth values that were negative, like -1.001f. That doesn't seem right?

The depth is stored the usual way in the G-buffer pixel shader:
output.Depth = input.Position.z / input.Position.w;

[quote]
Hi! The input position needs to be cast to a float4 with 1.0f added as the last component, or else you get some really weird undefined behaviour, unless you rewrite the model class vertex struct, which I see no reason to do.
[/quote]

It is pretty simple: you define your vertex declaration on the program side to have a 3-component position (i.e. DXGI_FORMAT_R32G32B32_FLOAT), and on the shader side you declare the POSITION0 input as a float4. Voilà, the 4th component will automatically be 1.0f.
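For illustration, it looks something like this (placeholder layout name):

// Program side: the vertex declaration only has a 3-component position.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
// Shader side: "float4 Position : POSITION0;" still works; the input assembler
// fills the missing w component with 1.0f.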

Transposing the matrix before copying it to a constant buffer is a typical thing to do. Basically, it mirrors the matrix values across the diagonal. It can be considered an optimization for the shaders.
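In practice the upload looks something like this (placeholder buffer name); the reason the transpose is needed at all is that HLSL constant buffers default to column-major packing while D3DX matrices are row-major in memory:

D3DXMATRIX transposed;
D3DXMatrixTranspose(&transposed, &invertedViewProjection);

D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(context->Map(pixelMatrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    memcpy(mapped.pData, &transposed, sizeof(D3DXMATRIX));
    context->Unmap(pixelMatrixBuffer, 0);
}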

Sorry that I can't help you with your problem; I use a different method for position reconstruction (without multiplying by the inverse view-projection matrix).
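Roughly, the idea of that alternative is to store linear view-space depth in the G-buffer and scale an interpolated view ray by it, something like this (a sketch of the general approach only, with made-up interpolant names, not my exact code):

// Assumes the G-buffer stores linear view-space z and the vertex shader passes
// the vertex's view-space position along as ViewSpacePosition.
float3 viewRay = input.ViewSpacePosition.xyz / input.ViewSpacePosition.z;   // rescale so z == 1
float viewZ = textures[2].Sample(pointSampler, texCoord).r;                 // linear view-space depth
float3 viewSpacePosition = viewRay * viewZ;                                 // reconstructed position in view space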

Cheers!
Hi Lemmi,

I don't quite know how to say this, so I'll just say it. Please don't take offense.

What you have attempted to do is quite legitimate. It is very common to integrate other programmers' code into a larger codebase. However, it does require a bit of practice and organization to be successful. I notice that you have tinkered with things and crossed your fingers in places. That is a painful, "tweak-based" approach that may never work.

When you integrate untested code and then it doesn't work as expected, you are stuck trying to determine if the original code did not work or if your integration failed. I have actually integrated Catalin Zima's deferred rendering with point lights before and I can tell you it works.

So what you will need to do is step back. Take the original code and place it in a small stand-alone project where you can compile and execute it, so you know what to expect as your conversion to what appears to be DX11 progresses. Once you have confidence and proof that the original source code works EXACTLY the way you want it to, then you can begin porting it over to your larger codebase. Test everything every step of the way, take small steps, and immediately stop and fix any problem no matter how small, keeping the original source code as your reference.

It's a painstaking process. If you integrate other people's code 100 times or more, you may get to the level of experience where you can just "do it" without having to take all the little in-between steps, but if you mess up you're on your own. :)

EDIT:

Btw - ultimately I used Catalin Zima's deferred rendering as a framework, customized for my project's needs. The point lights were functional but had severe artefacts when viewed from large Z distances, so I wound up integrating MJP's position-from-depth learnings into the system, which gave me more precision and crisper point lights from really far away.

Some cheesy videos I threw together to try to show the point lighting:



