# Deferred rendering - Point light attenuation problem


### #1Lemmi  Members   -  Reputation: 126


Posted 08 December 2012 - 05:57 PM

*Update* I fixed it. It was not at all related to anything in my shader, my attenuation is perfectly fine. I had my depth stencil set up all wrong! I found the right solution here: https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/6800_Leagues_Deferred_Shading.pdf at page 15, if anyone is having similar problems! Sorry for necroing. Thanks for all the help, you people!

Hi, sorry for the vague title. I'm following Catalin Zima's deferred renderer pretty closely, so my code is very similar. My problem is that my point light lighting doesn't fall off based on the attenuation like it should. It's a little hard to explain exactly what's wrong, so I frapsed it: http://youtu.be/1AY2xpmImgc

Upper left is color map, right of that is normal map and furthest to the right is depth map. The light map is the bottom left one, so look at that one.

Basically, the point lights color things that are outside of the light radius. I strongly suspect there's something wrong with the projected texture coordinates. I've double-checked that all values I send into the shaders actually get assigned, and I've looked through everything in PIX and it seems to be fine. When I draw the sphere model that represents the point light, I multiply the translation matrix by a (LightRadius, LightRadius, LightRadius) scaling matrix.

I use additive blending for my lighting phase, and change the rasterizer state depending on whether the camera is inside the light volume or not. I use a separate render target for my depth; I haven't bothered trying to use my depth stencil as a RT as I've seen some people do.
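As a sketch of an equivalent way to apply that scaling, assuming the sphere mesh has unit radius and a LightRadius constant is available in the vertex shader (this is illustrative, not the code used here):

```hlsl
// Sketch: scaling the model-space position by LightRadius before the
// world transform is equivalent to baking a uniform scale into the
// world matrix on the CPU side.
output.Position = mul(float4(input.Position * LightRadius, 1.0f), World);
```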

cbuffer MatrixVertexBuffer
{
    float4x4 World;
    float4x4 View;
    float4x4 Projection;
};

struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : SV_Position;
    float4 LightPosition : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(float4(input.Position, 1.0f), World);
    output.Position = mul(output.Position, View);
    output.Position = mul(output.Position, Projection);
    output.LightPosition = output.Position;
    return output;
}


cbuffer LightBufferType
{
    float3 LightColor;
    float3 LightPosition;
    float LightPower;
    float LightRadius;
    float4 CameraPosition;
};

cbuffer PixelMatrixBufferType
{
    float4x4 InvViewProjection;
};

//==Structs==
struct VertexShaderOutput
{
    float4 Position : SV_Position;
    float4 LightPosition : TEXCOORD0;
};

//==Variables==
Texture2D textures[3]; //Color, Normal, Depth
SamplerState pointSampler;

//==Functions==
float2 postProjToScreen(float4 position)
{
    float2 screenPos = position.xy / position.w;
    return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
}

float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
    float2 texCoord = postProjToScreen(input.LightPosition);
    float4 baseColor = textures[0].Sample(pointSampler, texCoord);
    if(baseColor.r + baseColor.g + baseColor.b <= 0.0f) //I cull early if the pixel is completely black, meaning there really isn't anything to light here.
    {
        return half4(0.0f, 0.0f, 0.0f, 0.0f);
    }
    //get normal data from the normal map
    float4 normalData = textures[1].Sample(pointSampler, texCoord);
    //transform normal back into [-1,1] range
    float3 normal = 2.0f * normalData.xyz - 1.0f;
    float depth = textures[2].Sample(pointSampler, texCoord).r;
    //compute screen-space position
    float4 position;
    position.x = texCoord.x;
    position.y = -(texCoord.x);
    position.z = depth;
    position.w = 1.0f;
    //transform to world space
    position = mul(position, InvViewProjection);
    position /= position.w;
    //surface-to-light vector
    float3 lightVector = position - input.LightPosition;
    //compute attenuation based on distance - linear attenuation
    float attenuation = saturate(1.0f - max(0.01f, lightVector)/(LightRadius/2)); //max(0.01f, lightVector) to avoid divide by zero!
    //normalize light vector
    lightVector = normalize(lightVector);
    //compute diffuse light
    float NdL = max(0, dot(normal, lightVector));
    float3 diffuseLight = NdL * LightColor.rgb;
    //reflection vector
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    //camera-to-surface vector
    float3 directionToCamera = normalize(CameraPosition - position);
    //compute specular light
    float specularLight = pow(saturate(dot(reflectionVector, directionToCamera)), 128.0f);
    //take attenuation and light intensity into account
    return attenuation * half4(diffuseLight.rgb, specularLight);
}


Sorry if it's messy, I'm pretty loose about standards and commenting while experimenting.

Thank you for your time, it's really appreciated.

Edited by Lemmi, 17 December 2012 - 01:12 PM.

### #2Bacterius  Crossbones+   -  Reputation: 9610


Posted 08 December 2012 - 06:41 PM

Isn't the intensity of a point light source inversely proportional to the square of the distance? In your code, attenuation seems to fall off linearly with distance instead, but I could be wrong. Try this:
float attenuation = 1.0f / dot(lightVector, lightVector);
By the way, the 0.01f epsilon in your code doesn't protect against division by zero, since it's in the numerator of the division.
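A sketch of that suggestion with the epsilon moved into the denominator, where it actually guards against division by zero (variable names follow the shader above):

```hlsl
// lightVector: unnormalized surface-to-light vector, as in the shader above.
// Inverse-square falloff; clamping the squared distance in the denominator
// prevents division by zero when the surface coincides with the light position.
float distSq = dot(lightVector, lightVector);
float attenuation = 1.0f / max(distSq, 0.01f);
```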

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

- Pessimal Algorithms and Simplexity Analysis

### #3Lemmi  Members   -  Reputation: 126


Posted 08 December 2012 - 07:01 PM

I have to admit, I don't really understand all of the math behind this, so please do assume that the math is wrong, I think that's for the best. I tried what you suggested and it resulted in virtually nothing being rendered whatsoever, but hey, that solved my first problem! ;)

It's uh, hard to explain, but it did render if I had my camera in a very special angle and was looking at it with the edge of my screen.

You are of course right about the division by zero thing, that was silly of me.

### #4Bacterius  Crossbones+   -  Reputation: 9610


Posted 08 December 2012 - 08:00 PM

Well, I just noticed this in your pixel shader:
position.y = -(texCoord.x);
Shouldn't that be y instead of x?

I don't know what's wrong beyond that. A matrix transformation problem, I guess. I haven't looked at the shader in detail, but hopefully someone else can take a look.
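For reference, a common way to rebuild the clip-space position from a [0,1] texture coordinate, in the style of Catalin Zima's renderer (a sketch; variable names follow the shader above):

```hlsl
// texCoord is in [0,1]; clip space runs from -1 to 1 with y flipped,
// so both axes must be remapped before applying the inverse view projection.
float4 position;
position.x = texCoord.x * 2.0f - 1.0f;
position.y = -(texCoord.y * 2.0f - 1.0f);
position.z = depth;
position.w = 1.0f;
position = mul(position, InvViewProjection);
position /= position.w; //perspective divide back to world space
```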


### #5Lemmi  Members   -  Reputation: 126


Posted 08 December 2012 - 08:15 PM

Yeah, I changed it to

position.xy = texCoord.xy;

a while back. It didn't change anything. The reason I had it that way was because I had seen it done like that in some other samples. Pretty much just changing values around and crossing my fingers at this point.

### #6kauna  Crossbones+   -  Reputation: 2888


Posted 09 December 2012 - 09:14 AM

output.Position = mul(float4(input.Position, 1.0f), World);

could be written as:

output.Position = mul(input.Position, World);

if you define your input.Position as a float4. It isn't necessary to provide the 4th component from the program side.

float2 texCoord = postProjToScreen(input.LightPosition);
float4 baseColor = textures[0].Sample(pointSampler, texCoord);

Since you are using D3D10 or 11, that part of the code could be replaced with:

int3 Index = int3(input.Position.x, input.Position.y, 0);
float4 baseColor = textures[0].Load(Index);

Cheers!

### #7Lemmi  Members   -  Reputation: 126


Posted 09 December 2012 - 11:27 AM

output.Position = mul(float4(input.Position, 1.0f), World);

could be written as:

output.Position = mul(input.Position, World);

if you define your input.Position as a float4. It isn't necessary to provide the 4th component from the program side.

float2 texCoord = postProjToScreen(input.LightPosition);
float4 baseColor = textures[0].Sample(pointSampler, texCoord);

Since you are using D3D10 or 11, that part of the code could be replaced with:

int3 Index = int3(input.Position.x, input.Position.y, 0);
float4 baseColor = textures[0].Load(Index);

Cheers!

Hi! The input position needs to be cast to a float4 with 1.0f added in the last component, or else you get some really weird undefined behaviour, unless you rewrite the model class vertex struct, which I see no reason to do.

The .Load function thing was neat! Fun to learn about new things. Question: Do you know if this is faster than using the sampler, or if it brings any other advantage?

Going back to the subject: I'm starting to suspect that it actually isn't the attenuation that's the problem, because I've scoured the entire net and tried so many different attenuation methods, and they all have the same problem. It's as if the depth value gets screwed up by my InvertedViewProjection.

This is what I do:

viewProjection = viewMatrix*projectionMatrix;
D3DXMatrixInverse(&invertedViewProjection, NULL, &viewProjection);

Then when I send it into the shader I transpose it. I'm honestly not sure what transposition does, so I'm not sure if it can cause this kind of problem where it screws up my position Z axis when multiplied with it.

Another oddity I found was that when I looked through the code in pix I'd get depth values that were negative, so I'd get -1.001f and other values. This doesn't seem right?

The depth is stored the normal way in the gbuffer pixel shader:
output.Depth = input.Position.z / input.Position.w;

Edited by Lemmi, 09 December 2012 - 11:30 AM.

### #8kauna  Crossbones+   -  Reputation: 2888


Posted 09 December 2012 - 01:42 PM

Hi! The input position needs to be cast to a float4 with 1.0f added in the last component, or else you get some really weird undefined behaviour, unless you rewrite the model class vertex struct, which I see no reason to do.

It is pretty simple: you define your vertex declaration on the program side with a 3-component position (i.e. DXGI_FORMAT_R32G32B32_FLOAT), and on the shader side you declare POSITION0 as a float4. Voilà, the 4th component will automatically be 1.0f.
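A minimal sketch of what that looks like on the shader side (the struct name is illustrative):

```hlsl
// The vertex buffer supplies only x, y, z (DXGI_FORMAT_R32G32B32_FLOAT);
// the input assembler fills the missing w component with 1.0f.
struct VertexShaderInput
{
    float4 Position : POSITION0; //w arrives as 1.0f automatically
};
```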

Transposing the matrix before copying it to a constant buffer is a typical thing to do. Basically, what it does is mirror the matrix values across the diagonal. It can be considered an optimization for the shaders.
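A sketch of why the transpose matters, assuming the usual D3DX setup: D3DX builds row-major matrices on the CPU, while HLSL constant buffers default to column-major packing, so transposing before the upload keeps mul(vector, matrix) correct. Alternatively, the shader can opt out of the transpose:

```hlsl
// Declaring the matrix row_major lets you upload the D3DX matrix
// as-is, without transposing it on the CPU side first.
cbuffer MatrixVertexBuffer
{
    row_major float4x4 World;
};
```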

Sorry that I can't help you with your problem. I use a different method for the position reconstruction (without multiplying with the inverse view projection matrix).

Cheers!

### #9Steve_Segreto  Crossbones+   -  Reputation: 1639


Posted 09 December 2012 - 02:10 PM

Hi Lemmi,

I don't quite know how to say this, so I'll just say it. Please don't take offense.

What you have attempted to do is quite legitimate. It is very common to integrate other programmers' code into a larger codebase. However, it does require a bit of practice and organization to be successful. I notice that you have tinkered with things and crossed your fingers in places. That is a painful, "tweak-based" approach that may never work.

When you integrate untested code and then it doesn't work as expected, you are stuck trying to determine if the original code did not work or if your integration failed. I have actually integrated Catalin Zima's deferred rendering with point lights before and I can tell you it works.

So what you will need to do is step back. Take the original code and place it in a small stand-alone project where you can compile it and execute it so you know what to expect when your conversion to what appears to be DX11 progresses. Once you have confidence and proof that the original source code works EXACTLY the way you want it to, then you can begin porting it over to your larger codebase. Test everything every step of the way, take small steps and immediately stop and fix any problem no matter how small, keeping the original source code as your reference.

It's a painstaking process. If you integrate other people's code 100 times or more, you may get to the level of experience where you can just "do it" without having to take all the little in-between steps, but if you mess up, you're on your own.

EDIT:

Btw, ultimately I used Catalin Zima's deferred rendering as a framework, customized for my project's needs. The point lights were functional but had severe artefacts when viewed from large z distances, so I wound up integrating MJP's position-from-depth approach into the system, which gave me more precision and crisper point lights from really far away.

Some cheesy videos I threw together to try to show the point lighting:

Edited by Steve_Segreto, 09 December 2012 - 03:44 PM.
