I don't have an answer to your problem, but your code seems overly complicated in places:
output.Position = mul(float4(input.Position, 1.0f), World);
could be written as:
output.Position = mul(input.Position, World);
provided you declare input.Position as a float4. The 4th component then doesn't need to be supplied by the application.
float2 texCoord = postProjToScreen(input.LightPosition);
float4 baseColor = textures.Sample(pointSampler, texCoord);
Since you are using D3D 10 or 11 that part of the code could be replaced with
int3 Index = int3(input.Position.x, input.Position.y, 0);
float4 baseColor = textures.Load(Index);
float4 normalData = textures.Load(Index);
Hi! The input position needs to be cast to a float4 with 1.0f in the last channel; otherwise you get some really weird undefined behaviour, unless you rewrite the model class vertex struct, which I see no reason to do.
The .Load function was a neat tip! Fun to learn about new things. Question: do you know whether it's faster than using the sampler, or whether it brings any other advantage?
Back on the subject: I'm starting to suspect it actually isn't the attenuation that is the problem. I've scoured the entire net and tried many different attenuation methods, and they all show the same issue. It's as if the depth value gets screwed up by my InvertedViewProjection.
This is what I do:
viewProjection = viewMatrix * projectionMatrix;
D3DXMatrixInverse(&invertedViewProjection, NULL, &viewProjection);
Then, when I send it into the shader, I transpose it. I'm honestly not sure what transposing does, so I don't know whether it could cause this kind of problem, where the Z component of my position gets corrupted when multiplied by the matrix.
Another oddity I found: when I stepped through the code in PIX, I got negative depth values, such as -1.001f. That doesn't seem right?
The depth is stored the normal way in the gbuffer pixel shader:
output.Depth = input.Position.z / input.Position.w;