Deferred Pointlights in view space

Hey folks - it's me again :-)

I'm trying to get rid of the screen-space depth rendering in my deferred setup to save some GPU bandwidth. So far the volumetric lights (point/spot) are the only nasty leftovers.
For point lights, here's my PS (still using screen-space depth):


float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;

position = mul(position, InvertViewProjection);
position /= position.w;


where ScreenPosition is computed in the VS as (again SSD):


float4 worldPosition = mul(float4(input.Position,1), World);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);
output.ScreenPosition = output.Position;


I tried the approach of using rays pointing to the far corners of my camera's view frustum, but I can't get it working, since there is no way to pass frustum corners to my VS (as the title says: these are volumetric lights).

Anybody got an idea how to solve this?


Kind regards
hAk
Sorry, it's not clear to me what your problem is.

What do you mean by wanting to "get rid of the screen space depth rendering"? Do you mean you're writing depth into your G-buffer, separate from your depth buffer, and instead want to just read the depth buffer?

Also, this: position = mul(position, InvertViewProjection); position /= position.w; will give you a world-space position, not a view-space one, in case that matters.
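If a view-space position is what you actually want, the same idea works with just the inverse of the projection matrix. A rough sketch (InverseProjection is a constant you'd have to add - it's not in your code - using the same row-vector mul(v, M) convention as above):

float4 clipPos;
clipPos.xy = input.ScreenPosition.xy;   // already divided by w in your pixel shader, i.e. NDC xy
clipPos.z = depthVal;                   // post-projection z/w from the depth texture
clipPos.w = 1.0f;

//transform by the inverse projection only, then divide by w -> view space
float4 positionVS = mul(clipPos, InverseProjection);
positionVS /= positionVS.w;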
Hi RDragon1,

I think you got me wrong ;-)

My GBuffer pass writes the depth texture as -input.Depth.z / FarPlane (view-space depth). I switched some parts of my framework from screen-space depth to view-space depth, but the rendering of my light buffer (namely point and spot lights) is still using screen-space depth. Since the LBuffer pass uses light volumes (i.e. sphere/cone geometry) for this, I cannot use the generic frustum-corner approach.

For directional lights, for example, my reconstructed position works just fine like this:

float depthVal = tex2D(depthSampler, input.TexCoord).r;
float3 position = depthVal * input.FarFrustumCorner;


So the question is: how do I switch from the code below (using screen-space depth from the depth buffer) to view-space depth?

Regards - hAk




input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 texCoord = 0.5f * (float2(input.ScreenPosition.x,-input.ScreenPosition.y) + 1);
texCoord -=halfPixel;

float4 normalData = tex2D(normalSampler,texCoord);
float3 normal = 2.0f * normalData.xyz - 1.0f;
float specularPower = normalData.a * 255;
float specularIntensity = tex2D(colorSampler, texCoord).a;

float depthVal = tex2D(depthSampler, texCoord).r;

float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;

position = mul(position, InvertViewProjection);
position /= position.w;

float3 lightVector = lightPosition - position;
float attenuation = saturate(1.0f - length(lightVector)/lightRadius);
lightVector = normalize(lightVector);

float NdL = dot(normal,lightVector);
float3 diffuseLight = NdL * Color.rgb;

float3 reflectionVector = normalize(reflect(-lightVector, normal));
float3 directionToCamera = normalize(cameraPosition - position);
float specularLight = specularIntensity * pow( saturate(dot(reflectionVector, directionToCamera)), specularPower);

return attenuation * lightIntensity * float4(diffuseLight.rgb,specularLight);
What do you mean by "screen-space depth"? Do you mean what's stored in the actual z-buffer, AKA post-projection z/w? If so, you can convert that to your original view-space Z value very easily:

float zw = tex2D(DepthBuffer, pixelCoord).x;
float linearZ = Projection._43 / (zw - Projection._33);
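In case it's not obvious where that comes from: assuming the row-vector mul(v, Projection) convention used elsewhere in this thread and a standard D3D perspective matrix (only _33, _43 and _34 = 1 contribute to z and w), you have

z_clip = z_view * Projection._33 + Projection._43
w_clip = z_view

so zw = z_clip / w_clip = Projection._33 + Projection._43 / z_view, which rearranges to z_view = Projection._43 / (zw - Projection._33).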
My screen space depth (AKA normalized view-space depth) is calculated as


-input.Depth.z / FarPlane


so I'm using the Crytek approach for filling my GBuffer depth texture. The main problem for me is how to use the ray multiplication with the arbitrary geometry used to render the light volumes, since I do not render a fullscreen quad.
I also tried the approach from your (MJP) blog, but it didn't work out. If I use the computed positionVS for processing the light volume (see the source one post above), nothing is rendered.


float3 viewRay = float3(input.PositionVS.xy * (FarClipDistance / input.PositionVS.z), FarClipDistance);
float normalizedDepth = DepthTexture.Sample(PointSampler, texCoord).x;
float3 positionVS = viewRay * normalizedDepth;


Hope this clarifies my question a little bit :-s
For arbitrary geometry you should use the vpos register for the screen-space xy coordinates. For a directional light and its fullscreen quad there's no perspective projection to consider, so VS interpolants work, but not for the arbitrary 3D geometry needed for local light sources - hence the need for vpos. Once you have constructed screen-space homogeneous coordinates [x, y, z, w] from vpos + depth in your pixel shader, transform the vector with the inverse projection transform to view/world space (3 4D dot products). Depending on your platform you can use linear depth (i.e. depth rendered to a texture) or z-buffer depth. Using z-buffer depth is a bit more computationally expensive, but it saves bandwidth because you don't have to render the depth separately.


Cheers, Jarkko
Thank you for your reply, JarkkoL :-)

Unfortunately, I've never heard of the VPOS semantic before.
Could you be so kind as to offer me a concrete example for my issue using VPOS?

Cheers - hAk
vpos is linearly interpolated on the screen (it contains the screen coordinates of the processed pixel, i.e. [0, width-1], [0, height-1]), while vertex shader interpolants are linearly interpolated in 3D and are thus affected by the perspective transform. If a rasterized triangle is camera-facing (like the fullscreen quad of a directional light), there's no perspective projection to worry about. The triangles of the light volumes of omni/spot lights are not camera-facing, so perspective correction is applied to the vertex shader interpolants and you don't get linear interpolation on the screen.

In the Spin-X Platform there's a light pre-pass renderer which performs the pixel screen->world transformation using vpos for omni & spot lights. In the ps_light.h file there's a world_pos() function which performs the screen->world space transformation using screen-space coordinates calculated from vpos (scaled to the range [0, 1]), and ps_light_omni.hlsl is an example of the use of the function and vpos. In d3d9_lumion_engine.cpp (the render_lighting() function at line 1304) the transformations are set up as pixel shader constants, if you'd like to check how they are calculated. In these shaders I use linear view-space z rendered to a texture.
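Something along these lines (just a simplified sketch, not the actual Spin-X code - InvScreenDim and InverseProjection are placeholder names, depthSampler is the sampler from your posts above, and this variant reads z-buffer depth instead of the linear depth I use):

//sketch only: reconstruct a view-space position from vpos + z-buffer depth
//assumes InvScreenDim = (1/width, 1/height) and an InverseProjection constant
float3 viewPosFromVPOS(float2 vPos)
{
    //vpos is in pixels; move to the texel center and normalize to [0, 1]
    float2 texCoord = (vPos + 0.5f) * InvScreenDim;

    //build homogeneous screen-space coordinates [x, y, z, w] from vpos + depth
    float4 clipPos;
    clipPos.x = texCoord.x * 2.0f - 1.0f;
    clipPos.y = 1.0f - texCoord.y * 2.0f;          //flip y for D3D
    clipPos.z = tex2D(depthSampler, texCoord).r;   //post-projection z/w
    clipPos.w = 1.0f;

    //transform by the inverse projection and divide by w -> view space
    float4 positionVS = mul(clipPos, InverseProjection);
    return positionVS.xyz / positionVS.w;
}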


Cheers, Jarkko
Hey hAk :)


Could you be so kind as to offer me a concrete example for my issue using VPOS?



float2 uvFromVPOS(float2 vPos) { return vPos * InvScreenDim.xy + 0.5f * InvScreenDim.xy; }

PS_output ps_pointLight(in VS_OUTPUT input, in float2 vPos : VPOS)
{
    PS_output Out;

    float2 texCoord = uvFromVPOS(vPos);

    float3 viewRay = float3(input.PositionVS.xy * (FarClipDistance / input.PositionVS.z), FarClipDistance);
    float normalizedDepth = DepthTexture.Sample(PointSampler, texCoord).x;
    float3 positionVS = viewRay * normalizedDepth;
    ...
}
Hey folks :-)

Good news first:
After checking out all your samples and advice (thanks so much for the new input to think about!), I can confirm that you are all generating the same position values.

Bad news last:
The positions generated still do not match those from my old shader implementation - so the lighting is still broken.


This can only mean one of two things:
- Something else in the shader is broken (which is a little unlikely)
- I'm going totally insane and am missing another fundamental error in my code (most likely)

I know that I'm bugging you with this, but could any HLSL crack have another look at my shader and point me in the right direction? The problem is still that the light volumes do not get rendered to the light map. If the attenuation is set to a constant 1, at least some weird moving volume is visible...

Thanks a lot guys!

<-- EDIT -->

EUREKA!! It is finally working!

After I realized what I was actually calculating, the solution was obvious: position = mul(positionVS, InverseViewMatrix). Anyway, big, big thanks to everybody involved :-)

For completeness, here's the working shader code (some tweaks still to be done):


VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    //processing geometry coordinates
    float4 worldPosition = mul(float4(input.Position,1), World);
    float4 viewPosition = mul(worldPosition, View);

    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position;
    output.PositionVS = viewPosition;

    return output;
}



float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    input.ScreenPosition.xy /= input.ScreenPosition.w;

    float2 texCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
    texCoord -= halfPixel;

    float4 normalData = tex2D(normalSampler, texCoord);
    float3 normal = 2.0f * normalData.xyz - 1.0f;

    float specularPower = normalData.a * 255;
    float specularIntensity = tex2D(colorSampler, texCoord).a;

    //reconstruct the view-space position from the linear depth
    float depthVal = tex2D(depthSampler, texCoord).r;
    float3 viewRay = input.PositionVS.xyz * (FarPlane / -input.PositionVS.z);
    float4 position = float4(viewRay * depthVal, 1);

    //transform to world space
    position = mul(position, InvertView);
    position /= position.w;

    float3 lightVector = lightPosition - position.xyz;
    float attenuation = saturate(1.0f - length(lightVector)/lightRadius);
    lightVector = normalize(lightVector);

    float NdL = dot(normal, lightVector);
    float3 diffuseLight = NdL * Color.rgb;

    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    float3 directionToCamera = normalize(cameraPosition - position.xyz);
    float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

    //take into account attenuation and lightIntensity
    return attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
}
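A small note on the code above: since InvertView is an affine matrix and position.w is already 1 at that point, the position /= position.w line after the view->world transform is a no-op and could simply be dropped, e.g.:

//position.w stays 1 after an affine transform, so no perspective divide is needed
float3 positionWS = mul(float4(viewRay * depthVal, 1.0f), InvertView).xyz;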

