Deferred Pointlights in view space

- hAk -

Hey folks - it's me again :-)

I'm trying to get rid of the screen space depth rendering in my deferred setup to save some GPU bandwidth. So far only the volumetric lights (point/spot) are left as nasty leftovers.
For pointlights, here's my PS (still with screen space depth):

[code]
// depthVal is the post-projection z/w sampled from the G-buffer depth texture
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;

// back to world space
position = mul(position, InvertViewProjection);
position /= position.w;
[/code]

where ScreenPosition is computed in the VS as (again screen space depth):

[code]
float4 worldPosition = mul(float4(input.Position,1), World);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);
output.ScreenPosition = output.Position;
[/code]

I was trying to use the approach of rays pointing to the far frustum corners of my camera, but I can not get it working since there is no way to attach the frustum corners to the vertices of arbitrary light volume geometry (as said in the title: volumetric lights).

Anybody got an idea how to solve this?


Kind regards
hAk

Sorry, it's not clear to me what your problem is.

What do you mean you want to "get rid of the screen space depth rendering"? You mean you're writing depth into your gbuffer, separate from your depth buffer, and instead want to just read the depth buffer?

Also, this: position = mul(position, InvertViewProjection); position /= position.w; will get you a world space position, not a view space one, in case that matters.
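If you do want view space, here's a minimal sketch of the variant, assuming a hypothetical InvertProjection constant (the inverse of Projection alone, which your shader doesn't have yet):

[code]
// Hypothetical constant: InvertProjection = inverse of Projection (not ViewProjection)
float4 positionVS;
positionVS.xy = input.ScreenPosition.xy; // post-projection x/y
positionVS.z = depthVal;                 // post-projection z/w
positionVS.w = 1.0f;

positionVS = mul(positionVS, InvertProjection);
positionVS /= positionVS.w;              // view-space position
[/code]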

Hi RDragon1,

I think you got me wrong ;-)

My GBuffer pass writes the depth buffer as -input.Depth.z / FarPlane (view-space depth). I switched some parts of my framework from screen space depth to view space, but the rendering of my light buffer (namely point- and spotlights) is still using screen space depth. Since the LBuffer pass uses light volumes (i.e. sphere/cone geometry), I can not use the generic approach of frustum corner computation.
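For reference, a minimal sketch of that G-buffer write, assuming the VS passes the view-space position along as input.Depth:

[code]
// G-buffer PS (sketch): store normalized linear view-space depth;
// negated because the camera looks down -z in right-handed view space
float depth = -input.Depth.z / FarPlane;
return float4(depth, 0, 0, 0);
[/code]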

For directional lights, for example, my reconstructed position works just fine like this:
[code]
// scale the interpolated far-frustum-corner ray by the normalized linear depth
float depthVal = tex2D(depthSampler, input.TexCoord).r;
float3 position = depthVal * input.FarFrustumCorner;
[/code]

So the question is: how do I switch the code below from screen space depth (read from the depth buffer) to view space depth?

Regards - hAk



[code]
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 texCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
texCoord -= halfPixel;

float4 normalData = tex2D(normalSampler, texCoord);
float3 normal = 2.0f * normalData.xyz - 1.0f;
float specularPower = normalData.a * 255;
float specularIntensity = tex2D(colorSampler, texCoord).a;

// sample post-projection depth and reconstruct the world-space position
float depthVal = tex2D(depthSampler, texCoord).r;
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;

position = mul(position, InvertViewProjection);
position /= position.w;

// diffuse term
float3 lightVector = lightPosition - position.xyz;
float attenuation = saturate(1.0f - length(lightVector) / lightRadius);
lightVector = normalize(lightVector);

float NdL = dot(normal, lightVector);
float3 diffuseLight = NdL * Color.rgb;

// specular term
float3 reflectionVector = normalize(reflect(-lightVector, normal));
float3 directionToCamera = normalize(cameraPosition - position.xyz);
float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

return attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
[/code]

What do you mean by "screen-space depth"? Do you mean what's stored in the actual z-buffer, AKA post-projection z/w? If so, you can convert that to your original view-space Z value very easily:
[code]
float zw = tex2D(DepthBuffer, pixelCoord).x;
float linearZ = Projection._43 / (zw - Projection._33);
[/code]
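For reference, a short sketch of where that formula comes from, assuming a standard left-handed D3D projection with near plane n and far plane f (a right-handed projection differs only in signs):

[code]
// Projection._33 = f / (f - n)
// Projection._43 = -n * f / (f - n)
// The z-buffer stores zw = z/w = Projection._33 + Projection._43 / viewZ,
// so solving for viewZ:
float linearZ = Projection._43 / (zw - Projection._33);
[/code]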

My screen space depth (AKA normalized view-space depth) is calculated as

[code]
-input.Depth.z / FarPlane
[/code]

so I'm using the Crytek approach of filling my GBuffer depth texture. The main problem for me is how to use the ray multiplication for the arbitrary geometry used to render the light volumes, since I do not render a fullscreen quad.
I also tried the approach from your (MJP) blog, but it didn't work out. If I use the computed positionVS for processing the light volume (see source one post above), nothing is rendered.

[code]
// extend the interpolated view-space position out to the far plane...
float3 viewRay = float3(input.PositionVS.xy * (FarClipDistance / input.PositionVS.z), FarClipDistance);
// ...and scale it by the normalized linear depth from the G-buffer
float normalizedDepth = DepthTexture.Sample(PointSampler, texCoord).x;
float3 positionVS = viewRay * normalizedDepth;
[/code]

Hope this clarifies my question a little bit :-s

For arbitrary geometry you should use the vpos register for the screen-space xy coordinates. For a directional light's fullscreen quad there's no perspective projection to consider, so VS interpolants work, but they don't for the arbitrary 3D geometry needed for local light sources; hence the need for vpos. Once you have constructed screen-space homogeneous coordinates [x, y, z, w] from vpos + depth in your pixel shader, transform the vector with the inverse projection transform to view/world space (3 4D dot products). Depending on your platform you can use linear depth (i.e. depth rendered to a texture) or z-buffer depth. Using z-buffer depth is a bit more computationally expensive but saves bandwidth because you don't have to render the depth.
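A minimal sketch of that reconstruction, assuming hypothetical InvScreenDim (1/width, 1/height) and InvProjection shader constants:

[code]
// vpos holds pixel coordinates; scale to [0, 1] texture coordinates
float2 uv = (vPos + 0.5f) * InvScreenDim;
float zw = tex2D(depthSampler, uv).x; // z-buffer depth (z/w)

// screen-space homogeneous coordinates built from uv + depth
float4 clipPos = float4(uv.x * 2 - 1, 1 - uv.y * 2, zw, 1);

// transform by the inverse projection (use the inverse view-projection for world space)
float4 viewPos = mul(clipPos, InvProjection);
viewPos /= viewPos.w;
[/code]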


Cheers, Jarkko

Thank you for your reply, JarkkoL :-)

Unfortunately I've never heard of the VPOS semantic before.
Could you be so kind as to offer me a concrete example regarding my issue, using VPOS?

Cheers - hAk

vpos is linearly interpolated on the screen (it contains the screen coordinates of the processed pixel, i.e. [0, width-1], [0, height-1]), while vertex shader interpolants are interpolated linearly in 3D and are thus affected by the perspective transform. If a rasterized triangle is camera facing (like the fullscreen quad of a directional light), there's no perspective projection to worry about. The triangles of the light volumes of omni/spot lights are not camera facing, so perspective correction is applied to the vertex shader interpolants and you don't get linear interpolation on the screen.

In [url="http://sourceforge.net/projects/spinxengine"]Spin-X Platform[/url] there's a light pre-pass renderer which performs the pixel screen->world transformation using vpos for omni & spot lights. In the [url="http://spinxengine.svn.sourceforge.net/viewvc/spinxengine/addons/lumion/src/platform/win/d3d9/shaders/ps_light.h?revision=353&view=markup"]ps_light.h[/url] file there's a world_pos() function which performs the screen->world space transformation using screen-space coordinates calculated from vpos (scaled to the range [0, 1]), and [url="http://spinxengine.svn.sourceforge.net/viewvc/spinxengine/addons/lumion/src/platform/win/d3d9/shaders/ps_light_omni.hlsl?revision=353&view=markup"]ps_light_omni.hlsl[/url] is an example of the use of the function and vpos. In [url="http://spinxengine.svn.sourceforge.net/viewvc/spinxengine/addons/lumion/src/platform/win/d3d9/d3d9_lumion_engine.cpp?revision=353&view=markup"]d3d9_lumion_engine.cpp[/url] (the render_lighting() function at line 1304) the transformations are set up as pixel shader constants, if you'd like to check how they are calculated. In these shaders I use linear view-space z rendered to a texture.


Cheers, Jarkko

Hey hAk :)

[quote name='- hAk -' timestamp='1310921533' post='4836409']
Could you be so kind and offer me a concrete example regarding my issue and using VPOS?
[/quote]

[code]
float2 uvFromVPOS(float2 vPos) { return vPos * InvScreenDim.xy + 0.5f * InvScreenDim.xy; }

PS_output ps_pointLight(in VS_OUTPUT input, in float2 vPos : VPOS)
{
    PS_output Out;

    float2 texCoord = uvFromVPOS(vPos);

    float3 viewRay = float3(input.PositionVS.xy * (FarClipDistance / input.PositionVS.z), FarClipDistance);
    float normalizedDepth = DepthTexture.Sample(PointSampler, texCoord).x;
    float3 positionVS = viewRay * normalizedDepth;
    ...
}
[/code]

Hey folks :-)

Good news first:
After checking out all your samples and advice (thanks so much for the new input to think about!) I can confirm that you are all generating the same position values.

Bad news last:
The positions generated still do not match those from my old shader implementation - so the lighting is still broken.


This can only mean one of two things:
- Something else in the shader is broken (a little bit unlikely)
- I'm going totally insane and am missing another fundamental error in my code (most likely)

I know that I'm bugging you with this, but could any HLSL crack have a look at my shader again and point me in the right direction? The problem is still that the light volumes do not get rendered to the light map. If the attenuation is set to a constant 1, at least some weird moving volume is visible...

Thanks a lot guys!

<-- EDIT -->

EUREKA!! It is finally working!

After I realized what I was actually calculating, the solution was obvious: position = mul(positionVS, InverseViewMatrix). Anyway, big big thanks to everybody involved :-)

For completeness, here is the working shader code (some tweaks still to be done):

[code]
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    // processing geometry coordinates
    float4 worldPosition = mul(float4(input.Position, 1), World);
    float4 viewPosition = mul(worldPosition, View);

    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position;
    output.PositionVS = viewPosition;

    return output;
}
[/code]

[code]
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    input.ScreenPosition.xy /= input.ScreenPosition.w;

    float2 texCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
    texCoord -= halfPixel;

    float4 normalData = tex2D(normalSampler, texCoord);
    float3 normal = 2.0f * normalData.xyz - 1.0f;

    float specularPower = normalData.a * 255;
    float specularIntensity = tex2D(colorSampler, texCoord).a;

    // reconstruct the view-space position from normalized linear depth
    float depthVal = tex2D(depthSampler, texCoord).r;
    float3 viewRay = input.PositionVS.xyz * (FarPlane / -input.PositionVS.z);
    float4 position = float4(viewRay * depthVal, 1);

    // transform to world space
    position = mul(position, InvertView);
    position /= position.w;

    float3 lightVector = lightPosition - position.xyz;
    float attenuation = saturate(1.0f - length(lightVector) / lightRadius);
    lightVector = normalize(lightVector);

    float NdL = dot(normal, lightVector);
    float3 diffuseLight = NdL * Color.rgb;

    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    float3 directionToCamera = normalize(cameraPosition - position.xyz);
    float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

    // take attenuation and lightIntensity into account
    return attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
}
[/code]

Glad you got it working, but why do you transform the position to world space in the pixel shader? You could transform your light positions to view space instead (in the vertex shader or on the CPU).
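A minimal sketch of that alternative, assuming a hypothetical LightPositionVS constant set per light from the CPU (your normals would need to be in view space as well):

[code]
// set once per light on the CPU:
// LightPositionVS = mul(float4(lightPositionWorld, 1), View).xyz;
float3 lightVector = LightPositionVS - positionVS;
float attenuation = saturate(1.0f - length(lightVector) / lightRadius);
lightVector = normalize(lightVector);
float NdL = dot(normal, lightVector); // normal must also be view-space
[/code]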

[quote name='- hAk -' timestamp='1311014780' post='4836940']
HEUREKA!! It is finally working!
[/quote]
Great! (: Do you have screenshots to share?


Cheers, Jarkko

Hey guys :-)

Sorry for the late reply, but the last week was too busy to turn on my private PC or fix the last issues in my lighting engine. These spotlights make me go nuts *lol*

However - it's Saturday evening and the lighting is finally done.
There is not much to see yet (most code is basically framework related), but here are two screenshots as requested!

[attachment=4619:nice_01.png]
[attachment=4620:nice_02.png]

The shadow mapping is based on MJP's implementation, lighting follows Catalin Zima's paper, and the SSAO uses an approach Daniel suggested (with some tweaks). I can't remember which paper I used to implement my bloom postprocessor. In this scene about 1k non-index-buffered cubes plus 144 pointlights are drawn. The shadow is cast by a directional light using 3x3 PCF.

Thank you once again for your help!

