Rendering Lights: Geometry vs. fullscreen quad

I implemented light pre-pass rendering in my engine a while ago, using spheres to render my point lights.

Since a sphere isn't a fullscreen quad, I can't use Crytek's method to reconstruct the world-space position from my depth buffer. (I'm using D3D11, so it's the hardware depth buffer in my case.)

This problem would go away if I used a fullscreen quad with a scissor rect around the light, but when I tried that the last time, it wasn't very accurate.
You can see what I mean in our last demo.

Borders pop up, and the scissoring goes wrong when you are behind a light.
I copied the algorithm straight out of some NVIDIA sample code. Maybe there is a better one available?

So my questions are:
- Can I somehow reconstruct the world-space position from depth using the FrustumRay method on light spheres?
- If not, is there a good algorithm to figure out the scissor rect for a fullscreen quad for a point light?
- Are there any other ways to do this?
In the vertex shader:

  output.worldPosOffset = vertexWorldPos - cameraPos;
  output.viewSpaceDepth = dot(output.worldPosOffset, cameraZBasis);


In the pixel shader:

  worldPos = cameraPos + input.worldPosOffset * sampledZDepth/input.viewSpaceDepth;


This is somewhat simpler if you light in view space (my preference).
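
If you light in view space instead, the eye sits at the origin, so the same idea collapses to something like this (untested sketch, with WorldToView standing in for your world-to-view matrix and sampledZDepth being linear view-space depth):

  // Vertex shader: pass the vertex position transformed into view space
  output.viewPos = mul(float4(vertexWorldPos, 1), WorldToView).xyz;

  // Pixel shader: rescale the interpolated view-space position along the eye ray
  // until its z matches the sampled linear depth
  float3 viewPos = input.viewPos * (sampledZDepth / input.viewPos.z);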
Thanks for your reply.

Is this the shader for the light or for the mesh (the light, I guess)?
What is cameraZBasis? Is it the near plane?

Quote:
This is somewhat simpler if you light in view space (my preference).


The problem is just that I can't think in view space [wink]
I guess I'm better off with world space because I understand it.
Quote:Original post by mind in a box
- Can I somehow reconstruct the world-space position from depth using the FrustumRay method on light spheres?


You can reconstruct world space position from a depth buffer by converting it to a linear depth value. See the very end of this article.
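
With a hardware depth buffer that conversion is just two terms from the projection matrix. Roughly (a sketch, assuming a standard left-handed D3D perspective projection and the usual mul(v, M) row-vector convention, so the relevant elements are _33 and _43):

float ProjectionA = Projection._33;   // zFar / (zFar - zNear)
float ProjectionB = Projection._43;   // (-zNear * zFar) / (zFar - zNear)

float depth = DepthTexture.Sample(PointSampler, texCoord).x;   // non-linear [0,1] hardware depth
float linearDepth = ProjectionB / (depth - ProjectionA);       // linear view-space z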

Quote:Original post by mind in a box
- If not, is there a good algorithm to figure out the scissor rect for a fullscreen quad for a point light?


It's not pretty. This article goes through the math. I don't really recommend going with scissoring... volumes offer a much tighter fit without all of the CPU overhead, plus you can make better use of depth testing.
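
If you want a quick-and-dirty scissor anyway, a conservative (not tight) rect can be had by projecting the corners of the light's world-space bounding box and falling back to fullscreen whenever the light crosses the near plane, which also sidesteps the "wrong scissoring when you are behind a light" case. A rough sketch, not the algorithm from the article (this runs on the CPU in practice; written in HLSL-style syntax here):

// Returns (left, top, right, bottom) in pixels for a point light, or the full
// screen if the light's bounding box reaches behind the camera (conservative).
int4 LightScissorRect(float3 lightPosWS, float radius, float4x4 viewProj,
                      int screenWidth, int screenHeight)
{
    float2 minNDC = float2( 1,  1);
    float2 maxNDC = float2(-1, -1);

    for (int i = 0; i < 8; ++i)
    {
        // Corner of the light's world-space AABB
        float3 corner = lightPosWS + radius * float3((i & 1) ? 1 : -1,
                                                     (i & 2) ? 1 : -1,
                                                     (i & 4) ? 1 : -1);
        float4 clip = mul(float4(corner, 1), viewProj);
        if (clip.w <= 0)
            return int4(0, 0, screenWidth, screenHeight);

        float2 ndc = clip.xy / clip.w;
        minNDC = min(minNDC, ndc);
        maxNDC = max(maxNDC, ndc);
    }

    minNDC = clamp(minNDC, -1, 1);
    maxNDC = clamp(maxNDC, -1, 1);

    // NDC [-1,1] -> pixels; NDC y points up, pixel y points down
    return int4((minNDC.x * 0.5f + 0.5f) * screenWidth,
                (0.5f - maxNDC.y * 0.5f) * screenHeight,
                (maxNDC.x * 0.5f + 0.5f) * screenWidth,
                (0.5f - minNDC.y * 0.5f) * screenHeight);
}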

Quote:
You can reconstruct world space position from a depth buffer by converting it to a linear depth value. See the very end of this article.


Oh, you wrote a third article about this? I didn't notice it.

You mean that code, right?
// Normalize the view ray
float3 viewRay = normalize(input.ViewRay);

// Sample the depth buffer and convert it to linear depth
float depth = DepthTexture.Sample(PointSampler, texCoord).x;
float linearDepth = ProjectionB / (depth - ProjectionA);

// Project the view ray onto the camera's z-axis
float viewZDist = dot(EyeZAxis, viewRay);

// Scale the view ray by the ratio of the linear z value to the projected view ray
float3 positionWS = CameraPositionWS + viewRay * (linearDepth / viewZDist);


That's a lot of work for the GPU (at least for my old one [wink]). Could it be faster than my current code (which works fine with the hardware depth buffer) anyway?
float3 DepthToWorld(float Depth, float2 TexCoord)
{
    float4 clipPos;
    clipPos.x =  2.0 * TexCoord.x - 1.0;
    clipPos.y = -2.0 * TexCoord.y + 1.0;
    clipPos.z = Depth;
    clipPos.w = 1.0;

    float4 PosWS = mul(clipPos, invViewProj);
    PosWS.w = 1.0 / PosWS.w;
    PosWS.xyz *= PosWS.w;

    return PosWS.xyz; // World space
}




Maybe I'm better off with view-space lighting after all... but I don't know anything about it!

How would I have to change my vertex shaders to work with view-space lighting?
I guess I then have to transform the normals by the view matrix instead of the world matrix?
Same for vertex positions? Or do they need some mul()ing with the world matrix as well?

What about the positions of the lights? I think I have read something about a ViewToLight matrix or so. How do I build it?

And how do I have to alter my lighting computation?

I guess all those questions could be answered by a sample or article that shows proper view-space lighting. Let's see if I can find one...
This needn't be complicated or slow. In the pseudocode I posted (intended for rendering a light volume), the camera z basis is the unit vector in world space that points along view-space positive z. This will be either the 3rd column or the 3rd row of your world-to-view matrix, depending on your matrix ordering/"majorness".
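
Concretely, with D3D-style row-vector math (mul(v, M)) that is (sketch):

// World-space direction of the view-space +z axis: 3rd column of the world-to-view matrix.
// With mul(M, v) / column vectors it's the 3rd row (_31, _32, _33) instead.
float3 cameraZBasis = float3(View._13, View._23, View._33);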

You could just as easily transform the input vertex position in world space by the world to view matrix, and use .z of the result for "viewSpaceDepth". Same thing.

In English, you just want to scale the interpolated world offset vector (from eye position to light vertex position) by the ratio of sampled z depth to interpolated view-space z. This is similar-triangles arithmetic: the world offset vector is the hypotenuse, the interpolated view-space depth is its adjacent leg, and the sampled depth is the adjacent leg of the similar triangle. You want to recover the hypotenuse of the triangle whose adjacent leg is the known sampled z depth.
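
Put together for a light sphere sampling a hardware depth buffer, it looks roughly like this (untested sketch; World, ViewProj, CameraPosWS, CameraZBasisWS, ScreenSizeInv and the ProjectionA/ProjectionB terms from the article code above are placeholders for your own constants):

struct VSOut
{
    float4 position       : SV_Position;
    float3 worldPosOffset : TEXCOORD0;   // eye -> vertex, in world space
    float  viewSpaceDepth : TEXCOORD1;   // view-space z of the vertex
};

VSOut LightVolumeVS(float3 posOS : POSITION)
{
    VSOut o;
    float3 worldPos  = mul(float4(posOS, 1), World).xyz;
    o.position       = mul(float4(worldPos, 1), ViewProj);
    o.worldPosOffset = worldPos - CameraPosWS;
    o.viewSpaceDepth = dot(o.worldPosOffset, CameraZBasisWS);
    return o;
}

float3 ReconstructWorldPos(VSOut i)
{
    float2 texCoord    = i.position.xy * ScreenSizeInv;                  // SV_Position is in pixels
    float  depth       = DepthTexture.Sample(PointSampler, texCoord).x;  // hardware depth [0,1]
    float  linearDepth = ProjectionB / (depth - ProjectionA);            // -> linear view-space z
    return CameraPosWS + i.worldPosOffset * (linearDepth / i.viewSpaceDepth);
}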
Quote:Original post by mind in a box
Maybe I'm better off with view-space lighting after all... but I don't know anything about it!

How would I have to change my vertex shaders to work with view-space lighting?
I guess I then have to transform the normals by the view matrix instead of the world matrix?
Same for vertex positions? Or do they need some mul()ing with the world matrix as well?

What about the positions of the lights? I think I have read something about a ViewToLight matrix or so. How do I build it?

And how do I have to alter my lighting computation?

I guess all those questions could be answered by a sample or article that shows proper view-space lighting. Let's see if I can find one...

View space is better for deferred shading/lighting. To get a feel for view space, think of a modelling tool like Blender/Max where you have modelled your whole level and placed your camera somewhere. All positions/normals are in world space. Now move and rotate the complete scene until your camera is at position (0,0,0) and is pointing along the z axis. Now all positions and normals are in view space.

The standard transformation pipeline is
object space --model transform matrix--> world space --inv. camera (= view) matrix--> view space --projection matrix--> screen space

GLSL merges the model and view matrices and makes the result available in the shaders (gl_ModelViewMatrix); in HLSL/D3D11 you concatenate them yourself and pass the combined matrix as a constant. All you need to do is multiply your vertex position and normal (= direction) by this matrix to move your model from model space to view space (done in the vertex shader).
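
In HLSL that boils down to something like this in the geometry-pass vertex shader (a rough sketch; WorldView here is just your world matrix concatenated with the view matrix, and it assumes no non-uniform scaling, otherwise use the inverse-transpose for the normal):

float4x4 WorldView;    // object -> view (world matrix * view matrix)
float4x4 Projection;

struct VSOut
{
    float4 position : SV_Position;
    float3 posVS    : TEXCOORD0;   // view-space position
    float3 normalVS : TEXCOORD1;   // view-space normal
};

VSOut GeometryVS(float3 posOS : POSITION, float3 normalOS : NORMAL)
{
    VSOut o;
    float4 posVS = mul(float4(posOS, 1), WorldView);          // position: w = 1, translation applies
    o.position   = mul(posVS, Projection);
    o.posVS      = posVS.xyz;
    o.normalVS   = mul(float4(normalOS, 0), WorldView).xyz;   // direction: w = 0, rotation only
    return o;
}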

The same has to be done with lights. If your lights are already in world space, you just need to transform the position and direction to view space (use the inverse camera matrix, i.e. the view matrix, for this) before transferring them to your light shader.

Light& myLight = ...

// Transform position, taking translation into account (homogeneous w = 1)
Vec4 lightPosition = InvCameraMatrix * Vec4( myLight.getPosition(), 1 );

// Transform direction, rotation only (homogeneous w = 0)!
Vec4 lightDir = InvCameraMatrix * Vec4( myLight.getDirection(), 0 );

myShader.setLight( lightPosition, lightDir, color... );


You don't need to change your light computation as long as all vectors are in the same space (world or view or tangent space).
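
For example, a point light evaluated entirely in view space looks exactly like its world-space counterpart (sketch; lightPosVS is the light position already multiplied by the view matrix, and the linear falloff is just an example):

float3 toLight = lightPosVS - posVS;                        // both in view space
float  dist    = length(toLight);
float3 L       = toLight / dist;
float  atten   = saturate(1.0 - dist / lightRadius);        // simple linear falloff
float3 diffuse = lightColor * atten * saturate(dot(normalize(normalVS), L));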

