arkangel2803

Deferred Rendering in World Space. Help with other spaces


Hi all,

I am writing this post because I implemented deferred rendering using World Space coordinates. I must admit that I think better in World Space than in other spaces, which explains why I have some precision problems in my deferred rendering technique. I want to show you how I did it, and I hope someone can point me to the right changes to do all the calculations in another space, like eye or screen space.

Deferred Rendering Technique in World Space:
--------------------------------------------

First of all, I will show my layout of the 4 render targets used to store data (also called the G-Buffer):

Render Target 1: Albedo.r Albedo.g Albedo.b NeedIllumination
Render Target 2: Normal.r Normal.g Normal.b Depth
Render Target 3: SpecularPower SpecularExponent <Empty> <Empty>
Render Target 4: LightAccum.r LightAccum.g LightAccum.b SpecularAccum

All these render targets are 16 bits per component. This lets me compute HDR on some components and also compute a glow from this data for later addition.

* The first render target is initialized with the albedo (texture color) of the object. If an object doesn't need illumination (like the sky), a 0.0f is written to the 'NeedIllumination' channel; if it does need illumination, a 1.0f. This allows me to have objects with and without illumination in the same scene.

* In the second render target, I fill the three free channels with normal information. These normals are stored in World Space coordinates using the ((N + 1) * 0.5f) method. In the 'Depth' channel, I store (i.Distance - eye_near) / (eye_far - eye_near), where i.Distance is the distance between eye_pos (the world position of the camera) and pixel_pos (the world position of the pixel).

* The third render target is filled with specular information. For every object, I store its specular power (range 0.0f to 1.0f) and its specular exponent (range 1.0f to 100.0f, multiplied by 0.01f to fit in the channel). Later, of course, I multiply it by 100.0f to get the exponent back in its correct range.

* The fourth and last render target is used for light accumulation, but instead of filling it with 0.0f values, I initialize it with the per-object lightmap information and leave the 'SpecularAccum' channel at 0.0f for later use.

Now that I have explained the render target channels, I will explain how I get all the data in World Space coordinates.

Vertex Shader:
--------------

The geometry itself (vertex data) is not a problem: in the vertex shader I multiply every vertex by a matrix that combines the World, View and Projection matrices with the object's own transformation, I mean its rotations, translations, scaling, etc. A little more work is needed for the object's normals. Normals must be rotated but never translated or scaled, so I multiply them by the object's rotation matrix only. With this, the normals have the correct orientation. Then I need to set up the Tangent and Binormal vectors per vertex, so in the vertex shader I multiply the Tangent vector (coming from the vertex buffer) by the rotation matrix, and the same for the Binormal vector. These three vectors, Normal, Tangent and Binormal, form the matrix called TBN, which lets us transform a normal from the normal map into a World Space normal. So I pass Normal, Tangent and Binormal on to the pixel shader.
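Roughly, the vertex shader looks like this (just a sketch to illustrate the idea; the names are made up for this post, not my exact code):

float4x4 gWorldViewProj; // world * view * projection, including the object's own transform
float4x4 gWorld;         // world transform of the object
float3x3 gRotation;      // rotation part of the object's transform only

struct GBufferVS_Out
{
    float4 posH     : POSITION0;
    float2 tex      : TEXCOORD0;
    float3 worldPos : TEXCOORD1; // world-space position, for depth = distance(eye_pos, pixel_pos)
    float3 normalW  : TEXCOORD2; // world-space normal
    float3 tangentW : TEXCOORD3; // world-space tangent
    float3 binormW  : TEXCOORD4; // world-space binormal
};

GBufferVS_Out GBufferVS(float4 posL     : POSITION0,
                        float3 normalL  : NORMAL0,
                        float3 tangentL : TANGENT0,
                        float3 binormL  : BINORMAL0,
                        float2 tex      : TEXCOORD0)
{
    GBufferVS_Out o;

    // The position goes through the full transform chain.
    o.posH = mul(posL, gWorldViewProj);
    o.worldPos = mul(posL, gWorld).xyz;

    // Normal, tangent and binormal are only rotated: never translated or scaled.
    o.normalW  = mul(normalL,  gRotation);
    o.tangentW = mul(tangentL, gRotation);
    o.binormW  = mul(binormL,  gRotation);

    o.tex = tex;
    return o;
}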
Pixel Shader:
-------------

In the pixel shader, I have the interpolated Normal, Tangent, Binormal and world position for every pixel. With these, and some other data related to texture coordinates, I can compute all four render targets and fill them. For example, I multiply the normal coming from the normal map by the TBN matrix to get that normal in World Space, and I also use this World Space normal to calculate the reflection color of the surface if I need it. More calculations are made, but they are all related to color, specular, etc. If you want to know more, please ask for more information :)

OK, up to this point everything is fine; the problems arise in the next step.

Light Accumulation Step:
------------------------

In this step we have the 4 render targets filled with data, no geometry, and no information other than these 4 render targets. Well... we also have the information of the light we want to accumulate, of course :)

If I want to properly calculate the illumination of a pixel, I need its world position, and for that I only have its depth from eye_pos. The method I use to recover the world position is this: I know eye_pos in World Space, and I know the pixel's depth. This depth can be transformed back into a World Space distance using the near and far plane values; keep in mind that depth is stored in the 0.0f-1.0f range, so multiplying it by (Far_plane - Near_plane) gives a World Space distance from eye_pos. I also need the direction this distance goes in, because I need a point (eye_pos) and a vector (pixel_pos - eye_pos) to find the other point (pixel_pos). To get the direction of every pixel on the screen, I use the near-plane corners. That is, for the light accumulation stage I render a plane made of 2 triangles and 4 vertices, and in every vertex of this plane I store, besides the position, one corner of the near plane in World Space. The method to get these coordinates is simple: you only need to multiply [-1,1,0], [1,1,0], [-1,-1,0], [1,-1,0] by the inverse of the ViewProjection matrix, and these four points transform into the four corners of the near plane in World Space. If we pass these coordinates from the vertex shader to the pixel shader, the graphics card interpolates them for us, and we get a point on the near plane; with this point and eye_pos we can compute a direction vector. With this direction vector, the depth, and eye_pos, all in World Space, we can get pixel_pos in World Space.
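In shader terms, the reconstruction I just described looks more or less like this (again just a sketch with illustrative names):

float3 gEyePos; // camera position in world space
float  gNear;   // near-plane distance
float  gFar;    // far-plane distance

// nearCorner is the interpolated world-space near-plane corner carried by the
// four vertices of the full-screen plane; depth01 is the 0.0f-1.0f value read
// back from the 'Depth' channel of the G-Buffer.
float3 ReconstructWorldPos(float3 nearCorner, float depth01)
{
    // Direction from the eye through this pixel, via the interpolated near-plane point.
    float3 dir = normalize(nearCorner - gEyePos);

    // Depth was stored as (distance - near) / (far - near), so undo that mapping.
    float dist = depth01 * (gFar - gNear) + gNear;

    // point = eye + direction * distance
    return gEyePos + dir * dist;
}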
But now, nightmares appear. The depth value is stored in a 16-bit channel, which means less precision than a 32-bit channel. In practice, I get precision problems when calculating lighting at coordinates far from [0.0f, 0.0f, 0.0f] in the scene. So I'm thinking I will need to store depth in a 32-bit channel, but this reduces my channels from 4 to 2, I mean from 16x16x16x16 to 32x32, because all the render targets must be the same size, in this case 64 bits per render target.

Once you have the world position of the pixel, the light accumulation is simple: just calculate each light's value for every pixel and add it into the same render target with blending enabled, which adds up every color you write. When all the lights have been added on top of the lightmap initialization data, you can multiply these values with the albedo and you have an illuminated scene.

At this point, my understanding is that in World Space I need a 32-bit channel to store depth, and of course I lose channels, while everyone else seems to be using a 16-bit channel because they do all their calculations in some kind of eye/screen space.

So my questions are: what do I need to do to make my calculations eye/screen-space friendly? How can you have reflections without a World Space normal to reflect from? And how do you get normals in eye/screen space?

Thanks for reading all this text and trying to help me :)

LLORENS

I know it doesn't answer your question, but you could rearrange your g-buffer to test whether (lack of) precision is the issue. For example:

Render Target 1 4x16: Albedo.r Albedo.g Albedo.b NeedIllumination
Render Target 2 4x16: Normal.r Normal.g Normal.b Specular Exponent
Render Target 3 2x32: Depth SpecularPower
Render Target 4 4x16: LightAccum.r LightAccum.g LightAccum.b SpecularAccum

Hi B_old

Yes, your suggestion would work and I have thought about it; the bad news is that if I do this, I have 0 channels left for new things like fog or something :)

LLORENS

Isn't fog usually implemented as a post process with a deferred solution?

Slightly back on topic: I wonder whether view-space calculations would solve your precision problem. I do my lighting in view space and still use 32 bits for the depth.

I'm not sure about OpenGL, but in D3D10 you can use MRTs of differing width.

I find that in most cases you want 32 bits for depth. Sometimes you can definitely get away with 16 bits...sometimes you can't.

Using view-space is pretty easy. You just need to:

A) store your normals in view-space too, for convenience
B) reconstruct view-space position from depth instead of world-space position (see the sketch below)
C) transform your light positions/directions so they're also in view-space during your lighting pass
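For B), assuming you store linear view-space depth (viewPos.z divided by the far-clip distance) and have your full-screen quad carry a view-space ray to the far-clip corner in each vertex, the reconstruction is just a multiply. A sketch, not drop-in code:

float gFarClip; // far-clip plane distance

// frustumRay is the interpolated view-space vector from the eye (which is the
// origin in view space) to the far-clip corner for this pixel; linearDepth is
// viewPos.z / gFarClip as stored in the G-Buffer.
float3 ReconstructViewPos(float3 frustumRay, float linearDepth)
{
    // Scaling the ray by the normalized depth lands exactly on the pixel's
    // view-space position, since the ray reaches the far plane at depth 1.
    return frustumRay * linearDepth;
}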

Also, you can probably trim your G-Buffer down a bit more. One common optimization is to store normals as two values, either by storing X and Y and the sign of Z, or by storing spherical coordinates (there's a sketch of the spherical version after the layout below). I used to do something like this in my old deferred renderer:

RT1 (A8R8G8B8): Diffuse albedo in RGB, Emissive factor in A
RT2 (R16G16F) : View-space normal as spherical coordinates
RT3 (R32F) : Depth
RT4 (A8R8G8B8): Specular albedo in R, Specular power in B, two spare channels for other stuff
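The spherical-coordinate packing is just the standard conversion between a unit vector and two angles; something along these lines (a sketch):

static const float PI = 3.14159265f;

// Pack a unit view-space normal into two [0, 1] values.
float2 EncodeSphereMap(float3 n)
{
    float2 sph;
    sph.x = atan2(n.y, n.x) / (2.0f * PI) + 0.5f; // azimuth, remapped to [0, 1]
    sph.y = acos(n.z) / PI;                       // inclination, remapped to [0, 1]
    return sph;
}

// Unpack the two values back into a unit normal.
float3 DecodeSphereMap(float2 sph)
{
    float phi      = (sph.x - 0.5f) * 2.0f * PI;
    float theta    = sph.y * PI;
    float sinTheta = sin(theta);
    return float3(sinTheta * cos(phi), sinTheta * sin(phi), cos(theta));
}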

Quote:
Original post by B_old
I'm not sure about OpenGL, but in D3D10 you can use MRTs of differing width.


Yeah, that's just a requirement of certain hardware. AFAIK the only GPUs that require the same bit width for MRTs are Nvidia's 6 and 7-series.

Hi all, and thanks for your help :)

I will try using a 32-bit depth channel. I was thinking about fog, and I believe I can compute fog from the depth channel alone.
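Something like this is what I have in mind (just a sketch; the fog parameters are illustrative, and gNear/gFar are the same values I use for the reconstruction):

float  gNear;     // near-plane distance
float  gFar;      // far-plane distance
float  gFogStart; // world-space distance where fog begins
float  gFogEnd;   // world-space distance where fog is fully opaque
float3 gFogColor;

float3 ApplyFog(float3 sceneColor, float depth01)
{
    // Recover the world-space distance from the stored 0.0f-1.0f depth.
    float dist = depth01 * (gFar - gNear) + gNear;

    // Simple linear fog factor.
    float fog = saturate((dist - gFogStart) / (gFogEnd - gFogStart));

    return lerp(sceneColor, gFogColor, fog);
}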

By the way, do you know why I must set the Near_plane distance to exactly 1.0f? If I set any other distance, greater or less than 1.0f, I can't get correct graphics on the screen :( It's bothering me because I don't know how to fix it.

Thanks in advance.

LLORENS

Hi,

I also want to do the same thing as the OP, in other words, to perform deferred shading in view space. I haven't gotten into any deferred stuff yet, but I'm already having trouble doing a normal lighting calculation in view space for directional lighting.

To answer the question of transforming normals to view space, this is how I did it:


float4x4 gWorldView;  // world-view matrix
float4x4 gProjection; // projection matrix
float4 gEyePosV;      // position of the camera in view space
float4 gLightDirV;    // direction of the directional light in view space

struct OutputVS
{
    float4   posH          : POSITION0;
    float2   texCoord      : TEXCOORD0;
    float4   aoCoord       : TEXCOORD1;
    float3   viewPosition  : TEXCOORD2; // position in view space
    float3x3 tangentToView : TEXCOORD3; // transform from tangent to view space (takes TEXCOORD3-5)
};

OutputVS NormalMapVS(float3 posL      : POSITION0,
                     float3 tangentL  : TANGENT0,
                     float3 binormalL : BINORMAL0,
                     float3 normalL   : NORMAL0,
                     float2 tex0      : TEXCOORD0)
{
    // Zero out our output.
    OutputVS outVS = (OutputVS)0;

    float4 vViewPosition;
    float3x3 TBN;

    // Transform the position into view space.
    vViewPosition = mul(float4(posL, 1.0f), gWorldView);

    // Rotate the tangent frame into view space (3x3 part only, so no translation).
    TBN[0] = mul(tangentL,  (float3x3)gWorldView);
    TBN[1] = mul(binormalL, (float3x3)gWorldView);
    TBN[2] = mul(normalL,   (float3x3)gWorldView);

    // Project view space to clip space.
    outVS.posH = mul(vViewPosition, gProjection);

    // Pass on texture coordinates to be interpolated in rasterization.
    outVS.texCoord = tex0;

    // Pass the view-space position through an interpolator.
    outVS.viewPosition = vViewPosition.xyz;

    // Pass the tangent-to-view matrix.
    outVS.tangentToView = TBN;

    // Pass the occlusion texture coordinate.
    outVS.aoCoord = outVS.posH;

    // Done--return the output.
    return outVS;
}

For the gLightDirV, I compute it in the main program and pass it to the shader like this:


D3DXVECTOR4 lightDirInView;
// Transform the world-space light direction into view space.
D3DXVec3Transform(&lightDirInView, &mLight.dirW, &gCamera->view());
HR(mNormalFX->SetValue(mhColumnLightDirV, &lightDirInView, sizeof(D3DXVECTOR4)));

Now this is the pixel shader:


float4 NormalMapPS(float2   TexCoord      : TEXCOORD0,
                   float4   AOCoord       : TEXCOORD1,
                   float3   ViewPosition  : TEXCOORD2,
                   float3x3 TangentToView : TEXCOORD3) : COLOR
{
    // Interpolated vectors can become denormalized--so normalize.
    float3 vPointToCamera = normalize(gEyePosV.xyz - ViewPosition);
    float3 vLightDirection = normalize(gLightDirV.xyz);

    // Sample the normal map.
    float3 normalT = tex2D(NormalMapS, TexCoord).rgb;

    // Expand from the [0, 1] compressed interval to the true [-1, 1] interval.
    normalT = 2.0f * normalT - 1.0f;

    // Transform it to view space and make it a unit vector.
    float3 viewNormal = normalize(mul(normalT, TangentToView));

    // Compute the reflection vector.
    float3 r = reflect(vLightDirection, viewNormal);

    // Determine how much (if any) specular light makes it into the eye.
    float t = pow(max(dot(r, vPointToCamera), 0.0f), gMtrl.specPower);

    // Determine the diffuse light intensity that strikes the pixel.
    float s = max(dot(-vLightDirection, viewNormal), 0.0f);

    // If the diffuse light intensity is low, kill the specular lighting term.
    // It doesn't look right to add specular light when the surface receives
    // little diffuse light.
    if(s <= 0.0f)
        t = 0.0f;

    // Get the occlusion factor.
    float fOcclusion = GetOcclusion(AOCoord);

    // Compute the ambient, diffuse and specular terms separately.
    float3 spec    = t*(gMtrl.spec*gLight.spec).rgb;
    float3 diffuse = s*(gMtrl.diffuse*gLight.diffuse).rgb;
    float3 ambient = (gMtrl.ambient*gLight.ambient).rgb*fOcclusion;

    // Get the texture color.
    float4 texColor = tex2D(TexS, TexCoord);

    // Combine the color from lighting with the texture color.
    float3 color = (ambient + diffuse)*texColor.rgb + spec;

    // Output the color and the alpha.
    return float4(color, gMtrl.diffuse.a*texColor.a);
}

When I run this, my directional lighting changes as I move the camera around. I think it's because the view matrix changes, which also changes the direction of the light. How can I fix this? Sorry for the long post.

You only want to rotate your light direction by your view matrix... what you're doing right now is also translating it, because D3DXVec3Transform treats the input as a point with w = 1. Use D3DXVec3TransformNormal instead; it transforms the vector with w = 0, so only the rotation part of the matrix is applied.


I too am having trouble with directional light in view space for deferred shading. For me it's the specular power that is killing me, which tells me that something is wrong with my camera position? I just can't put my finger on it though. Here is a pic.

Here is my source; I tried to comment it as best I could.


void LightV( float4 position : POSITION,
             float2 texcoord : TEXCOORD0,

             out float4 oPosition : POSITION,
             out float2 oTex : TEXCOORD0,
             out float4 oPos : TEXCOORD1,
             out float4x4 View,

             uniform float4x4 ModelViewProj,
             uniform float4x4 ModelView,
             uniform float4 lightPosition )
{
    oPosition = mul(ModelViewProj, position);
    oTex = texcoord;
    // gets mult'ed in PS
    oPos = lightPosition; //mul(ModelView, lightPosition);
    View = ModelView;
}

void LightF( float4 color : COLOR,
             float3 lightpos : TEXCOORD1,
             float3 ws : WPOS,

             out float4 oColor : COLOR,
             float4x4 ModelViewI,

             float2 texcoord : TEXCOORD0,

             uniform float3 lightColor,
             //uniform float3 lightPosition,
             uniform float4 Ke,
             uniform float4 Ka,
             uniform float4 Kd,
             uniform float4 Ks,
             uniform float kC,
             uniform float kL,
             uniform float kQ,

             uniform sampler2D pos,
             uniform sampler2D nor,
             uniform sampler2D dif,
             uniform float4x4 ModelView,
             uniform float4x4 ModelViewIT,

             uniform float3 eyeP,
             uniform float2 WinSize )
{
    float4 ambient  = float4(0.0, 0.0, 0.0, 0.0);
    float4 diffuse  = float4(0.0, 0.0, 0.0, 0.0);
    float4 specular = float4(0.0, 0.0, 0.0, 0.0);

    // Grab the window size of the G-buffer for texture lookup
    ws.xy = ws.xy / WinSize;
    // Sample the G-buffer
    float4 position4 = tex2D(pos, ws);
    float4 normal4   = tex2D(nor, ws);
    float4 diffuse4  = tex2D(dif, ws);

    // only rotate the light direction.. since we are in lighting mode, ModelViewIT is just ViewMatrixIT
    lightpos = mul( float3x3(ModelViewIT), lightpos );

    // In view space, eye position is the 0,0,-1 right??????
    eyeP = -position4.xyz;

    float3 position3 = position4.xyz;
    // unpack from G-buffer?
    float3 normal3 = 2.0 * float3(normal4.xyz) - 1.0;
    float3 diffuse3 = diffuse4.xyz;
    // extras
    float3 sem = float3(position4.w, normal4.w, diffuse4.w);
    // not used yet.
    float4 emission = sem.yyyy;
    float shininess = sem.x;
    // directional light only gets a direction, not a light position
    float3 lightDir = normalize(lightpos.xyz);

    // get the reflection vector of the light direction and the normal
    float r = reflect(lightDir, normal3);

    //float3 halfAngle = normalize(lightDir + eyeP);
    float nDotLD = max(0.0, dot(normal3, lightDir));
    //float nDotHA = max(0.0, dot(normal3, halfAngle));

    float specDot = max(0.0, dot(r, normalize(eyeP.xyz - position3)));
    float spec_power = pow(specDot, shininess);

    /*float spec_power;*/
    if(nDotLD <= 0.0) {
        spec_power = 0.0; }

    float4 LightColor = float4(lightColor.xyz, 0.0f);
    ambient = Ka;
    diffuse = Kd * LightColor * nDotLD; // * attenuation; No attenuation for directional light.
    specular += Ks * LightColor * spec_power; // * attenuation; // specular power still causing black artifacts

    float4 colorr;
    colorr = float4(diffuse3, 1.0);

    oColor = (emission + ambient + diffuse)*colorr + specular;

    //oColor = float4(spec_power);// + 0.5; //for debugging purposes.
}

EDIT: Just some additional notes... my G-buffer stores position, normals, and the diffuse texture. I know my position could be optimized, but I'm trying to get a working foundation before I hack it up again (MJP, I have read your article but I'm not quite there yet =) ).
Thanks for any help!

Hi MJP,

Thank you so much! You were spot on with D3DXVec3TransformNormal. How do you know all this stuff XD?

So my direct lighting in view space works now. For deferred shading, I read around and saw that I should implement light volumes. I understand that for:

- Sun light: draw a full-screen quad

- Omni light: draw a sphere

- Spot light: draw a cone

But I have no idea how to actually implement it. Let's say for an omni light, how do I set up a sphere that represents its lighting area?
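The closest I can picture is drawing a unit sphere mesh scaled by the light's radius, with a vertex shader something like this (just my guess at a sketch; the names are made up, and I don't know if this is right):

float4x4 gViewProj;  // view * projection
float3 gLightPosW;   // light position in world space
float  gLightRadius; // distance at which the light's contribution reaches zero

float4 LightVolumeVS(float3 posL : POSITION0) : POSITION
{
    // Scale the unit sphere by the light's radius and move it to the light,
    // so that only the pixels the light can actually reach get shaded.
    float3 posW = posL * gLightRadius + gLightPosW;
    return mul(float4(posW, 1.0f), gViewProj);
}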
