
## 16 posts in this topic

Do I have to render the scene twice in deferred shading? I.e. once for the normal buffer and once for the color buffer?

##### Share on other sites

Not at all. Conveniently, you can output multiple colors (i.e. write to multiple render targets) at the same time.

As an example, in DirectX, when you set the render targets, you can pass an array of render target pointers, and then in the pixel shader you would do the following (off the top of my head, so there may be small mistakes):

struct PixelOut
{
    float4 Color  : SV_Target0;
    float4 Normal : SV_Target1;
};

PixelOut PS(...)
{
    // "out" is a reserved keyword in HLSL, so use another variable name.
    PixelOut output = (PixelOut)0;
    // ... do the usual stuff, and fill output.Color and output.Normal ...
    return output;
}
Now, I haven't done much in OpenGL, so I'm not familiar with the syntax there.

Hope this helps.
-MIGI0027
Thanks. Also, I use DirectX 11, not OpenGL. :p
Edit: how does normal mapping fit into deferred shading?

##### Share on other sites

Normal mapping works perfectly fine with deferred shading. You may output an interpolated vertex normal or a normal from a texture to the render target. The only thing you'll need to decide is the common space where you want to do your lighting (typically either view space or world space). That means you'll have to transform the normal from tangent space to the desired space.
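A minimal sketch of that tangent-to-world transform, assuming the vertex shader passes interpolated world-space normal and tangent vectors (function and parameter names here are illustrative, not from any particular codebase):

```hlsl
// Unpack a tangent-space normal map sample and move it to world space.
float3 NormalSampleToWorldSpace(float3 normalMapSample, float3 unitNormalW, float3 tangentW)
{
    // Uncompress the sample from [0,1] to [-1,1].
    float3 normalT = 2.0f * normalMapSample - 1.0f;

    // Build an orthonormal TBN basis (re-orthogonalize the interpolated tangent).
    float3 N = unitNormalW;
    float3 T = normalize(tangentW - dot(tangentW, N) * N);
    float3 B = cross(N, T);
    float3x3 TBN = float3x3(T, B, N);

    // Transform from tangent space to world space.
    return mul(normalT, TBN);
}
```

The world-space result (or its view-space equivalent) is what you then write into the g-buffer.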

Cheers!

Typically you'd use more than 2 render targets, to store additional variables needed for the lighting calculations such as roughness/shininess, specular factor, metalness... whatever is required by your lighting system.

Edited by kauna

##### Share on other sites

Do I have to render the scene twice in deferred shading?

There are two similar techniques, deferred shading and deferred lighting.

deferred rendering:

1. Render the geometry (scene) to multiple buffers (color, material, normals, depth). The buffers holding the normals/depth are called the g-buffer (for geometry buffer), because you can reconstruct the visible 3D scene from them.

2. Render multiple post-processing passes on top of the g-buffer (remember, you can reconstruct the 3D world from it); one of the most important post-processing passes is the lighting pass.

deferred lighting:

1. Render the geometry to multiple buffers, but do not render color or material information (only normals/depth).

2. Render a lighting post-processing pass; this writes all lighting information to a new buffer.

3. Render the geometry a second time, this time combining the color/material of your models with the light buffer.

4. Render some post-processing passes.

Deferred rendering only needs one geometry pass, whereas deferred lighting needs two. On the other hand, deferred lighting has better lighting performance (you combine light/color/material only in the last step) and fewer material limitations.
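For illustration, step 3 of deferred lighting (the second geometry pass) could look roughly like this. This is only a sketch: `gLightBuffer`, `gScreenSize`, and the shader signature are hypothetical names, assuming the lighting pass wrote to a screen-sized texture:

```hlsl
// Hypothetical combine pass for deferred lighting (step 3).
Texture2D gLightBuffer;   // screen-sized output of the lighting pass (step 2)
Texture2D gDiffuseMap;    // full-detail material color of the model
SamplerState samLinear;

cbuffer cbPerFrame
{
    float2 gScreenSize;   // render target dimensions in pixels
};

float4 CombinePS(float4 posH : SV_POSITION, float2 tex : TEXCOORD) : SV_Target
{
    // SV_POSITION in a pixel shader holds pixel coordinates, so dividing
    // by the screen size gives the UV into the screen-sized light buffer.
    float2 screenUV = posH.xy / gScreenSize;
    float3 light = gLightBuffer.Sample(samLinear, screenUV).rgb;

    // Combine the material color with the pre-computed lighting.
    float3 albedo = gDiffuseMap.Sample(samLinear, tex).rgb;
    return float4(albedo * light, 1.0f);
}
```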

Edited by Ashaman73

##### Share on other sites
Thanks for all this; I've another question. Will I still have to render objects twice? Once for the g-buffers and once normally?

##### Share on other sites

Thanks for all this; I've another question. Will I still have to render objects twice? Once for the g-buffers and once normally?

No, under the deferred rendering system described by Ashaman73, you render your objects once to fill the gbuffers, then the post-processing steps are all done as 2D passes.


##### Share on other sites

Thanks, I've started implementing deferred shading.

(The color is weird because it's the red channel ONLY!)


##### Share on other sites
Now I have implemented deferred shading, but where do I store material and light properties like diffuse, specular, ambient, reflect, and specular power? Also, how do I use multiple lights with deferred shading?

##### Share on other sites

Now I have implemented deferred shading, but where do I store material and light properties like diffuse, specular, ambient, reflect, and specular power?

Well, the diffuse color goes in your color buffer; that's what it's for. Specular color and power can be stuffed wherever you have spare channels in your g-buffer. It's a bit of an art to minimize the size of your g-buffer while still getting the flexibility you need. Mine consists of 3 32-bit buffers:

(1) Albedo (RGB) and an emissive value (A)

(2) Normal (RG), specular power (B), specular intensity (A) // I don't support a specular RGB color, just a single intensity value

(3) Depth (RG), object id (B), baked occlusion term (A)

I've managed to squeeze the normals into two 8-bit channels (using a spheremap transform), and a 16-bit depth value into two 8-bit channels.

To start with, though, I wouldn't bother with these optimizations; just make the buffers you need. If you just need spec power and a single spec intensity value, you could stuff them into the alpha channels of your color and normal buffers.
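The three-target layout above could be declared like this in HLSL (just a sketch of the struct; the names and exact channel packing are up to you):

```hlsl
// One possible g-buffer output struct matching the layout described above.
struct GBufferOut
{
    float4 Target0 : SV_Target0; // Albedo (RGB), emissive value (A)
    float4 Target1 : SV_Target1; // Encoded normal (RG), spec power (B), spec intensity (A)
    float4 Target2 : SV_Target2; // Packed depth (RG), object id (B), baked occlusion (A)
};
```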

As for "ambient" and "reflect", I'm not sure what you mean. If these are light properties, they don't belong in the g-buffer. They are applied as you draw each light in your lighting pass (unless you're making a light pre-pass renderer (sometimes called deferred lighting), in which case you need to make a sort of g-buffer for lighting properties).

Also how do i use multiple lights with deferred shading?

You draw each light separately and additively blend them into your destination render target (your light accumulation buffer).
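Since this thread is using the effects framework (technique11), the additive blend can be set right in the lighting technique. A sketch (the state-object and technique names are illustrative):

```hlsl
// Additive blend state declared in Effects 11 syntax:
// dest = src * 1 + dest * 1, i.e. each light adds its contribution.
BlendState AdditiveBlend
{
    BlendEnable[0] = TRUE;
    SrcBlend  = ONE;
    DestBlend = ONE;
    BlendOp   = ADD;
};

technique11 LightPass
{
    pass P0
    {
        // Every light drawn with this technique accumulates into the
        // bound light accumulation buffer.
        SetBlendState(AdditiveBlend, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xffffffff);
    }
}
```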


##### Share on other sites

I know about specular power, but what is specular intensity? I know its general meaning, but I never used it in my old lighting. Also, I don't know about the emissive value. And why do you store depth in 2 channels? Isn't it sufficient to store it in a single channel? Also, your normal is in 2 channels, so where is the z component of the normal? And why an object id?

Sorry for too many questions, but I'm confused.

EDIT: I understood specular intensity, but what about the other terms?

Edited by newtechnology

##### Share on other sites

Also, I don't know about the emissive value

It's for when an object emits light (so it would show up even if no lights were shining on it).

And why do you store depth in 2 channels? Isn't it sufficient to store it in a single channel?

Not if I'm using an 8-bit-per-channel render target format. But to start off with, I would just use an R32F format (32-bit floating point, single channel).
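If you do later want to pack a depth value into two 8-bit channels, one common trick looks like this (a hedged sketch; function names are made up, and this assumes depth is already normalized to [0,1)):

```hlsl
// Pack a [0,1) depth value into two 8-bit channels (high byte, low byte).
float2 PackDepth16(float depth)
{
    float2 packed = float2(depth, frac(depth * 255.0f));
    packed.x -= packed.y / 255.0f; // remove the part now stored in the low byte
    return packed;
}

// Recombine the two channels back into a single depth value.
float UnpackDepth16(float2 packed)
{
    return packed.x + packed.y / 255.0f;
}
```

For example, depth = 0.5 packs to roughly (127/255, 0.5), and unpacking adds the two parts back to 0.5.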

Also, your normal is in 2 channels, so where is the z component of the normal?

Normals are of unit length, so you only need to store 2 values. You can reconstruct the 3rd value in the shader, since you know the length of the normal is 1. (see http://aras-p.info/texts/CompactNormalStorage.html#method01xy)
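The simple x/y variant of that reconstruction (method #1 in the linked article) can be sketched like this. Note the caveat in the comments: it loses the sign of z, so it is only an approximation unless you use something like the spheremap transform mentioned above:

```hlsl
// Reconstruct the z component of a unit normal from its stored x and y.
// Caveat: this assumes z >= 0 (normals roughly facing the camera in view
// space), since the square root discards the sign of z.
float3 UnpackNormalXY(float2 stored)
{
    float2 n = stored * 2.0f - 1.0f;            // [0,1] -> [-1,1]
    float z = sqrt(saturate(1.0f - dot(n, n))); // |normal| == 1
    return float3(n, z);
}
```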

But basically, get stuff working, and worry about all these kinds of optimizations at a later time.


##### Share on other sites

OK, so I implemented lighting from this tutorial: http://www.catalinzima.com/xna/tutorials/deferred-rendering-in-xna/directional-lights/

Why do I have wrong specular highlights? The directional light's direction is (0.0f, -1.0f, 0.0f).

It is straight down, so why are the specular highlights wrong (pointing up)?

My deferred renderer:

#include "LightHelper.fx"

//================================
//DeferredRenderer: Renders lighting info to gbuffers
//DefferedRenderer.fx by newtechnology
//================================

cbuffer cbPerObject
{
float4x4 gWorld;
float4x4 gWorldViewProj;
float4x4 gWorldInvTranspose;
float4x4 gTexTransform;

Material gMaterial;
};

cbuffer cbFixed
{
const float AOFactor = 0.4f;
const float SpecIntensity = 0.8f;
const float specPower = 0.5;
};

struct VertexIn
{
float3 PosL : POSITION;
float3 NormalL : NORMAL;
float2 Tex : TEXCOORD;
float3 TangentL : TANGENT;
};

struct VertexOut
{
float4 PosH : SV_POSITION;
float3 PosW : POSITION;
float3 NormalW : NORMAL;
float2 Tex : TEXCOORD;
float3 TangentW : TANGENT;
};

SamplerState samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
};

Texture2D gDiffuseMap;
Texture2D gNormalMap;

VertexOut VS(VertexIn vin)
{
VertexOut vout;

vout.PosW     = mul(float4(vin.PosL, 1.0f), gWorld).xyz;
vout.NormalW  = mul(vin.NormalL, (float3x3)gWorldInvTranspose);
vout.TangentW = mul(vin.TangentL, (float3x3)gWorld);
vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);
vout.Tex = mul(float4(vin.Tex, 0.0f, 1.0f), gTexTransform).xy;

return vout;
}

struct PixelOut
{
float4 Color : SV_Target0;
float4 Normal : SV_Target1;
float4 Position : SV_Target2;
};

PixelOut PS(VertexOut pin) : SV_Target
{
PixelOut Out;

pin.NormalW = 0.5f * (normalize(pin.NormalW) + 1.0f);

float3 color = gDiffuseMap.Sample(samLinear, pin.Tex).rgb;

//output lighting info to gbuffer
Out.Color = float4(color, specPower);
Out.Normal = float4(pin.NormalW, SpecIntensity);
Out.Position = float4(pin.PosW, AOFactor);

return Out;
}

technique11 BuildGBuffers
{
pass P0
{
}
}

My Directional light:

#include "LightHelper.fx"

SamplerState samPoint
{
Filter = MIN_MAG_MIP_POINT;
};

cbuffer cbPerObject
{
float4x4 gWorldViewProj;
};

cbuffer cbPerFrame
{
float3 gEyePosW;
Material gMaterial;
DirectionalLight gLight;
};

//gbuffers
Texture2D gColorMap;
Texture2D gNormalsMap;
Texture2D gPositionMap;

struct VertexIn
{
float3 PosL : POSITION;
float2 tex : TEXCOORD;
};

struct VertexOut
{
float4 PosH : SV_POSITION;
float2 tex : TEXCOORD;
};

VertexOut VS(VertexIn vin)
{
VertexOut vout;

vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);

//no transformation required here
vout.tex = vin.tex;

return vout;
}

float4 PS(VertexOut pin) : SV_Target
{
//Sample textures (gbuffers)
float4 color = gColorMap.Sample(samPoint, pin.tex);
float4 normal = gNormalsMap.Sample(samPoint, pin.tex);
float4 position = gPositionMap.Sample(samPoint, pin.tex);

//extract material properties from alpha channel of all 3 textures
float specPower = color.a * 255;
float specIntensity = normal.a;
float AOFactor = position.a;

float3 normalW = 2.0f * normal.rgb - 1.0f;

float3 lightVector = float3(0.0f, 1.0f, 0.0f); // -(0.0f, -1.0f, 0.0f) = (0.0f, 1.0f, 0.0f); -> -(-1) = +1
lightVector = normalize(lightVector);

float NdL = saturate(dot(lightVector, normalW));

float3 DirectionToCamera = normalize(gEyePosW - position.xyz);

float3 reflectionVector = normalize(reflect(lightVector, normalW));

float SpecularLight = specIntensity * pow(saturate(dot(reflectionVector, DirectionToCamera)), specPower);

// float3(1.0f, 1.0f, 1.0f) is light's color
float3 DiffuseLight = (NdL * float3(1.0f, 1.0f, 1.0f)) + AOFactor;

float3 finalcolor = color.rgb * DiffuseLight + SpecularLight;

return float4(finalcolor, 1.0f);
}

technique11 DeferredLighting
{
pass P0
{
}
};
Edited by newtechnology

##### Share on other sites

It is straight down, so why are the specular highlights wrong (pointing up)?

You need to use the vector pointing to the light source, similar to the vector pointing to the camera. Therefore the sunlight direction needs to be reversed.


##### Share on other sites

It is straight down, so why are the specular highlights wrong (pointing up)?

You need to use the vector pointing to the light source, similar to the vector pointing to the camera. Therefore the sunlight direction needs to be reversed.

I do invert it in the pixel shader; look at the source code.


##### Share on other sites

I do invert it in the pixel shader; look at the source code.

Invert it again :-). reflect(L, N) will produce something that points (roughly) in the opposite direction of L with respect to the normal, i.e. if L points towards the surface, reflect will return something that points away.
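Concretely, using the variable names from the directional-light shader posted above, one way to fix it (a sketch) is to keep the original downward direction for reflect() and the negated one for N·L:

```hlsl
float3 lightDir = float3(0.0f, -1.0f, 0.0f); // direction the light travels (downward)
float3 toLight  = -lightDir;                 // from the surface toward the light

// N·L uses the vector pointing toward the light.
float NdL = saturate(dot(toLight, normalW));

// reflect() expects the incident vector (pointing toward the surface),
// so pass lightDir here, not toLight.
float3 reflectionVector = normalize(reflect(lightDir, normalW));
float SpecularLight = specIntensity *
    pow(saturate(dot(reflectionVector, DirectionToCamera)), specPower);
```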

