Deferred Render structure.

korvax    309

Hi,

 

I render each object to the G-Buffer. After that I should render each light in the scene to the light buffer, and the results should be blended, so that the light buffer is a blended combination of all lights in the scene. Correct?

 

So how do I blend the light-buffer render passes? Is this something that I should program in the shader world or in the CPU world? And what would a typical, very simple light buffer look like? Should that just be a pixel shader?

 

Also, currently my setup works fine for 1 light (:-P), and it goes something like this.

 

1) Render all objects to the G-Buffer one by one; each object has the G-Buffer shader as its primary shader.

2) Render a quad with the light-buffer shader, with a light as input. If I start rendering 1000 lights I can't render the quad 1000 times, so somehow I should just run the "light shader" 1000 times without any geometry, and then, as a third and final step, render a quad and apply the blended result from the light shader to it. How would I do that?
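In case it helps to see the arithmetic behind the "blending" question: with additive blending (source and destination blend factors both ONE), rendering one full-screen pass per light accumulates, per pixel, the sum of each light's contribution, which is exactly what a single pass that loops over all lights computes. A small Python sketch of that equivalence (not shader code; the normal, albedo, and lights are made-up values, assuming simple Lambert diffuse):

```python
def saturate(x):
    return max(0.0, min(1.0, x))

def lambert(normal, light_dir, light_color, albedo):
    # One light's contribution to one pixel: saturate(N . -L) * lightColor * albedo
    n_dot_l = saturate(-sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * a * n_dot_l for c, a in zip(light_color, albedo))

# Hypothetical G-buffer sample for a single pixel
normal = (0.0, 1.0, 0.0)
albedo = (0.8, 0.8, 0.8)
lights = [((0.0, -1.0, 0.0), (1.0, 0.0, 0.0)),   # red light from above
          ((0.0, -1.0, 0.0), (0.0, 0.5, 0.0))]   # dim green light from above

# Option 1: one additive full-screen pass per light (what ONE/ONE blending does)
accum = (0.0, 0.0, 0.0)
for direction, color in lights:
    contrib = lambert(normal, direction, color, albedo)
    accum = tuple(a + c for a, c in zip(accum, contrib))

# Option 2: a single pass that loops over all lights inside the shader
single = tuple(sum(lambert(normal, d, c, albedo)[i] for d, c in lights)
               for i in range(3))

assert all(abs(a - s) < 1e-12 for a, s in zip(accum, single))
```

Either way the per-pixel result is the same; the single-loop form just avoids re-reading the G-buffer once per light.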

// Lightbuffer
//--------------------------------------------------------------------------------
Texture2D txDiffuse  : register(t0);
Texture2D txNormal   : register(t1);
Texture2D txDepth    : register(t2);
Texture2D txSpecular : register(t3);
//--------------------------------------------------------------------------------
SamplerState samLinear0 : register( s0 );
SamplerState samLinear1 : register( s1 );
SamplerState samLinear2 : register( s2 );
SamplerState samLinear3 : register( s3 );
//--------------------------------------------------------------------------------
cbuffer cbLight : register(b0)
{
	float4 lightDirection;
	float4 lightColor;
	float4 lightRange;
}
//--------------------------------------------------------------------------------
struct PS_INPUT
{
	float4 position : SV_POSITION;
	float2 texcoord : TEXCOORD0;
};
//--------------------------------------------------------------------------------
float4 main(PS_INPUT input) : SV_Target
{
	float4 diffuse  = txDiffuse.Sample(samLinear0, input.texcoord);
	float4 normal   = txNormal.Sample(samLinear1, input.texcoord);
	float4 depth    = txDepth.Sample(samLinear2, input.texcoord);
	// Bug fix: this originally sampled txDepth instead of txSpecular.
	float4 specular = txSpecular.Sample(samLinear3, input.texcoord);
	// Use only xyz so the w channel doesn't skew the dot product.
	float irradiance = saturate(dot(normal.xyz, -lightDirection.xyz));
	return lightColor * irradiance * diffuse;
}


// G-BUFFER
Texture2D txDiffuse : register( t0 );
SamplerState samLinear : register( s0 );

cbuffer cbPerObject : register(b0)
{
	float4 diffuse;
	float4 specular;
	bool isTextured;
}

//--------------------------------------------------------------------------------------
struct PS_INPUT
{
	float4 position : SV_POSITION;
	float2 texcoord : TEXCOORD0;
	float4 normal   : NORMAL;
};
//--------------------------------------------------------------------------------------
struct PSOutput
{
	float4 Color    : SV_Target0;
	float4 Normal   : SV_Target1;
	float4 Depth    : SV_Target2;
	float4 Specular : SV_Target3;
};
//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
PSOutput main(PS_INPUT input)
{
	PSOutput output;
	output.Color = diffuse * txDiffuse.Sample(samLinear, input.texcoord);
	// Note: if the normal target is an unsigned format, negative components
	// will clamp to zero; pack with n * 0.5 + 0.5 in that case.
	output.Normal = normalize(input.normal);
	output.Specular = specular;
	// Note: SV_POSITION.z is already perspective-divided by the time it reaches
	// the pixel shader, so dividing by w again is likely a bug; the usual fix is
	// to pass the clip-space position in a separate interpolant and do z/w there.
	output.Depth = input.position.z / input.position.w;
	return output;
}
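One detail worth calling out about the Normal target above: if it is an unsigned 8-bit render target, the negative halves of the normal components are silently lost. A common convention is to remap into [0,1] on write and invert the remap on read (the "* 2.0 - 1.0" seen in the GLSL snippet later in this thread). A Python sketch of that remapping:

```python
def pack_normal(n):
    # Remap each [-1, 1] component into [0, 1] so it survives an unsigned target
    return tuple(c * 0.5 + 0.5 for c in n)

def unpack_normal(p):
    # Inverse remap applied when sampling the normal buffer ("* 2.0 - 1.0")
    return tuple(c * 2.0 - 1.0 for c in p)

n = (0.0, 1.0, 0.0)
packed = pack_normal(n)          # (0.5, 1.0, 0.5)
assert unpack_normal(packed) == n
```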

Any help is much appreciated.

pressgreen    678

Deferred lighting, from what I understand, is done like this.

 

First pass, render to texture:

- Color buffer
- Depth buffer
- Normals buffer

Second pass, calculate lighting:

- Using the above buffers' information, calculate ALL lights and their influence on the scene in ONE shader.
- Render the output to ONE full-screen quad.

 

 

I would say render the scene without your lights; then, using the normal and depth buffer information from your first pass, calculate the lighting for all your lights in the second pass, combine the two in the fragment shader, and present the result on one full-screen quad. If that does not make sense, here is a well-written example that shows how this is done with multiple point lights.

Edited by greenzone

korvax    309

Hi,

But how do I render x number of lights with a pixel shader without any geometry, and then merge the results from the light shaders and display them on a quad?

I found a lot of tutorials and articles that discuss the G-buffer, but I haven't found much on the final merging part.

 

Will have a look at your link, by the way. Thanks.

pressgreen    678

You render the geometry first, without the lights, to a texture, only with color and texture. You THEN take the information you just gathered in the three buffers (Color, Depth, and Normals) and pass it to your deferred lighting shader. In this shader you calculate your lighting effects using the lights' positions, directions, intensities, and so on, then texture a full-screen quad with the result. In your case, your "light shader" would be the "deferred lighting shader", with all of your lights' information being calculated against the scene's Color, Depth, and Normal buffer information.

Edited by greenzone

korvax    309
You render the geometry first without the lights. Only with color and texture. You THEN take that information you just gathered in the three buffers Color, Depth, and Normals and pass it to your deferred lighting shader.

I'm already doing that. My problem is:

 

Let's say I have 100 lights. As I understand it, I need to execute the light shader 100 times, once for each light, correct? And how do I do this in practice: how do I execute a pixel shader (which is what the light shader is) without having any geometry bound to it? I'm a bit lost here. I understand how it should work in principle, but not in practice.

pressgreen    678

You don't need to execute the light shader 100 times. You only need to pass the one light shader the positions of your 100 lights, then calculate all 100 light positions against the Color, Depth, and Normal values you gathered in your first pass.

satanir    1452

Check this tutorial: part 1, part 2, and part 3.

 

Let's say you are doing basic diffuse lighting. For that you need to know:

- World normal.

- The diffuse color.

 

A forward renderer will do it like this:

- The VS will calculate the normal.
- The PS gets the normal and tex-coord as input.
- The PS samples the diffuse texture for the diffuse color.
- output = saturate(dot(normal, lightvec)) * diffuse.
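To make that last line concrete: the saturate(dot(...)) term is the Lambert cosine factor, and the clamp is what keeps lights behind a surface from subtracting light. A quick Python check with made-up vectors:

```python
import math

def saturate(x):
    return max(0.0, min(1.0, x))

def diffuse_term(normal, lightvec):
    # saturate(dot(normal, lightvec)) from the formula above
    return saturate(sum(n * l for n, l in zip(normal, lightvec)))

up = (0.0, 1.0, 0.0)
# Light directly overhead: full intensity
assert diffuse_term(up, (0.0, 1.0, 0.0)) == 1.0
# Light 60 degrees off the normal: cos(60 deg) = 0.5
assert abs(diffuse_term(up, (0.0, 0.5, math.sqrt(3) / 2)) - 0.5) < 1e-9
# Light behind the surface: clamped to zero rather than going negative
assert diffuse_term(up, (0.0, -1.0, 0.0)) == 0.0
```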

 

In contrast, in deferred shading you will have 2 passes:

Geometry pass:

- VS calculates normal

- PS gets normal

- PS samples for diffuse color

- PS outputs both values to the MRT

 

Lighting pass:

- Draw full screen quad. VS is pass-through, only output SV_Position and TexC.

Pixel shader pseudo-code:

Normal = GBuffer.Normal.sample(TexC)
Diffuse = GBuffer.diffuse.sample(TexC)

float3 color = 0;
for each light l
{
    color += saturate(dot(Normal, l.direction))*Diffuse;
}

return color;

You don't need to invoke the PS 1000 times. You call it once, and loop over the lights inside the PS.
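On the "without any geometry" question from earlier: one common way to run a pixel shader over every pixel without binding a vertex buffer is to issue a 3-vertex draw and have the pass-through VS derive position and texcoord from SV_VertexID (a single oversized triangle that clipping trims to the screen). A Python sketch of that derivation (the bit-twiddling is the standard idiom, not code from this thread):

```python
def fullscreen_triangle_vertex(vertex_id):
    # Texcoords (0,0), (2,0), (0,2) cover the screen once the triangle is clipped
    u = float((vertex_id << 1) & 2)
    v = float(vertex_id & 2)
    # Map the texcoord into clip space, flipping y
    x = u * 2.0 - 1.0
    y = 1.0 - v * 2.0
    return (x, y), (u, v)

verts = [fullscreen_triangle_vertex(i) for i in range(3)]
# Clip-space positions (-1,1), (3,1), (-1,-3): one triangle covering [-1,1]^2
```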

This is a very basic example; check the tutorials I mentioned above for a better explanation.

 

[EDIT] Also consider using a Z pre-pass before the geometry pass (basically initializing the depth buffer without invoking the PS); it will optimize the geometry pass in case your depth complexity is high.

Edited by satanir

pressgreen    678

This is one shader operating on your 100 lights per pixel.


uniform sampler2D ColorBuffer, NormalBuffer, DepthBuffer;
uniform mat4x4 ProjectionBiasInverse;

// Number of lights to process. Note that gl_LightSource has a small
// implementation-defined limit, so for 100 lights you would pass your own
// uniform array of light parameters instead.
const int NUM_LIGHTS = 100;

void main()
{
    gl_FragColor = texture2D(ColorBuffer, gl_TexCoord[0].st);
    
    float Depth = texture2D(DepthBuffer, gl_TexCoord[0].st).r;
    
    if(Depth < 1.0)
    {
        // Unpack the stored [0,1] normal back into [-1,1]
        vec3 Normal = normalize(texture2D(NormalBuffer, gl_TexCoord[0].st).rgb * 2.0 - 1.0);
        
        // Reconstruct the position from the depth buffer
        vec4 Position = ProjectionBiasInverse * vec4(gl_TexCoord[0].st, Depth, 1.0);
        Position /= Position.w;
        
        vec3 Light = vec3(0.0);
        
        for(int i = 0; i < NUM_LIGHTS; i++)
        {
            vec3 LightDirection = gl_LightSource[i].position.xyz - Position.xyz;
            
            float LightDistance2 = dot(LightDirection, LightDirection);
            float LightDistance = sqrt(LightDistance2);
            
            LightDirection /= LightDistance;
            
            float NdotLD = max(0.0, dot(Normal, LightDirection));
            
            // Standard constant/linear/quadratic attenuation
            float att = gl_LightSource[i].constantAttenuation;
            att += gl_LightSource[i].linearAttenuation * LightDistance;
            att += gl_LightSource[i].quadraticAttenuation * LightDistance2;
            
            Light += (gl_LightSource[i].ambient.rgb + gl_LightSource[i].diffuse.rgb * NdotLD) / att;
        }
        
        gl_FragColor.rgb *= Light;
    }
}
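The constant/linear/quadratic attenuation in that loop is just 1 / (Kc + Kl*d + Kq*d^2). A quick Python check of the falloff behavior, with made-up coefficients:

```python
def attenuation(kc, kl, kq, d):
    # 1 / (Kc + Kl*d + Kq*d^2), matching the "att" computation in the shader above
    return 1.0 / (kc + kl * d + kq * d * d)

# Purely quadratic falloff: a quarter of the brightness at double the distance
assert attenuation(0.0, 0.0, 1.0, 2.0) == attenuation(0.0, 0.0, 1.0, 1.0) / 4.0
# Constant-only: no falloff with distance at all
assert attenuation(2.0, 0.0, 0.0, 10.0) == 0.5
```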

