Deferred rendering: a simple explanation, please?

2 comments, last by Krohm 12 years, 10 months ago
I can't seem to figure out how deferred rendering works. Can someone explain it to me in simple terms?

i.e.

forward rendering
take positions of all lights in the scene // these stay the same for the whole frame
iterate through each model, rendering and applying the shader technique for each light in the scene

I'm still a bit fuzzy on this myself, but essentially forward rendering involves rendering m objects × n lights, which can be a lot of draw calls. In a deferred context, everything is first rendered into a G-buffer: rather than writing out a finished image, you write out position, normal, tangent, and other surface data. A pixel shader can then read that data back later and perform the lighting. This results in roughly m + n passes instead of m × n, which means that if you have lots of lights you save yourself a lot of time and draw calls. However, there is an overhead to this, so scenes with fewer lights and objects may in fact render slower.
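In the same style as the pseudocode in the question (the pass names here are just illustrative, not any particular API), the two loop structures look roughly like this:

forward rendering // shading work grows with m * n
for each model
    for each light
        render the model with that light's shading applied

deferred rendering // roughly m + n passes
// geometry pass: draw each model once, writing position/normal/material data into the G-buffer
for each model
    render the model into the G-buffer
// lighting pass: one fullscreen quad (or light volume) per light, reading the G-buffer
for each light
    render a fullscreen pass that accumulates that light's contribution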
Basically, you render the components of the lighting equation into a collection of textures rather than into the backbuffer, and then later sample those components in the lighting stage to reconstruct the lighting equation. During that stage the contribution of each light is calculated and accumulated; the result is a fully lit scene.
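Per pixel, that lighting stage does something like the following (the G-buffer texture names are only examples for this sketch):

for each pixel on the screen
    position = sample gbuffer_position at this pixel
    normal   = sample gbuffer_normal at this pixel
    albedo   = sample gbuffer_albedo at this pixel
    color = 0
    for each light // in practice often one additive pass per light
        color += lighting equation evaluated with (light, position, normal, albedo)
    write color to the backbuffer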


This presentation should provide more info: http://www.talula.demon.co.uk/DeferredShading.pdf

Saving the world, one semi-colon at a time.

Simply put:
In the traditional rendering scheme of things, the pixel shader gets data (via interpolators and uniforms), mangles it, and outputs a single value, which is meant to be the "final" pixel color or, at best, the input to the post-processing stage.

In deferred shading, instead, the very same data is output to the framebuffer with no computation (ideally; in practice some mangling is necessary).
Because those values are then available per pixel, the lighting pass can essentially become a post-processing effect. You could think of this as some sort of "rendering fullbright", with multiple channels besides color.
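Sketched in the same pseudocode style as the question (the output names are made up for illustration), the difference in the pixel shader is roughly:

traditional pixel shader
    output = lighting computed from (position, normal, material, lights) // one final color

deferred geometry-pass pixel shader (multiple render targets)
    output0 = position (or just depth, with position reconstructed later)
    output1 = normal
    output2 = albedo / material data
    // no lighting here; the later lighting pass reads these targets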

When compared, deferred has several pros and cons. For me, the only advantage worth mentioning is that decoupling the lighting means shaders can be authored independently, which is much easier when there are a lot of shaders around (see "shader combinatorial explosion"). By contrast, there are a few cons, such as 1) you have to implement your own antialiasing and 2) transparency is still not worked out.

Previously "Krohm"

