Kinds of deferred rendering

Hello,

I would like to know what kind of deferred rendering is most popular. I see a lot of tutorials on the internet about how to do it, but they describe different ways of implementing it, e.g.:

- Deferred Irradiance

- Tile-based deferred, and other types.

What is the best way? Could you help me choose?

The best way will depend on the type of scene you have and your target hw. They are mostly pretty similar, but here is an overview of a few of the popular ones:
- deferred shading, based on generating a gbuffer for opaque objects with all the properties needed for both lighting and shading, followed by a light pass typically done by rendering geometric volumes or quads for each light and a fullscreen pass for sunlight, followed by a composite pass where lighting and surface properties are combined to get back the final shaded buffer (see the light-pass sketch after this list). This is followed by alpha passes, often with forward lighting, and post FX passes, which can use the contents of the gbuffer if needed. A full or partial Z prepass is optional. Advantages include potentially rendering your scene geometry only once and decoupling lighting cost from geometric complexity.

- light prepass/ deferred lighting involves the same kind of steps, only with a minimal gbuffer containing only what you need for the actual lighting ( often just depth buffer + one render target containing normals + spec power ), the same kind of light pass, but then another full scene rendering pass to get the final colour buffer. This means loads more draw calls, but much lighter gbuffers, which can be handy on HW with limited bandwidth, limited support for MRTs, or limited EDRam like the 360. Also gives more flexibility than the previous approach when it comes to object materials, since you are not limited to the information you can store in the gbuffer.

- inferred rendering, which is like light prepass, only with a downsampled gbuffer containing material IDs, a downsampled light pass, but a full-resolution colour pass which uses the IDs to pick the correct values from the light buffer without edge artifacts. Kind of a neat way of doing the gbuffer and light pass much faster at the cost of resolution. Can also be used to store the alpha object properties in the gbuffer with a dithered pattern, and then exclude the samples you don't want / not for that layer during the colour pass. So no more need for forward lighting for alpha objects (up to a point).

- tiled deferred involves not rendering volumes or quads for your lights, which can be pretty expensive when you get a lot of light overdraw, especially if your light volumes are not super tight, but instead dividing your screen into smaller tiles, generating a frustum per tile, culling your lights on the GPU against each tile frustum, and then lighting only the fragments in the tile using the resulting list (see the tile-culling sketch after this list). Usually done in a CS, with no overdraw issues at all and overall much faster, but it requires modern HW and can also generate very large tile frustums when you have large depth discontinuities within a tile. The last part can be mitigated by adding a depth subdivision to your tiles (use 3D clusters instead of 2D tiles).

- forward+ is similar, but involves a z prepass instead of gbuffer generation, then a pass to generate light lists per tile, same as above, but instead of lighting at that point, you render your scene again and light it forward-style using the list of lights intersecting the current tile. Allows for material flexibility and easy MSAA support at the cost of another full geometry pass.
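
To make the light pass above concrete, here is a minimal C++ sketch of what it evaluates per pixel for a point light, assuming a simple Lambert + Blinn-Phong model. In a real renderer this logic lives in a pixel shader run over the light volumes or a fullscreen quad, and the additive blend state does the per-light summation; all names, types, and the falloff curve are illustrative, not taken from any particular engine.

```cpp
// Sketch of one G-buffer texel being lit by one point light (Lambert + Blinn-Phong).
#include <algorithm>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    Vec3 operator*(Vec3 o) const { return {x * o.x, y * o.y, z * o.z}; }
};
inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline Vec3  normalize(Vec3 v)   { return v * (1.0f / std::sqrt(dot(v, v))); }

// What one G-buffer texel provides to the light pass. Position is reconstructed
// from depth rather than stored (see later in this thread).
struct GBufferSample {
    Vec3  albedo;
    Vec3  normalVS;      // view-space normal
    Vec3  positionVS;    // view-space position reconstructed from depth
    float specPower;
    float specFactor;
};

struct PointLight {
    Vec3  positionVS;
    Vec3  color;
    float radius;
};

// One light's contribution; the additive blend state of the real light pass
// plays the role of summing these per-light results into the light buffer.
Vec3 ShadePointLight(const GBufferSample& g, const PointLight& light)
{
    Vec3  toLight = light.positionVS - g.positionVS;
    float dist    = std::sqrt(dot(toLight, toLight));
    if (dist >= light.radius) return {0.0f, 0.0f, 0.0f};

    Vec3  L = toLight * (1.0f / dist);
    Vec3  V = normalize(g.positionVS * -1.0f);   // camera sits at the view-space origin
    Vec3  H = normalize(L + V);

    float atten = 1.0f - dist / light.radius;    // crude linear falloff for the sketch
    float nDotL = std::max(dot(g.normalVS, L), 0.0f);
    float spec  = g.specFactor * std::pow(std::max(dot(g.normalVS, H), 0.0f), g.specPower);

    return light.color * ((g.albedo * nDotL + Vec3{spec, spec, spec}) * atten);
}
```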
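
And here is a CPU-side C++ illustration of the per-tile light culling step from the tiled deferred description; in practice this runs in a compute shader with one thread group per tile and the visible-light list built in shared memory. The structures and the view-space depth convention (+Z into the screen) are assumptions made for the sketch.

```cpp
// Per-tile light culling against a tile frustum (CPU illustration of the CS version).
#include <vector>

constexpr int kTileSize = 16;              // pixels per tile side, a common choice

struct Sphere { float x, y, z, radius; };  // view-space bounding sphere of a light

struct Plane  { float nx, ny, nz, d; };    // n.p + d = 0, normal pointing into the frustum

struct TileFrustum {
    Plane planes[4];   // left/right/top/bottom side planes through the camera origin
    float minZ, maxZ;  // depth bounds gathered from the tile's depth samples
};

// Signed distance of the sphere centre to an inward-facing plane.
inline float SignedDistance(const Plane& p, const Sphere& s)
{
    return p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
}

// A light survives if its sphere is not fully outside any side plane or the depth bounds.
bool SphereIntersectsTile(const Sphere& s, const TileFrustum& f)
{
    for (const Plane& p : f.planes)
        if (SignedDistance(p, s) < -s.radius)
            return false;
    if (s.z + s.radius < f.minZ || s.z - s.radius > f.maxZ)
        return false;
    return true;
}

// Build the light list for one tile; each tile's fragments are then shaded
// using only the indices in this list.
std::vector<int> CullLightsForTile(const TileFrustum& tile, const std::vector<Sphere>& lights)
{
    std::vector<int> visible;
    for (int i = 0; i < (int)lights.size(); ++i)
        if (SphereIntersectsTile(lights[i], tile))
            visible.push_back(i);
    return visible;
}
```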

There are loads more variations of course, but these are maybe a good starting point.

Thank you for explaining this to me. Maybe I'll ask in a different way: which approach is used in commercial game engines?

I would like to develop an engine mainly to improve my skills.

I would be very happy if you could give me a link with some information about it.

There are commercial game engines that use all of these approaches. Frostbite 3 uses the tiled CS approach, the Stalker engine was the first I know of to use the deferred shading approach, loads of games have used light prepass (especially 360 games, to get around EDRAM size limitations / avoid tiling), Volition came up with and used inferred rendering in their engine, and forward+ seems to be one of the trendier approaches for upcoming games; not sure if anything released uses it already.

The main thing is for you to decide what your target platforms are and what kind of scenes you want to render. (visible entity counts, light counts, light types, whether a single lighting model is enough for all your surfaces, etc.)

For learning purposes though, they are all similar enough that you can just pick a simpler one (deferred shading or light prepass maybe), get it working, and then adapt afterwards to a more complex approach if needed.


As for docs / presentations, there are plenty around for all of these. I would recommend reading the GPU Pro books; there are plenty of papers on this in them. DICE (dice.se) has presentations on their website that you can freely access covering the tiled approach they used on BF3. The GDC Vault is also a great place to look.

You can also find example implementations around, like here:
https://hieroglyph3.codeplex.com/
(the authors are active on this forum, btw)

Like ATEFred already said, there are various techniques in popular use depending on the platform. To decide which is best, you really need to have a solid idea of what hardware you're targeting and what you need from your renderer. Tiled deferred in a compute shader will generally give you the best peak performance for many lights, but you need hardware and APIs that support that sort of thing. Light prepass or tiled forward can be useful if there's a restriction on render target sizes, for instance on mobile TBDR GPUs or the Xbox 360 GPU.

For the high end, the popular choice is clustered forward/deferred. You can go deferred for opaque/generically shaded objects, while translucency/special lighting models can use forward. It's nice mostly because it explicitly handles both at once while handling a large, or even very large, number of lights better than anything else, along with other fancy possibilities if you go for full cluster culling: http://www.humus.name/Articles/PracticalClusteredShading.pdf

Like MJP said though, you need the hardware to support it. Light pre-pass/forward+ is more popular for mobile solutions.
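
For reference, here is a tiny sketch of the clustered part mentioned above: mapping a pixel to a 3D cluster from its screen tile and a logarithmic view-space depth slice. The constants and names are illustrative only; the Persson paper linked above discusses how real setups choose and tune them.

```cpp
// Map a pixel + its linear view-space depth to a 3D cluster index.
#include <algorithm>
#include <cmath>

constexpr int   kTilePixels  = 64;      // cluster footprint in screen space (illustrative)
constexpr int   kDepthSlices = 16;
constexpr float kNearZ       = 0.1f;
constexpr float kFarZ        = 1000.0f;

struct ClusterIndex { int x, y, z; };

ClusterIndex ClusterForPixel(int pixelX, int pixelY, float viewZ)
{
    // Logarithmic depth slicing keeps each cluster's depth extent roughly
    // proportional to its distance, which avoids the huge tile frustums you
    // get from pure 2D screen tiles across large depth discontinuities.
    float slice = std::log(viewZ / kNearZ) / std::log(kFarZ / kNearZ) * kDepthSlices;
    int   z     = std::min(std::max(static_cast<int>(slice), 0), kDepthSlices - 1);
    return { pixelX / kTilePixels, pixelY / kTilePixels, z };
}
```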

Thanks for the answers. I would like to support PC only.

I think ATEFred's idea is good, and I will start by writing deferred shading.

Can you confirm my way of thinking about deferred shading:

First, render all geometry to buffers with normals, position, colour, and material. Next, for each light, render its geometry with additive blending (sun light: rectangle, point light: sphere, spot light: cone).

Where do I have to account for gamma correction? In the light shaders?

You don't need position; you can reconstruct it from the depth buffer and the pixel position when you need it (this saves you from needing an additional 32-bit-per-channel RT in your gbuffer).
Other than that, you have a good starting point: normals, material properties like spec power, spec factor / roughness, albedo colour, and depth of course.
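
A minimal sketch of that reconstruction, assuming a standard D3D-style perspective projection with 0..1 hardware depth; the exact signs depend on your handedness and depth range, and the parameter names here are made up for the example.

```cpp
// Reconstruct view-space position from the depth buffer instead of storing position.
struct Float3 { float x, y, z; };

// proj00 = m[0][0], proj11 = m[1][1] of the projection matrix;
// projA / projB are the terms that map view-space z to hardware depth:
//   depth = projA + projB / viewZ
Float3 ReconstructViewPos(float u, float v,        // pixel position in 0..1 UV space
                          float hardwareDepth,     // value sampled from the depth buffer
                          float proj00, float proj11,
                          float projA, float projB)
{
    // Undo the projection's z mapping to recover linear view-space depth.
    float viewZ = projB / (hardwareDepth - projA);

    // Go from UV to clip-space xy (-1..1, y flipped), then back through the projection.
    float clipX = u * 2.0f - 1.0f;
    float clipY = 1.0f - v * 2.0f;
    return { clipX / proj00 * viewZ, clipY / proj11 * viewZ, viewZ };
}
```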
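
Since the plan above accumulates each light with additive blending, here is a sketch of what that blend state looks like in D3D11, assuming that is the API being targeted (the thread doesn't say which one); it simply makes the light buffer accumulate dest + src for each light volume drawn.

```cpp
// Additive blend state for the light accumulation pass (D3D11 sketch).
#include <d3d11.h>

ID3D11BlendState* CreateAdditiveBlendState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);   // check the HRESULT in real code
    return state;
}
```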

Gamma correction only really makes sense if you support HDR (or you would get quite banded results, I think). The idea is to get your colour texture samples and colour constants into linear space (either doing the conversion manually or using the API / HW sRGB support for textures and render targets at least), then do the lighting, then the HDR resolve and the conversion back into gamma space.
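
Here is a sketch of where those conversions sit, using a plain 2.2 gamma approximation and a trivial Reinhard curve as a stand-in for the HDR resolve; in practice you would normally let the hardware sRGB texture and render-target formats do the conversions for you.

```cpp
// Linear-space lighting flow: sample -> ToLinear -> light in HDR -> tonemap -> ToGamma.
#include <cmath>

struct Color { float r, g, b; };

Color ToLinear(Color gammaSpace)   // applied to colour texture samples / constants
{
    return { std::pow(gammaSpace.r, 2.2f),
             std::pow(gammaSpace.g, 2.2f),
             std::pow(gammaSpace.b, 2.2f) };
}

Color TonemapReinhard(Color hdr)   // stand-in for the HDR resolve mentioned above
{
    return { hdr.r / (1.0f + hdr.r),
             hdr.g / (1.0f + hdr.g),
             hdr.b / (1.0f + hdr.b) };
}

Color ToGamma(Color linearSpace)   // applied after lighting + tonemapping, before display
{
    return { std::pow(linearSpace.r, 1.0f / 2.2f),
             std::pow(linearSpace.g, 1.0f / 2.2f),
             std::pow(linearSpace.b, 1.0f / 2.2f) };
}
```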
