Materials and effects in a deferred shading renderer

I'm looking to put together my own renderer for a personal game project of mine and, like a lot of people, I've been weighing the various techniques. I've looked at forward rendering, deferred lighting, deferred shading, etc. I've always liked the concept of deferred shading the most, but the problem is that a lot of the material I've read dates from '07-'08, back when the technique hadn't yet been used in many commercial titles. Since then it has become very popular. I'd like to know how some of its limitations have been addressed and how they should affect my decision now, in 2011.

My biggest questions arise from the fact that I don't know how/if a deferred renderer can be made to play well with other common techniques. Some of the ones in particular that I'm interested in are:

1. Multiple material types (BRDFs), e.g. Phong, Blinn, Oren-Nayar, anisotropic models, etc.
2. Sub-surface scattering
3. Reflections and refractions (environment maps and dynamic environment maps)
4. Relief mapping and parallax occlusion mapping

Those are the main ones I can think of. I would really like to be able to incorporate these types of effects in the game. I'm aware of the other obvious limitations, AA and transparency. I am really impressed with MLAA, so the difficulty of MSAA isn't a deal-breaker. As for transparency, I guess that will have to be handled by a forward path, although I've read about using deep G-buffers, which would allow blending of at least a few layers.



#1 is tough. You can store a material/BRDF ID in your G-buffer if you want and branch on it in the lighting shader, but dynamic branching can be expensive if it's not coherent (or all of the time, on some older hardware). You could possibly do better than straight dynamic branching if you do it in compute shaders and dynamically reschedule threads, by using stencil to mask off certain pixels, or with a tile classification prepass (like in Split/Second). Some people (like the guys who made S.T.A.L.K.E.R.) create 2D lookup tables for different material types as slices of a volume texture, and then look up into those tables using NdotL or something similar. For Crysis 2, the Crytek guys faked anisotropic materials by modifying the normals that they output into the G-buffer.
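To make that first option concrete, here is a minimal CPU-side sketch of the material-ID branch (plain C++ standing in for the HLSL/GLSL this would really live in; GBufferSample, the IDs, and the packing are all made-up names for illustration):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Small integer ID stored in the G-buffer next to albedo/normal/etc.
enum MaterialId : int { MAT_LAMBERT = 0, MAT_PHONG = 1 };

struct GBufferSample {
    Vec3  normal;      // decoded world-space normal
    float specPower;   // packed material parameter
    int   materialId;
};

// Per-pixel lighting with one dynamic branch on the stored ID; the
// incoherent-branch cost is exactly what's described above.
// L = unit direction to the light, V = unit direction to the eye.
float shade(const GBufferSample& g, Vec3 L, Vec3 V)
{
    float ndotl = dot(g.normal, L);
    if (ndotl <= 0.0f) return 0.0f;    // light is behind the surface
    switch (g.materialId) {
    case MAT_PHONG: {
        // Reflect L about the normal: R = 2 * (N.L) * N - L
        Vec3 R { 2*ndotl*g.normal.x - L.x,
                 2*ndotl*g.normal.y - L.y,
                 2*ndotl*g.normal.z - L.z };
        float rdotv = std::max(dot(R, V), 0.0f);
        return ndotl + std::pow(rdotv, g.specPower);
    }
    case MAT_LAMBERT:
    default:
        return ndotl;                  // plain diffuse
    }
}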

#2 can be simulated in screen space pretty efficiently.

#3 Reflections can be calculated in your G-buffer pass and output directly to your lighting render target, or if you do light pre-pass/deferred lighting you can simply add them in during your second geometry pass. You can do the same for pre-baked lighting. Or, if you want, you can even calculate forward rendering for a light source or two and add it in with the deferred lighting.
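In case the ordering is unclear, here's a toy sketch of that composite (plain C++ standing in for the GPU passes; every name is illustrative, not a real engine API): the reflection/pre-baked terms are written into the light accumulation buffer first, and the deferred lights are blended additively on top.

#include <cstdio>

// One texel of the light accumulation render target.
struct LightAccum { float r, g, b; };

// Geometry pass: besides filling the G-buffer, write the terms that don't
// depend on the deferred lights (cube-map reflection, baked lighting).
void geometryPass(LightAccum& accum)
{
    accum = { 0.05f, 0.06f, 0.08f }; // e.g. a sampled environment map
}

// Deferred lighting pass: additive blending, so the seeded terms survive.
void lightingPass(LightAccum& accum)
{
    accum.r += 0.40f; accum.g += 0.35f; accum.b += 0.30f; // sum of lights
}

int main()
{
    LightAccum accum{};
    geometryPass(accum);   // reflections written first...
    lightingPass(accum);   // ...deferred lights added on top
    std::printf("%g %g %g\n", accum.r, accum.g, accum.b);
    return 0;
}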

#4 These can be done in exactly the same way, with no limitations, in standard deferred rendering. You just calculate your transformed UVs, use them to sample your albedo/normal/specular/whatever maps, and output the data to your G-buffer. If you do light pre-pass it's not as nice, since you have to calculate the transformed UVs during both geometry passes.
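Here's a hedged CPU-side sketch of the linear search at the heart of parallax occlusion mapping (sampleDepth is a stand-in for a depth-map texture fetch, and a real shader version would add an interpolation refinement between the last two steps):

#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Stand-in for a depth-map fetch (0 = surface, 1 = deepest); a procedural
// ripple keeps the sketch self-contained.
float sampleDepth(Vec2 uv)
{
    return 0.5f + 0.5f * std::sin(uv.x * 40.0f) * std::sin(uv.y * 40.0f);
}

// March along the tangent-space view ray (viewTS, unit length, z > 0)
// until it dips below the heightfield; the UV where that happens is the
// "transformed UV" used to fetch albedo/normal/specular for the G-buffer.
Vec2 parallaxOcclusionUV(Vec2 uv, Vec3 viewTS, float heightScale, int steps)
{
    float sx = viewTS.x / viewTS.z * heightScale / steps;
    float sy = viewTS.y / viewTS.z * heightScale / steps;
    float layerStep  = 1.0f / steps;
    float layerDepth = 0.0f;

    float d = sampleDepth(uv);
    while (layerDepth < d) {      // ray still above the surface
        uv.x -= sx; uv.y -= sy;   // step "into" the surface
        layerDepth += layerStep;
        d = sampleDepth(uv);
    }
    return uv;
}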

MSAA is doable if you target DX10-capable hardware, and you can make it more efficient with DX10.1-capable hardware (and even more efficient than that with DX11 hardware). However, it can definitely be tricky to get right if you're not familiar with the particulars of how rasterization, pixel shaders, and MSAA work together. If you're fine with MLAA or FXAA or any of the post-process AA solutions, then they will be much easier to implement.

Transparency still sucks. You can indeed have multiple G-buffer layers that you depth peel (Humus has a sample), but it's not really practical in terms of performance or memory usage. The guys at Volition use what they call "inferred rendering", and as part of that they dither transparents into the G-buffer. However, I wouldn't really recommend that approach, since the downgrade in quality is pretty significant.
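For reference, a minimal sketch of the screen-door trick behind dithering transparents into the G-buffer (an illustration of the general idea, not Volition's actual implementation): a per-pixel Bayer threshold decides which fragments of a transparent surface get written at all.

#include <cstdint>

// 4x4 Bayer thresholds, normalized to (0, 1).
static const float kBayer4[4][4] = {
    {  0/16.f,  8/16.f,  2/16.f, 10/16.f },
    { 12/16.f,  4/16.f, 14/16.f,  6/16.f },
    {  3/16.f, 11/16.f,  1/16.f,  9/16.f },
    { 15/16.f,  7/16.f, 13/16.f,  5/16.f },
};

// True if this fragment of a surface with the given alpha should survive
// into the G-buffer; the rest are discarded, as in alpha testing. Roughly
// an alpha-sized fraction of the pixels pass, which is why quality suffers.
bool stippleKeep(std::uint32_t px, std::uint32_t py, float alpha)
{
    return alpha > kBayer4[py & 3][px & 3];
}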

#3 Reflections can be calculated in your G-buffer pass and output directly to your lighting render target, or if you do light pre-pass/deferred lighting you can simply add them in during your second geometry pass. You can do the same for pre-baked lighting. Or, if you want, you can even calculate forward rendering for a light source or two and add it in with the deferred lighting.


I don't understand. I'm not using deferred lighting, and everything else you said sounded oversimplified. I have absolutely no idea how to implement reflections in a deferred renderer, and searching online brings up nothing. I would need to be able to have dynamic reflections for both flat surfaces and complex objects. I've seen it done before by using cubemaps that are generated dynamically for each object, but that was most likely a forward renderer.


You need to understand that there was a paradigm shift from forward rendering to deferred shading/lighting in the last decade. Deferred shading/lighting is not better in all aspects; it has several shortcomings. One is transparency, which is really hard. The other is that you typically use deferred s/l to push the polycount and number of lights up and to do some nice but quite expensive post-processing steps.

But the nasty side effect of this is that doing reflections gets really expensive IF you use a dynamic approach. Recalculating a cube map (= rendering the scene five additional times) on the fly is a showstopper, and rendering the reflection to the G-buffer only works for a plane. New approaches like billboard reflections (seen in the new Epic demo) only work for billboards (= textures).

On the other hand, going for a forward renderer will really reduce the number of displayable polys and lights.

Not necessarily. You can still have dozens of lights in a forward renderer with some hierarchy for the lights. Meaning this way: compute lighting for an object only if the object is affected by the light (see the sketch below).

On the other hand, when you have lots and lots of small lights, a deferred renderer will definitely be the better choice.
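A rough sketch of that per-object light selection, with made-up structures (Sphere, Light, and cullLights are all illustrative): each light's sphere of influence is tested against the object's bounding sphere.

#include <cstddef>
#include <vector>

struct Vec3   { float x, y, z; };
struct Sphere { Vec3 center; float radius; };
struct Light  { Sphere bounds; /* color, intensity, ... */ };

static float distSq(Vec3 a, Vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Lights whose sphere of influence touches the object's bounding sphere;
// the forward shader then runs with this (usually short) list instead of
// every light in the scene.
std::vector<std::size_t> cullLights(const Sphere& object,
                                    const std::vector<Light>& lights)
{
    std::vector<std::size_t> affecting;
    for (std::size_t i = 0; i < lights.size(); ++i) {
        float r = object.radius + lights[i].bounds.radius;
        if (distSq(object.center, lights[i].bounds.center) <= r * r)
            affecting.push_back(i);
    }
    return affecting;
}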



I don't understand. I'm not using deferred lighting,


So you came here to ask about deferred rendering and its limitations, yet you've already decided to limit yourself to a subset of deferred techniques? I thought you hadn't implemented anything yet?


and everything else you said sounded oversimplified.


What sounds oversimplified to you?



I have absolutely no idea how to implement reflections in a deferred renderer, and searching online brings up nothing. I would need to be able to have dynamic reflections for both flat surfaces and complex objects. I've seen it done before by using cubemaps that are generated dynamically for each object, but that was most likely a forward renderer.


If you want dynamic reflections in a deferred renderer, you're in the same situation that you're in with a forward renderer. Static cubemaps/reflection textures are cheap but limiting; dynamic reflections are extremely expensive and difficult to get right. The only wrinkle with deferred rendering is that you have to somehow combine the reflection with the dynamic lighting, which is why I mentioned that. You can do the actual reflection rendering however you want: forward, deferred, all lighting, no lighting... although you'll probably realize pretty quickly that your reflection rendering will need to be extremely simple and cheap for it to have decent performance.

But the nasty side effect of this is that doing reflections gets really expensive IF you use a dynamic approach. Recalculating a cube map (= rendering the scene five additional times) on the fly is a showstopper, and rendering the reflection to the G-buffer only works for a plane. New approaches like billboard reflections (seen in the new Epic demo) only work for billboards (= textures).


Why exactly is it a "showstopper"? I guess I can see how having to set up another G-buffer for the cubemap would be more expensive, given the size of G-buffers in general, but what if you use a relatively low-resolution map, say 100x100 pixels per face, and only update one face of the cube map every frame, or every nth frame?

Secondly, what do you mean by "rendering the reflection to the g-buffer (only works for a plane)"? How does this technique work, and why is it limited to a plane?

Why exactly is it a "showstopper"? I guess I can see how having to set up another G-buffer for the cubemap would be more expensive, given the size of G-buffers in general, but what if you use a relatively low-resolution map, say 100x100 pixels per face, and only update one face of the cube map every frame, or every nth frame?

The question is: what is your goal? An accurate reflection of a dynamic scene, or some subtle reflection? For subtle reflections, a dynamic cube-map reflection has a bad price/performance ratio; a static pre-calculated cubemap of the scene (= a probe) is often enough and much cheaper. When you want higher quality, a low-resolution map would be clearly visible. Additionally, when updating your cube map only every 5-6 frames, you will get clear artifacts unless you can blend between cubemaps. And reducing the resolution will not get rid of the per-render overhead (polycount, light setup, post-processing setup); it will only reduce the post-processing performance hit.
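To make the amortized update concrete, here is a hedged sketch (CubeMap, renderSceneToFace, and the fade rate are all assumptions for illustration): one face of a back-buffer cubemap is re-rendered per frame, and once all six are fresh the shader cross-fades from the old map to the new one.

#include <algorithm>
#include <utility>

struct CubeMap { /* six faces + a GPU resource handle in a real renderer */ };

// Stub: in a real renderer this re-renders the scene into one cube face,
// which is the expensive part being discussed here.
void renderSceneToFace(CubeMap&, int /*face*/) {}

struct ReflectionProbe {
    CubeMap  mapA, mapB;
    CubeMap* current  = &mapA; // sampled by shaders (blend target)
    CubeMap* building = &mapB; // receives one fresh face per frame
    int      nextFace = 0;
    float    fade     = 1.0f;  // 0..1; shader lerps old map -> current

    void update(float dt)
    {
        renderSceneToFace(*building, nextFace); // one extra render, not six
        if (++nextFace == 6) {                  // full rebuild finished
            std::swap(current, building);
            nextFace = 0;
            fade = 0.0f;                        // restart the cross-fade
        }
        // Caveat: a real implementation must not overwrite the old map
        // until the fade is done, or the blend source itself will pop.
        fade = std::min(fade + 2.0f * dt, 1.0f); // roughly a 0.5 s fade
    }
};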


Secondly, what do you mean by "rendering the reflection to the g-buffer (only works for a plane)"? How does this technique work, and why is it limited to a plane?

For correct reflections you would need to raytrace the scene. One way to fake raytracing is to use a cube map, or you can do "real" raytracing for very simple geometry (billboard reflections). When using a plane, you can render the "raytraced" scene in a single pass, because the reflected scene is just a simple transformation of the original.
A simple planar reflection would be to calculate the plane-reflection transformation, render the scene to the G-buffer, do all the lighting and post-processing, and blend the result with the original scene. This would halve your rendering fps. The first optimization would be to use the stencil buffer. A second optimization would be to use a single G-buffer, but that is much harder to pull off without artifacts (SSAO, transparency, two light sets, etc.).
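A minimal sketch of that plane-reflection transformation (names are illustrative): reflectPoint mirrors a point about the plane n.x*X + n.y*Y + n.z*Z + d = 0, with n unit length; applying it to the camera (and mirroring the view direction, and flipping triangle winding) gives the reflected scene.

#include <cstdio>

struct Vec3 { float x, y, z; };

// p' = p - 2 * (dot(n, p) + d) * n
Vec3 reflectPoint(Vec3 p, Vec3 n, float d)
{
    float k = 2.0f * (n.x * p.x + n.y * p.y + n.z * p.z + d);
    return { p.x - k * n.x, p.y - k * n.y, p.z - k * n.z };
}

int main()
{
    Vec3 n { 0.0f, 1.0f, 0.0f };    // ground plane y = 0
    Vec3 cam { 3.0f, 2.0f, -5.0f };
    Vec3 m = reflectPoint(cam, n, 0.0f);
    std::printf("mirrored camera: %g %g %g\n", m.x, m.y, m.z); // 3 -2 -5
    return 0;
}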

I guess most of the time, I won't need high quality dynamic reflections of things like characters, but the lighting in the game is going to be dynamic and I think a static cube map would be quite obvious if the lighting is completely different.

Many games use (ad hoc) static cube maps for reflections, and it works quite well for them. When you use specular highlights, remember that those highlights are actual reflections of the dynamic light sources. The combination of specular highlights and cube-map reflections works best on highly detailed surfaces, which break up the reflected image. I would avoid this technique on a flat mirror or large-scale mirrors (large bodies of water); in those cases, use planar reflection.

