# Deferred shading & foliage... Possible?


## Recommended Posts

Hi,

Probably this is not the first time this has been asked, but how about deferred shading and transparency? I know there are a couple of tricks for alpha blending. www.humus.ca, for example, has a demo that does "deep deferred shading", which allows a couple of transparent layers (at a high memory cost, though). Most scenes in my case only have one or two translucent objects behind each other, so that could be an option. But what about scenery that really has a lot of transparent stuff? For example, a jungle (grass, leaf textures, etc. -> Far Cry / Crysis). I'm planning a couple of forest scenes, so either deferred shading has to support it, or there has to be some other nice way to render foliage that does not require blending. I'm using deferred shading now, but maybe I'll have to consider another option :(

Foliage does not necessarily require blending if pixels are either 100% visible or totally transparent. But that gives ugly pixelated edges. Maybe some blurring can fix it, but I'm sceptical.

Greetings,
Rick

##### Share on other sites
I'm no expert, but I think that if your game takes place mainly outside with lots of transparent stuff, you're better off going with standard forward rendering instead of deferred shading.

However, if you really need to use deferred shading, you can make transparent objects work by rendering them normally (forward rendering) on top of your deferred-shaded scene (hopefully that makes sense).

It's not the most elegant way to do it, considering it completely eliminates any benefits (or pitfalls!) of deferred shading for those objects, but it's one of the only ways I can think of for rendering transparent geometry...
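The pass ordering being described can be sketched roughly like this. This is only an illustrative outline, not any engine's actual API; the stage names and the object/depth fields are hypothetical placeholders.

```python
# Sketch: deferred-shade the opaque scene first, then forward-render the
# transparent objects on top, sorted back-to-front so blending composes
# correctly. Stage names here are purely illustrative.

def render_frame(opaque_objects, transparent_objects):
    log = []

    # 1) Geometry pass: write opaque surfaces into the G-buffer
    #    (albedo, normals, depth).
    for obj in opaque_objects:
        log.append(("gbuffer", obj))

    # 2) Deferred lighting: full-screen passes reading the G-buffer.
    log.append(("deferred_lighting", None))

    # 3) Forward pass: transparent objects sorted back-to-front, blended
    #    over the lit scene, depth-tested against the opaque depth buffer
    #    (reading depth, not writing it).
    for obj in sorted(transparent_objects, key=lambda o: -o["depth"]):
        log.append(("forward_blend", obj["name"]))

    return log

order = render_frame(
    ["rock", "tree_trunk"],
    [{"name": "window", "depth": 2.0}, {"name": "smoke", "depth": 5.0}],
)
```

Note the farther object ("smoke") is drawn before the nearer one ("window"), which is exactly the sorting burden that forward-blended transparency imposes.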

Another alternative that takes some of the advantages of both deferred and forward shading is "light indexed deferred rendering" (or something like that)... Google it and it should be one of the first results. As far as I know this method supports transparent objects very nicely.

##### Share on other sites
Hi,

Well, most of the scenes are not outside, or even needing many transparent objects. But it would be a shame to skip the couple of forest scenes I had in mind.

As for rendering on top of the rest, this won't work in all situations... I think. It depends on what kind of object you are trying to shade. A window would be possible, but what if the transparent object needs to be lit as well? Then you still have to use some sort of traditional rendering method to light the transparent object, since it can't get its information from the buffers created in the deferred shading stage.

Another problem might be depth. Transparent objects can be behind opaque objects, so you need to work with a depth buffer, right? I could simply render the whole scene, mark all pixels from the opaque objects, and shade the transparent stuff. Then merge it with the rest somehow (replace the background pixels with the overlapping pixels from the transparent pass, if they are not 'marked'). Not a very fast solution, of course.

I'm most concerned about rendering grass. The blades not only have to blend with pixels from the opaque background, but also with other grass sprites behind them.

I'm thinking more and more about switching back to a normal renderer. On the other hand, I'm curious how games like Killzone 2 for the PS3 handle this problem (they use deferred shading as well, right?).

Thanks for helping,
Rick

##### Share on other sites
Grass/trees only need to discard pixels, not blend, IMO - can't you use texkill with alpha values? Hierarchical Z (on ATI) should not matter THAT much with deferred shading anyway :?
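The texkill/alpha-test idea amounts to a per-fragment threshold decision: every texel is either fully kept or fully killed, so no blend state is needed and the G-buffer still holds exactly one surface per pixel. A minimal sketch of that decision (the 0.5 threshold is just an example value):

```python
# Sketch of alpha testing (texkill / clip / discard): a fragment survives
# only if its texture alpha reaches the threshold. Surviving fragments are
# written to the G-buffer as fully opaque; the rest never exist.

ALPHA_THRESHOLD = 0.5  # example cutoff, typically tuned per asset

def alpha_test(texel_alpha, threshold=ALPHA_THRESHOLD):
    """True = fragment is kept (opaque), False = fragment is discarded."""
    return texel_alpha >= threshold

# A row of texels across a leaf edge: opaque centre, soft edge, background.
row = [1.0, 0.9, 0.6, 0.4, 0.1, 0.0]
kept = [alpha_test(a) for a in row]
```

The hard cut in `kept` is precisely where the "pixelated edges" complaint comes from: the soft 0.4/0.1 edge texels are thrown away instead of blended.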

##### Share on other sites
If they discard pixels, the blending problem is solved. But in most cases I want the foliage to blend at the edges, because it gives a softer result. Discarding pixels gives sharp pixelated edges, unless the foliage textures are really high-res. Also, leaves are naturally slightly translucent.
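The soft-edge blending being asked for here is the standard alpha "over" operator, which is exactly what deferred shading struggles with, since the G-buffer can only store one surface per pixel. For reference, a minimal sketch of the operator itself:

```python
def blend_over(src_rgb, src_alpha, dst_rgb):
    """Standard alpha 'over' blend: out = src * a + dst * (1 - a),
    applied per colour channel (the usual GL_SRC_ALPHA /
    GL_ONE_MINUS_SRC_ALPHA blend configuration)."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

# A half-transparent green leaf edge over a grey background:
result = blend_over((0.0, 1.0, 0.0), 0.5, (0.5, 0.5, 0.5))
```

The result depends on the destination colour, which is why the operation is order-dependent and can't be deferred: by lighting time, the G-buffer no longer knows what was behind the edge texel.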

Besides foliage, I must also deal with stuff like decals (ash trails, blood, etc.). There's a big chance they need blending as well.

Greetings,
Rick

##### Share on other sites
Drawing foliage with forward shading is what you will probably want to do, but there can be visible glitches due to inconsistent lighting (if you have 20 or 30 lights, your forward rendering will be batching and blending hell, so you can't do that).

If you do supersampling (as described, for example, in the Killzone paper), you can use coverage to get some degree of transparency, even with deferred shading. Unfortunately we're not talking about hardware-accelerated alpha-to-coverage here, but about implementing something yourself that's inferior and runs a lot slower.
Your memory consumption and fill-rate requirements will increase at least two-fold (for only 3 transparency levels), or more if you want finer steps. Also, you must add some noise or a pattern to account for several layers of transparency (otherwise objects with the same alpha cover the same pixels).
It's questionable whether this approach is worth the trouble, unless you intended to supersample anyway (in which case it's free).
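The coverage idea above is essentially screen-door (stippled) transparency: quantise alpha against an ordered-dither pattern so every fragment is fully kept or fully killed, and the surviving pixels approximate the alpha once the supersampled image is downsampled. A toy sketch with a 2x2 Bayer pattern (the per-object `offset` stands in for the "noise or pattern" mentioned above, so stacked layers don't kill the same pixels):

```python
# Screen-door transparency sketch: ordered dithering against a 2x2 Bayer
# threshold matrix. With only 4 thresholds this gives very coarse alpha
# steps, matching the "only 3 transparency levels" caveat above.

BAYER_2X2 = [[1/5, 4/5],
             [3/5, 2/5]]  # normalised thresholds in (0, 1)

def covered(x, y, alpha, offset=0):
    """True if the fragment at pixel (x, y) survives the dither test."""
    tx = (x + offset) % 2
    ty = (y + offset) % 2
    return alpha > BAYER_2X2[ty][tx]

def coverage_ratio(alpha, size=8, offset=0):
    """Fraction of an size*size tile that survives, i.e. the effective
    alpha after downsampling."""
    hits = sum(covered(x, y, alpha, offset)
               for y in range(size) for x in range(size))
    return hits / (size * size)
```

An alpha of 0.5 keeps exactly half the pixels of the tile; after the supersampled buffer is filtered down, that reads as 50% transparency.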

What could work best (but isn't easiest, either) might be to render/light all the 100% opaque foliage pixels in the deferred shading pipeline and then forward-render and blend the rest, with polygon offset and blend parameters set accordingly.
That way, the amount of blending/overdraw is reduced, and the major part of the foliage is lit consistently using deferred shading. Lighting artefacts only exist on the relatively small semi-transparent edges, so they aren't too disturbing.
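The split described above boils down to classifying each foliage texel by alpha and routing it to one of two paths. A sketch of that routing, where both cutoff values are arbitrary examples, not values from any particular engine:

```python
# Sketch of the hybrid foliage split: fully opaque texels go through the
# deferred pipeline (consistent lighting, no blending), the thin
# semi-transparent fringe is forward-lit and blended on top, and nearly
# invisible texels are discarded outright. Cutoffs are example values.

OPAQUE_CUTOFF = 0.9    # alpha-test threshold for the deferred pass
DISCARD_CUTOFF = 0.05  # below this, not worth blending at all

def classify(alpha):
    if alpha >= OPAQUE_CUTOFF:
        return "deferred"       # written to the G-buffer, lit consistently
    if alpha > DISCARD_CUTOFF:
        return "forward_blend"  # small edge fringe, forward lit + blended
    return "discard"

samples = [1.0, 0.95, 0.5, 0.2, 0.01]
paths = [classify(a) for a in samples]
```

Only the middle band of alphas ever reaches the (potentially inconsistent) forward path, which is why the artefacts stay confined to edges.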

##### Share on other sites
Hey, that last idea you gave doesn't sound too bad! In practice, the outdoor scenes only have a few lights, so I could do the forward blending with just the sunlight or something.

However, I'm still stuck with other transparent stuff, such as bullet tracers, decals, glass, and so on. Each type probably has a trick to fit it into a deferred shading engine, but... I think I'll go back to forward rendering. Crysis showed there is nothing wrong with that, and I won't need hundreds of lights anyway.

One idea still uses some of the deferred tricks though. Correct me if I'm wrong, but most forward rendering engines probably have the following flow:
- Render surfaces with light 1 (for example `albedo * dot(L,N) + specular`)
- Render surfaces with light 2
- ... render with light n
- Render surfaces with ambient light / cubeMap reflections / emissive texture

< all passes are added together >

Right? Or do they try to render as many lights as possible in one pass? That would be faster, of course, but sorting becomes more difficult, especially when there are different types of lights as well (spotlights, shadow maps, omni lights, etc.).
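The additive multi-pass flow listed above can be sketched numerically. This is a toy model, with scalars standing in for RGB and specular omitted for brevity; only the accumulation structure matters:

```python
# Sketch of additive multi-pass forward lighting: one pass per light
# computing albedo * max(dot(N, L), 0), plus an ambient pass, with every
# pass additively blended into the framebuffer.

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def light_pass(albedo, normal, light_dir):
    """Diffuse contribution of one directional light (specular omitted)."""
    return albedo * max(dot3(normal, light_dir), 0.0)

def shade(albedo, normal, lights, ambient):
    color = ambient * albedo              # ambient / emissive pass
    for light_dir in lights:              # one additive pass per light
        color += light_pass(albedo, normal, light_dir)
    return color

# Surface facing +Z: one light head-on, one from behind (contributes 0).
c = shade(albedo=0.8, normal=(0, 0, 1),
          lights=[(0, 0, 1), (0, 0, -1)], ambient=0.1)
```

Each extra light is a full extra geometry pass here, which is exactly the batching/blending cost that makes many-light forward rendering painful.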

Anyway, all passes need (world) normals. Instead of fetching the normals again for each pass, I'll add a pre-pass that stores world normals in a buffer, just like you do in deferred shading. I can do the same for a specular term and the diffuse colour (albedo texture, mixed terrain textures, ...). All the passes that follow pick their 'ready-to-use' data from these textures. It saves some calculations, fewer texture switches, and fewer different shader programs.

I tried it before, and it works, except that the result looks pixelated - but that probably has something to do with the FBO quality or the sampling coordinates. Is this approach used more often? Or are there better/other ways?
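The pre-pass idea described above can be sketched as follows. The dict-of-buffers layout is purely illustrative (a real implementation would use render targets), but it shows the key property: per-pixel attribute work runs once, and every lighting pass only fetches results.

```python
# Sketch of an attribute pre-pass: normals/albedo/specular are written once
# per visible pixel, and all subsequent lighting passes read those buffers
# instead of re-running material shaders (normal mapping, texture mixing,
# parallax, ...). Plain dicts stand in for textures here.

def attribute_prepass(visible_pixels):
    """One pass over the geometry; expensive per-pixel material work
    happens exactly once per screen pixel."""
    return {
        "normal":   {p: px["normal"] for p, px in visible_pixels.items()},
        "albedo":   {p: px["albedo"] for p, px in visible_pixels.items()},
    }

def lighting_pass(buffers, pixel, n_dot_l):
    # A later light pass just samples the ready-to-use buffers.
    return buffers["albedo"][pixel] * n_dot_l

pixels = {(0, 0): {"normal": (0, 0, 1), "albedo": 0.8}}
bufs = attribute_prepass(pixels)
lit = lighting_pass(bufs, (0, 0), n_dot_l=1.0)
```

Since only the attribute fetch is deferred while lighting still happens per object, this sits between pure forward and pure deferred rendering (it is close in spirit to what was later popularised as "light pre-pass" rendering).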

Greetings,
Rick

##### Share on other sites
Quote:
Original post by spek
Anyway, all passes need (world)normals. Instead of fetching the normals for each pass again, I'll add a pre-pass that stores world normals into a buffer. Just like you do in deferred shading. I can also do that for a specular term, and the diffuse color (albedo texture, mixing terrain textures, ...). All the passes that follow, pick their 'ready-to-use' data from these textures. It saves some calculations, less texture switches, and less different types of shader programs.
I tried it before, and it works. Except that the result looks pixelated, but that has probably to do something with the FBO quality, or sampling coordinates. Is this way to do it used more often? Or are there better/other ways?

Do you use just one buffer for the entire scene? Won't transparency still give you the same problem you had with deferred rendering? There's only one normal per pixel, but you need to overdraw some pixels for the transparency.

##### Share on other sites
Transparency is always a challenge. In real-world(TM) applications you render it in an extra pass. There are several solutions that offer the ability to render transparent stuff front to back, like deep depth buffers or inverse depth peeling, but they all look too slow for real usage.

##### Share on other sites
@Solias
Well, the solid geometry is not rendered to a texture; it's rendered normally, so it also fills the depth buffer. The only difference is that it picks its normal/diffuse/specular colour from three textures. Compare it with taking a snapshot of a scene and projecting it onto a surface. So when rendering the transparent part afterwards, it will just fit in and blend as when rendering normally. It saves some calculations in the lighting passes: converting to a world normal only needs to be done once (per pixel), and the same goes for parallax mapping or detail normal maps. And I save some registers and textures for complex shaders.

The transparent stuff won't write to these three textures, though. It can't benefit from those buffers - otherwise you indeed get problems when mixing colours/normals, etc. The transparent part is rendered and lit the traditional way (for each light, it needs its normal map/albedo textures again, if it uses them).

Maybe you can do the same trick on top of deferred shading, but that would require two lighting mechanisms: a normal forward rendering technique for the transparent part, and a deferred one for the opaque part. And you need to create a depth buffer somehow, to discard transparent fragments that are behind opaque pixels. Another reason is that it's easier to use different lighting models again (Phong, BRDF, anisotropic, etc.).

I hope it's a little bit clear. It's confusing, and hard to explain in my English :)

Thanks for helping everybody,
Rick
