depth peeling for transparency?

Over the last few days I have been looking into rendering techniques that I could use for a new game. Deferred rendering looks very appealing to me because implementing it seems relatively simple. Things like multiple render targets I've already encountered, so I should be able to implement it (certainly because I found several good sources describing the technique, like the articles on deferred rendering in GPU Gems 2 and GPU Gems 3).

The biggest problem for me seems to be translucency. Both articles say they use a forward renderer for it, but they mention depth peeling as a possible solution. So I've been looking into depth peeling and found a few papers:

Interactive Order-Independent Transparency (NVIDIA, 2001)
Multi-Layer Depth Peeling via Fragment Sort (Microsoft, 2006)

It seems an interesting technique, but then I see the performance figures from the second paper: only 2 fps for a scene containing just a few hundred polygons and about 20 layers (it also mentions 4-5 fps for a scene with 15,000 polys and about 30 layers, so the results look very strange to me)!? Obviously this is way too slow to be useful, probably even if you use only about 4 layers (I don't think you'll encounter 20 layers of transparency very often).

So, what is the current status of depth peeling? Is it considered a useful technique? Are there better alternatives? And what kinds of techniques are usually used for handling transparency? I couldn't find much about it on the internet; does anybody know of more recent resources on the topic?

[edit] Perhaps I should add the following: target hardware is OpenGL 3.0 / DirectX 10 capable graphics cards, and perhaps also (but this is less important) the NVIDIA 7xxx and ATI 1xxx series. [/edit]
The usual way of handling transparency is to render your transparent geometry after your opaque geometry, in back-to-front order, with z-buffer writing turned off. To do that, you calculate each transparent object's distance from the camera along the camera's look axis and sort by this depth. More complex transparent geometry might have to be split into smaller pieces to sort and draw correctly.
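A minimal sketch of that, assuming a placeholder TransparentObject type and draw call (nothing here is from a real engine):

#include <algorithm>
#include <vector>

// Hypothetical per-object data; only what is needed for sorting is shown.
struct TransparentObject {
    float worldPos[3];
    // ... mesh, material, etc.
};

// Depth of an object along the camera's look axis (view-space z).
float ViewDepth(const TransparentObject& obj,
                const float camPos[3], const float camForward[3]) {
    float d = 0.0f;
    for (int i = 0; i < 3; ++i)
        d += (obj.worldPos[i] - camPos[i]) * camForward[i];
    return d;
}

void DrawTransparent(std::vector<TransparentObject>& objects,
                     const float camPos[3], const float camForward[3]) {
    // Sort back to front: farthest objects first.
    std::sort(objects.begin(), objects.end(),
              [&](const TransparentObject& a, const TransparentObject& b) {
                  return ViewDepth(a, camPos, camForward) >
                         ViewDepth(b, camPos, camForward);
              });

    // The opaque pass has already filled the depth buffer. Keep depth
    // *testing* on, but disable depth *writes* and enable alpha blending:
    //   glDepthMask(GL_FALSE);
    //   glEnable(GL_BLEND);
    //   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    for (const TransparentObject& obj : objects) {
        (void)obj;
        // DrawMesh(obj);  // placeholder draw call
    }
    // Restore depth writes afterwards: glDepthMask(GL_TRUE);
}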

I don't believe depth peeling is a viable technique for a real game and it's not more useful in a deferred renderer than in a forward renderer.
I've never implemented depth peeling myself, but ever since I first read about it I've never really considered it a good solution for transparency, because of its efficiency issues and its hard limit of N transparent layers.

It's not acceptable for games even on the highest-end GPUs today. It's probably fine for demos or other non-real-time applications where correct intersections between translucent surfaces are critical.

I'd definitely go with a traditional forward renderer for translucency nowadays, also because in a "typical" (thinking about games) scenario you don't have that many translucent surfaces.
Maciej Sawitus
my blog | my games
Quote:Original post by kek_miyu
I don't believe depth peeling is a viable technique for a real game and it's not more useful in a deferred renderer than in a forward renderer.


Well, depth peeling combined with a deferred renderer gives translucency the same advantage that deferred rendering gives opaque geometry. You could theoretically light translucent geometry in a deferred manner, the same way you light opaque geometry, because each transparency layer contains a single surface layer (although that would be overkill, and you'd need a separate depth buffer for each transparency layer as well).

But, yeah, depth peeling was mainly invented to handle the problem of sorting translucent geometry back to front, which can't always be solved even if you sort per triangle (intersecting or cyclically overlapping triangles have no correct order).
Maciej Sawitus
my blog | my games
Quote:Original post by MickeyMouse
Quote:Original post by kek_miyu
I don't believe depth peeling is a viable technique for a real game and it's not more useful in a deferred renderer than in a forward renderer.


Well, depth peeling combined with a deferred renderer gives translucency the same advantage that deferred rendering gives opaque geometry. You could theoretically light translucent geometry in a deferred manner, the same way you light opaque geometry, because each transparency layer contains a single surface layer (although that would be overkill, and you'd need a separate depth buffer for each transparency layer as well).

But, yeah, depth peeling was mainly invented to handle the problem of sorting translucent geometry back to front, which can't always be solved even if you sort per triangle (intersecting or cyclically overlapping triangles have no correct order).


I was thinking something like that. The idea was (more or less):
- create a G-buffer for all opaque geometry the usual way; this also fills the depth buffer
- now start rendering transparent geometry using the depth buffer you already created; this way you only process transparent geometry that is actually visible
- the difference between the initial depth buffer and the depth buffer after a first transparency pass tells you where transparent geometry is visible, and this could perhaps be used for culling
- keep depth peeling until you are at the same depth as the opaque geometry
- to do deferred rendering on the depth-peeled geometry, you could use a second G-buffer for the transparent geometry; this G-buffer might be a bit simpler than your first one, because you might not need as much information for transparent geometry (depends on your needs)
- another option might be using a G-buffer that is actually larger than your screen and using the remaining space for transparent surfaces. Of course this means you only have so much room for transparent surfaces. The advantage is that you could do deferred rendering for opaque and transparent geometry in the same pass.

possible problems:
- what if you are standing right in front of a window? That would mean almost your entire screen is filled with transparent geometry, which would mean a large (second) G-buffer.

So this depends largely on the speed of depth peeling for a typical scene, which in turn depends on how much transparency there is and how many layers of it overlap. I might just try this approach and see how well it scales, depending on how hard it is to implement (I don't want to spend weeks only to find out it is way too complex and absolutely unusable).
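Purely to make the idea above concrete, here is a rough sketch of the pass order I have in mind; every name in it is a placeholder, not real engine code:

// Placeholder types and functions standing in for a real renderer;
// only the pass ordering matters here.
struct GBuffer {};
const int kMaxTransparencyLayers = 4;

GBuffer g_opaqueGBuffer;
GBuffer g_transparentGBuffer[kMaxTransparencyLayers];

void BindGBuffer(GBuffer&) {}
void DrawOpaqueGeometry() {}
void SetPeelDepthTexture(const void* /*prevLayerDepth*/) {}
void DrawTransparentGeometry() {}
bool NoFragmentsWritten() { return false; }   // e.g. checked via occlusion query
void LightGBuffer(GBuffer&) {}
void BlendLayerOverScene(int /*layer*/) {}
const void* PeelDepth(int /*layer*/) { return 0; }

void RenderFrame() {
    // 1. Opaque geometry: fill the main G-buffer and the depth buffer.
    BindGBuffer(g_opaqueGBuffer);
    DrawOpaqueGeometry();

    // 2. Depth-peel the transparent geometry, front to back. Each pass
    //    rejects fragments at or in front of the depth peeled in the
    //    previous pass, and is depth-tested against the opaque depth
    //    buffer so hidden transparent surfaces are skipped entirely.
    int peeledLayers = 0;
    for (int layer = 0; layer < kMaxTransparencyLayers; ++layer) {
        BindGBuffer(g_transparentGBuffer[layer]);      // smaller/simpler G-buffer
        SetPeelDepthTexture(layer > 0 ? PeelDepth(layer - 1) : 0);
        DrawTransparentGeometry();
        ++peeledLayers;
        if (NoFragmentsWritten())                      // nothing left to peel
            break;
    }

    // 3. Deferred lighting: the opaque G-buffer first, then each peeled layer.
    LightGBuffer(g_opaqueGBuffer);
    for (int layer = 0; layer < peeledLayers; ++layer)
        LightGBuffer(g_transparentGBuffer[layer]);

    // 4. Composite the lit transparent layers over the opaque result,
    //    deepest layer first (back to front).
    for (int layer = peeledLayers - 1; layer >= 0; --layer)
        BlendLayerOverScene(layer);
}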
You should definitely have a look at this.
I should think it would be sufficient to do your normal, opaque-only deferred shading pass first, and then, with the Z-buffer still filled, do your translucent-surface rendering sorted back to front by object; if you don't use double-sided surfaces and stick to convex shapes, you should have few to no problems.

One issue tangled up in this is particle systems: particles can get everywhere, and it can completely ruin your scene if, say, the emitter for a particle stream gets sorted in front of a window but some of its particles are behind it, causing them either not to be rendered at all (Z-occlusion) or to be rendered as though the window wasn't there (insufficient data to reconstruct the window's effect on what's behind it).

Depth peeling addresses this by targeting that second issue: rendering and saving enough data to reconstruct the window's effect, then doing a composition pass over all the layers afterwards. But that doesn't make it an elegant solution, because you will still get artifacts whenever the scene is viewed from an angle that lines up more transparent surfaces than you have layers.

My approach is simple, even if it does restrict the art of a scene a little bit: I don't organize my particles and dynamic transparent geometry by emitter, but by their 'sector' in a world made out of BSP-like planar slices, which are built from the relatively static transparent geometry, like windows. My visible set of transparent surfaces is found by walking that tree and intersecting each sector with the viewing frustum. I sort the visible sectors containing particles back to front, and render sector, partitions, sector, partitions, until I get back to the camera's sector.
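A stripped-down sketch of that traversal (all the names are made up, and the real thing walks the BSP-style tree instead of sorting a flat list of sectors):

#include <algorithm>
#include <cstddef>
#include <vector>

// Made-up minimal sector record: a centre point for sorting plus its contents.
struct Sector {
    float center[3];
    std::vector<int> particleSystems;   // particle systems living in this sector
    std::vector<int> partitionPlanes;   // transparent planes (windows) bounding it
};

float DistanceSq(const float a[3], const float b[3]) {
    float d = 0.0f;
    for (int i = 0; i < 3; ++i)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

void DrawTransparentSectors(std::vector<Sector*>& visibleSectors,
                            const float camPos[3]) {
    // Farthest sector first, so everything composites correctly toward the camera.
    std::sort(visibleSectors.begin(), visibleSectors.end(),
              [&](const Sector* a, const Sector* b) {
                  return DistanceSq(a->center, camPos) > DistanceSq(b->center, camPos);
              });

    for (const Sector* s : visibleSectors) {
        // Contents first, then the partition planes (windows) in front of them.
        for (std::size_t i = 0; i < s->particleSystems.size(); ++i) {
            // DrawParticleSystem(s->particleSystems[i]);  // placeholder
        }
        for (std::size_t i = 0; i < s->partitionPlanes.size(); ++i) {
            // DrawPartitionPlane(s->partitionPlanes[i]);  // placeholder
        }
    }
}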

This technique works fine for buildings, even somewhat curved ones, but I killed my poor CPU once when, without thinking about the implications, I made a dome-shaped glass greenhouse and then raised a few dozen dust-cloud particles blowing around it. Just 120 or so surfaces to make the greenhouse, a subdivided icosahedron, but because of their orientation I was walking nearly 6,000 sectors just to find the occasional dust cloud.

On the plus side, it worked correctly. Not a single dust-cloud pixel was visible inside the greenhouse; you could see them blowing past outside, but the particles, though large, tried to intersect with the greenhouse walls and were clip-planed back to their rightful side. Because I could rely on particles staying in their sector on the far side of each partition (the greenhouse walls), I could also do a distortion through the windows using the view rendered thus far.

I make no promises about the performance of my technique; I am not a good optimizer, I wasn't using deferred shading for the opaque geometry (though I don't see how it would change anything), and I did everything by the book, using normal, unmodified algorithms instead of specializing everything to the problem domain. I think on my Radeon HD 2600 and AMD Turion X2 2.0 GHz I just managed to break 9 frames per second on a simple city scene, with transparent slice planes in the walls of buildings and a single unmoving car's windows (~20 slice planes).
Quote:Original post by MJP
You should definitely have a look at this.


Ah, I had a quick look (tab still open, so I will look further) and I see it uses texture arrays and Direct3D 10. Problem is: I work with OpenGL, and you can't render to a 3D texture or to texture arrays there yet. I had a look into this yesterday; there is a very recent extension for OpenGL 2.1 that adds texture arrays, and it will be core functionality in OpenGL 3. So this might indeed be a very good solution, but not until my drivers support it.
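For reference, once drivers do expose it, rendering into one slice of a 2D texture array should look roughly like this (using the OpenGL 3.0 core names and assuming an extension loader such as GLEW provides the entry points; error checking, the depth attachment and the rest of the FBO setup are omitted):

#include <GL/glew.h>

// Create a 2D texture array with one slice per transparency layer.
// GL_RGBA16F is just an example format.
GLuint CreateLayeredTarget(int width, int height, int layers) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA16F,
                 width, height, layers, 0, GL_RGBA, GL_FLOAT, NULL);
    return tex;
}

// Attach a single slice of the array as the colour target for one peeling pass.
void BindLayerForRendering(GLuint fbo, GLuint tex, int layer) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0, layer);
}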

I'll look further into this, to see what they say exactly.

[edit]
As far as I can see, they do exactly what I thought: use an array of textures so that G-buffers can be created for multiple layers. The biggest drawback, besides it not being supported in OpenGL yet, is that it takes up huge amounts of memory. For example, a normal 128-bit G-buffer at a resolution of 1680x1050 (fairly common nowadays) takes up about 27 MB of memory. Multiply this by the number of transparency layers you want to support and you know how much video memory your G-buffer uses.
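Spelling the arithmetic out: 1680 x 1050 pixels x 16 bytes (128 bits) per pixel = 28,224,000 bytes, roughly 27 MB per layer. With, say, four transparency layers (just an example figure), that's already around 108 MB before you count the depth buffers.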

I think it might be better to create a separate G-buffer for transparent surfaces. Because they usually don't cover your whole screen, you could use a stack of significantly smaller G-buffers for the transparent layers.

I'll think this whole thing through a bit more, see if I can implement it, and see how it performs.
[/edit]

[Edited by - TheFlyingDutchman on September 15, 2008 5:21:14 PM]

