How do DoomIII/Quake4 handle transparent objects?
I was wondering how the DoomIII engine handles transparent objects (the window glass, that is). Do they use a BSP tree to sort back-to-front, or make use of another technique?
[Edited by - Deliverance on October 20, 2006 12:00:06 PM]
I don't know about more recent versions, but I think Quake II did it that way (draw all non-transparent objects, then draw transparent ones in reverse distance order).
Quote:Original post by Bob Janova
(...then draw transparent ones in reverse distance order).
By reverse distance order, you mean front to back, or back to front?
Quote:Original post by Moe
Quote:Original post by Bob Janova
(...then draw transparent ones in reverse distance order).
By reverse distance order, you mean front to back, or back to front?

Back to front. It's very easy to make a mess of things trying to go the other way with the depth buffer and the transparency.
I guess you could even be a bit more specific and say:
1. Draw non-transparent geometry front to back
2. Draw transparent geometry back to front.
I have heard that you can squeeze out a bit more performance by rendering non-transparent geometry from front to back, as anything that is occluded by already rendered geometry is immediately rejected on modern hardware.
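The two steps above can be sketched as a draw-list sort. This is a minimal illustration, not code from any engine; the `Object` struct and `buildDrawOrder` name are made up for the example:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical scene object: distance along the view axis is enough for a sketch.
struct Object {
    float distance;     // distance from the camera
    bool  transparent;
};

// Build a frame's draw order per the two rules: opaque nearest-first
// (to maximize early depth rejection), then transparent farthest-first
// (so alpha blending composites correctly).
std::vector<Object> buildDrawOrder(const std::vector<Object>& objects) {
    std::vector<Object> opaque, transparent;
    for (const Object& o : objects)
        (o.transparent ? transparent : opaque).push_back(o);

    // Step 1: non-transparent geometry, front to back.
    std::sort(opaque.begin(), opaque.end(),
              [](const Object& a, const Object& b) { return a.distance < b.distance; });
    // Step 2: transparent geometry, back to front.
    std::sort(transparent.begin(), transparent.end(),
              [](const Object& a, const Object& b) { return a.distance > b.distance; });

    std::vector<Object> order = opaque;
    order.insert(order.end(), transparent.begin(), transparent.end());
    return order;
}
```

A real engine would sort by material within the opaque pass as well (as discussed below in the thread); this only shows the depth ordering.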
Quote:
1. Draw non-transparent geometry front to back
2. Draw transparent geometry back to front.
I have heard that you can squeeze out a bit more performance by rendering non-transparent geometry from front to back, as anything that is occluded by already rendered geometry is immediately rejected on modern hardware.
I never understood the front to back thing for non-transparent geometry. You have to watch texture changes too. It's all so contradictory. [totally] I'm pretty sure I've read an article by Kilgard or someone saying that depth buffering would take care of it for 'free'.
Well, the purpose of rendering non-transparent geometry from front to back is early rejection. Suppose we have two walls and a character between those walls (assuming we are facing wall 1):
-------  <- Wall 1
   *     <- character
-------  <- Wall 2
   0     <- us
If we draw wall 1 first, then the character, then wall 2 (in other words, back to front), we are really drawing 3 things over the same piece of the backbuffer. If we draw wall 2 first, then when we go to draw the character or wall 1, the video card hardware can 'sense' that there is already something rendered there, and can reject what is about to be drawn. It's the early rejection test that speeds things up: rather than shading the same pixel 3 times, we do it once.
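The early-rejection argument can be sketched with a toy one-pixel depth buffer. This is a simplification of what the hardware does, with invented depth values (wall 2 = 1, character = 2, wall 1 = 3):

```cpp
#include <limits>

// Toy model of the depth test: one pixel, a depth value, and a counter for
// how many times the expensive "pixel shading" actually happens.
struct Pixel {
    float depth = std::numeric_limits<float>::max();
    int   shadeCount = 0;

    void draw(float fragmentDepth) {
        if (fragmentDepth < depth) {  // depth test passes: closer than what's there
            depth = fragmentDepth;
            ++shadeCount;             // this fragment gets shaded and written
        }                             // otherwise: rejected, no shading work
    }
};
```

Drawing the scene back to front (`draw(3); draw(2); draw(1);`) shades the pixel 3 times; front to back (`draw(1); draw(2); draw(3);`) shades it once and rejects the other two. On real hardware this saving only applies when early-z can run before the pixel shader.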
D3/Q4 lay down the Z buffer in the first rendering pass, so they don't have to worry about material sorting. So first you draw non-transparent things front to back with color writes disabled and without any real pixel shader. Then you set your Z compare func to equal and render your non-transparent things, sorted by material. Lastly you swing back and draw all your transparent things back to front.
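That two-pass scheme can be modeled with the same kind of one-pixel sketch. This is my reading of the post above, not actual engine code, and the depth values are invented:

```cpp
#include <limits>

// Toy single-pixel model of a depth prepass followed by a depth-equal pass.
struct PrepassPixel {
    float depth = std::numeric_limits<float>::max();
    int   shadeCount = 0;

    // Pass 1: Z only, color writes off, no pixel shading at all.
    void zPrepass(float d) { if (d < depth) depth = d; }

    // Pass 2: depth compare set to EQUAL. Only the frontmost surface can ever
    // pass, so draw order within this pass can be by material, not by depth.
    void shadePass(float d) { if (d == depth) ++shadeCount; }
};
```

With three opaque surfaces at depths 3, 1, and 2, the prepass leaves `depth == 1` regardless of draw order, and the shading pass then shades the pixel exactly once, no matter how the surfaces are sorted for material batching. That is why the prepass removes the tension between front-to-back sorting and material sorting.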
With back to front order of drawing two non-transparent polys, the card has to test and modify the Z buffer twice, as well as calculate the pixel's colour twice.
With front to back it has to modify the Z buffer values once, but test them twice, and modify the pixel colour once.
Quote:Original post by Promit
D3/Q4 lay down the Z buffer in the first rendering pass, so they don't have to worry about material sorting. So first you draw non-transparent things front to back with color writes disabled and without any real pixel shader. Then you set your Z compare func to equal and render your non-transparent things, sorted by material. Lastly you swing back and draw all your transparent things back to front.
I hope this thread isn't considered too old now, but I found this comment fascinating. Why would these engines take the time to go through a rendering pass just to record the Z buffer? I don't follow what this speeds up in later passes. You'd still sort materials during rendering to reduce state changes, so what effect does that have on the first pass?
This topic is closed to new replies.