Realistic see-through 3D meshes like water, smoke, fire...



I've been thinking about this for a while and I'm surprised I've never seen it in even the most advanced games. Translucent objects don't look right: explosions, smoke and particle effects get culled where they intersect solid objects, so you can see that they're just billboards (they look like paper or something, which is rather silly), and liquids look completely hollow, like pieces of glass wrapped around empty space (just take a good look at these things in real life and then in games). The thing is that in real life they're 3D - they have depth - and in games they don't. Most ray-tracers seem to handle this nicely (I mean the kind of ray-tracers that produce movie-quality stuff).

What I came up with is a technique that lets you do the same thing with a modified shader. You can think of these objects as having fog inside them: thinner parts are nearly transparent, while thicker parts are almost opaque. We already have this with the Z-buffer fog in most games - the more distant a pixel is, the more it is blended with the fog color. It shouldn't be too hard to do the same when rendering polygons, especially considering all the benefits: realistic water, smoke, fire, any liquids and gases, glass, magic effects...

1) Draw the translucent polygons that are facing away from the viewer; but instead of actually drawing them, just store their distances in a secondary z-buffer. These are the distances at which the translucency must stop.

To make it possible to have regular polygons *inside* a translucent object, you also have to make sure the translucency stops at the opaque surface whenever that surface is closer than the back face (assuming smaller depth values are closer):

if (zbuffer[x][y] < secondaryzbuffer[x][y]) secondaryzbuffer[x][y] = zbuffer[x][y];

2) When drawing a pixel from a translucent polygon facing the viewer, you need two values: the depth of that pixel, and the value at the same position in the secondary z-buffer. Subtract them to get the thickness, which gives you the alpha value for that pixel.

There's more to this - you could have translucent objects inside translucent objects and the like - but I just wanted to show you the basic stuff to see what you think (just this "basic" stuff is enough to get the effects I described). I was thinking about writing an article on this, but since I'm not an expert on 3D shaders, I'd like to get some opinions first.

[edited by - Jotaf on April 3, 2004 3:53:54 PM]
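The two steps above can be sketched on the CPU in plain Python. This is only an illustration of the idea, not a real shader; the buffer layout, the `density` parameter, and all names are made up for the example (a real implementation would do both passes on the GPU):

```python
# CPU-side sketch of the two-depth-buffer translucency idea.
# Convention: smaller depth = closer to the viewer.

FAR = 1000.0  # value used to clear the depth buffers


def render_translucent(width, height, zbuffer, back_faces, front_faces, density=0.1):
    """Compute a per-pixel alpha from the thickness of a translucent volume.

    zbuffer:     depths of the opaque geometry already drawn, as rows
    back_faces:  {(x, y): depth} for translucent polygons facing away
    front_faces: {(x, y): depth} for translucent polygons facing the viewer
    """
    # Step 1: store back-face depths in a secondary z-buffer.
    secondary = [[FAR] * width for _ in range(height)]
    for (x, y), depth in back_faces.items():
        if depth < secondary[y][x]:
            secondary[y][x] = depth

    # Clamp against opaque geometry, so translucency stops at solid
    # surfaces sitting *inside* the volume (the "if" from the post).
    for y in range(height):
        for x in range(width):
            if zbuffer[y][x] < secondary[y][x]:
                secondary[y][x] = zbuffer[y][x]

    # Step 2: thickness = back depth - front depth; alpha grows with
    # thickness (thin parts nearly transparent, thick parts nearly opaque).
    alpha = {}
    for (x, y), depth in front_faces.items():
        thickness = max(secondary[y][x] - depth, 0.0)
        alpha[(x, y)] = min(density * thickness, 1.0)
    return alpha
```

As written this only handles convex volumes: each pixel sees at most one front face and one back face, so a single subtraction gives the full path length through the medium.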

I'm not sure that I understand your idea correctly, but it seems to me that you propose rendering the depth values for both the front- and back-facing polygons of an object and then subtracting them to get the object's thickness. This can then be used to attenuate light based on the distance it travels through the object, which would only be correct for convex objects, but is nonetheless a good approximation for non-convex ones.
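The distance-based attenuation described above is usually modelled with the Beer-Lambert law: light falls off exponentially with the path length through the medium. A minimal sketch (the absorption coefficient and function names are made up for illustration):

```python
import math


def transmittance(thickness, absorption=0.5):
    """Beer-Lambert law: fraction of light that survives a path of the
    given thickness through a uniform medium (coefficient illustrative)."""
    return math.exp(-absorption * thickness)


def attenuated_color(background, volume_color, thickness, absorption=0.5):
    """Blend the background with the volume's color based on thickness:
    zero thickness shows the background unchanged; large thickness
    converges to the volume's own color."""
    t = transmittance(thickness, absorption)
    return tuple(t * b + (1.0 - t) * v for b, v in zip(background, volume_color))
```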

I seem to remember an NVIDIA presentation or paper about using this technique for subsurface scattering.

I don't follow your point about using this for smoke, fire, etc., which are usually rendered with billboards, since this technique requires a closed polygonal representation of the effect. Also, I can't see how rendering translucent objects inside translucent objects would work correctly.

Of course, all my thoughts are based on my vague understanding of your approach, so please correct me ;-)

Damn, I knew that if I told those guys they would steal my idea =P

Actually, they managed to expand the idea really well. They have some stuff that I never thought of. Of course the basics are kinda the same... Oh well =/ Thanks for telling me it's not new

Heh, I forgot to reply to you, DonnieDarko =P

I said there was a lot more to this... For non-convex objects, you add up the distances of the polygons facing the viewer in one buffer, and the distances of the polygons facing away in another. They covered it in the presentation. You can also see a nice pic there of some fog on the ground of a crypt. Imagine an animated mesh instead - it would make fire and smoke that look a lot better, because they have depth.

Translucent objects inside translucent objects would be a really nice feature, because then you could have different layers of materials for more complex effects. Hmm... I started writing out how to do it here, but it's a bit too complicated and this post is getting kinda long already =P

Then there's the topic of shading, which they completely overlooked. The next time you see a fire on TV, look at the smoke more carefully. Of course it's shaded: the areas at the top are brighter than the ones at the sides. Realistic shading would probably get as complex as rendering a translucent mesh of the light inside the smoke's own translucent mesh. I hope that made sense =)

I'd like to help those guys, but I'm an amateur and they work at nVIDIA ;P
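The non-convex trick in the first paragraph can be sketched for a single pixel. The key observation (a sketch of the idea as described, with illustrative names): summing all back-face depths and subtracting the sum of all front-face depths makes each enter/exit pair of crossings contribute exactly its interior segment length, so the total is the distance travelled inside the volume even when the ray enters and leaves it several times:

```python
def thickness_non_convex(front_depths, back_depths):
    """Total path length through a closed (possibly non-convex) mesh for
    one pixel. front_depths/back_depths list the depths of every front-
    and back-facing surface the view ray crosses; pairs of crossings
    cancel, leaving only the distance spent inside the volume."""
    return max(sum(back_depths) - sum(front_depths), 0.0)
```

For example, a ray that enters at depth 2, exits at 5, re-enters at 8 and exits at 12 spends (5 - 2) + (12 - 8) = 7 units inside the volume, which is exactly (5 + 12) - (2 + 8). With additive blending, a GPU could accumulate these sums per pixel in two passes without sorting the faces.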
