spek

Doing local fog (again)

11 posts in this topic

Not the first time I've asked this here (apologies), but I still haven't figured out a comfortable way to render local (semi-)volumetric fog. Think about "fog" above a pool of water, rooms filled with smoke or gas, or lightshafts in a dusty room. Whatever I tried, it's too slow, too undetailed, too uncontrollable, too ugly, too whatever. So far, I've tried 4 different techniques in my engine:
 
* Simple linear "distance" fog. 
Ultra simple and fast, but not suitable for making local fields.
 
* Volumetric lightshafts using raymarching or as a post-effect
As shown in several (nVidia) papers. Works nicely, but is pretty expensive and only works within the direct range of lights (not as an ambient effect). Nor does it naturally involve textures or sprites to draw clouds, smoke or dust particles. And the ray-marching based approach doesn't always look good from every view angle / camera position. Too thick, too thin. Hard to control.
 
* "True" volumetric fog
Since I have probes all over the place (typically covering 1 cubic meter) with stored ambient light, I can raymarch through that same grid, "inject" fog thickness into probes locally, and use their indirect light (a rough sketch of the marching loop is further below). My hope is to create one universal technique that can be used for all kinds of local fog, rather than hacking with all kinds of different tricks.
 
* Particles
Honestly I haven't tried this one for a while because it feels old/fake... but maybe I'm underestimating this effect (see below). My main problems were hiding the flattish look when getting close, proper depth sorting, and the amount of overdraw (slow) for denser fields.
 
 
 
Cool in theory, complicated in practice. And again, the amount of control is horrible. Depending on where you stand (how many "fog-filled" probes the ray hits), the fog is either too thick or not visible at all. But moreover, this effect has a heavy footprint. Rooms with no fog at all still have memory reserved and waste time on raytracing. Like with many GI-like tricks, I doubt whether the result is worth the effort.
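
Roughly, the marching loop I mean looks like the sketch below (CPU-side C++ just for illustration; sampleDensity / sampleAmbient are made-up placeholders for however the probe grid is actually stored):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Placeholders for the probe grid (1 probe per cubic meter); stubbed with
    // constants here, the real thing would read the probe textures.
    static float sampleDensity(const Vec3&) { return 0.1f; }
    static Vec3  sampleAmbient(const Vec3&) { return {0.3f, 0.3f, 0.35f}; }

    // March a view ray through the grid, accumulating in-scattered ambient light
    // and absorption (Beer-Lambert), then blend with the scene color behind it.
    static Vec3 marchFog(Vec3 origin, Vec3 dir, float maxDist, Vec3 sceneColor)
    {
        const float stepLen = 0.25f;   // meters per step
        float transmittance = 1.0f;    // how much of the scene still shows through
        Vec3  inscatter = {0, 0, 0};

        for (float t = 0.0f; t < maxDist && transmittance > 0.01f; t += stepLen)
        {
            Vec3 p = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };

            float stepTrans = std::exp(-sampleDensity(p) * stepLen);
            Vec3  ambient   = sampleAmbient(p);
            float w         = transmittance * (1.0f - stepTrans);

            inscatter.x += ambient.x * w;
            inscatter.y += ambient.y * w;
            inscatter.z += ambient.z * w;
            transmittance *= stepTrans;
        }

        return { sceneColor.x * transmittance + inscatter.x,
                 sceneColor.y * transmittance + inscatter.y,
                 sceneColor.z * transmittance + inscatter.z };
    }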
 
 
 
 
I guess I'm overthinking this. I mean, games 10 years ago already had fog fields, using (soft) particles. Sure it had its problems, being a "fake" effect, but at least the artist has control over it, and it doesn't involve complicated tricks.
 
Do modern games still use billboards on a large scale? I read somewhere that Bioshock Infinite, for example, which has a lot of clouds hanging between the buildings, uses simple billboards with a scrolling texture. The billboard wouldn't turn with the camera, and fades out as you approach. My guess with static billboards is that their 2D nature is too obvious to the viewer nowadays, but I could be wrong of course.
 
Also noticed that UDK has a feature to place local fog volumes. Basically just a cube or other primitive that renders fog (possibly with a 3D texture?). But I wonder how to render a bunch of those when they overlap each other.
 
 
Inspiration links:
Dust in my own engine (but pretty expensive & only working within a direct light volume):
 
 
Since my game is mainly indoors, I'm most interested in placing strings of (textured) fog that you can walk through, but without it being obvious that they're flat billboards, of course.
 
Ciao!
Edited by spek

Couldn't read all 78 pages yet, but that sounds like a nice read, thanks!

 

>> Soft particles

I think indeed that for many (smaller, very local) effects, this is still the way to go. It gets a bit tricky when doing larger sections. For example, I have a pretty wide & long corridor where I want smokey "clouds" below the ceiling. Since you can look pretty far in this particular situation, filling the entire ceiling would need a big number of particles, which could hurt performance. For a foggy Metro tube on the other hand, where you walk through the fog, you could have a limited particle cloud that simply travels along with the camera.

 

As with lighting and many other rendering problems, the preferred way is to make one uniform solution that can handle everything. But when it comes to effects like these, it still might be a better idea to let the artist pick the right tool for each specific situation. But I haven't read the paper in REF_Cracker's answer yet, so who knows what that brings!

 

 

 

>> EDIT

Just read the paper (had some time left :)

The results are awesome, yet I have some practical issues. I tried something very similar before (but with cheaper/uglier math and dumb upscaling), and always ran into a couple of issues. But maybe you guys have some good advice.

 

* No light volume = no fog. My game has quite a lot of relatively dark areas and therefore often depends on indirect (baked) light.

Now this can be solved by adding ambient light for each march-step as well, since I have it available in a 3D texture anyway. Defining how dense the fog should be at point X is still tricky though.

 

 

* Light selection

Don't know how these guys do it, but in my attempt I collected nearby lights that had "lightshafts enabled". Yet since I didn't want to potentially loop through dozens of lights for each ray step, the number was limited. Automatically selecting the most important lights would often cause distant lamps to suddenly pop in or disappear when another takes over.

 

Btw, performance-wise, would it be smarter to do a separate pass for each light and add everything up? Or should I try to do everything in one big pass (i.e. a loop in a loop)? At first glance the latter sounds smarter, but then again, unless lamp volumes overlap a lot, you would typically hit only 1 or at most 2 lights per ray step, and waste energy on the others...
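
To make the "loop in a loop" option concrete, this is roughly what I mean (again CPU-side C++ for illustration; the Light struct, its radius field and evaluateLight are made-up placeholders):

    #include <vector>

    struct Vec3  { float x, y, z; };
    struct Light { Vec3 pos; float radius; Vec3 color; };

    // Placeholder: the real version would apply falloff, cookies, shadow maps, ...
    static Vec3 evaluateLight(const Light& l, const Vec3&) { return l.color; }

    // One big pass: every ray step loops over all candidate lights, but a cheap
    // squared-distance test skips lamps whose volume this step doesn't touch,
    // so non-overlapping lights cost little more than a distance check.
    static Vec3 inscatterAtStep(const Vec3& p, const std::vector<Light>& lights)
    {
        Vec3 sum = {0, 0, 0};
        for (const Light& l : lights)
        {
            float dx = p.x - l.pos.x, dy = p.y - l.pos.y, dz = p.z - l.pos.z;
            if (dx * dx + dy * dy + dz * dz > l.radius * l.radius)
                continue;                      // outside this lamp's volume
            Vec3 c = evaluateLight(l, p);
            sum.x += c.x;  sum.y += c.y;  sum.z += c.z;
        }
        return sum;
    }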

 

 

* Additive blend

I found balancing hard. Just adding everything up can quickly lead to super-bright results. Making lights less powerful helps, but it's still hard to make it look good from all directions. Standing outside or inside a volume can drastically change the number of successive samples. Also, looking at a spotlight from the side or straight into it makes a difference. The best way I could think of is simply stopping the loop once a MAX_DENSITY value is reached. However, that sometimes makes the result appear thick and flattish when looking into the "core" of a bunch of lights.

 

Another problem comes with fog types that should actually darken the screen: smoke. That's not typical fog though, so maybe I should just treat that one differently and use soft particles instead.

Edited by spek

For particles, "Weighted blended order independent transparency" should be helpful: http://jcgt.org/published/0002/02/09/ - performant OIT for non-refractive stuff. As I saw on Twitter concerning rendering: "It's all smoke and mirrors. Except smoke and mirrors, that's hard to render."
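
From memory, the final composite in that technique boils down to something like the sketch below (check the paper for the exact weight function and render-target setup; the names here are mine). The transparency pass additively accumulates color * alpha * weight and alpha * weight into one target, and multiplies a separate "revealage" target by (1 - alpha) per fragment:

    #include <algorithm>

    struct Rgb { float r, g, b; };

    // accumRgb / accumA: sums of (color * alpha * weight) and (alpha * weight)
    // revealage:         product of (1 - alpha) over all transparent fragments,
    //                    i.e. how much of the background is still visible
    static Rgb compositeOIT(Rgb accumRgb, float accumA, float revealage, Rgb background)
    {
        float denom = std::max(accumA, 1e-5f);   // avoid divide-by-zero
        Rgb avg = { accumRgb.r / denom, accumRgb.g / denom, accumRgb.b / denom };

        float coverage = 1.0f - revealage;
        return { avg.r * coverage + background.r * revealage,
                 avg.g * coverage + background.g * revealage,
                 avg.b * coverage + background.b * revealage };
    }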

 

And yeah, that Lords of the Fallen paper is great. I can already see multiple games implementing something like it (some people at Ubisoft did something fantastically similar for AC already) and artists just abusing the heck out of it. A million godrays blinding you in every level, here we come.

 

Ninja edit to your edit- Yeah smoke should definitely be done differently, as you're doing two different phenomena. "Fog" represents particles smaller than the wavelength of the light, thus scattering the results but not absorbing. Smoke has particles bigger than the wavelength and causes direct absorption.

 

If you're going deferred, the Lords of the Fallen guys have a neat per-vertex deferred approach for small particles that they use for smoke. If you're going forward, there are ways to make forward-lit particles and Z-blurring work at the same time. Doing a lot of particles today should only be a problem depending on your targeted systems. There are nice ways to batch everything and avoid overdraw, so if you've got the performance, then thousands of particles (and more) are doable with some work.

Edited by Frenetic Pony

>> If you're going deferred, the Lords of the Fallen guys have a neat per-vertex deferred approach for small particles that they use for smoke

Thanks for pointing that out. Never thought about that, really! For the others who are interested:

 

 

1* Render ("rasterize") your particles as single points into a (1D or 2D) texture.

Each particle gets its own pixel. This pixel typically contains the particle position. Where it lands in the target texture depends on an ID that is unique for each particle.

 

2* Apply all your lights to this texture, as you would normally do in deferred rendering.

Except that each light is just a quad that covers the entire screen (or texture canvas, so to say). For each particle (pixel) you know the position, and thus you'll know whether it can be lit or not. A real normal isn't needed since particles typically face towards the viewer, so use that facing direction as the normal.

 

3* Accumulate light colors into another "particle-diffuse-light" texture.

Optionally you could do another pass to add your ambient light as well.

 

4* When rendering the actual particles, refer to the texture from step 3.

Each particle uses its unique index to fetch the light results from step 3. You can do this fetch in the vertex shader already, so the only thing your pixel shader has to do is draw the (animated) particle texture. Overdraw still sucks, but at least this allows you to play with lights at low cost.
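
A small sketch of the ID-to-texel mapping that ties steps 1 and 4 together (the texture size is an assumption on my part); the same mapping is used when rasterizing the point in step 1 and when fetching the lit color in the particle's vertex shader in step 4:

    #include <utility>

    constexpr int LIGHT_TEX_SIZE = 256;   // assumed; holds 256 * 256 = 65536 particles

    // Step 1 writes the particle's position to this texel; step 4 reads the
    // accumulated light back from the same texel, indexed by the particle ID.
    static std::pair<int, int> particleTexel(int particleId)
    {
        return { particleId % LIGHT_TEX_SIZE, particleId / LIGHT_TEX_SIZE };
    }

    // In the shader you'd turn this into UVs, e.g. u = (x + 0.5) / LIGHT_TEX_SIZE.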

 

 

 

Or read

http://www.slideshare.net/philiphammer/the-rendering-technology-of-lords-of-the-fallen

For a general-case blending solution that works for smoke, fog, and smog (is "smog" derived from smoke-fog??), you can either (1) do the blending yourself in the shader, or (2) use premultiplied alpha blending.

For (1), render your particles / do your ray-marching / etc. to an off-screen buffer. Then composite to a new buffer while reading from this fog buffer and from your original scene buffer. Blend with whatever logic you like in the compositing shader.

For (2), the blend mode is [one + invSrcAlpha] instead of the usual [srcAlpha + invSrcAlpha]. The RGB result from your shader represents the amount of new light that has been scattered/emitted towards the viewer, while the alpha result represents the percentage of existing light along the view ray that has now been absorbed.
This is used in traditional particle systems for rendering additive fire and alpha-blended smoke at the same time - even on the one texture/billboard!
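
For reference, a sketch of that blend state in OpenGL (in D3D11 the equivalent would be SrcBlend = D3D11_BLEND_ONE, DestBlend = D3D11_BLEND_INV_SRC_ALPHA); this assumes a GL context is already set up:

    #include <GL/gl.h>

    // Premultiplied-alpha blending: dst = src.rgb * 1 + dst.rgb * (1 - src.a)
    // src.rgb = new light scattered/emitted towards the viewer,
    // src.a   = fraction of the existing background light that gets absorbed.
    static void setPremultipliedBlend()
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    }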

If you're having trouble balancing the brightness/absorption levels, make sure all your values have sensible physical units associated with them, and be aware that you'll have different sampling densities at different times, i.e. sometimes a sample might be representative of 0.5m of depth through the fog (e.g. 10 samples through a 5m volume), other times a sample might be representative of 100m of depth!
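
One way to keep that consistent is to scale each sample's opacity by the depth it represents (Beer-Lambert), roughly like this (sigma in 1/meter is an assumed unit):

    #include <cmath>

    // Opacity of one ray-march sample, given the extinction coefficient 'sigma'
    // (in 1/meter) and how many meters of fog this sample stands for. Taking
    // fewer, longer steps automatically makes each sample more opaque, so the
    // overall look stays the same regardless of the sample count.
    static float sampleAlpha(float sigma, float stepLengthMeters)
    {
        return 1.0f - std::exp(-sigma * stepLengthMeters);
    }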

Also, use HDR if you're not already ;-)
Ninja edit to your edit- Yeah smoke should definitely be done differently, as you're doing two different phenomena. "Fog" represents particles smaller than the wavelength of the light, thus scattering the results but not absorbing. Smoke has particles bigger than the wavelength and causes direct absorption.

Are you sure that fog droplets are just a couple of hundred nm big? Do you have any references?

 

Edit:

This site says:

 

Cloud, fog and mist droplets are very small. Their mean diameter is typically only 10-15 micron (1 micron = 1/1000 mm) but in any one cloud the individual drops range greatly in size from 1 to 100 micron dia. Cloud droplets are 10 to 1000X smaller than raindrops.

 

 

Wiki confirms the smaller size of cloud droplets:

 

A typical raindrop is about 2 mm in diameter, a typical cloud droplet is on the order of 0.02 mm, and a typical cloud condensation nucleus (aerosol) is on the order of 0.0001 mm or 0.1 micrometer or greater in diameter.

 

 

According to this paper, geometric-optics principles break down starting at approx. 5 microns (paragraph 2, "Optical Considerations", line ~11). I don't know to what extent this correlates with the absorption behaviour, though.

 

With a mean of 10-15 microns it looks to me like the droplets are substantially larger than the visible light's wavelength.

 

(There is also a whole book.)

 

Edited by Tasty Texel

 

Ninja edit to your edit- Yeah smoke should definitely be done differently, as you're doing two different phenomena. "Fog" represents particles smaller than the wavelength of the light, thus scattering the results but not absorbing. Smoke has particles bigger than the wavelength and causes direct absorption.

Are you sure that fog droplets are just a couple of hundred nm big? Do you have any references?

 

 

OK, to be exact, it's actually more Mie scattering (the water droplets are indeed bigger than the wavelength of visible light). But because fog consists of water droplets (thus highly transparent and reflective, and barely absorptive), most atmosphere setups don't treat it properly. Most people just crank up some intensity factor (Rayleigh and Mie, or even simpler), set a color to grey and call it good. You're right about the average droplet size for fog, though.

Edited by Frenetic Pony
