JDX_John

Multiple volumetric fog bodies

4 posts in this topic

It's not actually fog, but for medical visualization I am rendering a patient under simulated x-ray, and treating soft tissue as volumetric fog is quite appropriate - each organ a simple hollow hull representing a region of volumetric fog.

 

I found this on the topic which sounds pretty interesting - basically you render front-facing polys to one buffer and back-facing polys to another, and work out the distance between them to get the fog depth. But what if you have multiple bodies and they don't have the same fog density? The link mentions you have to consider this, but as far as I could see it didn't offer an answer.

 

Any suggestions? Either on this technique or alternative approaches? I'm happy to hear about fancy hi-tech solutions, but I am targeting iPads, which means I'm limited to GLSL ES 2.0 and not uber-powerful GPUs.

 

Thank you.


Well it's the same thing, I think. Render back-facing and front-facing polys for each body, and at the end you have a list of depths and associated fog densities. Sort them, and the total fog can be quickly computed by multiplying together the transmittance of each segment between consecutive depths. For instance, if you have one body B inside another body C, sort of like this:

CCCCCCCCBBBBCCCCC

Then you first render body B and obtain the points:

CCCCCCCCBBBBCCCCC
        ^  ^
        1  2

And you do the same for body C:

CCCCCCCCBBBBCCCCC
^               ^
3               4

You then sort them by distance inside some small table (you can use a sorting network to do this efficiently if you have a maximum number of bodies):

3 - C - distance 0.2
1 - B - distance 0.54
2 - B - distance 1.1
4 - C - distance 1.3

From this you can derive that the 3-1 section is inside body C (from which you can get your fog density), has length 0.54 - 0.2 = 0.34, and derive the fog contribution from that. Do the same for the 1-2 and 2-4 sections, and multiply them together (because transmittance is multiplicative, not additive) and you have your total fog from 3 to 4. To keep track of which body you are in you can use a simple inside-outside rule, which involves maintaining a stack of overlapping bodies from bottom to top, but if you know you will only ever have two overlapping bodies, such as a single organ inside the human body (but possibly more than one in total, of course), you can simplify this to a single comparison with no added complexity.
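A minimal CPU sketch of this sorted-interval bookkeeping, assuming constant density per body (the event-list layout, body ids, and the `transmittance` helper are invented for illustration):

```python
import math

def transmittance(events, densities):
    """events: list of (distance, body_id, is_entry) intersection points.
    densities: body_id -> constant fog density. Returns the transmittance
    along the ray, multiplying per-segment absorption between sorted events."""
    events.sort(key=lambda e: e[0])
    active = []        # stack of bodies the ray is currently inside
    t = 1.0            # accumulated transmittance (multiplicative)
    prev = None
    for dist, body, is_entry in events:
        if active and prev is not None:
            k = densities[active[-1]]   # innermost body wins (B inside C)
            t *= math.exp(-k * (dist - prev))
        if is_entry:
            active.append(body)
        else:
            active.remove(body)
        prev = dist
    return t
```

Running this on the B-inside-C example above (points 3, 1, 2, 4 at distances 0.2, 0.54, 1.1, 1.3) yields one factor per section: 3-1 with C's density, 1-2 with B's, 2-4 with C's again.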

 

The main complexity here is probably how to store all the intersection points in a memory-efficient way, since there may be a lot of them. One way is to render the bodies front to back for each pixel somehow, so that in the first pass you handle 3-1, then the next 1-2, and finally 2-4, only storing enough state between calls to keep track of the current fog depth for each pixel (perhaps combining them as you go using multiplicative blending) and the distance so far handled. I don't know how feasible this is, though. You'll probably need to put some hard limits on how many bodies there are in order to optimize it.

 

Perhaps a more efficient method is to approximate the problem using hacks, like calculating fog between 3-4, and then between 1-2 (one pass per object) and multiplying them. This isn't quite correct, since you're accounting for the overlapping parts twice, but it could probably be made to look good enough with some tweaking. I'd recommend this approach, to be honest; it's probably good enough. Only use the previous method if this one isn't accurate enough for your needs.
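The per-object approximation reduces to a product of independent chords, one per body; a sketch (the `approx_transmittance` helper and its inputs are hypothetical, and each body's full chord length is assumed to have been computed separately, overlaps included):

```python
import math

def approx_transmittance(chords):
    """chords: list of (density, chord_length) pairs, one per body,
    each computed independently. Overlapping regions get counted once
    per body that covers them, which is the approximation's error."""
    t = 1.0
    for k, d in chords:
        t *= math.exp(-k * d)
    return t
```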


You could represent fog volumes with analytical convex primitives (spheres, boxes, polyhedra, etc.), then calculate the intersection between these primitives and view rays directly in a pixel shader. The decomposition of a body into convex primitives is a separate, well-known topic, and tools exist to handle it.
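As an illustration of the analytic ray/primitive intersection, here is a sketch for the simplest primitive, a sphere (the function name and conventions are my own, and a normalized ray direction is assumed):

```python
import math

def ray_sphere_chord(origin, direction, center, radius):
    """Length of the ray's chord through a sphere, 0.0 if it misses.
    direction must be normalized. The chord length is the 'd' you would
    feed into exp(-k * d) for a constant-density fog sphere."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = ox * dx + oy * dy + oz * dz
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - c
    if disc <= 0.0:
        return 0.0
    s = math.sqrt(disc)
    t0, t1 = -b - s, -b + s
    # clamp to the forward half of the ray (handles a camera inside the sphere)
    t0 = max(t0, 0.0)
    return max(t1 - t0, 0.0)
```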

In order to have variable fog density you have to decide the granularity of the density representation and store it accordingly: per vertex, per primitive, in a volume texture, or in a stack of screen-aligned textures, as done in some recent games (Killzone?).

The absorption component of fog is multiplicative and the in-scattering term is additive, so you can render the fog primitives in any order and accumulate the results in an offscreen texture. The same principles apply to the rendering of volumetric lights (physically the same thing).

I am not an expert in mobile development, but a well-optimized implementation should run very well on iPads.


I should clarify: this would need to run as a bunch of shader passes (ideally as few as possible)... @Bacterius I get the idea of what you're saying, but not how one might implement this in shaders.

 

@Reitano, by variable density I mean each individual shape (polyhedron) has a constant density, but this value is different for different shapes. Not sure if that simplifies things; as you say the absorption is multiplicative (and scattering isn't a factor here - in X-ray the transmission fraction t through thickness d is basically e^(-k*d), so passing through different materials we quite nicely get total transmission t = e^(-k1*d1) * e^(-k2*d2) * e^(-k3*d3) = e^(-(k1*d1 + k2*d2 + k3*d3))).
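That identity (the product of per-material transmissions equals a single exponential of the summed optical depths) is easy to check numerically; the k and d values below are arbitrary:

```python
import math

# Transmission through three materials with attenuation coefficients k
# and path lengths d (arbitrary example values):
k = [0.5, 2.0, 0.8]
d = [1.0, 0.3, 0.5]

# Product of per-material transmissions: e^(-k1*d1) * e^(-k2*d2) * e^(-k3*d3)
product_form = 1.0
for ki, di in zip(k, d):
    product_form *= math.exp(-ki * di)

# Single exponential of the summed optical depths: e^(-(k1*d1 + k2*d2 + k3*d3))
sum_form = math.exp(-sum(ki * di for ki, di in zip(k, d)))
```

The two forms agree, which is why you can accumulate k*d additively along the ray and apply a single exp() at the end.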


I'd try this approach:

 

Initialization:

- Break your shapes into convex polyhedra with a face count <= 8, either manually or with a tool

- Allocate an RGBA render target; 8 bits per channel should be sufficient

 

Per frame:

- Clear render target to 1

- For each polyhedron:

   Rasterize its front faces if the camera is outside it, back faces otherwise

   For each pixel

        calculate the analytical intersection between the associated view ray and the polyhedron

        calculate absorption as A = exp(-k1 * d), where k1 * d is the optical distance within the polyhedron

        multiply A with the value stored in the render target by using hardware blending

- Fetch the final absorption term and multiply it with the colour of the rendered scene
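A CPU stand-in for this per-frame loop, with a running product playing the role of the hardware multiplicative blend into the render target (the `(k, chord_fn)` representation of a polyhedron is invented for the sketch):

```python
import math

def accumulate_absorption(polyhedra, ray):
    """polyhedra: list of (k, chord_fn) where k is the polyhedron's
    attenuation coefficient and chord_fn(ray) returns the path length d
    of the ray through it (the analytic intersection from the steps above).
    Returns the final absorption term for one pixel."""
    target = 1.0                        # render target cleared to 1
    for k, chord_fn in polyhedra:
        d = chord_fn(ray)               # optical path through this polyhedron
        target *= math.exp(-k * d)      # hardware blend: dst *= A
    return target                       # multiply with the scene colour
```

Because the blend is a pure product, the order in which the polyhedra are rasterized doesn't matter, matching the earlier point about absorption being order-independent.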

 

As an optimization, if your shader model allows it, you can handle more than one shape in the same pass.

I have made some assumptions in the steps above; you might have to rearrange things depending on your actual pipeline and requirements. Also, I ignored occlusion of fog volumes by opaque geometry. If you have access to the scene depth, that is a trivial addition.

 

Hope that helps!

