Multiple volumetric fog bodies



It's not actually fog, but for medical visualization I am rendering a patient under simulated X-ray, and treating soft tissue as volumetric fog is quite appropriate - each organ is a simple hollow hull representing a region of volumetric fog.


I found this on the topic which sounds pretty interesting - basically you render front-facing polys to one buffer and back-facing polys to another and work out the distance between them to give fog depth. But what if you have multiple bodies, and they don't have the same fog density? The link mentions you have to consider this, but as far as I could see didn't offer an answer.
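For a single body, that two-buffer idea boils down to Beer-Lambert applied to the per-pixel thickness. A minimal CPU-side sketch, with made-up depth values and a made-up extinction coefficient:

```python
import math

# Hypothetical per-pixel depths: one buffer holds front-face depths,
# the other back-face depths, for a single convex fog body.
front_depth = [0.5, 0.7, 1.0]
back_depth  = [1.2, 0.9, 1.0]

K = 2.0  # extinction coefficient of the fog (assumed constant)

# Fog thickness is the back-to-front distance; the transmitted
# fraction follows Beer-Lambert: T = exp(-K * thickness).
transmission = [math.exp(-K * (b - f)) for f, b in zip(front_depth, back_depth)]
```

A pixel covered by two bodies with different K is exactly where this single-thickness trick breaks down.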


Any suggestions? Either on this technique or alternative approaches? I'm happy to hear about fancy hi-tech solutions, but I am targeting iPads, which means I'm limited to GLSL ES 2.0 and not uber-powerful GPUs.




Well it's the same thing, I think. Render back-facing and front-facing polys for each body, and at the end you have a list of intersection distances and associated fog densities, which can be sorted; the total fog effect can then be quickly computed by combining the contribution of every interval between consecutive intersection points. For instance if you have one body B inside another body C, sort of like this:


Then you first render body B and obtain the points:

        ^  ^
        1  2

And you do the same for body C:

^               ^
3               4

You then sort them by distance inside some small table (you can use a sorting network to do this efficiently if you have a maximum number of bodies):

3 - C - distance 0.2
1 - B - distance 0.54
2 - B - distance 1.1
4 - C - distance 1.3

From this you can derive that the 3-1 section is inside body C (from which you can get your fog density), has length 0.54 - 0.2 = 0.34, and derive the fog depth from that. Do the same for the 1-2 and 2-4 sections, and multiply the results together (because transmitted intensity is multiplicative, not additive) and you have your total fog depth from 3 to 4. To keep track of which body you are in you can use a simple inside-outside rule, which involves maintaining a stack of overlapping bodies from bottom to top, but if you know you will only ever have two overlapping bodies, such as a single organ inside the human body (but possibly more than one in total, of course), you can simplify this to a single comparison with no added complexity.
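A CPU-side sketch of that walk for one ray, assuming properly nested bodies; the distances are the ones from the table above, the densities are made up for illustration:

```python
import math

# Intersection events along one view ray: (distance, body id),
# using the distances from the table above.
events = [(0.54, "B"), (1.1, "B"), (0.2, "C"), (1.3, "C")]
events.sort()  # sort by distance

density = {"B": 3.0, "C": 1.0}  # hypothetical extinction coefficients

# Walk the sorted events with a stack of the bodies the ray is
# currently inside (the inside-outside rule); each segment between
# consecutive events uses the innermost body's density.
stack = []
optical_depth = 0.0
prev = None
for dist, body in events:
    if prev is not None and stack:
        optical_depth += density[stack[-1]] * (dist - prev)
    if stack and stack[-1] == body:
        stack.pop()            # second hit of a body: the ray exits it
    else:
        stack.append(body)     # first hit: the ray enters the body
    prev = dist

transmission = math.exp(-optical_depth)  # combined multiplicative result
```

With only one level of nesting the stack degenerates to the single comparison mentioned above.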


The main complexity here is probably how to store all the intersection points in a memory-efficient way, since there may be a lot of them. One way is to render the bodies front to back for each pixel somehow, so that in the first pass you handle 3-1, then the next 1-2, and finally 2-4, only storing enough state between calls to keep track of the current fog depth for each pixel (perhaps combining them as you go using multiplicative blending) and the distance so far handled. I don't know how feasible this is, though. You'll probably need to put some hard limits on how many bodies there are in order to optimize it.


Perhaps a more efficient method is to approximate the problem using hacks, like calculating fog between 3-4, and then between 1-2 (one pass per object) and multiplying them. That isn't quite correct, since you're accounting for the overlapping parts twice, but it could probably be made to look good enough with some tweaking. I'd recommend this approach, to be honest - it is probably good enough. Only use the previous method if this one isn't accurate enough for your needs.
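A sketch of the difference between the two, using the same made-up densities and the spans from the table (B from 0.54 to 1.1 inside C from 0.2 to 1.3):

```python
import math

# Hypothetical extinction coefficients and full entry-to-exit spans
# along one ray: C spans 1.3 - 0.2, B spans 1.1 - 0.54.
k_C, d_C = 1.0, 1.1
k_B, d_B = 3.0, 0.56

# One pass per object: each body's transmission is computed over its
# whole span, then multiplied. B's span is counted twice (once with
# C's density, once with B's), overestimating absorption in the overlap.
t_approx = math.exp(-k_C * d_C) * math.exp(-k_B * d_B)

# Exact nested answer: C only contributes outside B.
t_exact = math.exp(-(k_C * (d_C - d_B) + k_B * d_B))
```

The approximation always errs on the side of too much fog, which for tweakable densities may be acceptable.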


You could represent fog volumes with analytical convex primitives (spheres, boxes, polyhedra, etc.), then calculate the intersection between these primitives and view rays directly in a pixel shader. The decomposition of a body into convex primitives is a separate, known topic, and tools exist to handle it.

In order to have variable fog density you have to decide the granularity of the density representation and store it accordingly: per vertex, per primitive, in a volume texture, or in a stack of screen-aligned textures, as done in some recent games (Killzone?).

The absorption component of fog is multiplicative and the in-scattering term is additive, so you can render the fog primitives in any order and accumulate the results in an offscreen texture. The same principles apply to the rendering of volumetric lights (physically the same thing).

I am not an expert in mobile development, but a well-optimized implementation should run very well on iPads.


I should clarify, this would need to run as a bunch of shader passes (ideally as few as possible)... @Bacterius I get the idea of what you're saying but not how one might implement this in shaders.


@Reitano, by variable density I mean each individual shape (polyhedron) has a constant density, but this value is different for different shapes. Not sure if that simplifies things; as you say the absorption is multiplicative (and scattering isn't a factor here - in X-ray the transmission fraction t through thickness d is basically e^(-kd), so passing through different materials we quite nicely get total transmission t = e^(-k1·d1) · e^(-k2·d2) · e^(-k3·d3) = e^(-(k1·d1 + k2·d2 + k3·d3))).
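A quick numerical check of that identity, with made-up attenuation coefficients and thicknesses for three materials:

```python
import math

# Hypothetical attenuation coefficients k_i and traversed thicknesses
# d_i for three materials along one ray (e.g. tissue, bone, tissue).
k = [0.2, 1.5, 0.2]
d = [3.0, 1.0, 2.0]

# Multiplying per-material transmissions...
t_product = 1.0
for ki, di in zip(k, d):
    t_product *= math.exp(-ki * di)

# ...equals exponentiating the summed optical depths, in any order.
t_sum = math.exp(-sum(ki * di for ki, di in zip(k, d)))
```

This order independence is what lets the fog primitives be rendered in any order with multiplicative blending.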


I'd try this approach:



- Break your shapes into convex polyhedra with a face count <= 8, either manually or with a tool

- Allocate an RGBA render target; 8 bits per channel should be sufficient


Per frame:

- Clear render target to 1

- For each polyhedron:

   Rasterize its front faces if the camera is outside, back faces otherwise

   For each pixel

        calculate the analytical intersection between the associated view ray and the polyhedron

        calculate absorption as A = exp(-k1 * d), where k1 * d is the optical distance within the polyhedra

        multiply A with the value stored in the render target by using hardware blending

- Fetch the final absorption term and multiply it with the colour of the rendered scene
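In a real implementation the intersection runs in the pixel shader; here is a CPU-side Python sketch of the per-pixel step, using a unit cube as the convex polyhedron and a made-up density - the geometry and coefficients are all assumptions for illustration:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_convex(origin, direction, planes):
    """Clip a ray against half-spaces (n . p <= c); returns (t_near, t_far)."""
    t0, t1 = 0.0, float("inf")
    for n, c in planes:
        denom = dot(n, direction)
        dist = c - dot(n, origin)
        if abs(denom) < 1e-9:
            if dist < 0.0:
                return None        # ray parallel to and outside this plane
            continue
        t = dist / denom
        if denom > 0.0:
            t1 = min(t1, t)        # leaving the half-space: far clip
        else:
            t0 = max(t0, t)        # entering the half-space: near clip
        if t0 > t1:
            return None            # ray misses the polyhedron
    return t0, t1

# Unit cube [0,1]^3 as six half-spaces; hypothetical density k = 2.0.
cube = [((-1, 0, 0), 0), ((1, 0, 0), 1), ((0, -1, 0), 0),
        ((0, 1, 0), 1), ((0, 0, -1), 0), ((0, 0, 1), 1)]
hit = ray_convex((0.5, 0.5, -1.0), (0, 0, 1), cube)

target = 1.0                        # render target cleared to 1
if hit:
    d = hit[1] - hit[0]             # optical distance within the polyhedron
    target *= math.exp(-2.0 * d)    # the hardware multiplicative blend
```

The `t0 = 0.0` start assumes the camera is outside the body, matching the front-face/back-face rasterization rule above.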


As an optimization, if your shader model allows it, you can handle more than one shape in the same pass.

I have made some assumptions in the steps above; you might have to rearrange things depending on your actual pipeline and requirements. Also, I ignored occlusion of fog volumes by opaque geometry; if you have access to the scene depth, that is a trivial addition.


Hope that helps!
