Multiple volumetric fog bodies



#1 JDX_John   Members   -  Reputation: 284


Posted 29 January 2014 - 03:11 AM

It's not actually fog, but for medical visualization I am rendering a patient under simulated x-ray, and treating soft tissue as volumetric fog is quite appropriate - each organ a simple hollow hull representing a region of volumetric fog.

 

I found this on the topic, which sounds pretty interesting: basically you render front-facing polys to one buffer and back-facing polys to another, and work out the distance between them to give the fog depth. But what if you have multiple bodies and they don't have the same fog density? The link mentions that you have to consider this, but as far as I could see it didn't offer an answer.

 

Any suggestions? Either on this technique or alternative approaches? I'm happy to hear about fancy hi-tech solutions, but I am targeting iPads, which means I'm limited to GLSL ES 2.0 and not uber-powerful GPUs.

 

Thank you.





#2 Bacterius   Crossbones+   -  Reputation: 9304


Posted 29 January 2014 - 04:13 AM

Well, it's the same thing, I think. Render the back-facing and front-facing polys for each body, and at the end you have a list of depths and associated fog densities. Sort the depths, compute the fog contribution for each interval between consecutive depths, and multiply those contributions together to get the total. For instance, if you have one body B inside another body C, sort of like this:

CCCCCCCCBBBBCCCCC

Then you first render body B and obtain the points:

CCCCCCCCBBBBCCCCC
        ^  ^
        1  2

And you do the same for body C:

CCCCCCCCBBBBCCCCC
^               ^
3               4

You then sort them by distance inside some small table (you can use a sorting network to do this efficiently if you have a maximum number of bodies):

3 - C - distance 0.2
1 - B - distance 0.54
2 - B - distance 1.1
4 - C - distance 1.3

From this you can derive that the 3-1 section is inside body C (from which you get your fog density) and has length 0.54 - 0.2 = 0.34, and derive the fog contribution from that. Do the same for the 1-2 and 2-4 sections, and multiply them together (because transmitted intensity is multiplicative, not additive) and you have your total fog from 3 to 4. To keep track of which body you are in, you can use a simple inside-outside rule, which involves maintaining a stack of overlapping bodies from bottom to top. But if you know you will only ever have two overlapping bodies, such as a single organ inside the human body (possibly more than one organ in total, of course), you can simplify this to a single comparison with no added complexity.
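
For the nested example, the per-pixel arithmetic is small once the depths are known. A minimal GLSL sketch, assuming the four sorted depths and the two densities have already been gathered per pixel (which is the hard part); the function name and parameters are illustrative, not part of any real pipeline:

// Transmittance along one view ray through the nested case
// C(3) ... B(1) ... B(2) ... C(4); each section uses the density
// of the innermost body containing it, and the sections multiply.
float transmittanceNested(float d3, float d1, float d2, float d4,
                          float kB, float kC)
{
    float tC1 = exp(-kC * (d1 - d3)); // section 3-1, inside C only
    float tB  = exp(-kB * (d2 - d1)); // section 1-2, inside B
    float tC2 = exp(-kC * (d4 - d2)); // section 2-4, inside C only
    return tC1 * tB * tC2;            // intensity is multiplicative
}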

 

The main complexity here is probably how to store all the intersection points in a memory-efficient way, since there may be a lot of them. One way is to render the bodies front to back for each pixel somehow, so that in the first pass you handle 3-1, then the next 1-2, and finally 2-4, only storing enough state between passes to keep track of the current fog depth for each pixel (perhaps combining them as you go using multiplicative blending) and the distance handled so far. I don't know how feasible this is, though. You'll probably need to put some hard limits on how many bodies there are in order to optimize it.

 

Perhaps a more efficient method is to approximate the problem using hacks, like calculating fog between 3-4, and then between 1-2 (one pass per object), and multiplying them. This isn't quite correct, since you're accounting for the overlapping parts twice, but could probably be made to look good enough with some tweaking. I'd recommend this approach, to be honest; it's probably good enough. Only use the previous method if this one isn't accurate enough for your needs.
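
A sketch of that approximation, using the same illustrative depth names as above; note that the 1-2 span is deliberately counted under both densities, which is exactly the error being tolerated:

float transmittanceApprox(float d3, float d1, float d2, float d4,
                          float kB, float kC)
{
    float tC = exp(-kC * (d4 - d3)); // whole 3-4 chord at C's density
    float tB = exp(-kB * (d2 - d1)); // whole 1-2 chord at B's density
    // The 1-2 overlap absorbs twice, so this is darker than the exact
    // result by a factor of exp(-kC * (d2 - d1)).
    return tC * tB;
}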




#3 Reitano   Members   -  Reputation: 553


Posted 29 January 2014 - 05:43 AM

You could represent fog volumes with analytical convex primitives (spheres, boxes, polyhedra, etc.), then calculate the intersection between these primitives and view rays directly in a pixel shader. The decomposition of a body into convex primitives is a separate, well-known topic, and tools exist to handle it.

In order to have variable fog density, you have to decide the granularity of the density representation and store it accordingly: per vertex, per primitive, in a volume texture, or in a stack of screen-aligned textures, as done in some recent games (Killzone?).

The absorption component of fog is multiplicative and the in-scattering term is additive, so you can render the fog primitives in any order and accumulate the results in an offscreen texture. The same principles apply to the rendering of volumetric lights (physically the same thing).
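
Because the absorption products are order-independent, the fixed-function blender can do the accumulation. A minimal GLSL ES 2.0 fragment shader along these lines; the uniform/varying names are illustrative, and it assumes glBlendFunc(GL_ZERO, GL_SRC_COLOR) is bound so the render target accumulates dst * src:

precision mediump float;
uniform float u_k;          // absorption coefficient of this fog body
varying float v_thickness;  // view-ray thickness through the body,
                            // computed upstream (e.g. analytically)
void main()
{
    // With dst-multiply blending, the draw order of fog bodies
    // does not matter; the target ends up with the product.
    gl_FragColor = vec4(vec3(exp(-u_k * v_thickness)), 1.0);
}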

I am not an expert in mobile development, but a well-optimized implementation should run very well on iPads.



#4 JDX_John   Members   -  Reputation: 284


Posted 29 January 2014 - 08:11 AM

I should clarify: this would need to run as a bunch of shader passes (ideally as few as possible). @Bacterius, I get the idea of what you're saying, but not how one might implement it in shaders.

 

@Reitano, by variable density I mean each individual shape (polyhedron) has a constant density, but the value differs between shapes. Not sure if that simplifies things. As you say, the absorption is multiplicative (and scattering isn't a factor here): in X-ray, the transmission fraction f through thickness d is basically f = exp(-k * d), so passing through different materials we quite nicely get total transmission t = exp(-k1 * d1) * exp(-k2 * d2) * exp(-k3 * d3) = exp(-(k1 * d1 + k2 * d2 + k3 * d3)).
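
One consequence of the exponents adding: instead of multiplying exp() terms per pass, the optical depths k * d could be additively blended into an offscreen target and exponentiated once in a final pass. A sketch of that final pass, assuming a hypothetical texture u_opticalDepth holds the accumulated sum (note GLSL ES 2.0 has no float render targets without extensions, so the sum would need scaling or packing):

precision mediump float;
uniform sampler2D u_opticalDepth; // additively accumulated k*d per pixel
varying vec2 v_uv;
void main()
{
    float sumKD = texture2D(u_opticalDepth, v_uv).r;
    gl_FragColor = vec4(vec3(exp(-sumKD)), 1.0); // t = exp(-sum(ki*di))
}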




#5 Reitano   Members   -  Reputation: 553


Posted 30 January 2014 - 05:03 AM

I'd try this approach:

 

Initialization:

- Break your shapes into convex polyhedra with a face count <= 8, either manually or with a tool

- Allocate an RGBA render target; 8 bits per channel should be sufficient

 

Per frame:

- Clear render target to 1

- For each polyhedron:

   Rasterize its front faces if the camera is outside, back faces otherwise

   For each pixel

        calculate the analytical intersection between the associated view ray and the polyhedron

        calculate absorption as A = exp(-k1 * d), where k1 * d is the optical distance within the polyhedron

        multiply A with the value stored in the render target by using hardware blending

- Fetch the final absorption term and multiply it with the colour of the rendered scene

 

As an optimization, if your shader model allows it, you can handle more than one shape in the same pass.

I have made some assumptions in the steps above; you might have to rearrange things depending on your actual pipeline and requirements. Also, I ignored occlusion of fog volumes by opaque geometry. If you have access to the scene depth, that is a trivial addition.
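
To make the per-pixel step concrete, here is a minimal GLSL ES 2.0 fragment shader for a single box-shaped fog body, in the spirit of the steps above; the uniform names, the box parametrization, and the ray setup are assumptions, and a general convex polyhedron would intersect against its plane set instead:

precision highp float;
uniform vec3  u_boxMin;   // fog body as an axis-aligned box, for simplicity
uniform vec3  u_boxMax;
uniform vec3  u_camPos;
uniform float u_k;        // absorption coefficient of this body
varying vec3  v_rayDir;   // per-pixel view ray, interpolated from the vertices

void main()
{
    vec3 dir = normalize(v_rayDir);
    // Slab test: parametric entry/exit distances of the ray against the box.
    vec3 invDir = 1.0 / dir;
    vec3 t0 = (u_boxMin - u_camPos) * invDir;
    vec3 t1 = (u_boxMax - u_camPos) * invDir;
    vec3 tMin = min(t0, t1);
    vec3 tMax = max(t0, t1);
    float tNear = max(max(tMin.x, tMin.y), tMin.z);
    float tFar  = min(min(tMax.x, tMax.y), tMax.z);
    // Chord length inside the box, clamped so a camera inside still works.
    float d = max(tFar - max(tNear, 0.0), 0.0);
    // With glBlendFunc(GL_ZERO, GL_SRC_COLOR), this multiplies into the
    // render target cleared to 1, matching the blending step above.
    gl_FragColor = vec4(vec3(exp(-u_k * d)), 1.0);
}

Fetching that target and multiplying it with the scene colour is then a full-screen pass, or part of the final composite.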

 

Hope that helps!





