Well it's the same thing, I think. Render the back-facing and front-facing polys for each body, and at the end you have a list of depths with associated fog densities which can be sorted; the total fog can then be estimated quickly by computing the fog factor for each span between consecutive depths and multiplying the results. For instance, if you have one body B inside another body C, sort of like this:
Then you first render body B and obtain the points:
And you do the same for body C:
You then sort them by distance inside some small table (you can use a sorting network to do this efficiently if you have a maximum number of bodies):
3 - C - distance 0.2
1 - B - distance 0.54
2 - B - distance 1.1
4 - C - distance 1.3
From this you can derive that the 3-1 section is inside body C (from which you get your fog density) and has length 0.54 - 0.2 = 0.34, and compute the fog factor from that. Do the same for the 1-2 and 2-4 sections, multiply the results together (because transmitted intensity is multiplicative, not additive), and you have your total fog from 3 to 4. To keep track of which body you are in, you can use a simple inside-outside rule, which means maintaining a stack of overlapping bodies from bottom to top; but if you know you will only ever have two bodies overlapping at once, such as a single organ inside the human body (though possibly more than one organ in total, of course), you can simplify this to a single comparison with no added complexity.
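To make the idea concrete, here's a minimal sketch of that exact method, assuming Beer-Lambert fog (transmittance over a segment is exp(-density * length), and segment transmittances multiply along the ray). The names (`trace_transmittance`, the event tuples) are illustrative, not from any particular API:

```python
import math

def trace_transmittance(events, densities):
    """events: (distance, body, kind) tuples with kind 'enter'/'exit';
    densities: fog density per body. Returns the total transmittance."""
    transmittance = 1.0
    active = []  # stack of bodies the ray is currently inside
    prev_dist = None
    for dist, body, kind in sorted(events):
        if active:
            # Use the innermost (most recently entered) body's density;
            # other conventions (e.g. summing densities) are possible.
            density = densities[active[-1]]
            transmittance *= math.exp(-density * (dist - prev_dist))
        if kind == 'enter':
            active.append(body)
        else:
            active.remove(body)
        prev_dist = dist
    return transmittance

# The example above: body B (points 1 and 2) nested inside C (3 and 4).
# The densities 0.5 and 2.0 are made-up values for illustration.
events = [(0.2, 'C', 'enter'), (0.54, 'B', 'enter'),
          (1.1, 'B', 'exit'), (1.3, 'C', 'exit')]
print(trace_transmittance(events, {'B': 2.0, 'C': 0.5}))
```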
The main difficulty here is probably storing all the intersection points in a memory-efficient way, since there may be a lot of them. One way is to somehow render the bodies front to back for each pixel, so that the first pass handles 3-1, the next 1-2, and the last 2-4, storing only enough state between passes to track the current accumulated fog for each pixel (perhaps combining segments as you go using multiplicative blending) and the distance handled so far. I don't know how feasible this is, though; you'll probably need to put hard limits on how many bodies there are in order to optimize it.
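The per-pixel state carried between passes could be as small as the accumulated fog factor plus the distance handled so far. A hypothetical sketch (the function name and segment inputs are assumptions, standing in for whatever each rendering pass produces):

```python
import math

def accumulate_pass(state, segment_end, density):
    """state = (fog_so_far, distance_handled). Fold in the segment
    from distance_handled to segment_end via multiplicative blending."""
    fog, handled = state
    fog *= math.exp(-density * (segment_end - handled))
    return (fog, segment_end)

# One pixel, three passes, using the 3-1, 1-2, 2-4 segments from the
# example (densities again made up for illustration):
state = (1.0, 0.2)                       # start at point 3
state = accumulate_pass(state, 0.54, 0.5)   # section 3-1, inside C
state = accumulate_pass(state, 1.1, 2.0)    # section 1-2, inside B
state = accumulate_pass(state, 1.3, 0.5)    # section 2-4, inside C
print(state)
```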
Perhaps a more efficient method is to approximate the problem with a hack: calculate the fog between 3-4, then between 1-2 (one pass per object), and multiply the results. This isn't quite correct, since you account for the overlapping parts twice, but it could probably be made to look good enough with some tweaking. To be honest, I'd recommend this approach; it is probably good enough, and you should only fall back on the previous method if this one isn't accurate enough for your needs.
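A sketch of that per-object approximation, again assuming Beer-Lambert fog with made-up densities: each body is fogged over its own full span, and since the B span (1-2) lies inside the C span (3-4), C's fog gets counted over B's segment too, so the result comes out slightly darker than the exact answer.

```python
import math

def body_transmittance(near, far, density):
    """Fog factor for one body over its full [near, far] span."""
    return math.exp(-density * (far - near))

# One pass per object, results multiplied together:
approx = (body_transmittance(0.2, 1.3, 0.5)    # C over its whole 3-4 span
          * body_transmittance(0.54, 1.1, 2.0))  # B over its 1-2 span

# Exact method would use C's density only outside the 0.54-1.1 overlap:
exact = math.exp(-(0.5 * 0.34 + 2.0 * 0.56 + 0.5 * 0.2))

print(approx, exact)  # approx is darker: the overlap is fogged twice
```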
The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.
- Pessimal Algorithms and Simplexity Analysis