Ray Tracer Design: Volume Rendering

I'm getting ready to implement volume rendering in my ray tracer, and I'm having trouble figuring out some of the details. Primarily, when is a volume shader called? And how should intersection tests work with volumes? I'd like everything to fit into this algorithm: for every pixel, find the first intersection along the ray, then call that surface's shader. I was thinking that, for volumes, the ray would hit immediately if the eye was inside the volume; but this requires special volume surfaces, and I would prefer that these details be left to the shader. That seems to make the algorithm I mentioned above incompatible with volume shaders, but the alternative seems to require a very special treatment of volumes that threatens the simplicity and modularity of the renderer. What are some good approaches? I'd like to allow intersecting volumes, and viewpoints from both inside and outside volumes (i.e. clouds in an atmosphere). Thanks.
You need a few things to do volume rendering:

- A definite method of testing if a given point is inside or outside a volume (for checking if a ray's origin is in a volume)
- A way to list all intersection points of a ray and a given chunk of geometry
- A sampling mechanism that evaluates the volume lighting data at points along a ray that are inside the given volume


You cannot effectively render a volume by treating its lighting contributions as equivalent to those of a surface; you have to actually sample lighting continuously through the entire volume, and sum the samples to generate the final contribution of the volume to a given ray's light value. The specifics of how this sampling is done depend entirely on the type of volume effects you want to support.
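
As a minimal sketch of that idea (MarchVolume, SampleEmission and SampleDensity are hypothetical names, not anything from this thread), the continuous sampling and summing along a ray segment inside a volume might look roughly like this:

    #include <cmath>

    // Sketch only: sample the volume at regular points along the ray segment
    // [tEnter, tExit] and sum the attenuated contributions of each sample.
    struct Color { float r, g, b; };

    Color MarchVolume(float tEnter, float tExit, float stepSize,
                      Color (*SampleEmission)(float t),  // volume lighting at parameter t
                      float (*SampleDensity)(float t))   // extinction at parameter t
    {
        Color sum = {0.0f, 0.0f, 0.0f};
        float transmittance = 1.0f;
        for (float t = tEnter; t < tExit && transmittance > 0.001f; t += stepSize)
        {
            Color emission = SampleEmission(t);
            float alpha = 1.0f - expf(-SampleDensity(t) * stepSize); // opacity of this step
            sum.r += emission.r * alpha * transmittance;
            sum.g += emission.g * alpha * transmittance;
            sum.b += emission.b * alpha * transmittance;
            transmittance *= 1.0f - alpha;               // light surviving past this step
        }
        return sum; // this volume segment's contribution to the ray
    }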

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Intersecting an eye-ray with a box is pretty straight-forward. After that, you march through the volume, accumulating colour and opacity information along the way. The eye-ray is finished when its opacity is >= 1.0, or when it exits the volume.
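
For reference, the usual slab-method ray/box test looks roughly like this (a generic sketch, not code from this thread; RayBox and its parameters are placeholder names):

    #include <algorithm>

    // Slab method: clip the ray against each pair of axis-aligned planes and keep
    // the overlapping parameter range. Returns entry/exit distances on a hit.
    bool RayBox(const float origin[3], const float dir[3],
                const float boxMin[3], const float boxMax[3],
                float &tNear, float &tFar)
    {
        tNear = -1e30f;
        tFar  =  1e30f;
        for (int axis = 0; axis < 3; ++axis)
        {
            float invD = 1.0f / dir[axis];               // relies on IEEE infinities for axis-parallel rays
            float t0 = (boxMin[axis] - origin[axis]) * invD;
            float t1 = (boxMax[axis] - origin[axis]) * invD;
            if (t0 > t1) std::swap(t0, t1);
            tNear = std::max(tNear, t0);
            tFar  = std::min(tFar, t1);
            if (tNear > tFar) return false;              // slab ranges don't overlap: miss
        }
        return tFar >= 0.0f;                             // box must not be entirely behind the ray
    }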

As you march through the volume, any directional light values can be determined by marching from the current location in the volume toward each light source.

The amount of light that reaches the current location is 1.0 - opacity of the trace toward the light.
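
A rough sketch of that light march (the density callback and step size are assumptions, and the names are mine): accumulate opacity from the sample point toward the light, then use 1 - opacity as the fraction of light that gets through.

    // March from a point inside the volume toward the light, accumulating opacity.
    float LightReaching(const float point[3], const float toLightDir[3],
                        float distToLight, float stepSize,
                        float (*DensityAt)(const float pos[3]))
    {
        float opacity = 0.0f;
        float pos[3] = { point[0], point[1], point[2] };
        for (float t = 0.0f; t < distToLight && opacity < 1.0f; t += stepSize)
        {
            opacity += DensityAt(pos) * stepSize;        // simple linear accumulation
            for (int i = 0; i < 3; ++i)
                pos[i] += toLightDir[i] * stepSize;
        }
        if (opacity > 1.0f) opacity = 1.0f;
        return 1.0f - opacity;                           // fraction of light reaching the point
    }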

You determine the eye-ray directions by constructing an image plane.

Which shader language are you going to use?
Thanks, ApochPiQ.
The first three I am able to do or can implement relatively easily.
-If the normal at the first intersection point with an object is facing away from the eye ray, then the ray origin is inside that volume.
-Find the first hit, move a little bit further along, find the next hit, ...
-Sort of like implementing a shader for any arbitrary point.

What I'm trying to figure out is how to sort the objects. Should the highest level of the program treat volumes differently than surfaces? The problem I was having was with hiding the volume-specific stuff behind a layer of abstraction, but that's proving problematic. This is the setup I was imagining:

For any ray color query:
Find the first non-volume hit. This point will be the new far distance.
In this shortened interval, produce a sorted list of all hits with volume objects.
If the eye is inside a volume, or volumes, append these hits to the front.

I'll get something like this: (hopefully you can make sense of it)
The numbers identify the volume; ( = front, ) = back, as if eye looked left to right.
_________a____b__c_____d_____e_____f___g
eye-->__0(___1(__)1___2(____3(_____)2__SURFACE

So then I would render volume 0 on a-b, 1 on b-c, 0 on c-d, 2 on d-e, and 3 on e-g. (The shader can ask for the color behind it as needed, but it won't be calculated until it's requested.)
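
Something like the following sketch might express that interval splitting (all the names and types here are hypothetical, not part of my renderer): sort the volume entry/exit hits found before the first surface, then walk them to find which volumes are active on each sub-interval; volumes the eye starts inside count as active from t = 0.

    #include <algorithm>
    #include <set>
    #include <vector>

    struct VolumeEvent { float t; int volumeId; bool entering; };
    struct Interval    { float t0, t1; std::set<int> activeVolumes; };

    std::vector<Interval> BuildIntervals(std::vector<VolumeEvent> events,
                                         std::set<int> activeAtEye,  // from inside/outside tests at the ray origin
                                         float surfaceT)             // t of the first non-volume hit
    {
        std::sort(events.begin(), events.end(),
                  [](const VolumeEvent &a, const VolumeEvent &b) { return a.t < b.t; });

        std::vector<Interval> intervals;
        std::set<int> active = activeAtEye;
        float prevT = 0.0f;
        for (const VolumeEvent &e : events)
        {
            if (e.t > prevT && !active.empty())
                intervals.push_back({prevT, e.t, active});   // e.g. volume 0 on a-b, volumes 0 and 1 on b-c
            if (e.entering) active.insert(e.volumeId);
            else            active.erase(e.volumeId);
            prevT = e.t;
        }
        if (surfaceT > prevT && !active.empty())
            intervals.push_back({prevT, surfaceT, active});  // e.g. volumes 3 and 0 on f-g
        return intervals;
    }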

There are obvious problems with this, but I don't know how to gracefully deal with intersecting volumes. I can think of many cases where this would produce some very strange images, especially when animated.

Quote:Original post by taby
Which shader language are you going to use?

As of now, all the shaders are native to the renderer. Their settings can be adjusted in scene files, but they are compiled with the renderer itself. In the future I may implement a programmable shader type that will probably interpret Renderman Shading Language shaders. Right now I'm working on the architecture of the renderer itself.
The typical algorithm goes like this:

- For all rays, compute the closest intersection with a surface; shade the surface at the intersection point and store that color
- Build a list of all volumes through which the ray passes
- March the ray through each volume (often independently) and compute the contribution of that particular volume to the ray. Stop marching when the ray exits the volume OR hits the surface located earlier
- Combine the contributions of the surface and any/all volumes the ray passed through


The marching is easy to cap. Since you know the t-value where the ray hits the closest surface (its distance to intersection), you also know when you need to stop marching the ray through any volumes. If the surface's t-value is greater than the t-value of the ray's exit point from a given volume, you can safely sum all marching points within the volume. If the t-value of the closest surface is less than the t-value of the exit point from the volume, you simply stop at the surface rather than marching out the "back" of the volume itself. And if the t-value of the closest surface is less than the t-value of any entry into a volume, you don't have to march at all for that particular ray.
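
A rough sketch of that capping logic (MarchCapped and the per-sample callback are placeholder names): the march through a given volume never proceeds past min(volume exit t, nearest surface t).

    #include <algorithm>

    void MarchCapped(float volumeEnterT, float volumeExitT, float surfaceT, float stepSize,
                     void (*sampleAndAccumulate)(float t))   // hypothetical per-sample callback
    {
        if (surfaceT <= volumeEnterT)
            return;                                          // surface comes before the volume: skip it entirely

        float stopT = std::min(volumeExitT, surfaceT);       // whichever comes first caps the march
        for (float t = volumeEnterT; t < stopT; t += stepSize)
            sampleAndAccumulate(t);                          // evaluate the volume at t and add its contribution
    }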

That handles occlusion of volumes with surfaces, intersecting volumes, and so on. Remember that most operations with light (when using the RGB model) are multiplicative (a * b where 0 <= b <= 1), which means you can combine many contributions in any order simply by multiplying them, since multiplication is commutative.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Thanks again. The only thing that bothers me is that I would think some of those volume shaders would want access to the color coming in from the far end of their interval; if there's another volume further along the ray, or the volume is embedded in another volume, and the volumes are marched independently even when they intersect, then this would seem to give an incorrect result (though much better than what I suggested). I've got a lot of other questions about the algorithm, but I don't want to ask for the whole thing. Do you know of any sites that go into some of the nitty-gritty stuff? Or even a small example (a diagram like the one I threw out) and a run through the algorithm at a high level would help. Thanks.
Thinking about it again, I feel I've got a better understanding of this. Is this pretty much what you had in mind?

Using my ugly example again:

________a____b___c___d_____e______f___g
eye-->__0(___1(__)1___2(____3(_____)2__SURFACE



Start by finding the color Cg of the surface at point g.
On interval f-g:
-Cin = Cg
-Cout = Cf = C(3 on f-g) * C(0 on f-g)

On interval e-f:
-Cin = Cf
-Cout = Ce = C(3 on e-f) * C(2 on e-f) * C(0 on e-f)

On interval d-e:
-Cin = Ce
-Cout = Cd = C(2 on d-e) * C(0 on d-e)

On interval c-d:
-Cin = Cd
-Cout = Cc = C(0 on c-d)

On interval b-c:
-Cin = Cc
-Cout = Cb = C(1 on b-c) * C(0 on b-c)

On interval a-b:
-Cin = Cb
-Cout = Ca = C(0 on a-b)

RayColor = Ca
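
A sketch of that back-to-front pass (types and names here are hypothetical): start from the surface color Cg and fold each interval's active volumes into it, working from the surface toward the eye.

    #include <functional>
    #include <vector>

    struct Col      { float r, g, b; };
    struct Interval { float t0, t1; std::vector<int> activeVolumes; };

    Col CompositeBackToFront(
        const std::vector<Interval> &intervals,            // ordered front to back: a-b, b-c, ..., f-g
        Col surfaceColor,                                  // Cg
        const std::function<Col(int volumeId, float t0, float t1, Col incoming)> &shadeVolume)
    {
        Col c = surfaceColor;                              // Cin of the last interval is Cg
        for (int i = (int)intervals.size() - 1; i >= 0; --i)
            for (int id : intervals[i].activeVolumes)      // e.g. volumes 3 and 0 on f-g
                c = shadeVolume(id, intervals[i].t0, intervals[i].t1, c);
        return c;                                          // Ca, the final ray color
    }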

apocnever: Got your PM.

I do something like this:

Start at camera with transparency=1 and brightness=0 and step towards surface.

On every step with length l :
brightness+=medium_luminance*l*transparency;// brightness+= how much of light this step size has * transparency between eye and this step
transparency*=exp(-l*medium_opacity); // the exponent gives the transparency of the volume over this step length. The transparencies accumulate together with multiplication

when you hit the surface:
screen color = brightness + surface_brightness*transparency;
medium_luminance is how bright the medium is, and medium_opacity is how much light it absorbs.

Also, you can stop marching when transparency is too small.

The formula is a bit more complex when steps are long (e.g. stepping through constant-opacity fog). I'm using a function like this:
inline void IntegrationStep(real64 L, real64 opacity, real64 luminance, real64 &brightness, real64 &transparency)
{
	if(opacity*L > some_small_value){ // do this step exactly
		real64 p = exp(-opacity*L); // how much light passes
		brightness += luminance * transparency * ( 1/opacity ) * (1-p);
		transparency *= p;
	}
	else // approximate
	{
		brightness += luminance * transparency * L;
		transparency *= exp(-opacity*L);
	}
}

L is the step length. luminance and opacity are defined for the medium.
brightness and transparency are the integration variables.

This is done for R,G,B of course to get color.
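
A rough usage sketch for IntegrationStep above (the step count and the OpacityAt / LuminanceAt samplers are assumptions, not part of Dmytry's post; real64 and IntegrationStep are taken from the code above): march from the camera to the surface, then blend in the surface brightness through whatever transparency is left. In practice this runs once per R, G and B channel.

    real64 MarchToSurface(real64 surfaceDist, real64 surfaceBrightness, int numSteps,
                          real64 (*OpacityAt)(real64 t), real64 (*LuminanceAt)(real64 t))
    {
        real64 brightness = 0, transparency = 1;
        real64 stepLen = surfaceDist / numSteps;
        for (int i = 0; i < numSteps && transparency > 1e-4; ++i)
        {
            real64 t = (i + 0.5) * stepLen;            // sample the middle of each step
            IntegrationStep(stepLen, OpacityAt(t), LuminanceAt(t), brightness, transparency);
        }
        return brightness + surfaceBrightness * transparency;
    }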

The stepping algorithm that I use in my Volumetrics is quite complicated because I can have many overlapping volumes where each needs its own step size... also my step size is adaptive.
As for sources and high-level stuff, to derive the formulas I used math knowledge that I have from school, reading math books and such, plus some physics.
It's not hard to express the final color as a (double) integral along the ray. Then all you need from the renderer is to solve it numerically.
The integral is
brightness = integral(x=0..L, medium_luminance(x) * exp(-integral(j=0..x, medium_opacity(j) dj)) dx)
then make a numerical integrator and simplify it to compute both integrations in the same pass.

[Edited by - Dmytry on May 11, 2006 9:39:50 AM]
Dmytry,

Thanks for the reply, I didn't notice it until today. I got something working for the volumes: I'm rendering intervals of volumes back to front, but the shaders I've written render the particular intervals front-to-back, so there's some inefficiency if the volumes are fairly opaque. I didn't want to make assumptions about what the volume shaders were doing; I'm trying to model them to allow, eventually, for the interpretation of Renderman SL volume shaders, which don't necessarily behave like physically-based volumes. But since I'm personally trying to create a certain class of images (with atmospheric scattering and clouds), maybe I should tailor the ray tracer to these sorts of volumes. Nonetheless, this should help me out with writing the actual shaders, which have been turning out somewhat ugly/strange/unacceptable so far.

Example:
Thin Atmosphere
Besides being strangely grainy from the outside, it's becoming way too opaque (that's a shiny white sphere under there!), but when I turn the density down the color disappears. Guess I'll mess with it in light of your post.

Thanks,
apoc
