volume rendering

5 comments, last by wolverine 16 years, 8 months ago
I'm reading this: http://en.wikipedia.org/wiki/Volume_rendering. Under "Volume Ray Casting" it says:
Quote: The simplest way to project the image is to cast rays through the volume using ray casting. In this technique, a ray is generated for each desired image pixel. Using a simple camera model, the ray starts at the center of the projection of the camera (usually the eye point) and passes through the image pixel on the imaginary image plane floating in between the camera and the volume to be rendered. The ray is clipped by the boundaries of the volume in order to save time. Then the ray is sampled at regular intervals throughout the volume.
I don't get why the ray is sampled at regular intervals throughout the volume. For a render, don't you just want to render a particular slice? So wouldn't you define a plane that "cuts" through the volume, then cast the rays and choose the sample from the volume that also intersects the cut plane?
-----Quat
It all depends on what you want to achieve - I can see uses for your suggestion, but for special effects the Wikipedia quote seems more accurate.

Think of volumetric smoke (IIRC there is a "Hellgate: London" slide deck from GDC explaining this, and also some references in the DX SDK). In this case you trace the ray through the volume and effectively collect density information, which can be used for the colour and blending of the plane - giving some pretty high quality effects. In this instance an individual slice being rendered doesn't really have much value except as some sort of twist on old-school billboarding...
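To make that concrete, here's a minimal CPU-side sketch of marching a ray through a density field at regular intervals and accumulating opacity. It's purely illustrative - SampleDensity, the extinction coefficient and the sample count are made-up placeholders for whatever a real 3D smoke texture and material would provide:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Stand-in for a 3D smoke texture: a soft spherical puff centred at the origin.
float SampleDensity(float x, float y, float z)
{
    float r = std::sqrt(x * x + y * y + z * z);
    return std::max(0.0f, 1.0f - r);   // 1 at the centre, fading to 0 at radius 1
}

// March from 'start' along 'dir', sampling the volume at regular intervals and
// accumulating how much the smoke blocks the light (Beer-Lambert attenuation).
float MarchRay(const float start[3], const float dir[3],
               float rayLength, int numSamples)
{
    const float stepSize   = rayLength / numSamples;
    const float extinction = 4.0f;      // assumed extinction coefficient
    float transmittance    = 1.0f;      // fraction of light still getting through

    for (int i = 0; i < numSamples; ++i)
    {
        float t = (i + 0.5f) * stepSize;    // sample the middle of each interval
        float d = SampleDensity(start[0] + dir[0] * t,
                                start[1] + dir[1] * t,
                                start[2] + dir[2] * t);
        transmittance *= std::exp(-d * extinction * stepSize);
    }
    return 1.0f - transmittance;            // accumulated opacity for blending
}

int main()
{
    const float start[3] = { 0.0f, 0.0f, -2.0f };  // entry point on the near face
    const float dir[3]   = { 0.0f, 0.0f,  1.0f };  // ray heading into the volume
    std::printf("opacity = %.3f\n", MarchRay(start, dir, 4.0f, 64));
}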

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

I don't know if I got your question right, but with volume rendering the whole point is that you're not interested in only a slice, but in the whole, well, volume of the object :)

E.g. when rendering smoke you are interested in which parts of the scene are occluded by the smoke and to what degree - for the latter you must determine 'how much' smoke is occluding the object, hence sampling the ray at regular intervals is necessary.
Sorry, for some reason I was thinking volume rendering was used mostly for medical imaging applications where they want to visualize, say, a slice of the brain or other soft tissue. But I can see for transparent objects why you'd have to go all the way.
-----Quat
There is a lot you can do with volume rendering with variations of interval sampling. Ultimately you will get artifacts based on your intervals. Coarse fixed intervals will miss geometry. Dithering your intervals can provide much smoother results. Tons of possibilities for temporal and spatial optimizations here as well, such as incremental refinement of start position and interval length after each frame ... re-transforming these settings after view rotation or object changes ... fun research!
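For example, a tiny sketch of jittering the interval start per pixel (the integer hash below is just a stand-in for a real dither texture or a hash of the pixel coordinates):

#include <cstdio>

// Cheap integer hash mapped to [0, 1); stands in for a per-pixel dither
// texture lookup (the exact hash is just an illustrative choice).
float Hash01(unsigned n)
{
    n = (n ^ 61u) ^ (n >> 16);
    n *= 9u;
    n ^= n >> 4;
    n *= 0x27d4eb2du;
    n ^= n >> 15;
    return (n & 0xffffu) / 65536.0f;
}

int main()
{
    // Fixed intervals take the first sample at t = 0 for every ray, so thin
    // features falling between samples are missed consistently (banding).
    // Offsetting the start by a per-pixel fraction of the step size turns
    // that banding into noise, which reads as much smoother.
    const float stepSize = 0.1f;
    for (unsigned pixel = 0; pixel < 4; ++pixel)
    {
        float tStart = Hash01(pixel) * stepSize;   // dithered first sample position
        std::printf("pixel %u starts marching at t = %.4f\n", pixel, tStart);
    }
}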
_|imothy Farrar :: www.farrarfocus.com/atom
There are at least two problems when it comes to using slices in volume rendering instead of the ray casting approach (at least on modern GPUs).

The first is a problem of performance. If you position your camera such that the volume fills the whole screen, you'll have to render N alpha-blended fullscreen quads if you use slices. This is going to be much less efficient than just sampling a texture N times per pixel (think about the amount of data travelling over the bus). I'm assuming here that you're not doing too many operations per sample (for instance volumetric light rendering), so you're not going to be fragment shader bound.
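As a rough, made-up illustration of the difference (the resolution, slice count and pixel size below are arbitrary example numbers, not measurements):

#include <cstdio>

// Back-of-envelope comparison of colour-buffer traffic when the volume covers
// the whole screen. All numbers are arbitrary examples, not measurements.
int main()
{
    const long long width = 1280, height = 720;
    const long long numSlices     = 256;
    const long long bytesPerPixel = 4;     // 8-bit RGBA

    // Slice-based: alpha blending reads and writes the colour buffer once
    // per pixel for every slice.
    long long sliceTraffic = width * height * bytesPerPixel * 2 * numSlices;

    // Ray casting: a single colour-buffer write per pixel; the N volume-texture
    // fetches happen in registers and are served largely from the texture cache.
    long long rayCastWrite = width * height * bytesPerPixel;

    std::printf("slice blending: %lld MB of colour-buffer traffic\n",
                sliceTraffic / (1024 * 1024));
    std::printf("ray casting   : %lld MB of colour-buffer writes\n",
                rayCastWrite / (1024 * 1024));
}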

The second is a problem of quality. If the number of slices you use to visualize your volume is high, you're going to have to blend onto a floating-point colour buffer to get reasonable results. Obviously, this will take at least twice the bandwidth compared to a 32-bit buffer. To see why, let's assume you're using 256 slices to render your volume: if you blend onto a plain 32-bit buffer (8 bits per channel), you're only going to preserve about 1 bit of accuracy from each read of your volume texture. This isn't a problem when ray casting, because GPU registers have higher than 8-bit accuracy.
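Here's a toy calculation of that accuracy loss - purely additive accumulation rather than real alpha blending, just to show the quantisation effect:

#include <cmath>
#include <cstdio>

// Composite 256 slices, each contributing an equal tiny amount, into
// (a) a float accumulator (roughly what you get in-shader when ray casting)
// and (b) an 8-bit-per-channel colour buffer that rounds after every blend.
int main()
{
    const int   numSlices  = 256;
    const float sliceValue = 0.4f / numSlices;  // each slice adds ~0.0016

    float floatAccum = 0.0f;
    float byteAccum  = 0.0f;    // value as stored in the 8-bit buffer

    for (int i = 0; i < numSlices; ++i)
    {
        floatAccum += sliceValue;

        // Blending into an 8-bit channel quantises to the nearest 1/255
        // after every slice, so a per-slice contribution below half a step
        // is simply lost.
        byteAccum = std::round((byteAccum + sliceValue) * 255.0f) / 255.0f;
    }

    std::printf("float accumulation: %.4f\n", floatAccum);  // ~0.4
    std::printf("8-bit accumulation: %.4f\n", byteAccum);   // stays at 0.0
}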

Here's a brief outline of a fragment shader I wrote to render shadowed volumetric lights (I can't post the source for NDA reasons):

Render the front faces of the light's frustum.
Calculate the ray from camera through the screen pixel.
Read the screen depth buffer at the screen pixel, project into world space.
Intersect the ray with the world space position of that pixel.
Clip the result of that against the light frustum to get the ray segment we want to sample.
Dither the starting point of the ray segment (see dithered shadow maps in GPU Gems 1).
Sample along the ray N times reading the light's shadow map and gobo texture.
Done!

There's some fudgery you have to do because your texture coordinate derivatives can go all screwy due to the screen depth buffer reads, but this can be worked around. Currently I'm using 20 dithered samples and getting very good results. When I get the chance, I plan on rendering the light volumes to a quarter-size texture and increasing the number of samples.
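Since I can't post the actual shader, here's a purely illustrative CPU-side sketch of the sampling loop from the outline above; every helper below is a placeholder for what would really be a depth-buffer read, shadow-map fetch or gobo lookup in the fragment shader:

#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Placeholder: shadow-map test, 1 = lit, 0 = shadowed.
static float SampleShadowMap(Vec3 /*worldPos*/) { return 1.0f; }
// Placeholder: gobo/projector texture modulating the light.
static float SampleGobo(Vec3 /*worldPos*/)      { return 1.0f; }
// Placeholder: per-pixel dither value in [0, 1), e.g. from a dither texture.
static float DitherOffset()                     { return 0.37f; }

// rayStart/rayDir: camera ray for this pixel. [tNear, tFar]: the segment of the
// ray left after clipping against the light frustum and the scene depth.
float AccumulateLight(Vec3 rayStart, Vec3 rayDir,
                      float tNear, float tFar, int numSamples)
{
    float segment  = std::max(0.0f, tFar - tNear);
    float stepSize = segment / numSamples;
    float t        = tNear + DitherOffset() * stepSize;  // dithered starting point
    float accum    = 0.0f;

    for (int i = 0; i < numSamples; ++i, t += stepSize)
    {
        Vec3 p = Add(rayStart, Scale(rayDir, t));
        accum += SampleShadowMap(p) * SampleGobo(p);      // light reaching this point
    }
    // Average the samples and scale by segment length so longer in-light paths
    // gather more scattered light.
    return (accum / numSamples) * segment;
}

int main()
{
    Vec3 camPos = { 0.0f, 0.0f, 0.0f };
    Vec3 dir    = { 0.0f, 0.0f, 1.0f };
    std::printf("scattered light = %.3f\n", AccumulateLight(camPos, dir, 1.0f, 5.0f, 20));
}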
Quote: Original post by Quat
Sorry, for some reason I was thinking volume rendering was used mostly for medical imaging applications where they want to visualize, say, a slice of the brain or other soft tissue. But I can see for transparent objects why you'd have to go all the way.


Even in medical imaging applications you may have transparent zones, like tissues and such, that basically help the viewer to "situate" what they are visualizing.
For example, a few volume rendering images in medical applications with transparent zones, from a Google search:
img1
img2

