volume rendering

3 comments, last by deftware 7 years, 4 months ago

Hi,

I have medical volume data and I would like to learn how to write an HLSL shader to do ray-cast rendering, volume slicing, etc.

Can anybody provide me with some links to examples of this kind?

I use DX11.

Thanks a lot.

YL


This page talks about it in terms of OpenGL. http://prideout.net/blog/?p=64

You can surely spend 30 minutes figuring out how to translate it. I think there are even HLSL-to-GLSL (and vice versa) converters out there, since both languages are really just exposing the same underlying hardware functions.

The whole point is that you load your volume into a 3D texture and then render a cube/rectangular prism where each vertex is given its corresponding texture coordinate for the volume. In the fragment/pixel shader you receive the interpolated texture coordinate for each fragment on the surface of that geometry, and that serves as the origin of each raycast as far as the math is concerned.

What you need next is the vector through texture space that represents the direction the ray must march through the texture, along the line that travels from the camera through the volume's geometry. In some implementations this is done in a more complicated way: both the front and back faces of the volume geometry are rendered to generate the starting and ending points of each fragment's ray through the texture. I just use the modelview matrix to directly calculate a perfectly acceptable ray vector, which is simply the normalized vector from the viewpoint to the fragment. The math is a little tricky to visualize, but it's only one step beyond multiplying your modelview matrix against the vertex position: you multiply that result against the modelview matrix again to orient it in worldspace.
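
As a concrete illustration of the marching itself, here is a minimal HLSL pixel-shader sketch of front-to-back compositing through a 3D texture. The resource names (g_Volume, g_Sampler), the step count, and the trivial grayscale transfer function are placeholders, and it assumes texCoord is the interpolated volume coordinate and rayDir is an already-normalized ray direction in texture space:

    Texture3D    g_Volume  : register(t0);
    SamplerState g_Sampler : register(s0);

    float4 PS(float3 texCoord : TEXCOORD0,      // ray origin on the cube surface, in [0,1]^3
              float3 rayDir   : TEXCOORD1) : SV_Target
    {
        const int   steps    = 256;
        const float stepSize = 1.732f / steps;  // longest possible path is the cube diagonal
        float3 pos   = texCoord;
        float4 color = float4(0, 0, 0, 0);

        [loop]
        for (int i = 0; i < steps; ++i)
        {
            float  density = g_Volume.SampleLevel(g_Sampler, pos, 0).r;
            float4 src     = float4(density.xxx, density);     // trivial grayscale transfer function
            color.rgb += (1 - color.a) * src.a * src.rgb;      // front-to-back compositing
            color.a   += (1 - color.a) * src.a;
            pos += rayDir * stepSize;
            if (color.a > 0.99f || any(pos < 0.0f) || any(pos > 1.0f))
                break;                                          // opaque enough, or left the volume
        }
        return color;
    }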

In my vertex shader it's as simple as multiplying the modelview matrix with the vertex position to get the final render position for the vertex, and then multiplying that result against the modelview matrix again, which effectively performs an inverse matrix transform. Normalize that and you have the vector.
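
The modelview trick above is compact but hard to follow in words. A more conventional formulation (not exactly the poster's method, but it yields the same kind of texture-space ray) is to transform the eye position into the volume's object space once and subtract it from each vertex position. A minimal HLSL sketch, assuming a unit cube spanning [-1,1]^3, a row-vector mul convention, and placeholder constant names:

    cbuffer PerObject : register(b0)
    {
        float4x4 g_WorldViewProj;   // object space -> clip space
        float4x4 g_InvWorld;        // world space  -> object space
        float3   g_EyeWorldPos;     // camera position in world space
    };

    struct VSOut
    {
        float4 pos      : SV_Position;
        float3 texCoord : TEXCOORD0;    // volume coordinate in [0,1]^3
        float3 rayDir   : TEXCOORD1;    // normalize this in the pixel shader
    };

    VSOut VS(float3 objPos : POSITION)
    {
        VSOut o;
        o.pos      = mul(float4(objPos, 1), g_WorldViewProj);
        o.texCoord = objPos * 0.5f + 0.5f;                              // [-1,1] cube -> [0,1] texture space
        float3 eyeObj = mul(float4(g_EyeWorldPos, 1), g_InvWorld).xyz;  // eye position in object space
        o.rayDir   = objPos - eyeObj;                                   // from the eye toward this vertex
        return o;
    }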

I made a really simple GL program that does exactly this a while ago. I'll go see if I can dig it up. Even though it's in OpenGL/GLSL, I'm sure it won't be hard to port to HLSL.

I found my old OpenGL project that I made to test out raymarching a 3D texture. It just generates the texture procedurally using sine waves. You can run it if you trust it, but all the code and shaders are there: https://dl.dropboxusercontent.com/u/62846912/FILE/bitphoria_020513.zip

I was hired to do this in 2005 with CT data. A cheap, easy way is to load the volume data into a 3D/volume texture, then draw a series of camera-aligned quads (billboards) with alpha blending and sample the volume texture from your world vertex coords. This is nice because it's quick to implement and intuitive. If you're feeling clever, you can simply raytrace the data in the shader by casting rays and subdividing, which is a simple extension once you have the aligned quads. The math is pretty similar to any shader raytrace; parallax mapping might be a good reference for how to do that in general. Computing rays from the camera is a simple unproject operation: calculate the clip-space XY position of the fragment, multiply it by the inverse view-projection matrix using two different Z values, and you get a ray. Or you can compute the frustum corners and interpolate if you prefer that approach.
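
A small HLSL sketch of that unproject, assuming g_InvViewProj holds the inverse of the view-projection matrix (all names here are placeholders): unproject the fragment's clip-space XY at the near and far plane and take the difference.

    float4x4 g_InvViewProj;   // inverse of view * projection (assumed to be set by the application)

    // Reconstruct a world-space position from clip-space XY plus a chosen depth.
    float3 Unproject(float2 clipXY, float clipZ)
    {
        float4 p = mul(float4(clipXY, clipZ, 1), g_InvViewProj);
        return p.xyz / p.w;
    }

    // Per-fragment camera ray: same XY, two different depths.
    void CameraRay(float2 clipXY, out float3 rayStart, out float3 rayDir)
    {
        rayStart      = Unproject(clipXY, 0);   // D3D clip-space near plane is z = 0
        float3 farPos = Unproject(clipXY, 1);   // far plane is z = 1
        rayDir        = normalize(farPos - rayStart);
    }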

This was something I did as well a while back, but without any vertex/fragment shaders, just using fixed-function OpenGL. I drew a static set of quads on the screen with an orthographic projection; they don't need to actually be floating in space. Then for each quad I generated texture coordinates based on the camera's position, orientation, and FOV, and just let it fill the screen with its slice of the volume 3D texture. This works a lot better if you want to actually be inside the volume, whereas the other method that raymarches from the outer surface of the bounding volume requires that the camera stays external to the volume data.
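
Expressed as a shader sketch rather than the original CPU-side fixed-function code, the per-slice idea looks roughly like this; every name here is hypothetical, and it assumes the frustum-corner directions have been precomputed in volume space at unit view depth:

    cbuffer PerSlice : register(b0)
    {
        float3 g_CamPosVolume;   // camera position mapped into [0,1] volume space
        float  g_SliceDist;      // view-space distance of this slice from the camera
    };

    struct VSOut
    {
        float4 pos      : SV_Position;
        float3 texCoord : TEXCOORD0;
    };

    // cornerNDC: full-screen quad corner in [-1,1]; cornerRay: direction to the
    // matching view-frustum corner, in volume space, at unit distance along the view axis.
    VSOut VS(float2 cornerNDC : POSITION, float3 cornerRay : TEXCOORD0)
    {
        VSOut o;
        o.pos      = float4(cornerNDC, 0, 1);                   // quad sits directly on the screen
        o.texCoord = g_CamPosVolume + cornerRay * g_SliceDist;  // sample point inside the volume
        return o;
    }

The pixel shader then just samples the 3D texture at texCoord, and the slices are drawn back to front with alpha blending.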
