dr4cula

Volume Rendering: Eye Position to Texture Space?


Hello,

 

I've been trying to set up a volume renderer, but I'm having trouble getting the ray-marching data set up correctly. I use a cube whose extents lie in the positive directions, i.e. the vertex definitions look like (1,0,0), (1,1,0), etc., which gives me implicitly defined texture coordinates whose v direction needs to be inverted later. Then this is what I do:

 

1) Render the cube with front-face culling and record the distance from the eye to the fragment's position in the alpha channel (the GPU Gems 3 article's source code uses the w-component of the vertex after the world-view-projection multiplication), i.e. in the pixel shader, after interpolation of the per-vertex distance, return float4(0.0f, 0.0f, 0.0f, dist);

2) Turn on subtractive blending and render the cube with back-face culling, recording the negated generated texture coordinates in the rgb channels and the distance to this fragment in the alpha channel, i.e. return float4(-texCoord, dist), where texCoord is the original vertex input coordinate.
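
In shader terms, the two passes boil down to something like the sketch below. The struct and function names are illustrative (not taken from the GPU Gems sample), worldViewProj is assumed to sit in a constant buffer, and the blend state for pass 2 is assumed to be reverse-subtract (dest minus src):

cbuffer PerObject {
    float4x4 worldViewProj;
};

struct VSOut {
    float4 posH : SV_Position;
    float3 texCoord : TEXCOORD0;
    float dist : TEXCOORD1;
};

VSOut VS(float3 posL : POSITION) {
    VSOut o;
    o.posH = mul(float4(posL, 1.0f), worldViewProj);
    o.texCoord = posL; // unit cube in the positive octant: position doubles as texture coordinate
    o.dist = o.posH.w; // eye-distance proxy, as in the GPU Gems 3 sample
    return o;
}

// Pass 1: front-face culling leaves the back faces; store the eye distance in alpha.
float4 PSBackFaces(VSOut i) : SV_Target {
    return float4(0.0f, 0.0f, 0.0f, i.dist);
}

// Pass 2: back-face culling leaves the front faces. With reverse-subtract
// blending, (0, 0, 0, distBack) - (-texCoord, distFront) leaves the entry
// point in rgb and the thickness through the volume in alpha.
float4 PSFrontFaces(VSOut i) : SV_Target {
    return float4(-i.texCoord, i.dist);
}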

 

I now have a texture where the rgb channels give the entry point of the ray in texture space and the alpha channel gives the distance through the volume.

 

However, how would I now get the direction vector? GPU Gems 3 says:

 

"The ray direction is given by the vector from the eye to the entry point (both in texture space)."

 

How does one transform the eye position to texture space so that I can put it in a constant buffer for the shader?

 

Thanks in advance!


The entry-point coordinates (in texture space) are the same as the RGB value at each fragment on the near sides of the cube, i.e. the faces turned toward the camera. The vector to step your ray along is the opposite side's RGB minus the origin's RGB, normalized.


 

Thanks for your reply! Yes, I know how to find the direction vector when I have access to both the back and front texture-coordinate data. The GPU Gems text refers to only a single rayData texture, combining the data as in my original post and then providing the camera's position in texture space in a constant buffer. J. Zink's blog (http://www.gamedev.net/blog/411/entry-2255423-volume-rendering/) seems to use the same method: at least he describes using the camera's position in texture space.
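
As far as I can tell, the trick is that this cube's texture space coincides with its object space (the vertices span the unit cube), so the eye position only needs to be run through the inverse of the cube's world matrix. A minimal sketch of what I mean, with invWorld, eyePosWorld and entryPointTex as illustrative names (the matrix multiply would normally happen on the CPU before filling the constant buffer):

// Eye position in texture space (== object space for a [0,1]^3 cube).
float3 eyePosTex = mul(float4(eyePosWorld, 1.0f), invWorld).xyz;

// Per-fragment ray direction in texture space: from the eye to the entry point.
float3 rayDirTex = normalize(entryPointTex - eyePosTex);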

 

For now, I've implemented the two-render-target/texture version and I seem to be getting decent results. However, it seems that for different data sets I need to scale the alpha component differently (see the code below):

float4 rayEnd = rayDataBack.Sample(linearSampler, input.texCoord);
float4 rayStart = rayDataFront.Sample(linearSampler, input.texCoord);

float3 rayDir = rayEnd.xyz - rayStart.xyz;
float dist = length(rayDir);
rayDir = normalize(rayDir);

// One step per voxel along the largest volume axis.
int steps = max(max(volumeDimensions.x, volumeDimensions.y), volumeDimensions.z);
float stepSize = dist / steps;

float3 pt = rayStart.xyz;

float4 color = float4(0.0f, 0.0f, 0.0f, 0.0f);
for(int i = 0; i < steps; ++i) {
    // The implicitly defined texture coordinates have an inverted v direction.
    float3 texCoord = pt;
    texCoord.y = 1.0f - pt.y;
    float vData = volumeData.SampleLevel(linearSampler, texCoord, 0).r;

    float4 src = float4(vData, vData, vData, vData);
    src.a *= 0.01f; // <- this scalar needs to be different for different data sets

    // Front-to-back compositing.
    color.rgb += (1.0f - color.a) * src.a * src.rgb;
    color.a += src.a * (1.0f - color.a);

    // Early out once the ray is effectively opaque.
    if(color.a >= 0.95f) {
        break;
    }

    pt += stepSize * rayDir;

    // Exit once the ray leaves the unit cube; rayDir can have negative
    // components, so both bounds need checking.
    if(any(pt < 0.0f) || any(pt > 1.0f)) {
        break;
    }
}

return color;

For example:

teapot (scalar = 0.01f): http://postimg.org/image/668oi4imz/

foot (scalar = 0.05f): http://postimg.org/image/tjgggjmy5/
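
One thing I noticed: steps, and therefore stepSize, depends on volumeDimensions, so different data sets get sampled at different rates. A standard remedy for that part is opacity correction; a sketch, where referenceStepSize is an assumed calibration constant rather than something from the code above:

// Rescale each sample's alpha so the accumulated opacity no longer depends
// on the step size; referenceStepSize is the (assumed) step size the raw
// alpha values were tuned for.
float referenceStepSize = 1.0f / 256.0f;
src.a = 1.0f - pow(1.0f - src.a, stepSize / referenceStepSize);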

 

Is it normal to tweak the rendering for specific data sets or should one value fit all?

 

Thanks.
