# Volumetric Raycaster on GPU: Viewpoint in Volume?


## Recommended Posts

Hi Guys,

I am currently implementing a raycaster on the GPU. The idea is to render a volumetric dataset stored as a 3D texture via raycasting, with an approach like this.

My problem: All approaches I found deal only with the scenario where the viewpoint is outside the bounding-box of the volume:

In this case the entry and exit points of the rays can be determined by rendering the back faces and front faces of the bounding box to textures, saving the x, y, z coordinates as color values. In the final pass the front faces are rendered again, with each pixel's color accumulated during raycasting.
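As a small illustration of the two-texture idea (a CPU-side sketch, not the actual shader pass; `ray_from_entry_exit` is a hypothetical helper), the entry and exit positions read back for one pixel give that pixel's ray directly:

```python
import math

def ray_from_entry_exit(entry, exit_):
    # The segment between the front-face and back-face positions is the
    # part of the ray that actually traverses the volume.
    seg = tuple(b - a for a, b in zip(entry, exit_))
    length = math.sqrt(sum(c * c for c in seg))
    direction = tuple(c / length for c in seg)
    return direction, length

# One pixel of a unit cube viewed straight down +z:
d, l = ray_from_entry_exit((0.5, 0.5, 0.0), (0.5, 0.5, 1.0))
print(d, l)  # → (0.0, 0.0, 1.0) 1.0
```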

My question:
I want my renderer to be able to handle the situation where the viewpoint is inside the volume as well. I guess that cannot be achieved with the approach I mentioned? Do I have to render the scene as a fullscreen quad (as an image plane) and do classical raytracing for each pixel in this scenario? Any hints/links/ideas on how to solve this problem?

Thanks in advance and kind regards,
XBTC

##### Share on other sites
You have your ray, your starting point, and your hit distances to the near face (f) and the far face (l); if the hit distance to f is < 0.0, you take your eye position as the starting coordinate.

##### Share on other sites

> You have your ray, your starting point, and your hit distances to the near face (f) and the far face (l); if the hit distance to f is < 0.0, you take your eye position as the starting coordinate.

1. Wouldn't this mean that ALL rays end in the same pixel if the eye-point is completely inside the volume?

2. What would I render if the eye point is inside the volume? If the eye point is outside, I render the front faces of the bounding box with per-pixel colors accumulated during raycasting...

##### Share on other sites

> 1. Wouldn't this mean that ALL rays end in the same pixel if the eye-point is completely inside the volume?

They would all START at the same point, your eye; they end at the intersection point with the far wall (l). That doesn't sound wrong to me: you don't need a near plane or anything similar in raycasting/raytracing, so it's perfectly fine to start at the eye position.

Calculating the first intersection with the bounding box is actually just an optimization, and the same goes for the second hit. You could just as well always start at the eye position and take n steps through space. That would be very wasteful, but it would work just as well.
The pseudocode would be roughly:

```
vec3 o;   // ray origin (the eye)
vec3 d;   // ray direction (normalized)
float r = max_trace_distance;          // length to march
if (optimization_enabled) {
    float f = intersection_near(o, d); // distance to the near face
    float l = intersection_far(o, d);  // distance to the far face
    o = o + d * max(f, 0.f);           // if f < 0 the eye is inside: start there
    r = l - max(f, 0.f);
}
for (float a = r; a > 0.f; a -= step_size) {
    vec4 C = sample3D(o + a * d);      // back-to-front sampling
    sum = blend(sum, C, C.alpha);
}
return sum;
```

> 2. What would I render if the eye point is inside the volume? If the eye point is outside, I render the front faces of the bounding box with per-pixel colors accumulated during raycasting...

You don't need to change any rendering code; it's just a different starting point.
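To make the "same code, different starting point" idea concrete, here is a toy CPU sketch in Python (not the poster's shader; `sample_volume` is a hypothetical stand-in for the 3D-texture fetch, with a constant density inside a unit cube). The only change needed for an eye inside the volume is clamping the near distance to zero:

```python
def sample_volume(p):
    # Hypothetical density: constant inside the unit cube, zero outside.
    inside = all(0.0 <= c <= 1.0 for c in p)
    return 0.1 if inside else 0.0

def march(origin, direction, t_near, t_far, step=0.05):
    t_start = max(t_near, 0.0)  # eye inside the box => start at the eye
    n = max(0, int(round((t_far - t_start) / step)))
    opacity = 0.0
    for i in range(n):          # front-to-back accumulation
        t = t_start + i * step
        p = tuple(o + t * d for o, d in zip(origin, direction))
        a = sample_volume(p)
        opacity += (1.0 - opacity) * a
    return opacity

# The same code path handles the eye outside (t_near > 0)
# and inside (t_near < 0) the box:
print(round(march((0.5, 0.5, -1.0), (0, 0, 1), 1.0, 2.0), 3))  # → 0.878
print(round(march((0.5, 0.5, 0.5), (0, 0, 1), -0.5, 0.5), 3))  # → 0.651
```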

##### Share on other sites
Thank you VERY much for even taking the time to post pseudo-code!

> You don't need to change any rendering code; it's just a different starting point.

That's exactly the point I don't get. When I render the volume from a distance, I render the front faces of its bounding box. When the eye point is inside the volume that's impossible: the front faces lie behind the viewer, so no matter how I do the raycasting, nothing will be visible on screen. What kind of primitive do I render in that case?

##### Share on other sites

> Thank you VERY much for even taking the time to post pseudo-code!
>
> > You don't need to change any rendering code; it's just a different starting point.
>
> That's exactly the point I don't get. When I render the volume from a distance, I render the front faces of its bounding box. When the eye point is inside the volume that's impossible: the front faces lie behind the viewer, so no matter how I do the raycasting, nothing will be visible on screen. What kind of primitive do I render in that case?

Do you need to support other kinds of volumes than cubes?

##### Share on other sites
> Do you need to support other kinds of volumes than cubes?

No!

##### Share on other sites

> > Do you need to support other kinds of volumes than cubes?
>
> No!

Well then, the only thing you need to render is a quad on the screen that covers the visible part of the volume. You could even use a full-screen quad if performance isn't a problem. In my honest opinion you don't really need a separate pass for drawing back and front faces to different render targets when the shape of the object is something as simple as a cube.

With the ray-box hit test you can calculate, for each pixel on screen, the entry and exit points. If either intersection distance comes out negative, you know that the current pixel either doesn't hit the box at all (it lies entirely behind the eye point) or that the ray should start at the eye point.

Drawing the exact box for the volume is merely an optimization. Alternatively, when the camera is inside the box, it is enough to draw only the back faces.
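The per-pixel entry/exit computation described above is the classic slab test; a minimal Python sketch (assuming an axis-aligned box given by its min and max corners) could look like this:

```python
def ray_aabb(origin, direction, box_min, box_max):
    # Slab test: intersect the ray against each pair of parallel planes
    # and keep the tightest [t_near, t_far] interval.
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:          # ray parallel to this slab
            if o < lo or o > hi:
                return None         # origin outside the slab: no hit
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    if t_near > t_far or t_far < 0.0:
        return None                 # box missed, or entirely behind the eye
    return t_near, t_far

# Eye outside a unit cube, looking down +z:
print(ray_aabb((0.5, 0.5, -1.0), (0.0, 0.0, 1.0), (0, 0, 0), (1, 1, 1)))
# → (1.0, 2.0)
# Eye inside the cube: t_near comes out negative, so clamp the start to 0.
print(ray_aabb((0.5, 0.5, 0.5), (0.0, 0.0, 1.0), (0, 0, 0), (1, 1, 1)))
# → (-0.5, 0.5)
```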

Cheers!

##### Share on other sites

> Drawing the exact box for the volume is merely an optimization.

I think the main advantage (besides optimization) is that you can use standard D3D shaders for rotation/translation/projection of the volume that way...?

> Otherwise, when the camera is inside the box, it would be enough to draw only the back faces.

Now it gets interesting... do you think that would really work? Objects near the front of the volume appear larger on screen, so wouldn't drawing only the back faces limit the drawing area to a region smaller than the on-screen footprint of geometry closer to the viewer?

##### Share on other sites

> > Drawing the exact box for the volume is merely an optimization.
>
> I think the main advantage (besides optimization) is that you can use standard D3D shaders for rotation/translation/projection of the volume that way...?

Sure. However, there is nothing non-standard about drawing full-screen quads or any other shapes. Also, optimizing geometry throughput isn't the issue here; volumetric rendering is pixel-shader heavy.

> > Otherwise, when the camera is inside the box, it would be enough to draw only the back faces.
>
> Now it gets interesting... do you think that would really work? Objects near the front of the volume appear larger on screen, so wouldn't drawing only the back faces limit the drawing area to a region smaller than the on-screen footprint of geometry closer to the viewer?

If you look at a box (or any convex shape), you can always remove the front-facing surfaces and the back-facing surfaces will still cover the same screen area: every ray that enters through a front face must exit through a back face, so both sets of faces project to the same silhouette.

The point here is that you use ray tracing (a ray-box hit test) to find the entry point and the exit point of the cube. You need to describe the cube mathematically (instead of as 3D triangle geometry). AABB hit-test routines usually take the minimum and maximum points of the box and the (view) ray to perform the test.

What you are drawing on the screen doesn't really matter as long as the drawn triangles cover the area of the cube.

Cheers!

Edit: I guess you'll be fine with rendering the geometry too (front faces when outside, back faces when inside). That may be slightly easier.
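The rule in that edit can be sketched as a trivial containment test (the helper names here are hypothetical, not from any real API):

```python
def eye_inside_aabb(eye, box_min, box_max):
    # True if the eye position lies within the box on every axis.
    return all(lo <= e <= hi for e, lo, hi in zip(eye, box_min, box_max))

def faces_to_draw(eye, box_min=(0, 0, 0), box_max=(1, 1, 1)):
    # Outside: rasterize the front faces; inside: the back faces,
    # which still cover the box's full silhouette.
    return "back" if eye_inside_aabb(eye, box_min, box_max) else "front"

print(faces_to_draw((0.5, 0.5, -2.0)))  # → front
print(faces_to_draw((0.5, 0.5, 0.5)))   # → back
```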
