GPU ray marching...

Started by ZealousEngine
26 comments, last by Basiror 16 years, 1 month ago
I'm trying to render a volume texture via GPU ray marching (as described in GPU Gems 3), but there is something I don't get. This is what I'm doing:

First I render the entire scene to a texture (by "entire scene" I mean the scene with just my volume box in it). This captures the FRONT faces of my volume box. The shader for this pass writes to an f16 RGBA render target (which I call FrontVolumeRT), storing the uv coords (rgb) and the depth (a).

Then, for the next step, I render the BACK faces of my volume box with my GPU marching shader. Inside this shader I can see the uv coords of the back face, and since I pass in the pixel's screen-space position, I can also sample FrontVolumeRT and get the uv coord for the front face. I subtract the two uv coords to get my marching ray, and I subtract the two depths to determine how many 'steps' to take.

Seems to work OK, but there is one problem. When rendering FrontVolumeRT (which, again, just stores a uv coord and depth for the front faces), if the camera near plane clips the volume, no uv/depth will be recorded for that position. This obviously leads to trouble later when trying to calculate the ray/volume depth.

In my book (GPU Gems 3) they attempt to explain a way to fix this problem, but I just don't get it. Perhaps I just need to read it again, but how on earth are you supposed to recalculate these missing pixels when they are clipped by the near plane? Thanks for any help!
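For reference, the per-pixel ray setup described above can be sketched in CPU-side C++ (the names VolumeSample, setupRay, and the fixed stepSize are illustrative, not from GPU Gems):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// One texel of either render target: front-face (or back-face)
// texture-space coordinate in rgb, depth in a.
struct VolumeSample { Vec3 uvw; float depth; };

struct MarchSetup { Vec3 dir; int steps; };

// Subtract the two texture-space coords to get the marching direction,
// subtract the two depths to decide how many fixed-size steps to take.
MarchSetup setupRay(const VolumeSample& front, const VolumeSample& back,
                    float stepSize)
{
    Vec3 d = { back.uvw.x - front.uvw.x,
               back.uvw.y - front.uvw.y,
               back.uvw.z - front.uvw.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len > 0.0f) { d.x /= len; d.y /= len; d.z /= len; }

    float thickness = back.depth - front.depth;
    int steps = static_cast<int>(thickness / stepSize);
    return { d, steps };
}
```

This is exactly where the near-plane problem bites: if FrontVolumeRT holds no sample for the pixel, both `front.uvw` and `front.depth` are garbage.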
Anyone have any ideas?

It seems like if I could somehow set it up to render the clipped part of the volume, that would give me the uv coords I need. Is this possible? I'm using Ogre for my rendering.

Or perhaps there is some better way?

*The GPU Gems article just says: mark areas on the 'rayDataTexture' (which stores the texture-space coords for the front and back faces of your volume object) where front faces have been clipped (that part is easy enough). But then they say "in the ray casting shader, we explicitly initialize the position of these marked pixels to the appropriate texture space position of the corresponding point on the near plane". HOW?! How can you reconstruct the FRONT face uv coord knowing only the BACK face's uv coord?
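For what it's worth, one way to read that passage: when the front face is clipped, the ray effectively enters the volume at the near plane itself, so you don't need the clipped front face at all. You unproject the marked pixel onto the near plane and transform that world-space point into the volume's texture space. A minimal CPU sketch of that mapping, assuming a camera at the origin looking down +z and an axis-aligned volume box (all names here are illustrative, not from the book):

```cpp
struct Vec3 { float x, y, z; };

// World-space position of a pixel's point on the near plane.
// ndcX/ndcY are the pixel's normalized device coords in [-1, 1];
// halfW/halfH are the half-extents of the near plane, nearZ its distance.
Vec3 pointOnNearPlane(float ndcX, float ndcY,
                      float halfW, float halfH, float nearZ)
{
    return { ndcX * halfW, ndcY * halfH, nearZ };
}

// Map a world-space point into the [0,1]^3 texture space of an
// axis-aligned volume box. (For an arbitrarily oriented box you would
// apply the inverse of the volume's world matrix first.)
Vec3 worldToVolumeUVW(const Vec3& p, const Vec3& boxMin, const Vec3& boxMax)
{
    return { (p.x - boxMin.x) / (boxMax.x - boxMin.x),
             (p.y - boxMin.y) / (boxMax.y - boxMin.y),
             (p.z - boxMin.z) / (boxMax.z - boxMin.z) };
}
```

In the ray casting shader this amounts to: for marked pixels, replace the missing FrontVolumeRT sample with the texture-space coord of the pixel's near-plane point.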
One suggestion: if your volume is only a cube, you could use the near plane to pre-clip your cube on the CPU, filling in any parts that get clipped, before sending it off to the GPU.

If you're using DX10, though, you could do this all in one pass, without such clipping issues.
Filling in the missing pixels on the CPU seems like it would be kind of slow, and I may want to change the shape of the volume down the line. *BTW, I'm using DX9.
Here is an article that describes the problem well...

http://medvis.vrvis.at/fileadmin/publications/CESCG2005.pdf

They suggest you 'render the parts clipped by the near plane'... How would one go about doing that, exactly? I'm using OGRE for my rendering, but I just don't see how to render something that is being clipped by the near plane...
One way I can think of would be to use the stencil buffer:

1. Mirror the geometry that gets clipped by your near plane against the near plane.
2. Fill the stencil buffer to mark the pixels that are clipped.
3. Now turn off the depth buffer test and render just the near plane onto the screen with the stencil test enabled.

What you need to consider now are all the pixels whose stencil value == 1, i.e. exactly one intersection with a ray from the eye to the surface; pixels with more than one intersection are part of the front faces of the original mesh.

This should give you the correct result.

If there is an easier way to achieve the same result, I would be interested in that solution, since this ray casting effect can be used to render volumetric fog.

I implemented it that way a year ago
http://www.8ung.at/basiror/theironcross.html
Hmm, I am having a little trouble trying to visualize that...

I think I see how it would work, if I just knew how to do 'step 3' in Ogre. I know how to disable zwrite/zread when rendering an object/material... I don't think disabling zread would render stuff BEHIND the near plane though...

*Will "turning off the depth buffer" really render pixels clipped by the near plane?
No.
In step 1 I mentioned that you mirror the geometry against the near plane and render the front faces to increment the counter and the back faces to decrement it, so you are actually rendering what gets clipped away when rendering it the normal way.

So every front-face fragment you render increments the stencil value and every back-face fragment decrements it.

When you are done, you can examine the stencil buffer and mark all pixels with stencil < 0 (my '== 1' above was wrong): those are the fragments that were clipped away, where only the back faces are visible.

This works as long as the mirrored geometry is completely inside your frustum.
(Have a look at how the stencil shadow algorithm handles shadow volumes that are partially clipped by the far plane to compensate for that problem.)
http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail
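A toy CPU version of that counting scheme (front faces of the mirrored geometry increment the count, back faces decrement it; pixels left negative are the clipped ones); all names are illustrative:

```cpp
#include <vector>

// One rasterized fragment of the mirrored geometry at a given pixel.
struct Fragment { int pixel; bool frontFacing; };

// Toy stencil pass: front faces increment, back faces decrement.
// A pixel whose final count is negative saw more back faces than front
// faces of the mirrored geometry, i.e. its original front face was
// clipped away by the near plane.
std::vector<int> stencilCounts(const std::vector<Fragment>& frags,
                               int numPixels)
{
    std::vector<int> stencil(numPixels, 0);
    for (const Fragment& f : frags)
        stencil[f.pixel] += f.frontFacing ? 1 : -1;
    return stencil;
}
```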

Now that you have a mask that tells you which pixels have been clipped away, you render the near plane, get the ray entry coordinates in the fragment shader, and thus get the correct ray start.

I would just disable the depth buffer so you don't get artifacts when rendering the near plane; you might have to offset it a little away from the real near plane.
http://www.8ung.at/basiror/theironcross.html
Quote:
this works as long as the mirrored geometry is completely inside your frustum


Ok, I see how this could work IF the mirrored object was completely inside your frustum (or rather, completely in front of the near plane?). The math for that sounds like it would be a little tricky, no? Could you point me in the right direction?

Once I figure out how to position/orient this mirrored geometry, the rest should be easy, I think (step 1: render the 'normal' front faces with the stencil buffer enabled; step 2: render the 'mirrored' geometry using a stencil test; step 3: combine the two).

*Also, couldn't I just write a shader that 'smushes' the z value for each vertex (after transforming it into clip/projection space)? Something like...

Quote:
outPos = mul( worldViewProj, inPos );
outPos.z = 0.5;


Shouldn't that work for rendering the 'clipped' portion of a simple cube?

[Edited by - ZealousEngine on February 22, 2008 11:22:12 AM]
Quote:Original post by ZealousEngine
Quote:
this works as long as the mirrored geometry is completely inside your frustum


Ok, I see how this could work IF the mirrored object was completely inside your frustum (or rather, completely in front of the near plane?). The math for that sounds like it would be a little tricky, no? Could you point me in the right direction?

Once I figure out how to position/orient this mirrored geometry, the rest should be easy, I think (step 1: render the 'normal' front faces with the stencil buffer enabled; step 2: render the 'mirrored' geometry using a stencil test; step 3: combine the two).

*Also, couldn't I just write a shader that 'smushes' the z value for each vertex (after transforming it into clip/projection space)? Something like...

Quote:
outPos = mul( worldViewProj, inPos );
outPos.z = 0.5;


Shouldn't that work for rendering the 'clipped' portion of a simple cube?


Mirroring is actually quite simple:
transform it into camera space and invert the forward axis (negative scale), then transform back to world space.
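For a single point, that mirroring can be sketched like this, assuming a camera space with +z forward and reflecting across the near plane at z = nearZ itself (per step 1 earlier in the thread), rather than a negative scale about the camera origin:

```cpp
struct Vec3 { float x, y, z; };

// Reflect a camera-space point across the near plane (the plane
// z = nearZ, camera looking down +z). A point a distance d on the
// camera's side of the near plane ends up the same distance d beyond
// it, so geometry clipped by the near plane becomes renderable.
Vec3 mirrorAcrossNearPlane(const Vec3& pCam, float nearZ)
{
    return { pCam.x, pCam.y, 2.0f * nearZ - pCam.z };
}
```

In practice you would apply the world-to-camera transform, reflect, then transform back to world space before submitting the mirrored mesh.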

As for your shader and the z value, I don't understand what you are trying to do.

Look up the depth-fail stencil shadow algorithm for details on what to do when parts of the boundary (the shadow volume) are clipped.

http://www.8ung.at/basiror/theironcross.html

This topic is closed to new replies.
