GPU ray marching...


I'm trying to render a volume texture via GPU ray marching (as described in GPU Gems 3), but there is something I don't get. Here is what I'm doing.

First I render the entire scene to a texture (by "entire scene" I mean the scene with just my volume box in it). This captures the FRONT faces of my volume box. The shader for this pass writes to an f16 RGBA render target (which I call FrontVolumeRT), storing the UV coords (in rgb) and the depth (in a).

Then for the next step, I render the BACK faces of my volume box with my GPU marching shader. Inside this shader I can see the UV coords of the back face, and since I pass in the pixel's screen-space position, I can also sample FrontVolumeRT and get the UV coord for the front face. I subtract the two UV coords to get my marching ray, and subtract the two depths to determine how many 'steps' to take.

This seems to work OK, but there is one problem. When rendering FrontVolumeRT (which, again, just stores a UV coord and depth for the front faces), if the camera's near plane clips the volume, no UV/depth gets recorded for those pixels. This obviously leads to trouble later when trying to calculate the ray/volume depth. In my book (GPU Gems 3) they attempt to explain a way to fix this problem, but I just don't get it. Perhaps I need to read it again, but how on earth are you supposed to reconstruct these missing pixels when they are clipped by the near plane? Thanks for any help!
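For reference, the second pass described above might look roughly like this in Cg/HLSL. This is only a sketch: `FrontVolumeRT`, `VolumeTex`, and `stepSize` are assumed names, the projective UV math may need the usual D3D y-flip/half-texel adjustments, and a real ps_3_0 loop may want `tex3Dlod` instead of `tex3D`:

```hlsl
sampler2D FrontVolumeRT; // rgb = front-face uvw, a = front-face depth
sampler3D VolumeTex;     // the volume data
float stepSize;          // distance covered per marching step

float4 RayMarchPS(float3 backUVW   : TEXCOORD0,  // back-face texture coords
                  float4 screenPos : TEXCOORD1,  // projected position, passed through
                  float  backDepth : TEXCOORD2) : COLOR
{
    // Look up the front-face entry point written in the first pass
    float2 screenUV = screenPos.xy / screenPos.w * float2(0.5, -0.5) + 0.5;
    float4 front = tex2D(FrontVolumeRT, screenUV);

    float3 rayDir    = backUVW - front.rgb;   // march direction in texture space
    float  thickness = backDepth - front.a;   // distance through the volume
    int    numSteps  = (int)(thickness / stepSize);

    float3 pos  = front.rgb;
    float3 step = rayDir / max(numSteps, 1);
    float4 accum = 0;
    for (int i = 0; i < numSteps; ++i)
    {
        float4 s = tex3D(VolumeTex, pos);
        accum += s * (1 - accum.a);           // simple front-to-back blend
        pos   += step;
    }
    return accum;
}
```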

Anyone have any ideas?

It seems like if I could somehow render the clipped part of the volume, that would give me the UV coords I need. Is this possible? I'm using Ogre for my rendering.

Or perhaps there is a better way?

*The GPU Gems article just says to mark areas in the 'rayDataTexture' (which holds the texture-space coords for the front and back faces of your volume object) where front faces have been clipped (that part is easy enough). But then it says "in the ray casting shader, we explicitly initialize the position of these marked pixels to the appropriate texture space position of the corresponding point on the near plane". HOW?! How can you reconstruct the FRONT face UV coord knowing only the BACK face's UV coord?

One suggestion: if your volume is only a cube, you could use the near plane to pre-clip your cube on the CPU, filling in any parts that get clipped before sending it off to the GPU.

If you're using DX10, though, you could do this all in one pass without such clipping issues.
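The one-pass idea hinted at here is usually done by intersecting the eye ray with the volume's bounding box analytically in the pixel shader, so the near plane never matters. A minimal sketch of the standard slab test (the parameter names are assumptions; the ray origin/direction would come from the camera position and the pixel's view ray):

```hlsl
// Returns float2(entry, exit) along the ray; the hit is valid when x < y.
// ro = ray origin, rd = ray direction, both in the box's local space.
float2 IntersectBox(float3 ro, float3 rd, float3 boxMin, float3 boxMax)
{
    float3 invDir = 1.0 / rd;
    float3 t0 = (boxMin - ro) * invDir;
    float3 t1 = (boxMax - ro) * invDir;
    float3 tmin = min(t0, t1);
    float3 tmax = max(t0, t1);
    float tNear = max(max(tmin.x, tmin.y), tmin.z);
    float tFar  = min(min(tmax.x, tmax.y), tmax.z);
    // If the camera is inside the box, clamp the entry point to the eye itself
    return float2(max(tNear, 0.0), tFar);
}
```

With this, marching starts at `entry` and ends at `exit`, so a front face clipped by the near plane simply becomes an entry point of 0 (the eye).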

Here is an article that describes the problem well:

http://medvis.vrvis.at/fileadmin/publications/CESCG2005.pdf

They suggest you 'render the parts clipped by the near plane'. How would one go about doing that, exactly? I'm using Ogre for my rendering, but I just don't see how to render something that is being clipped by the near plane...

One way I can think of would be to use the stencil buffer:

1. Mirror the geometry that is clipped by the near plane against the near plane.
2. Fill the stencil buffer to mark the pixels that are clipped.
3. Turn off the depth test and render the near plane onto the screen with the stencil test enabled.

What you need to consider now are all the pixels whose stencil value == 1, i.e. only one intersection with a ray from the eye to the surface; pixels with more than one intersection are part of the front faces of the original mesh.

This should give you the correct result.

If there is an easier way to achieve the same result I would be interested in that solution, since this ray casting effect can be used to render volumetric fog.

I implemented it that way a year ago.

Hmm, I am having a little trouble visualizing that...

I think I see how it would work, if I just knew how to do step 3 in Ogre. I know how to disable z-write/read when rendering an object/material, but I don't think disabling z-read would render stuff BEHIND the near plane...

*Will "turning off the depth buffer" really render pixels clipped by the near plane?

No.

In step 1 I mentioned that you mirror the geometry against the near plane, rendering the front faces to increment the stencil counter and the back faces to decrement it, so you actually render what would be clipped away when rendering it the normal way.

When you are done you can examine the stencil buffer and mark all pixels with stencil < 0 (I was wrong above), as those are the fragments that were clipped away (only the back faces are visible).

This works as long as the mirrored geometry is completely inside your frustum. (Have a look at how the stencil shadow algorithm handles shadow volumes that are partially clipped by the far plane to compensate for that problem:)
http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail

Now that you have a mask telling you which pixels were clipped away, you render the near plane, get the ray entry coordinates in the fragment shader, and thus get the correct ray start.

I would just disable the depth buffer so you don't get artifacts when rendering the near plane; you might have to offset it a little away from the real near plane.

Quote:

this works as long as the mirrored geometry is completely inside your frustum


OK, I see how this could work IF the mirrored object was completely inside the frustum (or rather, completely in front of the near plane?). The math for that sounds like it would be a little tricky, no? Could you point me in the right direction?

Once I figure out how to position/orient this mirrored geometry, the rest should be easy, I think (step 1: render the 'normal' front faces with the stencil buffer enabled; step 2: render the 'mirrored' geometry using a stencil test; step 3: combine the two).

*Also, couldn't I just write a shader that 'smushes' the z value for each vertex (after transforming into clip/projection space)? Something like...

Quote:

outPos = mul( worldViewProj, inPos );
outPos.z = 0.5;


Shouldn't that work for rendering the 'clipped' portion of a simple cube?

[Edited by - ZealousEngine on February 22, 2008 11:22:12 AM]

Mirroring is actually quite simple: transform into camera space, invert the forward axis (negative scale), then transform back to world space.

As for your shader and the z value, I don't understand what you are trying to do.

Look up the depth-fail stencil shadow algorithm for details on what happens when parts of the boundary (the shadow volume) are clipped.
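The "invert the forward axis in camera space" idea can be sketched as a vertex shader (assuming the `worldView`/`proj` parameter names used later in this thread; note this reflects about the plane z = 0 through the eye, not the near plane itself, and the mirrored triangles' winding flips, so backface culling should be disabled):

```hlsl
float4x4 worldView; // model -> camera space
float4x4 proj;      // camera -> clip space

float4 MirrorVS(float4 inPos : POSITION) : POSITION
{
    float4 viewPos = mul(worldView, inPos);
    viewPos.z = -viewPos.z;  // reflect across the eye plane (z = 0)
    // To mirror about the near plane at z = n instead:
    //   viewPos.z = 2.0 * n - viewPos.z;
    return mul(proj, viewPos);
}
```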

Hrmm, wait a minute... is mirroring as simple as scaling the object by (-1,-1,-1)? Turning it inside out, basically? Ogre does all scaling on a scene-node basis, so if I scaled this scene node by (-1,-1,-1), would the volume cube be mirrored the way we want?

I would just try it, but I'm not home at the moment :(

*And my idea about manually setting the z position in screen space was to ensure verts behind the clip plane get pushed in front of it. But I like your mirroring idea better.

Wait a minute, shouldn't this simple shader do the trick?

Quote:

outPos = mul( worldViewProj, inPos );
outPos *= float4(-1,-1,-1,1);


Just mirroring/flipping the positions in screen space? Or do I need to do the mirror in another space?

*Er, wait, wouldn't it be...

Quote:

outPos = mul( worldView, inPos );
outPos *= float4(-1,-1,-1,1);
outPos = mul( proj, outPos );


...since we want to do the mirroring in camera space, right?

[Edited by - ZealousEngine on February 22, 2008 10:18:04 PM]

Snapping the vertices behind the near plane onto the near plane in the vertex shader is indeed a nice solution that should work.

But you need to consider two cases:

a) edges that are clipped
b) triangles that are completely behind the near plane

I guess you want to calculate the length of the ray/volume intersection, right? So you accumulate the distances to the back-facing and the front-facing triangles and subtract them in a third pass?

That's how I did it in my volumetric fog renderer.

Well, the length of the ray is calculated by comparing the z depth of the back faces and front faces. When the front faces are clipped by the near plane, I plan to set their z depth to 0 (or to wherever the near plane actually is).

So was I right earlier in saying the mirroring should be done in camera space? And a simple *= float4(-1,-1,-1,1) should do the trick? I tried it earlier and got some strange results... not sure if it's working.

*And yeah, my earlier idea was basically to snap the verts to the near plane. Do you think that's a better idea than mirroring?

Mirroring with float4(-1,-1,-1,1) will only work in some special cases. It's just negating your coordinates, which isn't equal to mirroring against an arbitrary plane.

Snapping the vertices to the near plane is indeed the correct solution as far as I can see, but you have to pass along the neighboring vertices of each triangle, e.g.:

tri(a,b,c)
// use texcoords to pass the vertices twice
info(b,c,a)
info(c,a,b)

Then check if the triangle is behind the near plane; in that case just project it onto the near plane.

If it spans the near plane you have to clip the triangle. This can be done with geometry shaders, and thus requires a GF8+ board.
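The geometry-shader clipping step might be sketched roughly like this in SM4-style HLSL. This is only a sketch under assumed names (`Proj`, `NearZ`, the interpolants): it keeps the part of each triangle in front of the near plane, and the fully-behind case (n == 0 here) would still need the near-plane projection described above. Culling should be disabled, since clipping can change the emitted winding:

```hlsl
struct VSOut { float4 viewPos : TEXCOORD0; float3 uvw : TEXCOORD1; };
struct GSOut { float4 clipPos : SV_Position; float3 uvw : TEXCOORD0; };

float4x4 Proj;   // camera -> clip space
float    NearZ;  // near-plane distance (D3D view space, +z forward)

[maxvertexcount(4)]
void ClipGS(triangle VSOut tri[3], inout TriangleStream<GSOut> stream)
{
    // Sutherland-Hodgman clip of one triangle against the plane z = NearZ,
    // keeping the portion in front of the plane (3 or 4 vertices).
    float4 pos[4]; float3 uvw[4]; int n = 0;
    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        int j = (i + 1) % 3;
        float di = tri[i].viewPos.z - NearZ;
        float dj = tri[j].viewPos.z - NearZ;
        if (di >= 0) { pos[n] = tri[i].viewPos; uvw[n] = tri[i].uvw; ++n; }
        if (di * dj < 0)  // edge crosses the plane: emit the intersection
        {
            float t = di / (di - dj);
            pos[n] = lerp(tri[i].viewPos, tri[j].viewPos, t);
            uvw[n] = lerp(tri[i].uvw, tri[j].uvw, t);
            ++n;
        }
    }
    // Emit the convex result as a strip: 0,1,2 for a triangle, 0,1,3,2 for a quad.
    int order[4] = { 0, 1, 3, 2 };
    if (n == 3) order[2] = 2;
    for (int k = 0; k < n; ++k)
    {
        GSOut o;
        o.clipPos = mul(Proj, pos[order[k]]);
        o.uvw = uvw[order[k]];
        stream.Append(o);
    }
    stream.RestartStrip();
}
```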

I'm sure this has to be possible without a geometry shader. Are you sure I can't just snap ALL verts to the near plane? *Remember, we're doing this in a vertex shader, so I don't see why you're concerned with edges, neighbors, clipping, etc. I ran a test last night and it LOOKS like it's working...

Quote:

mirroring with -1,-1,-1,1 will only work in some special cases; it's just negating your coordinates, which isn't equal to mirroring against an arbitrary plane


But mirroring in CAMERA space seems like it should be equal to mirroring along the near plane, no?

[Edited by - ZealousEngine on February 24, 2008 12:15:52 PM]

Think about the case where one vertex is behind the plane and two are in front of it: the clipped triangle is a quad, and thus can't be represented by a single triangle.

I doubt that mirroring can be done reliably like this.

To mirror properly (in world coordinates), you plug the vertices into the plane equation, get the signed distance, and offset the vertices by -2*distance*planenormal,

plus an epsilon to place them slightly in front of the near plane, in your case.
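The reflection just described could be written as a small hypothetical helper, for vertices on the back side of the plane (`plane` is float4(n, d) with n unit length, so dot(n, p) + d is the signed distance):

```hlsl
// Reflect a point across the plane dot(n, p) + d = 0, then nudge it
// slightly toward the front side by epsilon, as suggested above.
float3 MirrorAcrossPlane(float3 p, float4 plane, float epsilon)
{
    float dist = dot(plane.xyz, p) + plane.w;       // signed distance (< 0 behind)
    return p - (2.0 * dist - epsilon) * plane.xyz;
}
```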





Quote:
think about the case where one vertex is behind the plane and two are in front of it: the clipped triangle is a quad and thus can't be represented by a single triangle


But there is no clipping going on at this stage. We do the mirroring in the vertex shader, then send the triangles along to be clipped. So in your example with one vert behind and two in front, you would multiply by float4(-1,-1,-1,1) (in CAMERA space) and end up with the opposite (two verts in front, one behind).

THEN, after vertex shading, the triangle would be clipped accordingly. Am I still missing something?

OK, now I DO see a problem with just *= float4(-1,-1,-1,1). Imagine you're in the dead center of a box volume, looking straight ahead. 'Invert all positions' WILL put the faces clipped by the near plane in front of the camera, but they will be upside down! That *= float4(-1,-1,-1,1) only works in certain cases, as you pointed out.

So it sounds like I need to brush up on my mirroring math (if anyone could provide a simple example of mirroring along the near plane in a vertex shader, that would be great!).

*And I ran some tests on my 'snap everything to the near plane' idea and didn't get very good results. The problem seems to be that when snapping stuff behind the camera to the near plane, it isn't projected properly (you lose all depth), giving incorrect results.

I think the mirroring solution IS the way to go, if I can just figure out how to do TRUE mirroring (not just inverting everything).

Quote:
Original post by ZealousEngine
But there is no clipping going on at this stage. [...] Am I still missing something?


The clipping takes place after this stage, and the point is to project all triangles behind the near plane onto the near plane, which is not possible if two edges of a triangle span the plane. Thus you have to use the geometry shader to discard the original triangle and insert two new triangles (clipping produces a quad, and this will be done in the geometry shader).

But wait, didn't you say you implemented it 'this way' (mirroring) a year ago with your volumetric fog? Did you use a geometry shader?

If mirroring DOES require a geometry shader, there HAS to be another way to do this. The GPU Gems 3 article makes no mention of using geometry shaders just to render stuff clipped by the near plane.

*And about 'proper' mirroring: when a point is transformed into camera space, wouldn't simply inverting the z value mirror the point along the clip plane? Assuming the clip plane is located at (0,0,0) (the camera). So something like this...

Quote:

float4 wvPos = mul( worldView, inPos );
wvPos *= float4(1,1,-1,1);
outPos = mul( proj, wvPos );


...since the normal of the near plane (in camera space) is (0,0,1), correct?

Further up we agreed that snapping the vertices is a more elegant solution than mirroring, since it requires fewer passes.

Also, what do you consider camera space: before the perspective transform or after? If before (i.e. gl_ModelViewMatrix * gl_Vertex), negating the z axis should do the job, I think; just try it and see if it works.

But if your hardware is capable of geometry shaders, I would try the second approach. I guess it will be quite a bit faster, and it is certainly more robust than the stencil buffer, which has a maximum of 8 bits == 255 ray/volume intersections.

Well, I am having trouble getting either the snapping OR the mirroring solution to work.

In DX, World is the same as Model, and I consider View/Camera space the same thing. So in my Cg/HLSL vertex shader I am doing...

Quote:

float4 wvPos = mul( worldView, inPos );
wvPos *= float4(1,1,-1,1);
outPos = mul( proj, wvPos );


That SHOULD transform the vert into camera/view space, invert the z position (which, in camera space, should mirror the vert along the clip plane?), then transform by the projection matrix. In my mind it should work perfectly, but I'm getting some errors. These errors MAY be unrelated; that's why I just want somebody to confirm that the above vertex shader should work.

And I'm still a little fuzzy on how you can snap the vertices to the near plane without causing any distortion. In what space should the snapping be done? Camera space?
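For comparison, the snapping variant in camera space might look like this (a sketch with assumed names; note that moving only z leaves x and y where they were, which is exactly the distortion observed above for triangles that span the plane):

```hlsl
float4x4 worldView;
float4x4 proj;
float nearZ;   // near-plane distance in view space (D3D: +z forward)

float4 SnapVS(float4 inPos : POSITION) : POSITION
{
    float4 viewPos = mul(worldView, inPos);
    // Pull anything behind the near plane onto it, with a small bias
    viewPos.z = max(viewPos.z, nearZ + 0.001);
    return mul(proj, viewPos);
}
```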

I see nothing wrong with your mirroring code.

Does it render anything at all? Did you take care of backface culling?

As for snapping, there shouldn't be any distortion; just make sure you offset the polygons a little away from the near plane, in the forward direction.

E.g.: you've got a triangle with one vertex behind the clipping plane. Clip it in the geometry shader and you get three triangles: a quad (two triangles) in front of the plane, plus a triangle sharing an edge with the quad, whose third coordinate is projected onto the plane.

[The post included a diagram: at the top, the clipping case; below, a verification that snapping doesn't influence the final result. The gray areas are actually wrong, but the ray intersection distance there is so tiny that it doesn't matter. The thin black line is the offset you may apply to the vertices to position them in front of the near plane.]

What I meant was that there is no way to snap to the near plane without distortion without using a geometry shader. It seems silly to require a geometry shader when mirroring SHOULD work the same way.

I am glad you say my mirroring code looks OK (in my mind it looks OK too). Maybe there is something else wrong with my code; I will keep working on it...
