OpenGL 4 ray test problem

Started by
9 comments, last by junkyska 12 years, 1 month ago
Hi everyone,
I'm currently implementing a 3D game engine in C++ and OpenGL 4. I want to do ray tests against meshes, but the mesh data resides only on the GPU, not on the CPU.
I want to trace a ray against the meshes (for example, for a gun shot) and retrieve this information: hit mesh, hit point, and hit normal.
The ray test must not become a bottleneck, so it has to be fast.

What I'm thinking is:

First cpu work:
Check the ray against my loose-octree space partition (ray vs. AABB) to discard meshes.
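The octree's internal ray-vs-AABB test is not shown in this thread, but it is usually the standard slab test; a minimal self-contained sketch (the `float3` struct and function names here are illustrative, not from any library mentioned above):

```cpp
#include <algorithm>

struct float3 { float x, y, z; };

// Slab test: returns true if the ray (origin o, direction d; d need not be
// normalized) intersects the AABB [mn, mx]. tHit receives the entry distance.
bool RayIntersectsAABB(const float3 &o, const float3 &d,
                       const float3 &mn, const float3 &mx, float &tHit)
{
    float tMin = 0.0f, tMax = 1e30f;
    const float ro[3]  = { o.x, o.y, o.z };
    const float rd[3]  = { d.x, d.y, d.z };
    const float bmn[3] = { mn.x, mn.y, mn.z };
    const float bmx[3] = { mx.x, mx.y, mx.z };
    for (int i = 0; i < 3; ++i)
    {
        if (rd[i] != 0.0f)
        {
            float invD = 1.0f / rd[i];
            float t0 = (bmn[i] - ro[i]) * invD;
            float t1 = (bmx[i] - ro[i]) * invD;
            if (t0 > t1) std::swap(t0, t1);
            tMin = std::max(tMin, t0);
            tMax = std::min(tMax, t1);
            if (tMin > tMax) return false; // Slab intervals no longer overlap.
        }
        else if (ro[i] < bmn[i] || ro[i] > bmx[i])
            return false; // Ray is parallel to this slab and outside it.
    }
    tHit = tMin;
    return true;
}
```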

Second gpu work:
Here is where I don't know exactly what to do, and I can't find much useful information.
What I was thinking is: create a render target with two textures, one RGB (acting as the color attachment) where I will store the depth distance, and one RGB32F where I will store the surface normal. The textures will be 1x1 pixels.
Then I can build an orthographic projection with the near plane at Ray.start and the far plane at Ray.end, with the view matrix looking along Ray.dir.
To check each mesh, I will:
1- bind the render target,
2- bind the orthographic view matrix,
3- For each mesh:
---- 4- bind the model matrix,
---- 5- clear the render target,
---- 6- draw the mesh with a custom shader (that writes into these two textures),
---- 7- retrieve the texture contents,
---- 8- Hit point = Ray.start + (Normalize(Ray.dir) * toFloat(texture1.rgb))
---- 9- Hit normal = texture2.rgb32f
---- 10- Hit mesh = current mesh
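The reconstruction in step 8 is just a point-along-ray computation; a minimal sketch, assuming the shader wrote a linear ray-start-to-fragment distance (not a nonlinear depth-buffer value) into the texture (the `float3` type and function name are illustrative):

```cpp
#include <cmath>

struct float3 { float x, y, z; };

// Step 8 above: reconstruct the world-space hit point from the scalar
// distance read back from the 1x1 target. 'depth' is assumed to be a
// world-space distance measured along the ray from rayStart.
float3 ReconstructHitPoint(const float3 &rayStart, const float3 &rayDir, float depth)
{
    float len = std::sqrt(rayDir.x*rayDir.x + rayDir.y*rayDir.y + rayDir.z*rayDir.z);
    float3 n = { rayDir.x/len, rayDir.y/len, rayDir.z/len }; // Normalize(Ray.dir)
    return { rayStart.x + n.x*depth,
             rayStart.y + n.y*depth,
             rayStart.z + n.z*depth };
}
```

Note that an 8-bit RGB attachment gives very coarse depth precision; a float texture (like the RGB32F one planned for normals) would hold the distance much more accurately.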

I've never done this before, so I suspect this is not correct. I'm really lost and can't find any useful information. Can someone help me, please?
I will appreciate any information.

Sorry for my bad English.

Thanks to all.

Just out of interest, although I've never done this before, doesn't OpenGL have a name stack that's used with a render mode of GL_SELECT? I don't know if this is in the core profile or not. I suspect not.

Hi!

I've had very good experiences with OptiX and have used it in my last 3 or 4 projects. OptiX is a very flexible ray tracing framework from NVIDIA. You can hand it some vertex buffers and it will generate traversal data structures on its own (kd-trees and several kinds of BVH trees), which let you easily trace 100k rays in a few milliseconds. Perhaps this is a little overkill in your case, but once you get it running it is quite easy to use.

The way you described sounds complicated and a little like the old days, when we had to solve every computation task with rasterization.
I strongly recommend against a GPU-based solution, even if your mesh data currently resides only on the GPU.

I use a QuadTree for my scene container (from MathGeoLib), which stores the AABBs of objects, and for each object I prebuild a kd-tree for raycasting. The kd-tree contains the triangles of the mesh. Testing on the Stanford Bunny (about 70k tris) typically gives raycast times of 0.001 ms - 0.01 ms per ray. To perform a raycast in the whole scene, I first traverse the QuadTree, and then for the object AABBs that the ray hits, I perform a kd-tree search.

The kd-tree approach works for static, unskinned geometry. For skeletally animated meshes, an animated OBB-tree/cylinder-tree could be used instead.

If you go with a GPU rasterization based approach, you would probably render the whole scene to the 1x1 render target, with a 1x1 depth buffer, and draw the object IDs to the color target. I think even for single meshes, getting the results back will take on the order of 0.1 ms minimum, and for whole scenes it'll probably take 0.5 ms or more of CPU time alone just to query and submit the whole scene for rendering. Additionally, you will have to stall to get the results back, and then your CPU and GPU will run in lockstep for part of the frame, with the GPU sitting idle while it waits to receive the rest of the frame's rendering commands.

So in summary, I'd definitely keep a CPU-side copy of the geometry data (positions only; no need to store UVs, normals, etc. if you don't need them), and build the appropriate spatial acceleration structures to optimize the query speeds.

(I initially considered creating low-resolution meshes solely for raycasts, to reduce the triangle count for the CPU processing, but since the kd-tree turned out to be so fast, I never bothered with it. It's another option to consider, though.)
Oh, and the code for the kd-tree ray query is from Vlastimil Havran's thesis.
Wow, so much information!! OK, I will go for a CPU approach. The bad news for me is that I have to touch some class structures and implement a few things (actually more than a few things... hehe). I know about kd-trees; what I don't understand is animated meshes. Because of the bones, I will have to move the triangles before performing a raycast, and I don't know any tree structure that can do this fast...
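For reference, "moving the triangles" for a skeletal mesh on the CPU amounts to linear-blend skinning each vertex before testing the ray. A minimal sketch, assuming bone transforms stored as plain 3x4 matrices that already include the inverse bind pose (all names here are illustrative, not from any library in this thread):

```cpp
struct float3 { float x, y, z; };

// A bone transform as a row-major 3x4 matrix (rotation + translation).
struct Mat34 { float m[3][4]; };

float3 Transform(const Mat34 &M, const float3 &v)
{
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3],
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3],
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3] };
}

// Linear-blend skinning: v' = sum_i w_i * (Bone_i * v).
float3 SkinVertex(const float3 &v, const Mat34 *bones,
                  const int *boneIndex, const float *weight, int numInfluences)
{
    float3 out = { 0, 0, 0 };
    for (int i = 0; i < numInfluences; ++i)
    {
        float3 t = Transform(bones[boneIndex[i]], v);
        out.x += weight[i] * t.x;
        out.y += weight[i] * t.y;
        out.z += weight[i] * t.z;
    }
    return out;
}
```

Since the skinned positions change every frame, a kd-tree prebuilt over them would be invalidated immediately, which is why the reply above suggests coarser animated bounding volumes for skeletal meshes instead.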

For the AABB check I currently have a fixed-size loose octree space partition implementation (currently used for frustum culling).

I had never seen MathGeoLib; I'm currently using GLM, but GLM doesn't have kd-trees and that kind of thing. I think I can integrate MathGeoLib for raycasting objects and kd-trees for the static meshes. I will read up on this library.

Thanks a lot.
Another question: I can't find much information about MathGeoLib. I see in the source code that it implements a QuadTree and a KdTree. Do you have some examples or something I can learn from?
When I linked to the MathGeoLib implementation, I did not mean to imply that it would be a ready-made drop-in solution to use for your projects, mostly because MathGeoLib is not a library that is actively maintained for external developers to use. It is more intended as a source repository for copy-paste snippets of math algorithms for teaching purposes. That said, there is no reason why you could not use the whole library, but it may require some configuration to adapt it to your build system.

The spatial containers implementation in MathGeoLib is a very recent addition. Unfortunately there are no examples available in the documentation on how to use them. If you want to use them from MathGeoLib, I recommend you dig into them well to get familiar with the implementation, because you will probably need to customize them for your own use.

Here is how I use the KdTree class:

KdTree<Triangle> kdTree; // 1. Instantiate a new empty KdTree. The KdTree object behaves much like other containers such as std::vector.

// 2. Populate the tree with triangles. This can be done one-by-one, or if the
// data is contiguous in memory, in one go.
for(size_t i = 0; i < triangles.size(); ++i)
    kdTree.AddObjects(&triangles[i], 1);

// 3. 'Build' the KdTree to generate the spatial index. Before this step, the KdTree is not usable.
kdTree.Build();

// Now we're ready to do queries:
Ray ray;
ray.pos = float3(1,2,3);
ray.dir = float3(1,0,0);
FindNearestIntersection fni;
kdTree.RayQuery(ray, fni);
std::cout << "Hit triangle at index " << fni.nearestIndex << " at distance " << fni.nearestD << " along the ray." << std::endl;


To do a query in the last step, you must implement a query 'callback' object, which the KdTree class calls for each node it traverses. I use the FindNearestIntersection query object, which can be found here.
Wow, thanks for all that information! I really appreciate it!

I will follow your advice and go for a kd-tree for the static meshes. I will study MathGeoLib to understand its implementation.
Just another question: how do you deal with transformations (scale, rotation, translation) and the kd-tree? Do you transform the ray to local coordinates before the ray query?
Yes, that is the norm. Since the kd-tree stores static, pregenerated geometry in object-local space, rays are transformed from world space to each object's local space before raycasting into the kd-tree. The result is usually desired in world space, so the local-space hit position is then transformed back to world space.
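That round trip can be sketched as follows, assuming a rigid object transform (rotation plus translation, no scale, so the inverse rotation is just the transpose); with scaling you would invert the full matrix instead. The types and names here are illustrative:

```cpp
struct float3 { float x, y, z; };

// Object world transform: row-major 3x3 rotation plus translation.
struct RigidTransform { float R[3][3]; float3 t; };

static float3 Mul(const float R[3][3], const float3 &v)
{
    return { R[0][0]*v.x + R[0][1]*v.y + R[0][2]*v.z,
             R[1][0]*v.x + R[1][1]*v.y + R[1][2]*v.z,
             R[2][0]*v.x + R[2][1]*v.y + R[2][2]*v.z };
}

// For a rigid transform, multiplying by the transpose applies the inverse rotation.
static float3 MulTransposed(const float R[3][3], const float3 &v)
{
    return { R[0][0]*v.x + R[1][0]*v.y + R[2][0]*v.z,
             R[0][1]*v.x + R[1][1]*v.y + R[2][1]*v.z,
             R[0][2]*v.x + R[1][2]*v.y + R[2][2]*v.z };
}

// World-space ray -> object-local ray: positions get the full inverse
// transform, directions only the inverse rotation.
void WorldRayToLocal(const RigidTransform &obj, const float3 &posW, const float3 &dirW,
                     float3 &posL, float3 &dirL)
{
    float3 p = { posW.x - obj.t.x, posW.y - obj.t.y, posW.z - obj.t.z };
    posL = MulTransposed(obj.R, p);
    dirL = MulTransposed(obj.R, dirW);
}

// Local-space hit point -> world space, for reporting the result.
float3 LocalPointToWorld(const RigidTransform &obj, const float3 &pL)
{
    float3 p = Mul(obj.R, pL);
    return { p.x + obj.t.x, p.y + obj.t.y, p.z + obj.t.z };
}
```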

This topic is closed to new replies.
