Mesh Intersect Inside Shader ?

1 comment, last by MJP 6 years, 2 months ago

Hi
I have a procedurally generated tiled landscape, and want to apply 'regional' information to the tiles at runtime; so forests, roads - pretty much anything that could be defined as a 'region'. Up until now I've done this by creating a mesh defining the 'region' on the CPU and interrogating that mesh during landscape tile generation; I then add regional information to the landscape tile via a series of per-vertex boolean properties. For each landscape tile vertex I do a ray-mesh intersect against the 'region' mesh and get some value from that mesh.

For example my landscape vertex could be;


struct Vtx
{
    Vector3 Position;
    bool IsForest;
    bool IsRoad;
    bool IsRiver;
};

I would then have a region mesh defining a forest, another defining rivers, etc. When generating my landscape vertices I do an intersect check against the various 'region' meshes to see what kind of landscape each vertex falls within.

My ray-mesh intersect code isn't particularly fast, and there may be many 'region' meshes to interrogate, so I want to see if I can move this work onto the GPU: when I create a set of tile vertices I would call a compute (or other) shader, pass the region mesh to it, and interrogate that mesh inside the shader. The output would be a buffer where all the landscape vertex boolean values have been filled in.

The way I see this being done is to pass two RWStructuredBuffers to a compute shader: one containing the landscape vertices, and the other containing some definition of the region mesh (possibly the region might consist of two buffers containing a set of positions and indices). The compute shader would do a ray-mesh intersect check for each landscape vertex and set the boolean flags on a corresponding output buffer.
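To make the idea concrete, here is a CPU-side C++ sketch of the per-vertex test each compute-shader thread would perform (the function names and the choice of a vertical "up" ray are my own illustrative assumptions, not part of the original post): a Möller–Trumbore ray/triangle intersection, brute-forced over the region mesh's index list. The same logic translates fairly directly to HLSL over the two structured buffers described above.

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle intersection; returns true on a hit with t >= 0.
bool RayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;      // ray parallel to triangle
    float invDet = 1.0f / det;
    Vec3 tv = sub(orig, v0);
    float u = dot(tv, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(tv, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    return dot(e2, q) * invDet >= 0.0f;           // hit must be in front of the origin
}

// Brute-force "is this vertex inside the region?" query: fire a vertical ray
// from the landscape vertex and see whether it hits any region triangle.
bool VertexInRegion(Vec3 vtx, const std::vector<Vec3>& verts,
                    const std::vector<std::array<int, 3>>& tris)
{
    Vec3 up = {0.0f, 1.0f, 0.0f};
    for (const auto& t : tris)
        if (RayTriangle(vtx, up, verts[t[0]], verts[t[1]], verts[t[2]]))
            return true;
    return false;
}
```

In a compute shader the outer loop over landscape vertices disappears (one thread per vertex), and only the loop over region triangles remains per thread.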

In theory this is a parallelisable operation (no landscape vertex relies on another for its values), but I've not seen any examples of a ray-mesh intersect being done in a compute shader; so I'm wondering if my approach is wrong, and the reason I've not seen any examples is that no-one does it that way. If anyone can comment on:

  • Is this a really bad idea?
  • If no-one does it that way, does everyone use a texture to define this kind of 'region' information?
    • If so - given I've only got a small number of possible region types, what texture format would be appropriate, as 32 bits seems really wasteful?
  • Is there another common approach to adding information to a basic height-mapped tile system that would perform well for runtime-generated tiles?
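On the "32 bits seems wasteful" point, one common trick (sketched here with illustrative names; the flag set is just the three booleans from the struct above) is to pack the per-vertex region booleans into a single byte, which maps onto a one-byte texture format such as DXGI_FORMAT_R8_UINT or a single byte in a vertex/structured buffer:

```cpp
#include <cstdint>

// Region flags packed into one byte: up to 8 region types per texel/vertex.
enum RegionFlags : uint8_t {
    kForest = 1 << 0,
    kRoad   = 1 << 1,
    kRiver  = 1 << 2,
};

inline uint8_t PackRegions(bool forest, bool road, bool river)
{
    uint8_t bits = 0;
    if (forest) bits |= kForest;
    if (road)   bits |= kRoad;
    if (river)  bits |= kRiver;
    return bits;
}

inline bool HasRegion(uint8_t bits, RegionFlags flag)
{
    return (bits & flag) != 0;
}
```

Adding a new region type is then just a new bit, rather than a new boolean field per vertex.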

Thanks

Phillip


Ray/mesh intersection is totally doable on a GPU, and there has been a whole lot of research into how to do it quickly. There are even entire libraries/frameworks like Nvidia's OptiX and AMD's Radeon Rays that are aimed at doing arbitrary ray/mesh intersections for the purpose of ray tracing. There are also libraries like Intel's Embree that aim to give you really fast ray/mesh intersection on a CPU.

For really fast ray/mesh intersections on both CPU and GPU you'll generally need to build some sort of acceleration structure that lets you avoid doing an O(N) lookup over all of your triangles. BVH trees tend to be popular for this purpose, although there are other structures with different trade-offs in terms of lookup speed, memory usage, and build time. All three of the libraries I mentioned have "builders" that can take an arbitrary triangle mesh and build an acceleration structure for you, which you can then trace rays against. If you think you'd like to try one of these, I would suggest starting with Embree. It's very easy to integrate and use, and it's definitely going to be faster than a brute-force ray/mesh query. However, it also takes some time to build the acceleration structure, which could cancel out the performance improvement.

If you decide to try out your compute shader approach (which doesn't sound totally crazy), you could probably get a lot of mileage by building even a simple acceleration structure for your data in order to avoid an O(N) scenario for each query. For instance, you can bin all of your feature meshes into a uniform grid based on a simple bounding box or sphere fit to the mesh. Then for each terrain vertex you could see which bin(s) it intersects, and grab the list of feature meshes for that grid cell.
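A minimal sketch of that uniform-grid binning (all names are illustrative): each feature mesh is reduced to a 2D bounding box in the terrain's XZ plane and its index recorded in every grid cell the box overlaps, so a terrain-vertex query only has to test the meshes listed in its own cell.

```cpp
#include <algorithm>
#include <vector>

struct Box2 { float minX, minZ, maxX, maxZ; };

struct FeatureGrid {
    int cols, rows;
    float cellSize;
    std::vector<std::vector<int>> cells;  // per-cell list of feature-mesh ids

    FeatureGrid(int c, int r, float size)
        : cols(c), rows(r), cellSize(size), cells(c * r) {}

    // Record meshId in every cell overlapped by its bounding box.
    void Insert(int meshId, Box2 b) {
        int x0 = std::max(0, (int)(b.minX / cellSize));
        int z0 = std::max(0, (int)(b.minZ / cellSize));
        int x1 = std::min(cols - 1, (int)(b.maxX / cellSize));
        int z1 = std::min(rows - 1, (int)(b.maxZ / cellSize));
        for (int z = z0; z <= z1; ++z)
            for (int x = x0; x <= x1; ++x)
                cells[z * cols + x].push_back(meshId);
    }

    // Candidate feature meshes for a terrain vertex at (x, z).
    const std::vector<int>& Query(float x, float z) const {
        int cx = std::min(cols - 1, std::max(0, (int)(x / cellSize)));
        int cz = std::min(rows - 1, std::max(0, (int)(z / cellSize)));
        return cells[cz * cols + cx];
    }
};
```

The candidate list from Query() would then feed the brute-force (or BVH-backed) ray/mesh test for just those few meshes.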

Another option you can consider is to flip the problem around, and rasterize your feature meshes instead of performing ray casts. This approach is commonly used for runtime decal systems (except in screen space instead of in heightmap space), and it's not too hard to get going. Assuming that your terrain is a 2D height field, you would want to set up an orthographic projection that renders your terrain "top-down". Then you could rasterize your feature meshes using that projection, and in the pixel shader you could either write out some value to a render target, or use a UAV to write the info that you need to a buffer or texture.
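To illustrate the rasterization idea without any GPU setup, here is a hedged CPU sketch: it stamps a feature triangle's 2D footprint into a heightmap-space mask by testing each cell centre with edge functions, which is essentially what the rasterizer would be doing for you in the top-down pass. (Function names and the unit-sized grid are my own assumptions.)

```cpp
#include <cstdint>
#include <vector>

// Signed area test: which side of edge a->b the point p lies on.
static float Edge(float ax, float az, float bx, float bz, float px, float pz)
{
    return (bx - ax) * (pz - az) - (bz - az) * (px - ax);
}

// mask is a width*height grid covering [0,width) x [0,height) in terrain XZ.
// Marks every cell whose centre falls inside the triangle (a, b, c).
void StampTriangle(std::vector<uint8_t>& mask, int width, int height,
                   float ax, float az, float bx, float bz, float cx, float cz)
{
    for (int z = 0; z < height; ++z)
        for (int x = 0; x < width; ++x) {
            float px = x + 0.5f, pz = z + 0.5f;
            float w0 = Edge(ax, az, bx, bz, px, pz);
            float w1 = Edge(bx, bz, cx, cz, px, pz);
            float w2 = Edge(cx, cz, ax, az, px, pz);
            // Accept either winding order.
            bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                          (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (inside) mask[z * width + x] = 1;
        }
}
```

On the GPU the double loop is free (it's the rasterizer), and the write becomes a render-target or UAV write of the region's bit.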


This topic is closed to new replies.
