# GPU collision detection


## Recommended Posts

Hi,

I've been thinking about a GPU collision detection technique and I'm looking for some feedback. The technique is fairly simple and should be accurate, tunnel-proof, and able to handle complex models. Here are the basic steps:

1) First determine the bounding radius of the collider during initialization.
2) Render the depth of the collider to a small texture, from a distance equal to its bounding radius in the opposite direction of its movement, using an orthographic projection matrix.
3) Render the depth of the objects that can be collided with, from a point mirrored across a plane positioned at the collider's position with a normal equal to the movement direction, using an orthographic projection matrix.
4) Compare the depth values, taking into account the distance between the two render planes (twice the bounding radius), and look for the shortest distance.

I think this should give correct results with fairly high accuracy even at low resolution. Any feedback is welcome.
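To illustrate step 3, here is a hypothetical CPU-side sketch (not actual engine code) of how the two camera positions could be computed: the collider camera sits one bounding radius behind the collider along its movement direction, and the world-depth camera is that point reflected across the plane through the collider's position whose normal is the movement direction. All names are invented for illustration.

```python
def normalize(v):
    """Return v scaled to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def mirror_across_plane(point, plane_point, plane_normal):
    """Reflect `point` across the plane through `plane_point` with normal `plane_normal`."""
    n = normalize(plane_normal)
    # Signed distance from the point to the plane along the normal.
    d = sum((p - q) * c for p, q, c in zip(point, plane_point, n))
    return tuple(p - 2.0 * d * c for p, c in zip(point, n))

# Example setup: collider at the origin, moving along +Z, bounding radius 2 m.
collider_pos = (0.0, 0.0, 0.0)
move_dir = (0.0, 0.0, 1.0)
radius = 2.0

# Camera A: one radius behind the collider, looking along the movement direction.
cam_a = tuple(p - radius * d for p, d in zip(collider_pos, move_dir))
# Camera B: cam_a mirrored across the plane at the collider's position.
cam_b = mirror_across_plane(cam_a, collider_pos, move_dir)
```

With these inputs the two cameras end up one bounding radius on either side of the collider, facing each other, which is what step 4's "distance between the two render planes" refers to.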

---
Something I just realized is that in order to do some useful collision response, you'd need the surface normal information at the collision points. I'm not sure how that information can be retrieved efficiently; does anyone have an idea?

---
You can draw the normals during the same rendering pass that you use for depth.

The method is a bit fuzzy to me, but if I'm interpreting it correctly, here are a few things that come to mind:
- Searching for the shortest distance given two depth maps will take a long time unless the depth maps are VERY small, or you use a clever trick to speed up the search (mip-maps, maybe?)
- You might spend longer rendering all these depth maps than you'd spend colliding the objects geometrically
- Once the objects are actually colliding, the shortest distance you get from these depth maps won't give you very useful contact information
- For objects/features smaller than the size of one texel in your depth map, you'll miss collisions

---

> Searching for the shortest distance given two depth maps will take a long time unless the depth maps are VERY small, or you use a clever trick to speed up the search (mip-maps, maybe?)
I don't have a lot of experience with shader programming, but I was thinking of something along the lines of:
```hlsl
float depth1 = convertToDepth( tex2D(DepthMapObject, IN.TexCoord.xy) );
float depth2 = convertToDepth( tex2D(DepthMapWorld, IN.TexCoord.xy) );
float distance = depth2 + depth1 - distanceBetweenRenderPlanes;
// store this into some global variable if it's the first or smaller than the previous value?
```
To clarify, the two render planes lie on one line, facing opposite directions along it. I think that'd be fast enough; I'm not sure whether it's possible, though.
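As a hypothetical CPU analogue of the per-texel test above (all names invented): `depth_obj[i][j]` is the collider's depth seen from one render plane, `depth_world[i][j]` the environment's depth seen from the opposite plane, both measured toward each other, with the planes `gap` apart (twice the bounding radius). The separation along a texel's ray is then `gap - a - b`, and the minimum over all texels is the closest approach; note this is the negation of the snippet's `depth2 + depth1 - distanceBetweenRenderPlanes`, with the sign flipped so that positive means clearance and negative means penetration.

```python
def min_separation(depth_obj, depth_world, gap):
    """Smallest surface-to-surface distance over all texels.

    Negative result means the surfaces overlap along some ray.
    """
    return min(
        gap - a - b
        for row_a, row_b in zip(depth_obj, depth_world)
        for a, b in zip(row_a, row_b)
    )
```

On the GPU this per-texel subtraction would run in a pixel shader; the global `min` is the part that needs the reduction trick discussed later in the thread, since pixel shader invocations can't share a running minimum.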

> You might spend longer rendering all these depth maps than you'd spend colliding the objects geometrically
I was thinking of a single collider, though (the player colliding with the environment), so it wouldn't require too many depth maps. It still might not be faster, true; I'd like to find out whether the idea is plausible.

> Once the objects are actually colliding, the shortest distance you get from these depth maps won't give you very useful contact information
Combined with normal information, it should be enough for some basic collision response, right? I implemented Paul Nettle's collision detection algorithm a long time ago; if I recall correctly, its collision response was based on that contact information, the object position, and the velocity only, but that was for sphere-to-triangle collision. The types of collision responses might be limited; I need to think about that some more.

> For objects/features smaller than the size of one texel in your depth map, you'll miss collisions
You mean for objects other than the collider (e.g. the environment)? That's true, though they'd have to be pretty small. For example, with a bounding radius of 2 m for a player character and the depth rendered to a 200x200 texture, one texel would represent roughly a 2 by 2 cm square (the orthographic window has to span the full 4 m bounding diameter).
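The texel-size arithmetic for that example, assuming the orthographic window spans the full bounding diameter (an assumption; the numbers change if the window only spans the radius):

```python
radius = 2.0        # bounding radius in metres
resolution = 200    # texels per side of the depth map

# The orthographic window covers the bounding diameter, 2 * radius.
texel_size = (2.0 * radius) / resolution  # metres covered by one texel
# 4 m / 200 texels = 0.02 m, i.e. a 2 by 2 cm square per texel
```

Doubling the texture resolution halves the texel footprint, so accuracy can be traded directly against fill rate.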

---
It's a bit brute force, but it should work, and it nicely supports arbitrary collision shapes (and GPUs are damn good at brute force).

As mentioned above, you'd output both depth and normals from your pixel shader (either using a native depth-stencil plus a colour target, or via MRT).
Then you'd do a 3rd pass that uses something like the snippet you posted to detect if there was a collision for each pixel.
Then you'd repeatedly downsample the results (copy to a smaller texture) until you've got a 1x1 result texture. In the downsampling process, you take multiple distance/normal input samples and select/output the sample with the smallest distance value. The end result is a single distance/normal pair, which you can easily download back to the CPU for use in the rest of your physics sim -- or leave on the GPU if you've got a fully GPU-side physics simulation.

---
Thanks for the detailed reply, Hodgman. Not only would it support arbitrary collision shapes, but also animated ones without any extra effort (though I don't think that's very useful in a game environment). The downsampling part is a bit unclear to me; why is it needed?

---
> `//store this into some global variable if it's the first or smaller than the previous value?`

This kind of global variable doesn't exist, except in the latest DX11 hardware. If you want to implement this on DX9+ hardware, you've got to do it in a way where each execution of a pixel shader is isolated from all the others (no global atomic read/write variables).

So (ignoring normals for simplicity) first you render out your depth data for each object:
200*200 Render Target A : depth map 1
200*200 Render Target B : depth map 2

Then you render a render-target sized quad to a 3rd render target (using A and B as inputs), which executes your above distance-test for each pixel.
200*200 Render Target C : distance results

Now you've got 200x200 distance results, but you're only interested in the single smallest distance value.
You could download all 40,000 results to the CPU, and have it do a linear search for the smallest, but it would likely be faster to have the GPU also do this work.

So using C as an input texture, you render a quad to a 100x100 render target. The shader used on this quad takes 4 samples from the input, selects the smallest and outputs it.
100*100 Render Target D : 1/4 of the original distance results - the largest 3/4 from each group of 2x2 texels from the original data have been discarded.

Then repeat that with a 50x50:
50*50 Render Target E : 1/16 of the original distance results - the largest 15/16 from each group of 4x4 texels from the original data have been discarded.

...and then 25x25, 13x13, 7x7, 4x4, 2x2, and finally 1x1 -- and now you've got the single smallest distance value that you're looking for.

You don't have to reduce the dimensions by half at each step -- you could jump from 200x200 to 25x25 in one go, but there will be a balance somewhere that performs the best.
For example, if you went straight from 200x200 down to 1x1 in a single step (this is the way that a single-core CPU would do it), then you would have a single pixel that had to take 40k input samples and compare them by itself -- this would be an extremely slow pixel shader, as GPUs are only powerful when you spread computations over multiple pixels.
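A hypothetical CPU model of that reduction chain (function names invented): each step shrinks the grid to ceil(n/2) per side, keeping the minimum of each 2x2 block and clamping indices at the edges for odd sizes, until a single value remains.

```python
def downsample_min(grid):
    """Halve a square grid, keeping the min of each 2x2 block (edge-clamped)."""
    n = len(grid)
    m = (n + 1) // 2  # ceil(n / 2), matching 25 -> 13 -> 7 -> 4 -> 2 -> 1
    return [
        [
            min(
                grid[i][j]
                for i in (2 * y, min(2 * y + 1, n - 1))
                for j in (2 * x, min(2 * x + 1, n - 1))
            )
            for x in range(m)
        ]
        for y in range(m)
    ]

def reduce_to_min(grid):
    """Repeatedly downsample until a single (smallest) value remains."""
    while len(grid) > 1:
        grid = downsample_min(grid)
    return grid[0][0]
```

On the GPU, each `downsample_min` step corresponds to rendering a quad to the next-smaller render target with a shader that takes four samples and outputs the smallest; the log-depth chain of small passes is what keeps every pixel's workload tiny.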

---
Thank you very much, you made it crystal clear to me.

> This kind of global variable doesn't exist, except in the latest DX11 hardware. If you want to implement this on DX9+ hardware, you've got to do it in a way where each execution of a pixel shader is isolated from all the others (no global atomic read/write variables).
Yeah I was afraid of that. Thanks for the clever solution!
