Thesis idea: offline collision detection

Hi
I'm currently working on my second degree in computer science, and I have a thesis idea - but neither I nor my advisor have enough experience in the field of collision detection to know whether it is relevant or interesting today.
The idea is, basically, to compute the point of collision of a specific pair of complex objects offline, in various situations - angles, velocities etc. (yes, MANY combinations and a lot of storage).
When the bounding spheres of these objects are predicted to collide in real-time, the point of collision can be extracted from the pre-computed database, and no real-time detection is required.
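To make that concrete, the runtime side would reduce to a table lookup. A minimal sketch of what I mean (the names, key layout, and quantization scheme are all placeholders, not a worked-out design):

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Hypothetical sketch: quantize the relative pose of object B with
// respect to object A into discrete bins and use it as a table key.
struct ContactInfo {
    float point[3];   // precomputed contact point, in A's local frame
    float normal[3];  // precomputed contact normal
};

struct PoseKey {
    std::int16_t yaw, pitch, roll;  // quantized relative orientation
    std::int16_t dx, dy, dz;        // quantized relative offset
    bool operator==(const PoseKey& o) const {
        return yaw == o.yaw && pitch == o.pitch && roll == o.roll &&
               dx == o.dx && dy == o.dy && dz == o.dz;
    }
};

struct PoseKeyHash {
    std::size_t operator()(const PoseKey& k) const {
        std::size_t h = 0;  // cheap hash combine over the six components
        for (int v : {k.yaw, k.pitch, k.roll, k.dx, k.dy, k.dz})
            h = h * 31 + static_cast<std::size_t>(v + 32768);
        return h;
    }
};

// The offline pass fills this table; the runtime pass only reads it.
std::unordered_map<PoseKey, ContactInfo, PoseKeyHash> contactTable;

std::optional<ContactInfo> lookupContact(const PoseKey& key) {
    auto it = contactTable.find(key);
    if (it == contactTable.end())
        return std::nullopt;  // poses in this bin never collide
    return it->second;
}
```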
The goal, as you might have gathered, is to increase FPS, allow collisions between more complex objects, and to enable collisions in environments with limited resources.
Assume that I've thought about the implementation and have a rough idea of how to do this in a feasible way.
This technique obviously imposes various limitations. The main ones are that the objects must be available beforehand, must not distort in real-time, and it's only feasible to compute the collisions of few pairs of bodies.
My question is: is collision detection between highly complex meshes still considered a problematic computational bottleneck today? Is it problematic enough that a very limited method, one that nevertheless allows very fast extraction of the point of collision, is at the very least interesting?
Do you know whether this has been done before?
What are the top-notch methods today to efficiently compute collisions of complex bodies?
Thanks so very much in advance,
I'd be grateful for any thoughts on the matter.

I don't feel this is possible.

You'd need 2 transformation matrices per situation. Even at an accuracy of 1 degree you get 360 * 360 * 360 = 46,656,000 combinations. Add a matrix size of 16 floats * 4 bytes * 2 matrices = 128 bytes, and suddenly it's roughly 6 gigabytes of data, and you still haven't taken translation or scaling into account.
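For reference, the back-of-the-envelope arithmetic:

```cpp
#include <cstdio>

int main() {
    // One combination per degree on each of the three rotation axes.
    long long combos = 360LL * 360 * 360;     // 46,656,000
    // Two 4x4 float matrices per combination: 16 floats * 4 bytes * 2.
    long long bytesEach = 16 * 4 * 2;         // 128 bytes
    long long total = combos * bytesEach;     // 5,971,968,000 bytes
    std::printf("%lld combinations -> %.2f GB\n", combos, total / 1e9);
}
```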

Unless I have the wrong idea...

I'm currently working on my second degree in computer science, and I have a thesis idea - but neither I nor my advisor have enough experience in the field of collision detection to know whether it is relevant or interesting today.

This is bad. Not only do you have to spend a significant amount of time learning what the state of the art actually is, but your advisor also can't help you if you run into a problem. You should choose a field that you are familiar with, where you know the state-of-the-art techniques and have implemented a few of them.

The idea is [snip]

As Zaoshi Kaba pointed out, you will likely need a significant amount of memory. Also consider that collision detection is only half of the job. You also have to respond to collisions, and if the information from the detection step (i.e. the contact pairs) is at all inaccurate, you can get into big trouble when responding to them.

The goal, as you might have gathered, is to increase FPS, allow collisions between more complex objects, and to enable collisions in environments with limited resources.

Actually, the two most limited resources today are memory and memory bandwidth; by comparison, we have ample compute power. You are suggesting to save compute, of which we have a lot, and trade it for memory and bandwidth, which we are currently lacking. So environments with "limited resources" are probably exactly those where your idea will work the least.

My question is: is collision detection between highly complex meshes still considered a problematic computational bottleneck today? Is it problematic enough that a very limited method, one that nevertheless allows very fast extraction of the point of collision, is at the very least interesting?
[...]
What are the top-notch methods today to efficiently compute collisions of complex bodies?

The top-notch method is to not use complex bodies. The player can usually be approximated quite well by an ellipsoid, at least for player-vs-world collision detection. The static world is probably the most complex "body", but even that is a heavily simplified version of what the graphics show you. And even there, I really don't see how you would store precomputed collisions.
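For comparison, the ellipsoid trick is cheap enough that there is little worth precomputing. A rough sketch (plane test only; a full swept test also needs the edge and vertex cases):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Divide by the ellipsoid's radii: the ellipsoid becomes a unit sphere.
static Vec3 toEllipsoidSpace(Vec3 p, Vec3 radii) {
    return { p.x / radii.x, p.y / radii.y, p.z / radii.z };
}

// Unit sphere vs. triangle plane. The triangle is transformed first, so
// the non-uniform scale is handled correctly.
bool sphereTouchesTrianglePlane(Vec3 center, Vec3 a, Vec3 b, Vec3 c, Vec3 radii) {
    Vec3 sc = toEllipsoidSpace(center, radii);
    Vec3 ta = toEllipsoidSpace(a, radii);
    Vec3 tb = toEllipsoidSpace(b, radii);
    Vec3 tc = toEllipsoidSpace(c, radii);
    Vec3 n = cross(sub(tb, ta), sub(tc, ta));
    float len = std::sqrt(dot(n, n));
    if (len == 0.0f) return false;                  // degenerate triangle
    n = { n.x / len, n.y / len, n.z / len };
    return std::fabs(dot(sub(sc, ta), n)) <= 1.0f;  // unit sphere radius
}
```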

Assume that I've thought about the implementation and have a rough idea of how to do this in a feasible way.

Would you care to elaborate? Maybe if we knew more about the properties and capabilities, we could think of a use for it.

Even if the memory requirements are enormous in 3D, this might still be an interesting experiment that opens the discussion to other methods. In 2D it might be feasible: you can precompute the Minkowski sum for each object at every rotation to simplify things down to a point-vs-world simulation. If you could interpolate the stored state between different rotations, you might end up with a very fast solution. There is already research on dynamic Minkowski sums. It sounds like this would take a lot of research, though, and a lot of memory.
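For the offline step, a brute-force sketch of the 2D Minkowski sum (assumes both polygons are convex; it sums every vertex pair and takes the hull via Andrew's monotone chain, O(nm log nm), which is fine offline; an O(n+m) edge merge exists if speed matters). For collision tests you would feed it A and a negated copy of B:

```cpp
#include <algorithm>
#include <vector>

struct P2 { double x, y; };

static double cross(const P2& o, const P2& a, const P2& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

std::vector<P2> minkowskiSum(const std::vector<P2>& A, const std::vector<P2>& B) {
    std::vector<P2> pts;
    for (const P2& a : A)                 // sum every vertex pair
        for (const P2& b : B)
            pts.push_back({ a.x + b.x, a.y + b.y });
    if (pts.size() < 3) return pts;

    std::sort(pts.begin(), pts.end(), [](const P2& p, const P2& q) {
        return p.x < q.x || (p.x == q.x && p.y < q.y);
    });

    // Andrew's monotone chain convex hull.
    std::vector<P2> hull(2 * pts.size());
    std::size_t k = 0;
    for (std::size_t i = 0; i < pts.size(); ++i) {                 // lower hull
        while (k >= 2 && cross(hull[k-2], hull[k-1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (std::size_t i = pts.size() - 1, t = k + 1; i > 0; --i) {  // upper hull
        while (k >= t && cross(hull[k-2], hull[k-1], pts[i-1]) <= 0) --k;
        hull[k++] = pts[i-1];
    }
    hull.resize(k - 1);                   // last point repeats the first
    return hull;
}
```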

Sounds like you should pick a different topic if your advisor can't help you. That's just crazy.

Also, for real-time applications, performance bottlenecks are largely memory-access related. For that reason, your idea doesn't sound like it applies well to the reality of real-time applications.

A very clever compression mechanism would be required... This is basically just a big table/dictionary, with a key representing the relative orientation/position of two objects, which you could probably squeeze down to ~100 bits, and a boolean response: touching / not touching.

If the key was 72 bits, that's only 2^72 ≈ 4.7 × 10^21 bits worth of values...

However, all the values are either 0 or 1, so if you can find and exploit some kind of pattern that lets you organize the keys such that the values become sorted, it would compress incredibly well.
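To illustrate, here's one way such a 72-bit key could be packed: six pose components quantized to 12 bits each (4096 bins per axis; the exact split is an arbitrary assumption):

```cpp
#include <cstdint>
#include <utility>

// 72 bits stored as a 64-bit word plus an 8-bit overflow byte.
using Key72 = std::pair<std::uint64_t, std::uint8_t>;

// c[0..2]: quantized relative orientation, c[3..5]: quantized offset.
Key72 makeKey(const std::uint16_t c[6]) {
    std::uint64_t lo = 0;
    std::uint8_t hi = 0;
    for (int i = 0; i < 6; ++i) {
        std::uint64_t v = c[i] & 0xFFF;   // keep 12 bits per component
        int shift = i * 12;               // 0, 12, 24, 36, 48, 60
        lo |= v << shift;
        if (shift + 12 > 64)              // last component straddles bit 64
            hi = static_cast<std::uint8_t>(v >> (64 - shift));
    }
    return { lo, hi };
}
```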

Maybe instead of storing boolean results, you could store intersection distances (a positive value means the objects are that distance away from touching, a negative value means they are penetrating by that distance). This would be a six-dimensional distance field. The field itself would be extremely smooth in most places, meaning a low rate of change and extremely low-frequency detail, which also means it will compress very well. It doesn't have to be lossless compression either: lossy compression of distance fields is common in computer graphics and is often good enough. You could throw away the majority of the data, keeping only a small fraction of points that define the field, and interpolate the rest from those sparse points.
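As a one-dimensional illustration of that sparse-sampling idea (a real version would interpolate multilinearly across all six dimensions):

```cpp
#include <algorithm>
#include <vector>

// Sparse samples of the distance field along one configuration axis,
// sorted by t. dist > 0: that far from touching; dist < 0: penetrating.
struct Sample { float t; float dist; };

float sampleDistance(const std::vector<Sample>& field, float t) {
    if (t <= field.front().t) return field.front().dist;
    if (t >= field.back().t)  return field.back().dist;
    auto hi = std::lower_bound(field.begin(), field.end(), t,
        [](const Sample& s, float v) { return s.t < v; });
    auto lo = hi - 1;
    float u = (t - lo->t) / (hi->t - lo->t);      // interpolation weight
    return lo->dist + u * (hi->dist - lo->dist);  // linear reconstruction
}

bool touching(const std::vector<Sample>& field, float t) {
    return sampleDistance(field, t) <= 0.0f;      // negative = penetrating
}
```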

A similar idea was explored by this paper, which samples the high-DOF configuration space between an object pair. In practice it doesn't seem to be very robust, and it requires a long precompute and a lot of storage. Geometric techniques are much more accurate and robust.
