Thesis idea: offline collision detection



#1 rugrat   Members   -  Reputation: 114


Posted 17 August 2014 - 05:13 AM

Hi
I'm currently working on my second degree in computer science, and I have a thesis idea - but neither I nor my advisor have enough experience in the field of collision detection to know whether it is relevant or interesting today.
 
The idea is, basically, to compute the point of collision of a specific pair of complex objects offline, in various situations - angles, velocities etc. (yes, MANY combinations and a lot of storage).
When the bounding spheres of these objects are predicted to collide in real-time, the point of collision can be extracted from the pre-computed database, and no real-time detection is required.
The goal, as you might have gathered, is to increase FPS, allow collisions between more complex objects, and to enable collisions in environments with limited resources.
 
Assume that I've thought about the implementation, and have a rough idea of how to do this in a feasible way.
 
This technique obviously imposes various limitations. The main ones are that the objects must be available beforehand, must not deform in real-time, and that it's only feasible to compute the collisions of a few pairs of bodies.
 
My question is: is collision detection today, between highly complex meshes, still considered a problematic computational bottleneck? Is it problematic enough that a very limited method that nevertheless allows very fast extraction of the point of collision is, at the very least, interesting?
 
Do you know whether this has been done before?
What are the top-notch methods today to efficiently compute collisions of complex bodies?
 
Thanks so very much in advance,
I'd be grateful for any thoughts on the matter.

Edited by rugrat, 17 August 2014 - 05:14 AM.



#2 Zaoshi Kaba   Crossbones+   -  Reputation: 4606


Posted 17 August 2014 - 07:13 AM

I don't feel this is possible.

You'd need 2 transformation matrices per situation. Even at an accuracy of 1 degree per rotation axis you get 360 * 360 * 360 = 46,656,000 combinations. Multiply that by the matrix size of 16 floats * 4 bytes * 2 matrices = 128 bytes, and it's suddenly about 6 gigabytes of data, and you still haven't taken translation or scaling into account.
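Zaoshi's estimate checks out; as a sanity check (using his assumed 1-degree sampling and two 4x4 float32 matrices per entry):

```python
# One situation = two 4x4 float32 transforms; orientations sampled at 1 degree
# on three rotation axes, ignoring translation and scaling entirely.
combinations = 360 ** 3               # 46,656,000 orientation combinations
bytes_per_entry = 16 * 4 * 2          # 16 floats * 4 bytes * 2 matrices = 128
total_bytes = combinations * bytes_per_entry
print(combinations)                   # 46656000
print(total_bytes / 2 ** 30)          # ~5.56 GiB, i.e. roughly 6 GB
```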

 

Unless I have the wrong idea...



#3 Ohforf sake   Members   -  Reputation: 1832


Posted 17 August 2014 - 10:33 AM

I'm currently working on my second degree in computer science, and I have a thesis idea - but neither I nor my advisor have enough experience in the field of collision detection to know whether it is relevant or interesting today.

This is bad. Not only do you have to spend a significant amount of time learning what exactly the state of the art is, but your advisor can't help you at all if you run into a problem. You should choose a field that you are familiar with, where you know the state-of-the-art techniques and have implemented a few of them.

The idea is [snip]

As Zaoshi Kaba pointed out, you will likely need a significant amount of memory. Also consider that collision detection is only half of the job: you also have to respond to collisions, and if the information from the collision detection step (i.e. the contact pairs) is somehow inaccurate, you can get into big trouble when responding to them.

The goal, as you might have gathered, is to increase FPS, allow collisions between more complex objects, and to enable collisions in environments with limited resources.

Actually, the two most limited resources today are memory and memory bandwidth; compute power, by comparison, is plentiful. You are suggesting to save compute power, of which we have a lot, and trade it for memory and bandwidth, which we are currently lacking. So environments with "limited resources" are probably exactly those where your idea will work the least.

My question is: is collision detection today, between highly complex meshes, still considered a problematic computational bottleneck? Is it problematic enough that a very limited method that nevertheless allows very fast extraction of the point of collision is, at the very least, interesting?
[...]
What are the top-notch methods today to efficiently compute collisions of complex bodies?

The top-notch method is to not use complex bodies. The player can usually be approximated pretty well by an ellipsoid, at least for player-vs-world collision detection. The static world is probably the most complex "body", but even that is just a very simplified version of what the graphics show you. And even there, I really don't see how you would store precomputed collisions for it.
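The ellipsoid approximation Ohforf mentions is cheap because an axis-aligned ellipsoid test is just a sphere test in scaled space. A minimal sketch (the radii and test points are made-up values, not anything from the thread):

```python
import math

def point_in_ellipsoid(p, center, radii):
    """Map space so the axis-aligned ellipsoid becomes the unit sphere,
    then do an ordinary point-in-sphere test."""
    x, y, z = ((pi - ci) / ri for pi, ci, ri in zip(p, center, radii))
    return math.sqrt(x * x + y * y + z * z) <= 1.0

# A rough "player" ellipsoid: half-extents 0.4 m wide, 0.9 m tall.
player = ((0.0, 0.0, 0.0), (0.4, 0.9, 0.4))
print(point_in_ellipsoid((0.0, 0.8, 0.0), *player))  # True: inside along the tall axis
print(point_in_ellipsoid((0.5, 0.0, 0.0), *player))  # False: outside the narrow axis
```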

Assume that I've though about the implementation, and have a rough idea of how to do this in a feasible way.

Would you care to elaborate? If we knew more about its properties and capabilities, maybe we could think of a use for it.

#4 Sirisian   Crossbones+   -  Reputation: 1793


Posted 17 August 2014 - 06:38 PM

Even if your memory requirements were enormous for 3D, this might still be an interesting experiment that opens the discussion to other methods. In 2D it might be feasible. You can precompute the Minkowski sum for each object at every rotation to simplify things down to a point-vs-world simulation. If you could interpolate the stored state between different rotations, you might end up with a very fast solution. There is already research into dynamic Minkowski summations. It sounds like this would take a lot of research, though, and a lot of memory.
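As a rough illustration of the 2D reduction Sirisian describes (the polygons below are hypothetical; a real implementation would precompute one such hull per sampled rotation pair): two convex shapes overlap exactly when the origin lies inside their Minkowski difference, so the overlap test collapses to a point-in-polygon query.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_difference(poly_a, poly_b):
    """Hull of {a - b}; the two convex shapes overlap iff it contains the origin."""
    return convex_hull([(ax - bx, ay - by) for ax, ay in poly_a for bx, by in poly_b])

def overlaps(poly_a, poly_b):
    hull = minkowski_difference(poly_a, poly_b)
    n = len(hull)
    for i in range(n):
        ax, ay = hull[i]
        bx, by = hull[(i + 1) % n]
        # The origin must lie on the left of every CCW edge to be inside.
        if (bx - ax) * (0.0 - ay) - (by - ay) * (0.0 - ax) < 0:
            return False
    return True

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
near = [(x + 0.5, y + 0.5) for x, y in square]
far = [(x + 3.0, y) for x, y in square]
print(overlaps(square, near))  # True
print(overlaps(square, far))   # False
```

Precomputing the hull per rotation is what moves the work offline; the runtime cost is then only the point-in-polygon loop.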



#5 Randy Gaul   Members   -  Reputation: 544


Posted 17 August 2014 - 07:35 PM

Sounds like you should pick a different topic if your advisor can't help you. That's just crazy.

 

Also, for real-time applications, performance bottlenecks are largely memory-access related. For this reason your idea doesn't really apply well to the reality of real-time applications.



#6 Hodgman   Moderators   -  Reputation: 31920


Posted 17 August 2014 - 08:22 PM

A very clever compression mechanism would be required... This is basically just a big table/dictionary, with a key representing the relative orientation/position of two objects, which you could probably squeeze down to ~100 bits, and then a boolean response: touching / not touching.

If the key was 72 bits, that's still 2^72, around 4.7 sextillion, possible values...

However, all the values are either 0 or 1, so if you can find and exploit some kind of pattern that lets you organize the keys such that the values become sorted, it would compress incredibly well.

 

Maybe instead of storing boolean results, you could store intersection distances (a positive value means the objects are that distance away from touching; a negative value means they penetrate by that distance). This would be a six-dimensional distance field. The field itself would be extremely smooth in most places = low rate of change = extremely low-frequency details, which also means it will compress very well. It doesn't have to be lossless compression either -- lossy compression of distance fields is common in computer graphics, and is often good enough. You could throw away the majority of the data, keeping only a small fraction of points that define the field, and interpolate the rest of it from those sparse points.
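Hodgman's sparse-sampling idea can be sketched in 2D (the circle radii, grid extent, and step size below are made-up assumptions): precompute the signed clearance between two circles on a coarse grid of relative positions, then reconstruct intermediate values by bilinear interpolation.

```python
import math

R_SUM = 1.5  # combined radii of two hypothetical circles

def exact_clearance(dx, dy):
    """Signed distance: positive = separated, negative = penetrating."""
    return math.hypot(dx, dy) - R_SUM

# Offline: sample the field on a coarse grid of relative positions in [-4, 4]^2.
LO, STEP, N = -4.0, 0.5, 17
grid = [[exact_clearance(LO + i * STEP, LO + j * STEP) for j in range(N)]
        for i in range(N)]

# Online: bilinear interpolation of the sparse samples.
def lookup(dx, dy):
    fx, fy = (dx - LO) / STEP, (dy - LO) / STEP
    i, j = min(int(fx), N - 2), min(int(fy), N - 2)
    tx, ty = fx - i, fy - j
    a = grid[i][j] * (1 - tx) + grid[i + 1][j] * tx
    b = grid[i][j + 1] * (1 - tx) + grid[i + 1][j + 1] * tx
    return a * (1 - ty) + b * ty

# Compare the lookup to the exact field over a fine test grid.
err = max(abs(lookup(x / 10, y / 10) - exact_clearance(x / 10, y / 10))
          for x in range(-30, 31) for y in range(-30, 31))
print(err < 0.1)  # True: the field is smooth, so coarse samples reconstruct it well
```

Real meshes give a six-dimensional field rather than this two-dimensional toy, but the smoothness argument, and hence the compressibility, is the same.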



#7 Aressera   Members   -  Reputation: 1487


Posted 19 August 2014 - 06:16 PM

A similar idea was explored by this paper, which samples the high-DOF configuration space between an object pair. In practice it doesn't seem very robust, and it requires a long precomputation and a lot of storage. Geometric techniques are much more accurate and robust.



#8 JohnnyCode   Members   -  Reputation: 311


Posted 21 August 2014 - 01:29 PM

But what is the point of precomputing a collision contact? To move the computation out of the most stressful moments? That would only be possible if the objects being examined never alter their trajectories unexpectedly, which is rather rare in games (if the games are well coded). The only benefit I see is computing collisions while the application is in a relaxed state; that still promises waste, but if the waste comes for free, fine. Even then, it restricts the conditions under which the result can be determined.



#9 HappyCoder   Members   -  Reputation: 2883


Posted 22 August 2014 - 03:18 PM

As Hodgman said, you would only need to store the relative transform and velocity of the two objects, although you could probably get away without taking velocity into account and only doing overlap detection. You could also reduce the search space of relative transforms by rounding nearby configurations into discrete bins. This gives you a parameter to adjust: more bins means more accuracy at a cost of more memory; fewer bins would be less accurate but use less memory.

One way to reduce the size of the database would be to store only the configurations that result in a collision. If a configuration is not found in the database, then there is no collision.

I think this could be an interesting avenue to explore. You can experiment with how many bins are needed to make a stable simulation, and with how to divide them (e.g. rectangular vs polar coordinates). It may end up working really well for simulating a large number of objects, or it may end up no faster, or very unstable.
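A toy sketch of the binned, collisions-only database HappyCoder describes (the bin width and the circle shapes are assumptions for illustration): offline, quantize relative positions and store only the keys that collide; online, a set lookup replaces the geometric test.

```python
import math

BIN = 0.25      # bin width; coarser bins save memory but lose accuracy
R_SUM = 2.0     # two hypothetical unit circles, radius 1 each

def key(dx, dy):
    """Quantize a relative position into a discrete bin."""
    return (round(dx / BIN), round(dy / BIN))

# Offline: store only the bins whose representative configuration collides.
colliding = set()
steps = int(3.0 / BIN)
for i in range(-steps, steps + 1):
    for j in range(-steps, steps + 1):
        if math.hypot(i * BIN, j * BIN) < R_SUM:
            colliding.add((i, j))

# Online: a set lookup replaces the geometric test; an absent key means "no collision".
def collides(dx, dy):
    return key(dx, dy) in colliding

print(collides(0.5, 0.5))  # True: deep overlap
print(collides(3.5, 0.0))  # False: well separated
```

Storing only colliding bins is what keeps the table sparse; configurations near the boundary are where the bin width visibly trades memory for accuracy.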

#10 rugrat   Members   -  Reputation: 114


Posted 23 August 2014 - 04:23 AM

 
1) Sirisian: I really liked the Minkowski difference idea with dynamic rotation. Really liked it. I might end up using it.
 
2) Hodgman: I also really liked your notion of compressing the database in cases where similar initial configurations produce the same collision result. Definitely applicable, although the database I had in mind looks a bit different from the one you describe.
Representing the DB as a scalar function and compressing it is very good as well. I've written it down and might use it if the penetration distance really is all I need from the DB. Whether lossy or lossless should be tested; I suppose the amount of detail you can afford to lose really depends on the meshes themselves and the precision you're after.
This train of thought is extremely clever and useful.
 
3) Aressera: The paper you refer to is inspiring - thanks a lot. I'd be grateful for other related references.
 
4) JohnnyCode: a very good point. This is the way I see it - and I beg you to correct me if you see it differently. When two bodies that you couldn't predict would collide end up colliding because of some human/AI decision, you still have to detect their collision and resolve it. Surely that takes more time than looking it up in a DB?
 
5) HappyCoder: thanks for your input. That's pretty much the way I imagined the DB, though I'll have to contemplate Hodgman's input in that regard.
 
I'd be grateful to hear: how would collisions be resolved today by a state-of-the-art real-time application (for moderately complex bodies)? I need a benchmark.
How would they be resolved by an offline application (for the creation of the DB)?
 
I'd be very happy to read further input - this has been infinitely useful.


#11 Lightness1024   Members   -  Reputation: 739


Posted 10 September 2014 - 07:46 AM

Frankly, in this age, if you aren't aware of the whole historical bibliography on a subject, then any idea a human can possibly have has already been at least thought about, possibly tried, and potentially published if it has any value.

I hope you're not thinking about a PhD thesis? Because I don't see how any of this world's academies would let someone enter a 3-4 year cycle of research on an idea that sounds of little utility, proposed by a person who has basically next to no knowledge of the field.

Sorry to sound really harsh; I just want to calm the game down. At least go and read everything you need to first. Other ideas will come up while you read papers, only for you to realize that each was already proposed in a paper a couple of years later, which you will also read, and then you'll have another idea, which either happens not to work or is covered by some even later paper; this cycle goes on until, if you are lucky and clever, you finally reach the idea that actually brings progress to the world's state of research. But with a PhD taking 3-4 years, chances are great that some other team will publish very similar work before you finish... Yep.

 

Good luck anyway :-)



#12 JohnnyCode   Members   -  Reputation: 311


Posted 10 September 2014 - 08:32 AM

Your thesis idea should allow for research and experiment. You do not even need to succeed in your goal, but you have to support the lack of success with a pile of experiments that did not work. That is a valuable thesis.

(For example: even though you do not find out how to extract sodium chloride from an acid hydrolyzate, you have established, documented, and analyzed many attempts that failed.)



#13 LorenzoGatti   Crossbones+   -  Reputation: 2774


Posted 11 September 2014 - 03:19 AM

Collision detection is an application in which good performance is extremely input-dependent (what objects are colliding? How do they move?), so unconditional algorithmic improvements are out of the question; instead there are a number of fairly good algorithms that can be combined and adapted to meet specific needs and assumptions, so the mastery lies in engineering for a specific game: just about the opposite of what's suitable for CS research.

You might find novel algorithms for general continuous collision detection (e.g. helicoid generalizations), for parallel processing on a GPU, or for other less obvious niches, but you would have to study a lot, with a low likelihood of unearthing a good idea or a good research question.





