Edited by rugrat, 17 August 2014 - 05:14 AM.
Members - Reputation: 114
Posted 17 August 2014 - 05:13 AM
Crossbones+ - Reputation: 4425
Posted 17 August 2014 - 07:13 AM
I don't feel this is possible.
You'd need 2 transformation matrices per situation. Even at an accuracy of one degree you get 360 × 360 × 360 = 46,656,000 combinations. Add an entry size of 16 × 4 × 2 = 128 bytes and it's suddenly about 6 gigabytes of data, and you still haven't taken translation or scaling into account.
Unless I have the wrong idea...
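The arithmetic above can be checked with a quick back-of-envelope script. The assumptions match the post (1-degree steps on three rotation axes, two 4×4 float32 matrices per entry, no translation or scaling):

```python
# Storage estimate for the naive precomputed-collision lookup table.
angles = 360 ** 3               # 46,656,000 orientation combinations
bytes_per_entry = 16 * 4 * 2    # two 4x4 matrices of 4-byte floats = 128 bytes
total_bytes = angles * bytes_per_entry
print(f"{total_bytes / 2**30:.2f} GiB")  # ≈ 5.56 GiB
```

So roughly 6 GB before translation or scale even enter the key, which is the point being made.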
Members - Reputation: 1832
Posted 17 August 2014 - 10:33 AM
This is bad. Not only do you have to spend a significant amount of time learning what the state of the art actually is, but your advisor can't help you at all if you run into a problem. You should choose a field that you are familiar with, where you know the state-of-the-art techniques and have implemented a few of them.
I'm currently working on my second degree in computer science, and I have a thesis idea - but neither I nor my advisor have enough experience in the field of collision detection to know whether it is relevant or interesting today.
As Zaoshi Kaba pointed out, you will likely need a significant amount of memory. Also consider that collision detection is only half of the job: you also have to respond to collisions, and if the information from the collision detection step (i.e. the contact pairs) is somehow inaccurate, you can get into big trouble when responding to collisions.
The idea is [snip]
Actually, today the two most limited resources are memory and memory bandwidth; by comparison, we have ample compute power. You are suggesting to save compute power, of which we have a lot, and trade it for memory and bandwidth, which we are currently lacking. So environments with "limited resources" are probably exactly those where your idea will work the least.
The goal, as you might have gathered, is to increase FPS, allow collisions between more complex objects, and enable collisions in environments with limited resources.
The top-notch method is to not use complex bodies. The player can usually be approximated quite well by an ellipsoid, at least for player-vs-world collision detection. The static world is probably the most complex "body", but even that is just a heavily simplified version of what the graphics show you. And even there, I really don't see how you would want to store precomputed collisions.
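The ellipsoid approximation mentioned above boils down to a standard trick: scale space by the inverse of the ellipsoid's radii so the ellipsoid becomes a unit sphere, then do cheap sphere tests. A minimal sketch, with made-up player dimensions, for a point-vs-ellipsoid test:

```python
# Hypothetical player half-extents: narrow and tall.
rx, ry, rz = 0.4, 0.9, 0.4

def inside_ellipsoid(px, py, pz, cx=0.0, cy=0.0, cz=0.0):
    # Scale the offset into "ellipsoid space", where the test
    # reduces to a unit-sphere containment check.
    dx, dy, dz = (px - cx) / rx, (py - cy) / ry, (pz - cz) / rz
    return dx * dx + dy * dy + dz * dz <= 1.0
```

The same scaling idea extends to swept-sphere collision against world triangles, which is why the ellipsoid is such a popular player proxy.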
My question is: is collision detection between highly complex meshes still considered a problematic computational bottleneck today? Is it problematic enough that a very limited method, which nevertheless allows very fast extraction of the point of collision, would at the very least be interesting?
What are the top-notch methods today to efficiently compute collisions of complex bodies?
Would you care to elaborate? Maybe, if we knew more about the properties and capabilities, we could think of a use for it.
Assume that I've thought about the implementation and have a rough idea of how to do this in a feasible way.
Crossbones+ - Reputation: 1770
Posted 17 August 2014 - 06:38 PM
Even if the memory requirements were enormous in 3D, this might still be an interesting experiment to open the discussion to other methods. In 2D it might be feasible: you can precompute the Minkowski sum for each object at every rotation to simplify things down to a point-vs-world simulation. If you could interpolate the stored state between different rotations, you might end up with a very fast solution. There is already research on dynamic Minkowski sums. It sounds like this would take a lot of research, though, and a lot of memory.
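The reduction this post describes rests on a classic fact: two convex shapes A and B overlap iff the origin lies inside the Minkowski difference A ⊕ (−B). A minimal 2D sketch (brute-force pairwise sums plus a convex hull; fine for small convex polygons, not a production GJK):

```python
def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns the hull in counter-clockwise order.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_diff(a, b):
    # A + (-B): all pairwise vertex differences, then the hull.
    return convex_hull([(ax - bx, ay - by) for ax, ay in a for bx, by in b])

def overlaps(a, b):
    # Convex shapes overlap iff the origin is inside the Minkowski difference.
    hull = minkowski_diff(a, b)
    n = len(hull)
    for i in range(n):
        ox, oy = hull[i]
        ex, ey = hull[(i + 1) % n]
        # The origin must lie on the left of every CCW edge.
        if (ex - ox) * (-oy) - (ey - oy) * (-ox) < 0:
            return False
    return True
```

Precomputing this sum per rotation, as suggested, turns the per-frame query into a cheap point-in-polygon test.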
Members - Reputation: 489
Posted 17 August 2014 - 07:35 PM
Sounds like you should pick a different topic if your advisor can't help you. That's just crazy.
Also, for real-time applications, performance bottlenecks are largely memory-access related. For that reason, your idea doesn't sound like it applies well to the reality of real-time applications.
Moderators - Reputation: 30958
Posted 17 August 2014 - 08:22 PM
A very clever compression mechanism would be required... This is basically just a big table/dictionary with a key representing the relative orientation/position of two objects, which you could probably squeeze down to ~100 bits, and then a boolean response: touching / not touching.
If the key was 72 bits, that's still 2^72 ≈ 4.7 × 10^21 bits worth of values...
However, all the values are either 0 or 1, so if you can find and exploit some kind of pattern that lets you organize the keys such that the values become sorted, it would compress incredibly well.
Maybe instead of storing boolean results, you could store intersection distances (a positive value means that distance away from touching, a negative value means penetrating by that distance). This would be a six-dimensional distance field. The field itself would be extremely smooth in most places = low rate of change = extremely low-frequency details, which also means it will compress very well. It doesn't have to be lossless compression either -- lossy compression of distance fields is common in computer graphics, and is often good enough. You could throw away the majority of the data, keeping only a small fraction of points that define the field, and interpolate the rest from those sparse points.
Members - Reputation: 1459
Posted 19 August 2014 - 06:16 PM
A similar idea was explored by this paper, which samples the high-DOF configuration space between an object pair. In practice it doesn't seem to be very robust, and it requires a long precompute and high storage. Geometric techniques are much more accurate and robust.
Members - Reputation: 261
Posted 21 August 2014 - 01:29 PM
But what is the point of precomputing a collision contact? To shift the computation to less stressful moments? That would only be possible if the objects a given object is tested against never change their trajectories, which is rather rare in games (if the games are well coded). The only benefit I see is computing collisions while the application is in a relaxed state; that still promises waste, but if the waste comes for free, fine. It does restrict the conditions of the determination, though.
Members - Reputation: 2818
Posted 22 August 2014 - 03:18 PM
One way to reduce the size of the database would be to only store the values that result in a collision. If the configuration is not found in the database, then there is no collision.
I think this could be an interesting avenue to explore. You can experiment with how many bins are needed to make a stable simulation, and with how to divide the bins (e.g. rectangular vs. polar coordinates). It may end up working really well for simulating a large number of objects, or it may end up being no faster, or very unstable.
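The "store only the colliding configurations" idea above can be sketched with a hash set: quantize the relative pose into bins, insert only colliding keys offline, and treat absence as "no collision". Bin sizes and the sample entry here are entirely made up:

```python
BIN = 0.25   # translation bin size (hypothetical)
ANG = 5      # rotation bin in degrees (hypothetical)

def key(dx, dy, angle_deg):
    # Quantize a relative 2D pose into a hashable bin triple.
    return (round(dx / BIN), round(dy / BIN),
            round(angle_deg / ANG) % (360 // ANG))

colliding = set()
# An offline precompute pass would insert every colliding configuration:
colliding.add(key(0.1, -0.05, 41.0))

def collides(dx, dy, angle_deg):
    # Absence from the set means "no collision" -- no entry is stored for it.
    return key(dx, dy, angle_deg) in colliding
```

Since colliding configurations are typically a small fraction of the configuration space, this cuts the table size dramatically, at the cost of one hash lookup per query and quantization error at bin boundaries.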
Members - Reputation: 114
Posted 23 August 2014 - 04:23 AM
Members - Reputation: 737
Posted 10 September 2014 - 07:46 AM
Frankly, in this age, if you aren't aware of the whole historical bibliography on a subject, then any idea a human can possibly have has already been at least thought about, possibly tried, and, if it has any value, potentially published.
I hope you're not thinking about a PhD thesis? Because I don't see how any academy in this world would allow someone to enter a 3-4 year cycle of research on an idea that sounds of little utility, proposed by a person who has basically next to no knowledge of the field.
Sorry to sound really harsh; I just want to calm down the game. At least go and read everything you need to first. Other ideas will come up while reading papers, only for you to realize later that one of them was already proposed in a paper two years on, which you will also read, leading to another idea that either turns out not to work or is covered by some even later paper. This cycle goes on until, if you are lucky and clever, you finally get to an idea that actually advances the world's research. But with a PhD taking 3-4 years, the chance is great that some other team will publish very similar work before you finish... Yep.
Good luck anyway :-)
Members - Reputation: 261
Posted 10 September 2014 - 08:32 AM
Your thesis idea should open up a possibility for research and experiment. You do not even need to succeed at your goal, but you have to support the lack of success with a pile of documented experiments that did not work. That is a valuable thesis.
(For example, even though you did not find out how to extract sodium chloride from an acid hydrolyzate, you have established, documented, and analyzed many attempts that failed.)
Crossbones+ - Reputation: 2734
Posted 11 September 2014 - 03:19 AM
Collision detection is an application in which good performance is extremely input-dependent (what objects are colliding? How do they move?), so unconditional algorithmic improvements are out of the question. There are a number of fairly good algorithms that can be combined and adapted to meet specific needs and assumptions, so mastery lies in the engineering of a specific game: just about the opposite of what's suitable for CS research.
You might discover novel algorithms for general continuous collision detection (e.g. helicoid generalizations), parallel processing on a GPU, or other less obvious niches, but you would have to study a lot with a low likelihood of unearthing a good idea or a good research question.