gemin ta

3D collision detection


I recently decided to get back into game programming (didn't get very far the first time around) and after doing a lot of reading, I'm starting to wonder whether these tutorial authors actually know anything at all about collision detection. The number of tutorials that include the words "I'm going to leave this part out" is unreal. Why leave it out? 'Cause you don't understand it, maybe?!

Frustrations aside, I really need help with collision detection... rofl. Don't give a crap about response yet - I just want the detection sorted.

So I stumble across tutorials that seem to have some clue of what they're going on about, but they make references to things that have never been defined anywhere else in the tutorial - or they talk about things seemingly unrelated or needlessly complex. Take for example... http://www.gamedev.net/reference/articles/article1026.asp
Quote:
Velocity vector - This is the vector that defines the direction and speed that the colliding object is traveling. If a colliding object is traveling two feet forward, then the direction of this vector is forward, and the length of this vector is two feet.
Okay. 1) What the hell does "velocity" have to do with increasing a floating-point value for use with glTranslatef? 2) Why do I need "velocity" in a basic game engine? I'm not trying to realistically simulate physics here. Then there are the other two things before that.
Quote:
Source - The source point where a colliding object starts. Every colliding object travels from the source to a collision on its way to the destination. Destination - This is where the colliding object wants to go to, but can’t get there, because it’s too busy bumping into stuff.
There's no mention of how the source and destination are defined or measured. Are they just random values like the floating point I mentioned before - or are they products of yet more elusive calculations? Then there's talk of rays and ray vectors. How is a ray's direction calculated? Maybe my mind is being manipulated by the hallucinogenic shampoo I have to wash my hair with thanks to scalp eczema, but it seems to me that every tutorial is written by someone who either has no idea what they're on about or thinks that everyone already knows about collision detection so they don't have to be specific about anything. Once again... I beg ye... to help me.

Quote:
Original post by gemin ta
I recently decided to get back into game programming (didn't get very far the first time around) and after doing a lot of reading, I'm starting to wonder whether these tutorial authors actually know anything at all about collision detection.

The amount of tutorials that include the words "I'm going to leave this part out" is unreal. Why leave it out? 'cause you don't understand it, maybe?!

Frustrations aside, I really need help with collision detection... rofl. Don't give a crap about response yet - I just want the detection sorted.

So I stumble across tutorials that seem to have some clue of what they're going on about, but they make references to things that have never been defined anywhere else in the tutorial - or they talk about things seemingly unrelated or needlessly complex. Take for example... http://www.gamedev.net/reference/articles/article1026.asp

Quote:
Velocity vector - This is the vector that defines the direction and speed that the colliding object is traveling. If a colliding object is traveling two feet forward, then the direction of this vector is forward, and the length of this vector is two feet.


Okay.

1) What the hell does "velocity" have to do with increasing a floating point value for use with glTranslatef?
2) Why do I need "velocity" in a basic game engine? I'm not trying to realistically simulate physics here.

Then there's the other 2 things before that.

Quote:
Source - The source point where a colliding object starts. Every colliding object travels from the source to a collision on its way to the destination.
Destination - This is where the colliding object wants to go to, but can’t get there, because it’s too busy bumping into stuff.


There's no mention of how the source and destination are defined or measured. Are they just random values like the floating point I mentioned before - or are they products of yet more elusive calculations?

Then there's talk of rays and ray vectors. How is a ray's direction calculated?

Maybe my mind is being manipulated by the hallucinogenic shampoo I have to wash my hair with thanks to scalp eczema, but it seems to me that every tutorial is written by someone who either has no idea what they're on about or thinks that everyone already knows about collision detection so they don't have to be specific about anything.

Once again... I beg ye... to help me.
At least in the examples you gave, the problem is not with the tutorials, but rather with your lack of familiarity with the subject matter (or so it would seem).

Now don't get me wrong - this is not meant critically, and you're not the first to run into this problem.

For example, tutorials on how to build some library or other often assume a fairly in-depth knowledge of the development environment in question. If you (for example) don't already know about this path, and that environment variable, and how to run scripts or batch files, then the tutorial will read like Greek (assuming, of course, that you don't know Greek :).

It sounds like that's what you're running into here. Unfortunately, you'll probably just have to spend some time delving into 2-d and 3-d math, linear algebra, and basic physics before these tutorials will make much sense. It will be time well spent though, since this knowledge is more or less required if you want to develop any sort of non-trivial game or similar simulation.

Now, a couple of comments on your specific questions. Use of the term 'velocity' doesn't imply any sort of 'sophisticated' physics simulation. Velocity just means the direction and speed in which an object is moving (the velocity is a vector that points in the direction that the object is moving, and has a length equal to the speed with which the object is moving).

Most games incorporate velocities. In 'Asteroids', the ship, projectiles, asteroids, and flying saucers all have velocities associated with them. Likewise the projectiles and what not flying around in Quake 3, the marble in Marble Blast, the dive-bombing aliens in Galaga...the list goes on.

In other words, 'velocity' is nothing fancy - it's a basic description of how an object is moving that is relevant (in one way or another) for many types of game objects.

You mentioned 'increasing a floating point value for use with glTranslatef()'. First of all, the 'floating point values' that you submit to glTranslate*() are, in fact, the elements of a vector. So, whether you know it or not, you're actually working with vectors here (velocities and positions are usually represented using vectors). Also, the use of glTranslate*() is completely incidental to how we represent and manipulate game objects; it happens to be a way that we can communicate information to OpenGL about how to render geometry, but it really has nothing to do with simulating object motion.

As for the 'source' and 'destination' you mentioned, those are points in space. How these values are computed depends on the context.

Thanks for the reply.

I know about linear algebra and physics, etc. I'm just having a hard time seeing the relevance.

Quote:
First of all, the 'floating point values' that you submit to glTranslate*() are, in fact, the elements of a vector.


Regardless of what they 'are', I don't/didn't understand how they tied in with this 'velocity' stuff.

I think I'm now starting to understand that the collision detection bizniz is separate from the graphics side of things. Even with that realization, I could code a non-graphical program to calculate distances, angles, etc. in an invisible 3D world... but that still leaves me with the problem of how it ties in with the graphics side of things. How do you take a pseudo-coordinate of two objects and then accurately convert it into something you can use with OpenGL?

Certain things will rely on an object's existence in OpenGL for their specific locations. For example, it's all fair and well having a "logical" grid for the collision detection/physics and saying that object A's center is at (1, 1, 1), object B's center is at (3, 3, 3), ray A is pointing away from A in whatever direction, and calculating things from there. Relying on that, you have an effective way of calculating the distance between two centers. But if you then load a model into OpenGL with its center at -- for the sake of simplicity -- (1.0f, 1.0f, 1.0f), the model's polygons won't stay at that coordinate - they'll expand outwards to form the geometry... so calculating the distance between the centers is useless, as the outsides will have collided by the time the centers reach each other.

I hope you can understand the difficulties I'm having grasping how these two things can work together when they seem to be entirely separate 'entities' which rely on each other to work, but don't seem to interact as not a single tutorial I've ever read has made a reference to any graphics libraries. How does it work? -_-

Thanks again for your time.

Quote:
I know about linear algebra and physics, etc. I'm just having a hard time seeing the relevence.
When we talk about vectors, matrices, and so forth, we're talking about linear algebra. Almost all of 2-d and 3-d math (at least as it relates to game development) deals with vectors and matrices, and so therefore it deals with linear algebra.
Quote:
I think I'm now starting to understand that the collision detection bizniz is separate to the graphics side of things.
Yes, absolutely.
Quote:
Even with that realization, I could code a non-graphical program to calculate distances, angles, etc. in an invisible 3D world...
Yup.
Quote:
...but that still leaves me with the problem of how it ties in with the graphics side of things. How do you take a pseudo-coordinate of two objects and then accurately convert it into something you can use with OpenGL?

Certain things will rely on an object's existence in OpenGL for their specific locations. For example... it's fair and well having a "logical" grid for the collision detection/physics and saying that object A's center is at (1, 1, 1), object B's center is at (3, 3, 3), ray A is pointing away from A in whatever direction & calculate things from there. Relying on that, you have an effective way of calculating the distances between two centers, but if you then load a model into OpenGL with its center at -- for the sake of simplicity -- (1.0f, 1.0f, 1.0f), the model's polygons won't stay at that coordinate - they'll expand outwards to form the geometry... so then calculating the distance between the centers is useless as the outsides will have collided by the time the centers reach each other.

I hope you can understand the difficulties I'm having grasping how these two things can work together when they seem to be entirely separate 'entities' which rely on each other to work, but don't seem to interact as not a single tutorial I've ever read has made a reference to any graphics libraries. How does it work?
Yup, I certainly understand the confusion.

The first thing to realize here is that ideally, the simulation and the graphical representation of that simulation should be completely (or almost completely) separated from each other. (I say 'ideally' because in game development especially, practical concerns often lead us away from the ideal. However, it's still good to try and apply 'best practices' wherever possible and appropriate when designing software of any sort.)

Before getting into generalities, let's just look at your specific example of a model rendered via OpenGL.

The way this would typically be handled would be for the collision detection system to work with its own representation of the object, a representation that is completely separate from the model as rendered on screen.

The collision detection proxy might have the same geometrical data as the model, or it might be a simplified version thereof, or it might be a simpler bounding shape, such as a box or sphere. But, even if it is comprised of exactly the same geometry as the model, it 'lives' elsewhere and is accessed differently. That is, the collision proxy exists on the client side in memory somewhere, while the model data as rendered to the screen (presumably) resides on the hardware somewhere (if only temporarily).

One of the difficulties involved in working with OpenGL is that OpenGL does not offer any (easy) way to retrieve geometrical data that has been transformed. You issue a bunch of commands (glRotate*() and so forth), and then send a bunch of geometry off to the hardware, and somewhere or other the geometry is pushed through the pipeline and gets scaled, rotated, and translated (or whatever) before being rendered to the screen. However, this altered geometry isn't readily available to the client. So, how do you know where the geometry is, so that you can test it for intersection with other objects?

As I understand it at least, it pretty much comes down to this: you'll need to re-create the transforms that OpenGL is applying, but on the client side. You apply these transforms to your proxy (whether it be an exact copy of the model, a simplified version thereof, or a simpler bounding shape), and then perform collision detection using this transformed proxy. (This is one reason that it's important to have a good math library available.)

Okay, this post is getting a bit long, so I'm going to wrap it up (although I'll be happy to try and answer any other questions you might have). One thing I want to mention though is that it might be worth looking into 3rd-party collision detection or physics libraries. On the one hand, you're going to need to learn the basics (vectors, matrices, linear algebra) one way or another, and implementing a collision detection system is probably as good a way as any to get your feet wet. But just be aware that there are a lot of pre-existing solutions available for this sort of thing, and in the end it might be more economical (at least in terms of development effort) to take advantage of what's already available rather than to try to create something from scratch.

Quote:
Original post by gemin ta
Thanks for the reply.

I know about linear algebra and physics, etc. I'm just having a hard time seeing the relevance.

Quote:
First of all, the 'floating point values' that you submit to glTranslate*() are, in fact, the elements of a vector.


Regardless of what they 'are', I don't/didn't understand how they tied in with this 'velocity' stuff.

I think I'm now starting to understand that the collision detection bizniz is separate to the graphics side of things. Even with that realization, I could code a non-graphical program to calculate distances, angles, etc. in an invisible 3D world... but that still leaves me with the problem of how it ties in with the graphics side of things. How do you take a pseudo-coordinate of two objects and then accurately convert it into something you can use with OpenGL?

Certain things will rely on an object's existence in OpenGL for their specific locations. For example... it's fair and well having a "logical" grid for the collision detection/physics and saying that object A's center is at (1, 1, 1), object B's center is at (3, 3, 3), ray A is pointing away from A in whatever direction & calculate things from there. Relying on that, you have an effective way of calculating the distances between two centers, but if you then load a model into OpenGL with its center at -- for the sake of simplicity -- (1.0f, 1.0f, 1.0f), the model's polygons won't stay at that coordinate - they'll expand outwards to form the geometry... so then calculating the distance between the centers is useless as the outsides will have collided by the time the centers reach each other.

I hope you can understand the difficulties I'm having grasping how these two things can work together when they seem to be entirely separate 'entities' which rely on each other to work, but don't seem to interact as not a single tutorial I've ever read has made a reference to any graphics libraries. How does it work? -_-

Thanks again for your time.

The velocity part can be important because with static terrain you may only have to check the projected movement vector against the terrain once (presuming you will be going in a straight line for a while) instead of doing it every game cycle (VERY wasteful of CPU resources).

Usually the algorithm has you compare the 3D line segment against all (or a large subset) of the terrain facets (triangles -- line-intersects-triangle tests...). Those terrain triangles are usually static, so the result won't change for that vector's line segment. Basically, you can do one pass projecting the longer line segment instead of doing multiple passes against almost the same terrain data set. You may have to check against mobile/moving objects more frequently (with the immediate movement line segment), but that is usually a MUCH smaller intersection test subset to process.

The velocity can be used to calculate how long the line segment is, which can then be used to calculate how big the subset of static terrain should be (again, to optimize the processing). Usually you don't want to project too far ahead of the moving object if it constantly changes its course (needless processing of too large a subset). Usually game tuning will tell you how far ahead it's optimal to project for your game mechanics.
So as I understand it, to put it simply... if my dude is moving 1 coordinate in any direction on my imaginary grid, that's the velocity we're referring to? Nothing needlessly complex? Eeexcellent.

For the sake of experimentation, experience or something, I modified the example code from one of the early tutorials on NeHe and made it so that two prisms would stop moving when their centers collided. I guess it's a start. ;/

After some reading, the "bounding shape" philosophy seems like the best choice for me at the moment. It's all I need for the complexity (or lack thereof) of the game I'm planning to make, and it seems simple to start with. The problem I'm seeing with it is when a box/sphere/ellipsoid-enclosed model hits an object on the map. Not every map or world object can have a bounding shape - a house that someone can go into, for example, would have its doorway blocked off.

In an attempt to counter-attack this, my idea is to have the player model enclosed in a bounding shape and have the outsides of that shape compared against the actual triangles of nearby objects... which would be sorted by some sort of tree or another. I'll cross that bridge when I come to it.

The problem I now face is the same as what I mentioned earlier. How can I compare the distance between my made-up bounding shape & the triangles/polygons? It's all fair and well for tutorials to throw out calculations, but there's very little info or advice in terms of how to apply that. I'm having a hard time believing that I'd be expected to factor the exact positions of those polygons into my pseudo-universe.

Can anyone (who has actually succeeded) offer guidance in the form of what they read when they were learning? I'd appreciate it.

Well, my approach to these things is "Right, I don't know anything about 3D collision detection. Oh look, there are lots of physics libraries I can use..." As a result, I'm now using Newton GD as a physics library, and I am currently throwing televisions out of a widow and watching them roll down a hill, using this library.

Quote:
...I am currently throwing televisions out of a widow...
<:-

@The OP: You're right about the 'house' scenario - a single bounding volume for the house object probably won't work very well there (assuming, of course, that you're supposed to be able to go inside it).

It's pretty common (I think) to use a simple bounding volume for game entities, and to use more accurate geometry (close to, or perhaps exactly the same as, the geometry used for rendering) for the world.

A lot of FPS-type games work with what's called 'binary space partitioning'. The Quake games, for example, use an axis-aligned bounding box for the objects in the game, and intersect them with a BSP tree corresponding to the world geometry. There are some simplifications involved (e.g. involving Bezier patches and other highly detailed parts of the map), but for the most part the world geometry used for collision detection closely matches that used for rendering (of course the world geometry in those games isn't terribly complex).

As for tutorials, there's a paper by K. Fauerby on swept collision detection using ellipsoids that is quite detailed, and a couple of papers on BSP coldet as well (can't remember the authors).

The unfortunate fact is that collision detection and response is a pretty involved topic - except for extremely simple cases, there aren't many 'shortcuts' to speak of (except, of course, for using a 3rd-party library, as speciesUnknown suggested).

Bounding spheres seem to be the easiest way to go. My idea is to write a program which loads in a 3D model or map and allows me to specify points on a grid. A circle will appear around each point, and I'll be able to alter the size of the circles or move them around. There'll also be a temporary visual representation of the circle in the 3D scene so I can measure how accurate the bounding circle is against the geometry as it'll be shown in OpenGL.

Grid will look something like: http://getitoutthere.org.uk/watashi/shit/grid.gif

Then in-game, the "physics engine" (it's almost comical to call it that) will check the distance between the character's bounding sphere & all bounding spheres in that part of the grid. It's an ugly and inaccurate way of doing things, but I'm not looking for pretty or accurate. As long as the player is kept a reasonable distance from solid objects, I'm fine with it.

Would you agree that this is at least a step in a positive direction - or am I setting myself up for disappointment later on? I do still have to work out how I can implement a gravity-esque system into this whole thing... ;/
