Quote:I know about linear algebra and physics, etc. I'm just having a hard time seeing the relevance.
When we talk about vectors, matrices, and so forth, we're talking about linear algebra. Almost all of 2-d and 3-d math (at least as it relates to game development) deals with vectors and matrices, and so therefore it deals with linear algebra.
Quote:I think I'm now starting to understand that the collision detection bizniz is separate to the graphics side of things.
Yes, absolutely.
Quote:Even with that realization, I could code a non-graphical program to calculate distances, angles, etc. in an invisible 3D world...
Yup.
Quote:...but that still leaves me with the problem of how it ties in with the graphics side of things. How do you take a pseudo-coordinate of two objects and then accurately convert it into something you can use with OpenGL?
Certain things will rely on an object's existence in OpenGL for their specific locations. For example, it's all well and good having a "logical" grid for the collision detection/physics and saying that object A's center is at (1, 1, 1), object B's center is at (3, 3, 3), ray A is pointing away from A in some direction, and calculating things from there. That gives you an effective way of calculating the distance between two centers. But if you then load a model into OpenGL with its center at -- for the sake of simplicity -- (1.0f, 1.0f, 1.0f), the model's polygons won't all sit at that single coordinate; they extend outwards to form the geometry. So calculating the distance between the centers alone is useless, because the objects' surfaces will have collided long before the centers reach each other.
I hope you can understand the difficulty I'm having in grasping how these two things can work together. They seem to be entirely separate 'entities' that rely on each other to work, yet they don't seem to interact; not a single tutorial I've ever read has made any reference to a graphics library. How does it work?
Yup, I certainly understand the confusion.
The first thing to realize here is that ideally, the simulation and the graphical representation of that simulation should be completely (or almost completely) separated from each other. (I say 'ideally' because in game development especially, practical concerns often lead us away from the ideal. However, it's still good to try and apply 'best practices' wherever possible and appropriate when designing software of any sort.)
Before getting into generalities, let's just look at your specific example of a model rendered via OpenGL.
The way this would typically be handled would be for the collision detection system to work with its own representation of the object, a representation that is completely separate from the model as rendered on screen.
The collision detection proxy might have the same geometrical data as the model, or it might be a simplified version thereof, or it might be a simpler bounding shape, such as a box or sphere. But, even if it comprises exactly the same geometry as the model, it 'lives' elsewhere and is accessed differently. That is, the collision proxy exists on the client side in memory somewhere, while the model data as rendered to the screen (presumably) resides on the hardware somewhere (if only temporarily).
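To make that concrete, here's a minimal sketch (all names hypothetical) of an object that pairs a GPU-side handle with a client-side bounding-sphere proxy. Note that the collision test uses the radii, which also resolves the earlier worry about centers: two objects collide when the distance between their centers drops below the sum of their bounding radii, long before the centers coincide.

```cpp
#include <cmath>

// Hypothetical sketch: the rendered mesh lives on the GPU and is only
// referenced by a handle; the bounding sphere is a separate, client-side
// proxy that the collision code owns and queries directly.
struct BoundingSphere {
    float x, y, z;   // center
    float r;         // radius enclosing the object's geometry
};

struct GameObject {
    unsigned int   meshHandle;  // e.g. a VBO id -- opaque to the physics code
    BoundingSphere proxy;       // what collision detection actually tests
};

// Spheres collide when the center-to-center distance is less than the
// sum of the radii -- not when the centers themselves meet.
bool collide(const GameObject& a, const GameObject& b) {
    float dx = b.proxy.x - a.proxy.x;
    float dy = b.proxy.y - a.proxy.y;
    float dz = b.proxy.z - a.proxy.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return dist < a.proxy.r + b.proxy.r;
}
```

Nothing in `collide()` knows or cares what `meshHandle` refers to; the renderer and the collision system only agree on where the object is, not on how it is stored.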
One of the difficulties involved in working with OpenGL is that OpenGL does not offer any (easy) way to retrieve geometrical data that has been transformed. You issue a bunch of commands (glRotate*() and so forth), and then send a bunch of geometry off to the hardware, and somewhere or other the geometry is pushed through the pipeline and gets scaled, rotated, and translated (or whatever) before being rendered to the screen. However, this altered geometry isn't readily available to the client. So, how do you know where the geometry is, so that you can test it for intersection with other objects?
As I understand it at least, it pretty much comes down to this: you'll need to re-create the transforms that OpenGL is applying, but on the client side. You apply these transforms to your proxy (whether it be an exact copy of the model, a simplified version thereof, or a simpler bounding shape), and then perform collision detection using this transformed proxy. (This is one reason that it's important to have a good math library available.)
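For example, here's a sketch (assuming the fixed-function pipeline, with hypothetical names) of mirroring a render pass that calls glTranslatef() followed by glRotatef() before drawing. Note the order on the client side: with glTranslatef() issued first, each vertex is effectively rotated first and then translated.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Client-side equivalent of glRotatef(angleDeg, 0, 0, 1):
// rotation about the z axis.
Vec3 rotateZ(Vec3 p, float angleDeg) {
    float rad = angleDeg * 3.14159265358979f / 180.0f;
    float c = std::cos(rad), s = std::sin(rad);
    return { p.x * c - p.y * s, p.x * s + p.y * c, p.z };
}

// Client-side equivalent of glTranslatef(tx, ty, tz).
Vec3 translate(Vec3 p, float tx, float ty, float tz) {
    return { p.x + tx, p.y + ty, p.z + tz };
}

// Mirrors a render pass of glTranslatef(...) then glRotatef(...) then
// draw: each proxy vertex is rotated first, then translated.
Vec3 transformProxyVertex(Vec3 v, float angleDeg,
                          float tx, float ty, float tz) {
    return translate(rotateZ(v, angleDeg), tx, ty, tz);
}
```

In a real system you'd build the full modelview matrix once and apply it to every proxy vertex (or to the bounding volume's parameters), which is exactly the sort of thing a good math library handles for you.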
Okay, this post is getting a bit long, so I'm going to wrap it up (although I'll be happy to try and answer any other questions you might have). One thing I want to mention though is that it might be worth looking into 3rd-party collision detection or physics libraries. On the one hand, you're going to need to learn the basics (vectors, matrices, linear algebra) one way or another, and implementing a collision detection system is probably as good a way as any to get your feet wet. But, just be aware that there are a lot of pre-existing solutions available for this sort of thing, and in the end it might be more economical (at least in terms of development effort) to take advantage of what's already available rather than to try to create something from scratch.