Be sure to read the license agreement before using the code.
"Space isn't remote at all. It's only an hour's drive away if your car could go straight upwards." - Fred Hoyle
Well well, long time no see! While I'm not completely out of crunch time yet (E3 is coming up fast), things are starting to lighten up just a little bit. Not much, mind you... but enough at least to try and keep this series alive.
Today, we'll be starting our way towards working on game actor logic, but from a distance. In reality, today's article is a precursor to the actor logic section, one which some of you folks who were hoping for a 3D project may find encouraging. While this game may be 2D in terms of display, actors (i.e. game entities) are quite often not locked into a 2D world. They may be, depending on the game (and we haven't determined the game yet), but often they're not. So since things like physics and collision deal directly with actors, it's usually a good idea to plan for the worst, if you can afford it. We'll likely end up trimming things down a bit later on once we know what our particular game will need, but for now I'm opting to keep things flexible.
In order for our actors to be able to live and operate in a virtual world beyond the 2D plane we see them on, we'll deal with them on a 3D basis. It may not require 3D rendering, but it does require some basic 3D math support.
Up to this point, I've been relatively compliant with C in terms of the code written for the series. That ends today, and for a very good reason, one which reflects one of C++'s best features over C in my opinion (not to mention the reason I initially learned C++ in the first place)... operator overloading. Operator overloading is C++'s biggest gift to 3D programming, plain and simple. Why? Because it allows you to express 3D vector equations in a form closer to the real deal. By creating simple classes for vectors, etc. and slamming them with tons of operators, you end up with data types that are just as easy to work with as ints or floats. After all, why have functions like VectorAdd when you can just use "v1 + v2"?
So, added to the code this week is a new header file vg_vec.h (no .cpp file required, since everything is inlined for speed reasons) which contains some initial classes needed for 3D vector geometry support. I've kept it relatively trimmed down, including only four classes at the moment (2D vector/point, 3D vector/point, axial frame, and OCS, or Orthogonal Coordinate System), but these four classes alone have a lot of life in them. All the class data elements are public, since I basically treat them like utility structures that happen to have operators. If this is your first exposure to C++ operator overloading, take a hard scan through the header file for a whirlwind tour.
I'm assuming that most of you have a little 3D experience, or at least enough math background to know what you're looking at when reading through the header file. If not, hopefully things will become clear over time. If you can, though, I'd highly suggest reading through the first few chapters of a linear algebra book (at least enough so you'll understand dot and cross products), since I simply don't have the time to cover that kind of a foundation.
Aren't You Forgetting Something?
Some of you may be asking, why no matrix classes? 3D math revolves around vectors and matrices, and we have vector classes, why not matrices? There is a reason, and it's mostly semantic. I initially used matrix classes (3x3 or 4x4 depending on the scenario) when doing 3D work, but I've since stopped due to several problems I see with them. Aside from lacking a concrete component separation, the biggest problem I have with raw matrices is not being able to think about them as "representative" of something. If you just use matrices for raw transformation concatenation, you often end up thinking about them as nothing more than a block of data that does something you want, rather than something with meaning in and of itself. That's not good, because those matrices do have meaning, and taking advantage of it can only help your code.
As a result of this mentality, I don't work with raw matrices for transformations. Instead, I use coordinate systems, usually orthogonal (i.e. each axis perpendicular to the other two, which the header file currently supports). Coordinate systems describe a set of axes originating at some arbitrary origin in space, with the axes having arbitrary scale. The three axes, which I refer to as an "axial frame", correspond to the X, Y, and Z axes for that coordinate system. All these components describe the coordinate system relative to its parent coordinate system, i.e. the "world space" it lives in.
Why use coordinate systems instead of matrices? Because they're a physical manifestation of the same thing. If you're doing only scale, rotation, and translation, an orthogonal coordinate system will do the same job as a matrix will (shearing and other more esoteric affine operations usually require a full affine coordinate system, but we don't need those at this point). The coordinate system's origin and axis scales allow translation and scale transformations, and the axial frame allows rotation. There are cases in 3D where full matrices are required, such as when you fold a perspective projection into the transform... but in the vast majority of cases, you're dealing with simple 3D worldspace transforms. And for those, OCSs are wonderful things indeed, because not only do they give you much of the power of matrices... but you can visualize them. When working with complex chains of transformations, visualization is no small gain.
With matrices, you transform points/vectors and other matrices by them and get your results. The transformation is uni-directional, so once you've transformed you can't go back, unless you get the inverse (which may or may not be an easy or even possible task depending on what's in the matrix).
Coordinate systems, however, are always bi-directional, since the components are still broken up. Plus, the transformation means something. Transforming by an OCS effectively means going "into" the OCS, i.e. taking something that's relative to the OCS's parent and finding out what it is relative to the OCS itself. For example, say the Sun is at the origin of a parent space, and you have the positions of, say, Earth and Pluto relative to this space. Then let's say you have a coordinate system within this space where the origin is Earth, and the axes are tilted (due to the planet's rotation or whatever). Change the scale of the axes, too, if you like.
Now, if you want the position of Pluto relative to the Earth, just take its position relative to the parent space (the Sun's) and transform it "into" the Earth's OCS. That's it, you're done. You're basically doing a matrix transformation, but you can visualize what the transformation is doing. And since the OCS is bi-directional, you can do the inverse operation (going "out of" a relative space up into the parent space) just as easily. The header file's OCS class reflects this by using the shift operators to do "inbound" and "outbound" transformations, which you can use to transform points or other OCSs. There is a difference between "point" and "vector" though, and while you often transform points by full OCSs, much of the time you won't want to do that with vectors (since vector translation is usually undesired). Because of this the axial frame member of the OCS also supports the shifting operators, in case you want to transform by a rotational component only.
What about performance? The scale is a minor hit, but it's marginal; the rest is effectively identical to matrix transformations. If speed becomes a concern (such as when you're transforming a large number of points), you can always generate a matrix directly from an OCS just by dumping the axial frame in as your upper left 3x3 (possibly transposed, depending on your matrix format), putting the translation in the lower row (or right column, once again depending on your matrix format), and multiplying the 3x3 diagonal by the scale. That way you can work with OCSs to build up your transforms and use the result in matrix form for bulk processing.
Why am I espousing coordinate systems so much? Beyond making code easier to write (and read), they help 3D programming make a lot more logical sense altogether. Preaching coordinate systems over raw affine transform matrices is probably one of my two biggest "religious" 3D issues these days. The other is my "Euler Angles Must Die" campaign, since I am now convinced that Euler angles are the most annoying form of rotation representation ever conceived and they should be wiped off the face of mathematics. If you need pitch, yaw, and roll, so be it, sometimes that's unavoidable... but use them to build up a quaternion or an axial frame instead of working with them directly. Argh, it's frustrating just thinking about them... I'm not even going to get started on this one.
Over and Out
Anyway, take a skim through the new material and get familiar with it; we'll be using this stuff extensively when working with actors (their positions, velocities, orientations, etc). At some point I may drop in other classes to deal with quaternions, or geometric primitives like lines, planes, spheres, etc... but we'll deal with those when the time comes. In the end, this is the real meat.
Until next time,
Chris Hargrove
- Chris "Kiwidog" Hargrove is a programmer at 3D Realms Entertainment working on Duke Nukem Forever.
Reprinted with permission.