LostSource

Game Engine Architecture


Recommended Posts

I'm currently working on my game engine, but I had a question about the "math system/support" for an engine. Q: In your engine, do you have a support system that contains all the math used in your engine? Or are you just using the DirectX math library? I ask because I currently have support for: vectors (4D), matrices (4x4), Euler angles, quaternions, lines, P-Rays, AABBs, and planes, all of which I've tested against linear algebra books and DirectX's math support. But I'm curious: is creating your own math support system worth the extra work, and a better path to follow when developing a game engine? Thanks ahead of time for the replies.
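For concreteness, the kind of support system being described might start with something as small as this sketch (the names and layout here are illustrative only, not taken from any particular engine):

```cpp
#include <cassert>
#include <cmath>

// Minimal 4D vector, the basic building block of the math layer
// described above. Names are illustrative, not from a real engine.
struct Vec4 {
    float x, y, z, w;
};

// Dot product over all four components.
float dot(const Vec4& a, const Vec4& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

// Length of the 3D part (w is typically 1 for points, 0 for directions).
float length3(const Vec4& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
```

Testing a layer like this against a known-good library (as the poster did against D3DX) is a reasonable way to validate it.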

Doing your own math code is definitely worth the effort, even though you might end up using third party code (DX's math library or e.g. The Nebula Device 2).

For the most part, I use what D3DX provides me, and implement anything missing in terms of the D3DX interfaces.

In moving my rendering framework from C++ to C#, I've encountered a reason to re-implement matrix and vector classes (the ones provided with MDX are sealed structs that do not have the appropriate CustomTypeDescriptor attributes to display in PropertyGrid objects the way I needed them to for a tool). The "re-implemented" matrix and vector classes were implemented in terms of the MDX classes. It isn't likely that I would actually use these re-implemented classes for production non-editor code, though, as they only contain the bare functionality to get them displaying the way I want; why re-invent the wheel?

The world already has too many linear algebra / quaternions libraries. Many of them have been written by really really smart people with lots of experience in the subtle details of writing such libraries (in C++ there are a ton of such subtle details). Pick one that you like, and copy it.

If you care about an extra 5% to 25% of speed, write your own in SIMD/SSE2+ assembly-language [with a separate version for 64-bit mode where you get twice as many SIMD/SSE registers]. If your engine also has fancy or extensive physics support, this increases the value of this do-it-yourself approach.

I would say adopt an open-source library that does this instead, except out of all the open-source libraries I know about, none of them are SIMD/SSE2+ in assembly-language optimized for 3D-graphics.
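As a rough illustration of this route, here is what a SIMD matrix-vector transform might look like using SSE compiler intrinsics rather than the hand-written assembly the post describes (an assumption on my part; the layout and names are hypothetical):

```cpp
#include <xmmintrin.h>  // SSE intrinsics
#include <cassert>

// Multiply a 4x4 column-major matrix by a 4-vector using SSE.
// A sketch of the "roll your own SIMD" approach; a production
// version would also handle alignment and batch many vertices.
void mat4_mul_vec4(const float* m /* 16 floats, column-major */,
                   const float* v /* 4 floats */,
                   float* out /* 4 floats */) {
    __m128 c0 = _mm_loadu_ps(m + 0);
    __m128 c1 = _mm_loadu_ps(m + 4);
    __m128 c2 = _mm_loadu_ps(m + 8);
    __m128 c3 = _mm_loadu_ps(m + 12);
    // Broadcast each component of v and accumulate v[i] * column[i].
    __m128 r = _mm_mul_ps(c0, _mm_set1_ps(v[0]));
    r = _mm_add_ps(r, _mm_mul_ps(c1, _mm_set1_ps(v[1])));
    r = _mm_add_ps(r, _mm_mul_ps(c2, _mm_set1_ps(v[2])));
    r = _mm_add_ps(r, _mm_mul_ps(c3, _mm_set1_ps(v[3])));
    _mm_storeu_ps(out, r);
}
```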

If you can tolerate 5% to 25% less throughput (usually around 5~10%), then adopt *one* set of routines - pick the most convenient, understandable or whatever is part of your low-level graphics system. Or if you really value understanding the what/why/how of the math/geometry/transformations/physics, then consider writing your own because that helps you understand more fully.

If you write a library, write it as straightforward C functions. That other message in this thread raises a good point about C++ and other languages that should scare every wise programmer back to C --- those "tons of subtle details" that mostly do not exist in straightforward, sensible C/ASM implementations!

BTW, if you care to think carefully about maximum throughput of your engine (which is your choice = not mandatory), you may beat the speed of all those extreme math wizards because you can integrate the routines with your code in the most optimal way. Here is one specific example from my own WIP engine.

By far the most speed-important code in my engine is the code that transforms every vertex-position and up-vector (AKA normal-vector) from local-coordinates to world-coordinates. After trying all the code organizations I could imagine, I found the fastest code BY FAR was not any of the conventional approaches.

To start with, it was fastest to multiply the vertex-position and up-vector by the transformation matrix "together" ("side-by-side" - in parallel) to produce the transformed versions of both simultaneously. I have never seen this done anywhere else, and almost certainly no general-purpose vector/matrix math library ever considered this, much less did it.

This is even more advantageous if the vertex is part of a surface that needs fancy lighting (so-called "normal mapping", "displacement/parallax mapping" or "parallax occlusion mapping"). In these cases, our engines also need to transform the east-vector (AKA "tangent-vector") and north-vector (AKA "binormal/bitangent-vector") to world-coordinates. Again, the fastest way to transform them is "together" ("side-by-side" - in parallel) in exactly the same way as the vertex-position and up-vector.

Finally, to make these routines as fast as possible, our engines need to pre-fetch the vertex-position and surface-vectors of the next vertex *into cache* (not SIMD registers) at exactly the optimum time *during this transformation*. This may or may-not be easy/practical/optimum when libraries are adopted.
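A scalar sketch of the "side-by-side" idea - transforming a position and its normal in one pass, sharing each matrix row - might look like this (the vertex layout and names are hypothetical, and a SIMD version would go further and share registers):

```cpp
#include <cassert>

// Transform a position and its normal by the same 4x4 row-major
// matrix in a single pass, so both share the matrix-row loads.
struct VertexIn  { float pos[4]; float nrm[4]; };
struct VertexOut { float pos[4]; float nrm[4]; };

void transform_pair(const float m[4][4], const VertexIn& in, VertexOut& out) {
    for (int r = 0; r < 4; ++r) {
        float p = 0.0f, n = 0.0f;
        for (int c = 0; c < 4; ++c) {
            p += m[r][c] * in.pos[c];  // position: row dot position
            n += m[r][c] * in.nrm[c];  // normal: same row, reused
        }
        out.pos[r] = p;
        out.nrm[r] = n;
    }
}
```

With w = 1 for positions and w = 0 for direction vectors, the translation part of the matrix affects only the position.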

Final admission: I care about the speed and quality of my engine, most especially those few routines that consume the most CPU-cycles *by far*. Not everyone cares to be as crazy (or careful or diligent) as I.

Quote:
Original post by bootstrap
-snip-

By far the most speed-important code in my engine is the code that transforms every vertex-position and up-vector (AKA normal-vector) from local-coordinates to world-coordinates. After trying all the code organizations I could imagine, I found the fastest code BY FAR was not any of the conventional approaches.

To start with, it was fastest to multiply the vertex-position and up-vector by the transformation matrix "together" ("side-by-side" - in parallel) to produce the transformed versions of both simultaneously. I have never seen this done anywhere else, and almost certainly no general-purpose vector/matrix math library ever considered this, much less did it.

This is even more advantageous if the vertex is part of a surface that needs fancy lighting (so-called "normal mapping", "displacement/parallax mapping" or "parallax occlusion mapping"). In these cases, our engines also need to transform the east-vector (AKA "tangent-vector") and north-vector (AKA "binormal/bitangent-vector") to world-coordinates. Again, the fastest way to transform them is "together" ("side-by-side" - in parallel) in exactly the same way as the vertex-position and up-vector.

-snip-


Do you use the same matrix for the position and the normal? If that is the case, that behaviour is potentially wrong (for example, if you apply scaling, shearing and/or translation to the position). Unless of course you aren't doing this and I just misunderstood you.

Quote:
Original post by l0calh05t
Quote:
Original post by bootstrap
-snip-

By far the most speed-important code in my engine is the code that transforms every vertex-position and up-vector (AKA normal-vector) from local-coordinates to world-coordinates. After trying all the code organizations I could imagine, I found the fastest code BY FAR was not any of the conventional approaches.

To start with, it was fastest to multiply the vertex-position and up-vector by the transformation matrix "together" ("side-by-side" - in parallel) to produce the transformed versions of both simultaneously. I have never seen this done anywhere else, and almost certainly no general-purpose vector/matrix math library ever considered this, much less did it.

This is even more advantageous if the vertex is part of a surface that needs fancy lighting (so-called "normal mapping", "displacement/parallax mapping" or "parallax occlusion mapping"). In these cases, our engines also need to transform the east-vector (AKA "tangent-vector") and north-vector (AKA "binormal/bitangent-vector") to world-coordinates. Again, the fastest way to transform them is "together" ("side-by-side" - in parallel) in exactly the same way as the vertex-position and up-vector.

-snip-


Do you use the same matrix for the position and the normal? If that is the case, that behaviour is potentially wrong (for example, if you apply scaling, shearing and/or translation to the position). Unless of course you aren't doing this and I just misunderstood you.


No, actually you misunderstood me correctly! :-)

The issue you refer to (methinks) is the wrong result produced by transforming the normal-vector with the same transformation matrix as the vertex-position *AFTER* the transformation matrix has been subjected to any non-uniform scaling or skew/shear operation. And you are correct about that. The typical solution requires the normal-vector to be multiplied by the transpose of the inverse of the transformation matrix, which can be an extremely slow (and possibly problematic) computation.
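A small numeric check of the problem being described, assuming a diagonal (non-uniform) scale, for which the inverse-transpose is simply the reciprocal scale (helper names are illustrative):

```cpp
#include <cassert>

float dot3(const float a[3], const float b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Apply a diagonal (non-uniform) scale to a vector.
void scale3(const float s[3], const float v[3], float out[3]) {
    out[0] = s[0]*v[0]; out[1] = s[1]*v[1]; out[2] = s[2]*v[2];
}

// Inverse-transpose of a diagonal scale: just divide by each factor.
void inv_scale3(const float s[3], const float v[3], float out[3]) {
    out[0] = v[0]/s[0]; out[1] = v[1]/s[1]; out[2] = v[2]/s[2];
}
```

Scaling a tangent and its normal by the same non-uniform matrix leaves them no longer perpendicular; scaling the normal by the reciprocal factors keeps them perpendicular.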

This problem annoyed me. Finally I became determined to escape this disaster. Fortunately, the way out is straightforward indeed, and especially efficient because non-uniform scaling and skew/shear are fairly uncommon, and when they are applied, they are almost always applied only once (or very infrequently) to fundamentally change the shape of the object.

So, when needed, I create the non-uniform scale or skew/shear matrix per usual practice, but I do not fold it into the transformation matrix. Instead, I transform the local-coordinates of every vertex-position in the object - *actually* warping the object (though my engine keeps the originals too) - and transform the vertex vectors by the appropriate matrix as well, which does *not* require any painful or slow computation, because I found the inverse-transpose of a simple non-uniform scale or skew/shear matrix is trivial to create! Yummie!

Now the transformation matrix is ***never*** "perverted", and is always valid to apply to both vertex-positions AND all three of the vertex/surface-vectors in the vertex structures (the east-, north- and up-vectors, AKA the tangent-vector, bitangent/binormal-vector, and normal-vector).

Excellent try! :-)

If I remember correctly, to fix the normal for a non-uniform scale, you do:

normal.x = normal.x / scale.x; // divide by scale.x instead of multiplying
normal.y = normal.y / scale.y; // divide by scale.y instead of multiplying
normal.z = normal.z / scale.z; // divide by scale.z instead of multiplying
normal = unit(normal);         // normalize (make it a unit vector)

I cannot remember the shear process exactly, but it was even faster to compute for both vertex positions and vectors than scale (no divides). I can look in my game-engine code if anyone needs it. Basically, it was something like this: to transform the vertex-position, you create an identity matrix then plug the two skew/shear values into m[1,0] and m[2,0] or m[0,1] and m[2,1] or m[0,2] and m[1,2] then transform the vertex-positions with that. To transform the vertex-normals you put -1 * those skew/shear values into the transpose of those locations then transform the vertex-normals with that. Better send me a PM and make me check my code unless you can see I am right (or see how I am wrong).
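Here is a small check of that claim for a single-axis shear (the 3x3 row-major layout and the particular shear direction are assumptions for illustration; the poster's engine uses 4x4 matrices):

```cpp
#include <cassert>

// Row-major 3x3 matrix times vector.
void mat3_mul(const float m[3][3], const float v[3], float out[3]) {
    for (int r = 0; r < 3; ++r)
        out[r] = m[r][0]*v[0] + m[r][1]*v[1] + m[r][2]*v[2];
}

float dot3s(const float a[3], const float b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}
```

For a single shear S (here y' = y + k*x), S is nilpotent off the diagonal, so its inverse just negates the shear value, and the inverse-transpose moves that negated value to the transposed slot - exactly the "-1 * those skew/shear values into the transpose of those locations" recipe above. Transforming a surface tangent by S and the normal by that inverse-transpose keeps them perpendicular.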

It entirely depends on what you're trying to do. If you're targeting multiple platforms, it might be beneficial to have your own math library that you can retarget. It might be beneficial to have your code more abstracted away from D3D. So, what are your goals for the engine?

The only way it would make sense to use C or assembly for a math library is if you're targeting an 80486. And if it were 1995. Maybe. *

(* Ok, sarcasm aside, if you're doing some seriously heavy math, it might make sense to use targeted intrinsics, but most engines these days aren't CPU-bound most of the time, so spending that much effort on something that doesn't usually matter is usually a complete waste of time.)

Quote:
So, what are your goals for the engine?


My two main goals are (1) to gain a deeper understanding of the 3D mathematics in games (I know I could just read a book, but there is more to learn from the math of an engine by coding it than by reading about it), and (2) to create a nice simple engine that I can use to create Tetris, Pacman, and a small action/adventure game for my resume.

Now I know that I could use the DirectX or OpenGL APIs instead of "re-inventing the wheel" - I'm familiar with both - but I've found that my understanding of game mathematics has become a lot stronger throughout this journey in engine programming.

Currently I'm learning OpenGL while using DirectX for games, but I would like to make my little engine multi-API compatible. Once I finish the engine I'm planning on making it open source, with comments in the code, for others to use, build upon, or even lean on while making their own engine. I know engines take a lot of work, but I really think more people should try to create a simple engine for the experience (although only after making a few games beforehand).
