D3DX vs own matrix/vector classes

Recommended Posts

MePHyst0    232
Hi all, I'm just wondering which is better (performance-wise and implementation-time-wise): using the D3DX matrix/vector types, or writing your own matrix/vector classes. I think using D3DX will save a lot of time, but you (probably) won't learn anything, since you won't see what's going on behind the scenes. On the other hand, writing a good matrix/vector library of your own is a lot of work, but you'll learn a lot of things, and it could be fast as well. What's your opinion?
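For reference, the "write your own" route the question describes might start out as small as this (a minimal sketch in plain C++, my own illustration and not drawn from D3DX's source):

```cpp
#include <cmath>

// Minimal hand-rolled 3D vector -- the kind of "behind the scenes"
// code you end up writing yourself instead of using D3DXVECTOR3.
struct Vec3 {
    float x, y, z;
};

inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

inline Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

inline float length(Vec3 v) { return std::sqrt(dot(v, v)); }
```

Getting these basics right is quick; the real time sink (and the real learning) is in the matrix inverse, quaternion, and transform code that a library like D3DX already ships.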

Metus    168
If you already know how to compute a matrix inverse, convert quaternions to matrices, and all that fancy stuff, I'd recommend you use D3DX. All of its matrix operations are highly optimized, with hand-written Intel and AMD assembly contributed directly by their development teams, so there is probably no chance in hell that you can write a faster library than D3DX.

circlesoft    1178
Quote:
Original post by Metus
there is probably no chance in hell that you can write a faster lib than d3dx

I'd say this is correct. D3DX is one of the most optimized math libraries out there - it is as fast as you can get. Both Intel and AMD have teams working on it.

Math libraries take a while to write, and are pretty boring. It is a good educational experience, though.

Armadon    1091
Hi there Mephysto,
I do agree with both Metus and Circlesoft. The extensive list of D3DX classes and functions in the DX SDK is an absolute breeze to work with, and on top of that they are extremely streamlined and performance-oriented. What you could do as a test is write your own classes to see how the internals work - you never learn something in return for nothing.

After that you can play with them and see how fast each one is compared to its D3DX counterpart, just for fun.
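A minimal timing harness for that kind of comparison could look like this (a sketch only: the naive 4x4 multiply stands in for "your own" class, and on Windows you would time D3DXMatrixMultiply over the same data on the other side):

```cpp
#include <chrono>

// Naive 4x4 matrix multiply -- a stand-in for a hand-rolled implementation.
struct Mat4 { float m[4][4]; };

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};  // zero-initialised accumulator
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Time `iterations` back-to-back multiplies, returning milliseconds.
double timeMuls(int iterations) {
    Mat4 a{}, b{};
    for (int i = 0; i < 4; ++i) a.m[i][i] = b.m[i][i] = 1.0f;  // identities
    auto start = std::chrono::steady_clock::now();
    Mat4 r = a;
    for (int n = 0; n < iterations; ++n) r = mul(r, b);
    auto end = std::chrono::steady_clock::now();
    volatile float sink = r.m[0][0];  // keep the loop from being optimised away
    (void)sink;
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Verify correctness against the library first, then compare timings over large iteration counts; a single call tells you nothing.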

Guest Anonymous Poster
One case where you will need to roll your own (or use another library) is if you're making a multiplayer game and plan on having Linux servers, since D3DX is Windows-only...

deffer    754
Another case is when you want to take advantage of SSE instructions yourself - the D3DX routines won't help you with that.

Other than that, give yourself a break and just use what you've already got; it's worth it.

S1CA    1418
Quote:
Original post by deffer
Another case is when you would want to take advantage of sse instructions. D3DX routines would not help you with this.


That's actually somewhat incorrect. Many D3DX maths functions do take full advantage of additional 3DNow!, SSE, and SSE2 instructions as well as hand tuned x86/x87.

You are correct that simple *single* operations such as D3DXVec4Add() are straight C++ code, as seen in d3dx9math.inl. The reason is that if you only need a *single* vector addition, the straight inlined C++ code will usually perform as well as, and sometimes even better than, the equivalent SIMD code (particularly with the overhead of supporting multiple flavours of SIMD).

Where SIMD instruction sets such as SSE are of most benefit is in processing large lists of vectorised data... and so the D3DX*Array() type functions, and other complex operations (where it will be a win), do take full advantage of SIMD instructions.
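The shape of such an array-style operation, in the spirit of D3DXVec4TransformArray, can be sketched as follows (plain scalar C++ here for illustration; the point is the API shape, since D3DX would dispatch this loop to an SSE/3DNow! path internally):

```cpp
#include <cstddef>

// One call transforms a whole list of vectors, so any per-call SIMD
// dispatch/setup cost is amortised over the entire array -- exactly
// where the SIMD paths in D3DX pay off. Row-vector convention: v * M.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

void transformArray(Vec4* out, const Vec4* in, const Mat4& t, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i) {
        const Vec4& v = in[i];
        out[i].x = v.x * t.m[0][0] + v.y * t.m[1][0] + v.z * t.m[2][0] + v.w * t.m[3][0];
        out[i].y = v.x * t.m[0][1] + v.y * t.m[1][1] + v.z * t.m[2][1] + v.w * t.m[3][1];
        out[i].z = v.x * t.m[0][2] + v.y * t.m[1][2] + v.z * t.m[2][2] + v.w * t.m[3][2];
        out[i].w = v.x * t.m[0][3] + v.y * t.m[1][3] + v.z * t.m[2][3] + v.w * t.m[3][3];
    }
}
```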


The hand tuned SIMD in D3DX is beatable, but only if you:

a) are extremely good at low level profiling and optimisation (most people think they are, most aren't - MS had help for D3D/D3DX...), and have deep knowledge of all of the CPUs you're targeting (e.g. caching and pipeline differences between processors in the same family)

and/or

b) can lift the constraints of precision/accuracy/error (e.g. approximate square roots etc)

and/or

c) can lift the constraints of having to support multiple SIMD types from a single library - there's overhead, extremely small overhead, but still overhead involved with selecting the correct SIMD implementation for the current CPU type.

and

d) can lift the constraints of being a separate closed source library (e.g. inlined intrinsics)


With all the same constraints you'd be hard pushed to beat it...

