Archived

This topic is now archived and is closed to further replies.

D3DMATRIX hardware acceleration?


hlivingstone_    122
I have recently written a 3d vector class and a 4x4 matrix class for a game engine I am currently developing, using directx 8.1. The classes are completely compatible with their respective directx counterparts; via a reinterpret_cast. My question is not whether or not this is good/safe programming. My question is: are the D3DVECTOR/D3DXVECTOR and D3DMATRIX/D3DXMATRIX class''s hardware accelerated? Thus giving them a significant edge over the ones I have written. Do the D3DX* functions like D3DXMatrixMultiply serve any purpose besides removing the need to write your own, i.e. do they use the graphics hardware, or just the CPU? thanks alot, hans livingstone

masonium    118
The D3DX math libraries are optimized for various processors (I believe), so most likely they're faster than most libraries. I think that's the reason they're so bloated: they have multiple versions of the same code.

hlivingstone_    122
Interesting, but as far as you guys know, the matrix multiplication code and the like isn't run on the GPU? I guess I'm just too lazy to time my own code against theirs. Eh, I'll get back to you with the results tomorrow; I'm going to sleep now.

jasonsa    384
Right. D3DX math ops are optimized for various CPUs, so they're fast. But you don't use these classes directly on the GPU -- it doesn't work like that.

You can hand off work (matrix multiplies, etc.) to the GPU via vertex/pixel shaders. D3D's fixed-function pipeline allows you to put some of the graphics ops on the hardware (hardware T&L, simple lighting, etc.), but you're pretty limited compared to vs_2_0 shaders. See www.ati.com/developer and www.nvidia.com/developer for example shaders that show you what you can do in hardware.

D3D's high-level shader language (HLSL) allows shader code to look like C++, so you don't have to learn GPU asm. D3DX will compile it down to GPU asm for you, so learning it is pretty easy.

Jason

hlivingstone_    122
In case anyone was wondering, here are the results of my code vs. the D3DX code:

On an AMD 1900+, GF4 4600, 512 MB DDR, in D3D debug mode:
100,000 matrix multiplies:
D3DX = 30 ms
my code = 80 ms

1,000,000 matrix multiplies:
D3DX = 70 ms
my code = 861 ms

On a P3 700 MHz, ATI 8 MB mobile, 256 MB SDRAM, in D3D debug mode:
100,000 matrix multiplies:
D3DX = 40 ms
my code = 300 ms

1,000,000 matrix multiplies:
D3DX = 170 ms
my code = 2934 ms

Well, if nothing else, that sure answers my question.

hlivingstone_    122
Actually, I have just realized that a lot of that slowdown was due to memory copying and not the matrix multiply code. However, there is still about a 10 ms difference between the two without the memory copying.
