mrtroy

are matrix operations done in hardware?


Recommended Posts

Do glRotate, glTranslate, and glScale get performed by the GPU? If I associate a 4x4 matrix with an object, perform the rotate, translate, and scale on it one time at initialization, and then call glLoadMatrixf(associated_matrix) when I want to draw it, would that increase frame rates?

Thanks a bunch
-Troy

Quote:
Original post by mrtroy
do glRotate, glTranslate and glScale get performed by the graphics GPU?

Usually no. Most drivers just implement it in software on the CPU side.

Quote:
If I associate a 4x4 matrix with an object, perform the rotate, translate, and scale on it one time at initialization, and then call glLoadMatrixf(associated_matrix) when I want to draw it, would that increase frame rates?

It would be faster, yes, but probably not by enough to make much of a difference.

Quote:
Original post by mrtroy
do glRotate, glTranslate and glScale get performed by the graphics GPU?

If I associate a 4x4 matrix with an object, perform the rotate, translate, and scale on it one time at initialization, and then call glLoadMatrixf(associated_matrix) when I want to draw it, would that increase frame rates?

Thanks a bunch
-Troy


Yes, T&L video cards have been around for years now. But probably not: polygon count, shaders, texture loads, etc. have more to do with any speed increase than the matrix math does.

The modifications that those functions perform to the current matrix are handled by software (generally SSE accelerated). The actual application of the matrix to the object in question is done in hardware.

Hmm, the Delphi3D FAQ says:

Q: How do I enable hardware T&L?

A: You don't. If your video card has hardware T&L, your OpenGL drivers will automatically make use of it. The only requirement is that you use OpenGL's transformation and lighting functionality, rather than rolling your own.

Transform and lighting and building a matrix with rot/trans/scl are two completely different things.

T&L is the actual conversion of vertex data from local space into view space. The card does these transformations in hardware by multiplying the vertices and normals by the model view matrix you have specified.

You do not have to use T&L -- you can simply set the model view matrix to identity and transform all of the vertex data yourself on the CPU before sending it to the hardware. But even still the data will be transformed by the identity matrix in hardware anyway. :)

Quote:
Original post by bpoint
Transform and lighting and building a matrix with rot/trans/scl are two completely different things.

T&L is the actual conversion of vertex data from local space into view space. The card does these transformations in hardware by multiplying the vertices and normals by the model view matrix you have specified.

You do not have to use T&L -- you can simply set the model view matrix to identity and transform all of the vertex data yourself on the CPU before sending it to the hardware. But even still the data will be transformed by the identity matrix in hardware anyway. :)


Ah, I see now -- T&L isn't the whole process, just the hardware multiplying the vertices and normals by the modelview matrix to speed up that math. :)

It is possible to transfer matrix operations to the GPU.

If you use HLSL, GLSL, Cg, or some other shader language, you can perform certain matrix operations in the shader, although the operations you asked about, such as glRotate, are done on the CPU. You can transfer as many calculations to the GPU as you wish, though it usually isn't advisable; there are limits, such as the number of instructions a shader can contain.

For example (pseudo-shader):

float4 worldspaceposition = mul(worldmatrix,in.position);
float4 cameraspaceposition = mul(cameramatrix,worldspaceposition);
float4 projectionspaceposition = mul(projectionmatrix,cameraspaceposition);

Each of those lines takes something like 4 GPU clocks, so, for example, the camera and projection matrices should be concatenated on the CPU, since they only change about once per frame.

GPUs are superior at matrix and vector math, and I was personally surprised that I actually gained FPS when I did more operations on the GPU (X800) and fewer on the CPU (AMD64).

Thanks for all the answers and advice!

If the performance gain is minimal I'll keep using these methods, but I might look into an open-source library that does matrix operations and run some comparison tests to see how my scene time changes.

Thanks again
-Troy

Don't be surprised if you make the changes and the framerate is exactly the same. To save yourself some time, find the bottleneck first before you go optimizing something. Without seeing your app I'm 99% sure the bottleneck isn't the matrix operations.
