DemonDar

OpenGL Transformations


I have seen many 3D engines which use both DirectX and OpenGL; they require a custom matrix class in order to provide the same transformations on both APIs. So the question is: with OpenGL, is it really worth using glRotate, glTranslate and glScale when I can simply set the GL matrix from my own matrix class?

I think that glRotate, glTranslate etc. are provided for ease of use, so would it always be better to use matrices directly? If I want to write a custom scene manager with an animator for each node, I have to recompute all the matrices every frame: I take the matrix of a parent node and multiply the child's transformation by it, which gives me the absolute transformation of that child, which can then be rendered (provided the parent did the same with its own parent, of course).

I tried to imagine a system which uses only OpenGL functionality to do this, but I found that there is a limit on the matrix stack, so the maximum scene hierarchy depth is 32 (or more on some machines, but if I want compatibility I have to assume the lower limit). Using OpenGL for this seems both fast (not that speed is really needed here) and quite flexible, apart from the depth limit of 32. What are your thoughts? Has anyone seen this system used instead of the widespread parent-child model, which is not limited to a depth of 32? Or are glRotate, glTranslate etc. too slow because they need to communicate with the video card? (I suppose this depends on the video driver.)
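Roughly, I mean something like this (just a sketch with a hypothetical Node type, using the GL matrix stack):

```cpp
#include <GL/gl.h>
#include <vector>

// Hypothetical scene node: a local transform plus children.
struct Node {
    float local[16];                         // local transform, column-major
    std::vector<Node*> children;
    void draw() const { /* issue this node's draw calls */ }
};

// Each recursion level uses one slot of the modelview stack,
// which is only guaranteed to be 32 entries deep.
void drawHierarchy(const Node& node)
{
    glPushMatrix();                 // save the parent's absolute transform
    glMultMatrixf(node.local);      // absolute = parent * local
    node.draw();                    // render with the combined matrix
    for (const Node* child : node.children)
        drawHierarchy(*child);      // children inherit the current matrix
    glPopMatrix();                  // restore for the next sibling
}
```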

I'm asking this because I profiled my animated scene and found that 27% of the CPU time goes into computing matrices. So maybe OpenGL provides some useful functionality to reduce the load of that task.


> I have seen many 3D engines which use both DirectX and OpenGL; they require a custom matrix class in order to provide the same transformations on both APIs. So the question is: with OpenGL, is it really worth using glRotate, glTranslate and glScale when I can simply set the GL matrix from my own matrix class?

No. The GL 3.0 core profile does not provide the fixed-function transform calls at all, so your only option there is to pass the matrix as a uniform variable to your shaders.
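Something along these lines (a sketch; it assumes a linked program whose vertex shader declares a mat4 uniform named mvp, and that your matrix class hands you a column-major float[16]):

```cpp
#include <GL/glew.h>   // GL 2.0+ entry points via your loader of choice

// Sketch: upload a matrix you computed yourself as a uniform.
// Assumes the vertex shader contains:
//   uniform mat4 mvp;
//   ... gl_Position = mvp * vec4(position, 1.0); ...
void setMvp(GLuint program, const float* mvp16 /* column-major float[16] */)
{
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "mvp");
    glUniformMatrix4fv(loc, 1, GL_FALSE, mvp16); // GL_FALSE: no transpose needed
}
```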


> I think that glRotate, glTranslate etc. are provided for ease of use, so would it always be better to use matrices directly?

Always use matrices directly.


> I'm asking this because I profiled my animated scene and found that 27% of the CPU time goes into computing matrices.

Wow, that must be one hell of a lot of matrices! (The other possibility is that something like a division by zero is firing fairly frequently, which would slow things down.) If you *need* the additional speed, it may be worth looking at the SSE CPU intrinsics in <xmmintrin.h> (or NEON for tablets/smartphones).
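For example, a column-major 4x4 multiply with SSE might look roughly like this (just a sketch, assuming 16-byte-aligned float arrays):

```cpp
#include <xmmintrin.h>

// out = a * b; all arguments are column-major float[16], 16-byte aligned.
void mat4Mul(float* out, const float* a, const float* b)
{
    __m128 a0 = _mm_load_ps(a + 0);   // the four columns of a
    __m128 a1 = _mm_load_ps(a + 4);
    __m128 a2 = _mm_load_ps(a + 8);
    __m128 a3 = _mm_load_ps(a + 12);
    for (int i = 0; i < 4; ++i) {
        // out column i = a * (column i of b)
        __m128 col = _mm_mul_ps(a0, _mm_set1_ps(b[i * 4 + 0]));
        col = _mm_add_ps(col, _mm_mul_ps(a1, _mm_set1_ps(b[i * 4 + 1])));
        col = _mm_add_ps(col, _mm_mul_ps(a2, _mm_set1_ps(b[i * 4 + 2])));
        col = _mm_add_ps(col, _mm_mul_ps(a3, _mm_set1_ps(b[i * 4 + 3])));
        _mm_store_ps(out + i * 4, col);
    }
}
```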

I'm not sure if I'm interpreting your question right, but I think the biggest thing you could do to optimise your program would be to cache the transformations of each node: only recalculate a node's matrix if it has been translated/rotated/scaled since the last frame, and store the result so you avoid recomputing it on subsequent frames.
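A sketch of that idea (hypothetical Node/Matrix4 types, not from any particular engine):

```cpp
#include <vector>

// Stand-ins for the engine's own matrix class:
struct Matrix4 { float m[16]; };                   // column-major
Matrix4 operator*(const Matrix4& a, const Matrix4& b)
{
    Matrix4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
    return r;
}

struct Node {
    Matrix4 local;               // set via translate/rotate/scale
    Matrix4 world;               // cached absolute transform
    bool dirty = true;           // flipped whenever 'local' changes
    std::vector<Node*> children;
};

// Recompute world matrices only where something actually changed.
void update(Node& n, const Matrix4& parentWorld, bool parentChanged)
{
    bool changed = n.dirty || parentChanged;
    if (changed) {
        n.world = parentWorld * n.local;   // parent first, then local
        n.dirty = false;
    }
    for (Node* child : n.children)
        update(*child, n.world, changed);  // a changed parent dirties the subtree
}
```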

I only have moving objects, like thousands of asteroids rendered with a few draw calls. And I had to split the transformation hierarchy, using composition, to speed the transformation time up by about 5%.

Doesn't GL 3.0 provide a glLoadMatrixf-style call? What about backwards compatibility? Anyway, I'm using OpenGL 2.1 right now.


> Doesn't GL 3.0 provide a glLoadMatrixf-style call? What about backwards compatibility?

You'll be looking to use one of the glUniform calls (glUniformMatrix4fv in this case) to set the model-view-projection matrix in your shader. In GL 2.1, glRotate/glScale etc. simply feed the built-in gl_ModelViewProjectionMatrix uniform that GLSL exposes. Hardware instancing may also be something that could help with those asteroids.
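Instancing would look roughly like this (a sketch; it needs GL 3.1 or the ARB_draw_instanced extension, and it assumes a uniform array named modelMatrices in the shader):

```cpp
#include <GL/glew.h>   // GL 3.x entry points via your loader of choice

// Sketch: draw 'instanceCount' asteroids in one call. Assumes the
// vertex shader declares:
//   uniform mat4 modelMatrices[256];   // indexed with gl_InstanceID
void drawAsteroids(GLuint program, const float* matrices /* float[16 * N] */,
                   GLsizei vertexCount, GLsizei instanceCount)
{
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "modelMatrices");
    glUniformMatrix4fv(loc, instanceCount, GL_FALSE, matrices);
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);
}
```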
