ben_02

OpenGL Scene graphs, hardware transforms, and matrix types


Hello, I am implementing a scene graph that will be traversed each frame to determine which nodes are visible; the visible nodes are then placed into a list sorted by texture/shader/etc. However, I'm not sure what approach to take with the matrix transforms. If the scene graph is traversed completely before any rendering takes place, how can I make use of the OpenGL matrix stack? Are these transformations commonly done in software or in hardware? I'm also having trouble deciding what kind of matrix to use at the transformation nodes: should I just go for a 4x4, or a 4x3 to save some processing time? I've been reading through the scene graph resources but haven't found answers to these questions yet. Any suggestions?
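
Roughly, the per-frame traversal I have in mind looks like the sketch below (just an illustration; the matrix class, bounding-volume test, and other names are placeholders, not working code):

#include <vector>

// Placeholder types -- a real version needs a proper matrix class,
// bounding volume, and visibility/frustum test.
struct Matrix4x4   { float m[16]; };
struct BoundingBox { float min[3], max[3]; };
Matrix4x4 multiply(const Matrix4x4& a, const Matrix4x4& b);       // CPU-side concatenation
bool isVisible(const BoundingBox& bounds, const Matrix4x4& world); // e.g. AABB vs. frustum

struct Node {
    Matrix4x4          local;     // transform relative to the parent node
    BoundingBox        bounds;
    std::vector<Node*> children;
};

struct RenderEntry { Node* node; Matrix4x4 world; };  // later sorted by texture/shader/etc.

// Depth-first traversal: accumulate world transforms and collect visible nodes.
void traverse(Node* node, const Matrix4x4& parentWorld, std::vector<RenderEntry>& visible)
{
    Matrix4x4 world = multiply(parentWorld, node->local);
    if (isVisible(node->bounds, world))
        visible.push_back({node, world});
    for (Node* child : node->children)
        traverse(child, world, visible);
}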

Get a really solid matrix class together and do your own calculations. Visibility should be calculated using a minimal number of points (if you are using AABBs or other bounding volumes), so the performance hit will be small. Then either load the matrix at each node with glLoadMatrixf, or re-apply the transformations with glRotatef/glTranslatef as you step through the graph.
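
For the glLoadMatrixf route, a minimal sketch might look like the following, assuming each visible node stored its world transform as a column-major float[16] (the layout OpenGL expects) during the traversal; the RenderItem/drawMesh names are just placeholders:

#include <GL/gl.h>
#include <vector>

struct RenderItem {
    GLfloat world[16];   // world transform computed on the CPU during traversal
    // texture/shader/mesh handles used for sorting and drawing would go here
};

void drawSortedList(const std::vector<RenderItem>& items, const GLfloat view[16])
{
    glMatrixMode(GL_MODELVIEW);
    for (const RenderItem& item : items) {
        glLoadMatrixf(view);        // start from the camera/view matrix
        glMultMatrixf(item.world);  // then apply this node's precomputed world transform
        // drawMesh(item);          // issue the actual draw calls here
    }
}

With this approach the hierarchy concatenation happens in software during the traversal, and OpenGL only receives the finished matrix for each node.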
