tconkling

OpenGL Transform matrix calculation



This is a very basic question about 3D game engines and the role of 3D hardware in transformation calculations.

If you use a library like OpenGL to draw transformed polygons to the screen, you might push and pop matrices onto and off of the library's modelview matrix stack to achieve the correct transformation for each object being drawn. Using this method, you might be able to avoid performing expensive matrix multiplications in software (right? -- I'm a 3D newbie, so correct me if my assumptions are incorrect).

However, if you're writing a game engine, you'll probably need access to these transform matrices for many different reasons -- collision detection, object picking, game-specific logic, etc. -- so you'll be performing the multiplications in software and storing the results somewhere accessible to your engine (right?). So it seems like a game engine ends up having to perform model transforms twice for each object in the game -- once in hardware (inexpensive) while drawing transformed polys to the screen, and once in software (expensive) for engine operations unrelated to drawing.

Is my understanding of the situation correct? And is there any way to have the computer's 3D hardware perform -- and return the results of -- various linear algebra operations for purposes unrelated to rendering? Since 3D hardware is inherently good at these sorts of operations, it seems silly to have the main CPU do them at all.
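To make the first part concrete, here is roughly what I mean by the push/pop approach -- a minimal fixed-function OpenGL sketch (the Object struct and drawObject function are just names I made up for illustration):

    #include <GL/gl.h>

    // Hypothetical per-object state -- just enough for the example.
    struct Object {
        float x, y, z;   // world-space position
        float angle;     // rotation about the Y axis, in degrees
    };

    void drawObject(const Object& obj)
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();                          // save the current modelview matrix
        glTranslatef(obj.x, obj.y, obj.z);       // object-to-world translation
        glRotatef(obj.angle, 0.0f, 1.0f, 0.0f);  // object-to-world rotation
        // ... submit the object's polys here (glBegin/glEnd, glDrawElements, ...) ...
        glPopMatrix();                           // restore for the next object
    }

Those matrix multiplications happen inside the GL; my code never sees the resulting matrix.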

It is true that 3D hardware is very good at those sorts of operations, but unless you are doing GPGPU stuff, the graphics pipeline is very much optimized in one direction, so reading those results back would be expensive. This is changing, though, and things like HavokFX aim to use graphics hardware for exactly what you propose.

As for needing to perform the calculations twice: this is one of the reasons why spatial partitioning and bounding volumes are used as early outs. Static worlds/levels are kept in model space for that reason, as they are generally involved in the most collision tests. For movable objects, bounding volumes are used to quickly reject most of the polygons, so only a few need to be transformed and tested for collisions (if you even need to do per-triangle collisions at all).
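A bounding-sphere check is about the cheapest early out there is -- a minimal sketch, assuming each movable object keeps a world-space centre and radius (the names are illustrative):

    struct Sphere { float x, y, z, r; };  // world-space centre and radius

    bool spheresOverlap(const Sphere& a, const Sphere& b)
    {
        float dx = a.x - b.x;
        float dy = a.y - b.y;
        float dz = a.z - b.z;
        float rsum = a.r + b.r;
        // Compare squared distances so no sqrt is needed.
        return dx*dx + dy*dy + dz*dz <= rsum*rsum;
    }

Only when that returns true do you bother transforming and testing actual geometry.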

Regards,
ViLiO

Quote:
Original post by tconkling
So it seems like a game engine ends up having to perform model transforms twice for each object in the game -- once in hardware (inexpensive) while drawing transformed polys to the screen, and once in software (expensive) for engine operations unrelated to drawing.


You'll probably stall the pipeline trying to read matrices back from the GPU.

And your CPU-calculated matrices are probably not as expensive as you think. A profiler is your friend.

If you do find your matrix/matrix or vector/matrix multiplications are costly, then use your library's functions (or write your own) to do these ops using the CPU's vector capabilities (e.g. SSE, VMX, whatever).
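For example, a single vector/matrix transform with SSE intrinsics might look like this -- just a sketch, assuming 16-byte-aligned data and OpenGL-style column-major matrix layout:

    #include <xmmintrin.h>  // SSE intrinsics

    // out = m * v for a column-major 4x4 matrix; all pointers must be
    // 16-byte aligned for _mm_load_ps/_mm_store_ps.
    void transformVec4(const float m[16], const float v[4], float out[4])
    {
        __m128 x = _mm_set1_ps(v[0]);
        __m128 y = _mm_set1_ps(v[1]);
        __m128 z = _mm_set1_ps(v[2]);
        __m128 w = _mm_set1_ps(v[3]);

        __m128 c0 = _mm_load_ps(&m[0]);   // the four columns of the matrix
        __m128 c1 = _mm_load_ps(&m[4]);
        __m128 c2 = _mm_load_ps(&m[8]);
        __m128 c3 = _mm_load_ps(&m[12]);

        // out = c0*x + c1*y + c2*z + c3*w
        __m128 r = _mm_add_ps(_mm_add_ps(_mm_mul_ps(c0, x), _mm_mul_ps(c1, y)),
                              _mm_add_ps(_mm_mul_ps(c2, z), _mm_mul_ps(c3, w)));
        _mm_store_ps(out, r);
    }

A decent maths library will already do this (and batch it), so profile before rolling your own.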

While it's true that performing matrix transformations is more efficient in hardware than in software, the difference isn't that big. In fact, I'd be willing to go out on a limb and say that your CPU can transform more vertices per second than your GPU (when dedicated to the task), simply because the GPU is designed to do so many other tasks at the same time, whereas the CPU is far more flexible. But I digress.

Transforming a vertex twice is undeniably more demanding than transforming it once, which is exactly why we go to such lengths to design engines that don't require you to do so. An engine that needs all of its vertices transformed in software for physics purposes is an engine in need of optimisation. Usually, generalisations and approximations are made so that the physical working set is smaller than the graphical one: it is (generally) better to use generous axis-aligned bounding surfaces than accurate transformed ones - sacrificing some culling efficiency to save on transformation overhead. If a model has 3000 vertices but is represented only by its centroid and a bounding radius, then you'd be a fool to worry about that one extra transform in three thousand.
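To put numbers on that - a minimal sketch, assuming a column-major (OpenGL-style) model matrix with no scale, a unit-length plane normal, and made-up names throughout:

    struct Vec3 { float x, y, z; };

    // Transform a point by a column-major 4x4 matrix (w assumed to be 1).
    Vec3 transformPoint(const float m[16], const Vec3& p)
    {
        Vec3 r = { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
                   m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
                   m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
        return r;
    }

    // One transform for the whole model: move the centroid into world space,
    // then test the sphere against a plane (a*x + b*y + c*z + d = 0).
    bool sphereBehindPlane(const Vec3& centroid, float radius,
                           const float model[16], const float plane[4])
    {
        Vec3 c = transformPoint(model, centroid);
        float dist = plane[0]*c.x + plane[1]*c.y + plane[2]*c.z + plane[3];
        return dist < -radius;   // wholly on the negative side: reject it
    }

One transformPoint call stands in for 3000 per-vertex transforms.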

As for picking: note that it is considerably more productive to untransform the picking ray into object space than it is to transform the object into screen space, so the accuracy tradeoff doesn't really apply.
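In code (reusing Vec3 and transformPoint from the sketch above; invModel is whatever your maths library gives you as the inverse of the model matrix):

    struct Ray { Vec3 origin; Vec3 dir; };

    // Directions transform with w = 0: rotation and scale, but no translation.
    Vec3 transformDirection(const float m[16], const Vec3& d)
    {
        Vec3 r = { m[0]*d.x + m[4]*d.y + m[8]*d.z,
                   m[1]*d.x + m[5]*d.y + m[9]*d.z,
                   m[2]*d.x + m[6]*d.y + m[10]*d.z };
        return r;
    }

    Ray rayToObjectSpace(const float invModel[16], const Ray& world)
    {
        Ray local;
        local.origin = transformPoint(invModel, world.origin);
        local.dir    = transformDirection(invModel, world.dir);  // renormalise if your intersection code expects a unit direction
        return local;   // intersect this against the untransformed mesh
    }

Two transforms per object, instead of one per vertex.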

Regards
Admiral
