Is GLM fast enough for games?

9 comments, last by RobTheBloke 10 years, 10 months ago

So I'm starting work on a little game engine with a friend of mine. As of now I'm tasked with the graphics side of things. I'm willing and able to write proper vector, matrix, and quaternion classes with SIMD/SSE, but GLM seems attractive.

What I would like to know is: is GLM fast? Fast enough for a simple game? Are there benchmarks comparing it with someone's hand-rolled code?

Obviously if it's fast it might just be advantageous to use GLM instead of writing and debugging my own stuff. Another question is, would there be compatibility or speed issues using GLM with Direct3D?

Thanks a ton.


I have no idea what I'm talking about, but I just introduced about 6 lines of GLM to replace 30 lines of code I had, and instead of taking about 6 seconds to start up, now it takes 30+ seconds. This is in the init phase of the game. I tried using it during the game loop and it froze. I didn't wait long enough to see if it ever unfroze, so I can't say for sure whether GLM was really slow or whether I made a mistake in my code. Also, on my friend's computer there is never a delay no matter what I do (she has a gaming rig), so it could just be that I have a crap computer. Hope that helps :-)

GLM the OpenGL mathematics library?


Speed and performance are all relative. Some games will push out billions of polygons per frame. Some games will have minimal to no graphics. What is perfect for one game may be unusable in another.

What is perfect for one game may be unusable in another.

Yeah, but when talking about vector math operations I don't think anyone has ever complained that it was "too fast".

Take into consideration that this engine would want to push polygons in the hundred-thousand to one-million range.

Obviously if it's fast it might just be advantageous to use GLM instead of writing and debugging my own stuff. Another question is, would there be compatibility or speed issues using GLM with Direct3D?

GLM is a math library that tries to stay close to OpenGL conventions (e.g. vec4 instead of float4); it doesn't depend on OpenGL calls as far as I know (it shouldn't have any reason to). In any case, you'll have to watch out for the column-major matrices and right-handed coordinate system that GLM uses.

Here's a link I found http://msdn.microsoft.com/en-us/library/windows/desktop/bb204853(v=vs.85).aspx
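To make the conventions point concrete, here's a minimal sketch of handing a GLM matrix to a D3D11 constant buffer. The struct and names are made up, and whether you need a transpose depends on your cbuffer packing (HLSL defaults to column_major, which matches GLM's memory layout) and on whether your shader does mul(M, v) or mul(v, M). Also note that GL and D3D use different clip-space depth ranges ([-1, 1] vs [0, 1]), so a GLM projection matrix may need adjusting.

    // Sketch: uploading a GLM matrix to a D3D11 constant buffer.
    // Assumes the HLSL cbuffer keeps the default column_major packing
    // and the shader does mul(worldViewProj, pos); names are hypothetical.
    #include <glm/glm.hpp>
    #include <d3d11.h>

    struct PerObjectCB {              // must match the cbuffer in HLSL
        glm::mat4 worldViewProj;      // GLM stores column-major in memory
    };

    void UploadMatrix(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                      const glm::mat4& wvp)
    {
        PerObjectCB data;
        data.worldViewProj = wvp;     // no transpose needed with default
                                      // column_major packing; if the cbuffer
                                      // is declared row_major, upload
                                      // glm::transpose(wvp) instead
        ctx->UpdateSubresource(cb, 0, nullptr, &data, 0, 0);
    }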

You can check GLM sources if you're doubting their quality.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

I am using GLM. I can't speak to performance, but its API is really good and does exactly what you want and no more. I couldn't ask for a better math library.

I have no idea what I'm talking about, but I just introduced about 6 lines of GLM to replace 30 lines of code I had, and instead of taking about 6 seconds to start up, now it takes 30+ seconds.

Possibly caused by the initialization time for the static SSE constants?


I tried using it during the game loop and it froze.

I assume that's because you're not aligning your data structures properly, so the SSE instructions are crashing the app. An easy way to tell is to look at the addresses in a debugger: if the hex memory address ends in anything other than zero, your data structure isn't 16-byte aligned.
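For what it's worth, here's a quick sketch of checking this in code rather than eyeballing the debugger. SSE loads like _mm_load_ps want 16-byte alignment, i.e. an address that is a multiple of 16 (last hex digit 0):

    #include <cstdint>
    #include <cstdio>

    struct alignas(16) Vec4 {   // C++11; older MSVC: __declspec(align(16))
        float x, y, z, w;
    };

    static bool IsAligned16(const void* p)
    {
        // 16-byte-aligned addresses have their low 4 bits clear,
        // i.e. the hex representation ends in 0.
        return (reinterpret_cast<std::uintptr_t>(p) & 0xF) == 0;
    }

    int main()
    {
        Vec4 v;
        std::printf("%p -> %s\n", (void*)&v,
                    IsAligned16(&v) ? "aligned" : "UNALIGNED");
        // Note: stack and static objects honor alignas, but plain
        // new/malloc may not guarantee 16 bytes, which is a classic
        // source of the crashes described above.
    }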

The performance of glm shouldn't matter too much, relatively speaking. A typical sequence for an object in a game might look something like this:

  • Declare a transformation matrix for the object.
  • Perform a few matrix ops (typically no more than 4 or 5).
  • Load the matrix to a shader uniform.
  • Draw hundreds or thousands of triangles.

So the actual proportion of your program where your choice of matrix library can have a measurable impact is really quite minuscule; if it's otherwise, you may be doing something highly unusual.
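To put that sequence in code, here's a rough sketch in GL terms (the uniform location, VAO, and counts are placeholders; note that glm::rotate takes radians in current GLM versions):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>
    // plus your GL loader of choice (GLEW, etc.)

    void DrawObject(GLuint mvpLocation, GLuint vao, GLsizei indexCount,
                    const glm::mat4& viewProj, const glm::vec3& position,
                    float angleRadians)
    {
        // A handful of matrix ops per object...
        glm::mat4 model = glm::translate(glm::mat4(1.0f), position);
        model = glm::rotate(model, angleRadians, glm::vec3(0.0f, 1.0f, 0.0f));
        glm::mat4 mvp = viewProj * model;

        // ...one uniform upload (GL_FALSE: GLM is already column-major)...
        glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));

        // ...and then hundreds or thousands of triangles, which is where
        // the frame time actually goes.
        glBindVertexArray(vao);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    }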

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

Unless you have a lot of experience and invest a lot of time, GLM is as good as or better than anything you can write, performance-wise. It's likely better than anything I could write in finite time, anyway.

That, and it just works, and it "looks" the same as GLSL, which is a big plus.

As for SSE implementations, GLM has them, at least in some places, but they don't make much of a difference anyway. SSE is good for whacking through long streams of SoA data, but it is pretty useless for doing a few dozen dot products, multiplying 3-4 matrices, or working on the data layouts you normally have (AoS is the natural thing, not SoA). You have to work really hard to make SSE truly useful (outside of a contrived, artificial example).

Special applications like audio/video codecs are of course an exception, but this is not surprising, it's what SSE was made for after all.
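To make the AoS point concrete, here's a sketch of a single dot product in SSE next to plain C. The four multiplies come in one instruction, but the horizontal reduction afterwards (shown with SSE3's _mm_hadd_ps; plain SSE needs shuffles instead) hands most of the win back. With SoA data you'd compute four dot products at once and skip the horizontal step entirely, which is where SSE actually pays off.

    #include <pmmintrin.h>   // SSE3

    float DotScalar(const float* a, const float* b)
    {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
    }

    float DotSSE(const float* a, const float* b)   // both 16-byte aligned
    {
        __m128 va = _mm_load_ps(a);
        __m128 vb = _mm_load_ps(b);
        __m128 m  = _mm_mul_ps(va, vb);   // 4 multiplies in one go
        m = _mm_hadd_ps(m, m);            // horizontal adds: this is the
        m = _mm_hadd_ps(m, m);            // part that eats the savings
        return _mm_cvtss_f32(m);
    }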

If your SSE-optimized dot product saves 1-2 cycles compared to a C implementation (compiled with optimizations turned on), you can consider yourself a happy man. And so what? A single mispredicted branch costs 7-8 times as much.

If your quaternion/vector multiply saves 5-6 clock cycles, you're lucky. So even if you calculate a thousand of them per frame (for skeletal animation, or whatever), that's 5,000 clocks. Big deal. If that's an issue, then don't you ever dare make a call to a D3D function, or even touch the disk.

It is open source, so simply use it, and if you find a specific piece of functionality to be inefficient, just replace it.

http://tinyurl.com/shewonyay - Thanks so much to those who voted on my GF's Competition Cosplay Entry for Cosplayzine. She won! I owe you all beers :)

Mutiny - Open-source C++ Unity re-implementation.
Defile of Eden 2 - FreeBSD and OpenBSD binaries of our latest game.

