Skeletal animation optimization

4 comments, last by BitMaster 7 years, 9 months ago

Hello,

I am trying to implement skeletal animation as part of my DirectX 11 game engine. I mostly followed the book by Frank Luna; my code is similar to this: https://github.com/jjuiddong/Introduction-to-3D-Game-Programming-With-DirectX11/tree/master/Chapter%2025%20Character%20Animation/SkinnedMesh. Everything works as expected, but the frame rate drops dramatically. I use the DirectXMath library and I tried using the aligned versions of its data types (e.g. XMFLOAT3A instead of XMFLOAT3), but that doesn't seem to affect performance in any way. Any help would be greatly appreciated.

Have you run a profiler, like the one built into Visual Studio, or Very Sleepy, or anything else? What about a GPU profiler, like (again) the one built into Visual Studio?

Do you have a flag in your code that can enable/disable the animation code to see if toggling that affects performance?

Loss of frame performance can come from literally anywhere. You need to narrow down what's causing the slowdown. And you have the tools to do so.
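
If you don't want to fire up a full profiler right away, even a crude scoped timer around the suspect code can narrow things down. A minimal sketch follows; the toggle flag and the model's update function are just placeholders for whatever your engine actually uses:

#include <chrono>
#include <cstdio>

// Crude scoped timer: prints how long the enclosing block took.
struct ScopedTimer
{
    const char* label;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    ~ScopedTimer()
    {
        const auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld us\n", label, static_cast<long long>(us));
    }
};

// Somewhere in the frame loop:
// {
//     ScopedTimer timer{"Animation update"};
//     if (g_animationEnabled)        // hypothetical on/off toggle
//         skinnedModel.Update(dt);   // hypothetical per-frame bone update
// }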


Also... what is the old/new FPS, and why do you think the change matters? FPS is a _terrible_ measurement of speed (as in, don't use it for anything). The way the math works out, an infinitesimal increase in frame time (the actual measurement you should care about) can result in an FPS drop of hundreds. Frametime is a linear measurement (and hence very easy to understand and reason about), while FPS is an inverse measurement (and hence very difficult to reason about).

If your code is running at 1000+ FPS normally (very easy to do with a simple/toy example on a strong GPU with no framerate locking), then even a very, very tiny increase in frametime can result in an FPS loss in the hundreds. The further you are from your target FPS, the bigger the drops will appear (i.e., the faster you are, the larger your losses will look), but as you close in on the monitor's refresh rate, the FPS loss from an equivalent increase in frametime will get smaller and smaller. A frametime increase that costs you 100+ FPS when you're already running at 1000 might only cost you 1 FPS if you're already close to 60.
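
To make that concrete, here is a small standalone sketch (not from the thread, just illustrative arithmetic): the same 0.25 ms of added frame time costs thousands of FPS at the top end and barely one FPS near 60 Hz.

#include <cstdio>
#include <initializer_list>

int main()
{
    const double extraMs = 0.25; // the same added cost per frame, in milliseconds

    for (double baseMs : { 0.25, 1.0, 16.4 })
    {
        const double fpsBefore = 1000.0 / baseMs;
        const double fpsAfter  = 1000.0 / (baseMs + extraMs);
        std::printf("%5.2f ms -> %5.2f ms : %7.1f FPS -> %7.1f FPS (lost %.1f)\n",
                    baseMs, baseMs + extraMs, fpsBefore, fpsAfter, fpsBefore - fpsAfter);
    }
    // 0.25 ms -> 0.50 ms : 4000 FPS drops to 2000 FPS
    // 16.40 ms -> 16.65 ms : ~61 FPS drops to ~60 FPS
    return 0;
}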

Do note that with framerate locking, though, this inverse relationship affects you in another way: once you do drop below 60, you're only going to be hitting divisors of 60. That is, the kind of frametime increase that would take you from 61 FPS to 60.9 FPS will instead take you from 60 FPS to 30 FPS, and then to 20, 15, 10, etc. This is what FreeSync/G-Sync (aka adaptive sync) are attempting to fix. In general, though, you just need to make sure that you are hitting your monitor's refresh rate (generally 60 Hz) and otherwise ignore FPS, looking only at frametime instead.

Sean Middleditch – Game Systems Engineer – Join my team!

You are right. Frame rate is not linear. It falls from ~4000 FPS to ~300 FPS with just one animated model. I also measured frame time, and it rises from 0.3 ms to 3.8 ms. I think this is a lot for one model. As for what could be the cause, I am certain that it is in fact the animation: frame time drops back to 0.3 ms when I disable it. I am also sure it's not the GPU, it's the CPU; I disabled the shader that does the skinning and noticed no change in performance. I will run a profiler when I get home.

I just figured something out. If I run my code in the 'Release' configuration I get 0.2 ms frame time without animation and 0.3 ms with. That's much more acceptable. Why does the 'Debug' configuration result in such a drastic change? Does this have something to do with the DirectXMath library?

Your Debug configuration may have iterator debugging turned on, which is ridiculously slow compared to how useful it is (in addition to the fact that all memory allocations use the debug heap, which is also slow, but at least useful).

Follow this mess, turn all of the iterator debugging off, and see if the Debug configuration is acceptably fast again.

https://msdn.microsoft.com/en-us/library/aa985982.aspx
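
For reference, a minimal sketch of what "turning it off" looks like. Set these project-wide under Preprocessor Definitions, or in a forced-include header, before any standard header is pulled in; every translation unit and static library you link must agree on the value or you'll get mismatch errors:

// 0 = no iterator debugging, 1 = checked iterators only, 2 = full checking (the Debug default)
#define _ITERATOR_DEBUG_LEVEL 0
// Older name for the same mechanism; defining both keeps old code paths consistent
#define _HAS_ITERATOR_DEBUGGING 0

#include <vector>   // standard headers must come *after* the defines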

I just figured something out. If I run my code in the 'Release' configuration I get 0.2 ms frame time without animation and 0.3 ms with. That's much more acceptable. Why does the 'Debug' configuration result in such a drastic change? Does this have something to do with the DirectXMath library?


Debug builds are always slow. They contain checks which you would not do in a final build but which are useful during development, they carry a lot of extra information useful for debugging, and they are intentionally not optimized, or optimized in a more limited way. MSVC also adds, for example, alternative memory management which is intentionally more wasteful and slower but helps in catching certain otherwise very hard to notice problems. When you gain more experience you can define your own shades of Debug builds which trade away some of that debugging convenience for speed.
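
As a rough illustration of those "shades" (these flag combinations are only an example, not a recommendation for any particular project):

// Illustrative MSVC configurations, from slowest/most checked to fastest:
//
//   Full Debug  : /Od /Zi /MDd                 _ITERATOR_DEBUG_LEVEL=2 (the default)
//   Fast Debug  : /O2 /Ob1 /Zi /MDd            _ITERATOR_DEBUG_LEVEL=0
//   Release     : /O2 /Zi /MD /DNDEBUG
//
// The middle configuration keeps symbols and asserts but drops the heaviest
// runtime checks, so it still steps through the debugger reasonably well while
// running much closer to Release speed. It is still not the build to profile.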

But whatever you do, debug builds are not meant for evaluating performance. Someone who knows what they are doing and what they intend to debug can set up a debug configuration which is almost as fast as the optimized release build. You still do not profile it, and you still do not make assumptions about efficiency based on debug builds.

This topic is closed to new replies.
