keinmann

Performance Overhead w/ debugging?

Just did a simple test with a C# testing project (a console application) which consumes a (rather complex) library I wrote. I noticed that a rather simple method performing a (freestanding) mathematical calculation was a bit slow: about 1.6 ms per call. I couldn't figure out why, so I copied the method from the library's source and pasted it into the console application I was using for testing. That sped it WAY up, down to about 0.0336 ms. I tested similar cases and had the same findings: a method is slower when it lives in a different assembly than its caller.

I'm speculating that this is a performance cost of debugging, and that the debugger is slowing things down a bit to keep tabs on everything. I just wanted to confirm this, or get correct information about this phenomenon. Can anyone explain and enlighten me?
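For reference, here's roughly how I'm timing it (a minimal sketch; `Calc` is a stand-in for my library's actual method, which I'm not posting here):

```csharp
using System;
using System.Diagnostics;

class Bench
{
    // Stand-in for the library method: a trivial freestanding calculation.
    static double Calc(double x) => Math.Sqrt(x * x + 1.0);

    static void Main()
    {
        // Warm up first so the JIT compile of Calc isn't part of the timing.
        double sum = 0;
        for (int i = 0; i < 1000; i++) sum += Calc(i);

        const int iterations = 100000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) sum += Calc(i);
        sw.Stop();

        // Report the average per-call time, not a single call.
        Console.WriteLine($"avg: {sw.Elapsed.TotalMilliseconds / iterations} ms (sum={sum})");
    }
}
```

Accumulating into `sum` and printing it is just to stop the loop from being optimized away entirely.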

C# compilation with and without debugging really isn't that different. There are a few minor optimizations that don't occur in debug mode (mainly inlining).

If, however, you're marshalling between managed and unmanaged code, that can add quite a bit of overhead (although generally not 1.5ms)...
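You can get a feel for the inlining effect yourself: suppressing inlining explicitly in a release build approximates the call overhead a debug build pays on every small method. A sketch (not your actual code, obviously):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class InlineDemo
{
    // Eligible for inlining in an optimized (release) build.
    static double Fast(double x) => x * 2.0 + 1.0;

    // Blocking inlining approximates what a debug build (or a call the
    // JIT declines to inline) costs on each invocation.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static double Slow(double x) => x * 2.0 + 1.0;

    static void Main()
    {
        double sum = 0;
        // Warm up both methods so JIT compilation isn't measured.
        for (int i = 0; i < 1000; i++) { sum += Fast(i); sum += Slow(i); }

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 10000000; i++) sum += Fast(i);
        sw.Stop();
        Console.WriteLine($"inlinable:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < 10000000; i++) sum += Slow(i);
        sw.Stop();
        Console.WriteLine($"no-inline:  {sw.ElapsedMilliseconds} ms (sum={sum})");
    }
}
```

Run it as a release build without the debugger attached; with the debugger attached the JIT disables optimizations anyway and the two loops will look similar.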

Hmmm, this is very odd then. In this particular case there's no interop going on; it's a purely managed method performing a mathematical calculation. I have wrappers for a large amount of C and C++ code, and the interop performance penalty is rather negligible for my project. But this little math thing is strange. I'll have to set up a release build and see what the results are. Wouldn't you agree that's slow enough to worry about? This particular method, though I would discourage it, could be used in performance-critical game code over the course of many frames. Though anything *I* write which uses it will only be for pre-calculations.

Marshalled calls should generally do a LOT of work. That means if you're going into unmanaged code, avoid going into it for trivial reasons. A single matrix multiply is not a reason to transition from managed to unmanaged (it'll cost more to do the marshalling than the entire method).

If you're calling your math method in a loop... and going over the managed/unmanaged barrier each time... that will be quite expensive.
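The usual fix is to batch: pay the transition once for many operations instead of once per operation. A sketch of the idea (the `DllImport` signatures and the "mymath" library name are hypothetical):

```csharp
using System.Runtime.InteropServices;

static class NativeMath
{
    // One P/Invoke per matrix: the fixed transition/marshalling cost
    // is paid on every call. Fine occasionally, bad in a tight loop.
    [DllImport("mymath", CallingConvention = CallingConvention.Cdecl)]
    public static extern void MultiplyMatrix(float[] a, float[] b, float[] result);

    // Batched version: one managed-to-unmanaged transition for a whole
    // array of matrices, so the fixed cost is amortized across them all.
    [DllImport("mymath", CallingConvention = CallingConvention.Cdecl)]
    public static extern void MultiplyMatrices(float[] a, float[] b, float[] result, int count);
}
```

If you find yourself calling the first form per-element inside a loop, that loop belongs on the native side behind the second form.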

I have a bunch of blog posts profiling the performance of math operations in unmanaged vs managed when used in a managed context, you should check them out.

Will do. Hopefully what I said wasn't confusing, though; this particular case is 100% managed, no interop. But maybe your profiles can help me make some decisions on this framework's design, and even warrant some major changes if I was wrong on some assumptions (you never have enough time to test everything, and sometimes you have to use a "best guess", haha).

Ok... I'm a very bad boy... but who hasn't used Reflector on the .NET Framework to see how things tick? :) I was doing some digging a while back and found that some of their math methods are implemented via interop. I wondered why. Does it really warrant interop? Would it really make a notable gain in speed? Or did they just not feel like doing it in managed C++/C#? Maybe your work can shed some light on that, and other things I worry about in my sleep. :)

