Good way to test a function's performance in VS

4 comments, last by cozzie 10 years, 2 months ago

Hello, what is a good way to measure the performance of a particular function in Visual Studio (2013)? I've read some articles about using a stopwatch and looping the function a million times, but I want to know whether VS or some plugin does this for you.

The code I'm testing is part of a first-person camera class:

#1: Using references


public void Forward()
{
    Vector3 tempdir;
    // The ref/out overloads avoid copying the Vector3 structs on each call.
    Vector3.Multiply(ref direction, Speed, out tempdir);
    Vector3.Multiply(ref tempdir, dt, out tempdir);
    Vector3.Add(ref position, ref tempdir, out position);
}

vs #2: Using operator overloads (the vectors are copied by value)


public void Forward()
{
    position += direction * Speed * dt;
}

I would assume that #1 is faster than #2 by a negligible amount at 60 FPS, hence I may want to write it like #2 for readability. What is your take on optimization in game engines on modern hardware? Would it be alright to write code like #2 except in performance-critical areas?


The tool you're looking for is a profiler such as SlimTune or AMD's CodeAnalyst.

I've never used it personally (the majority of my own code is PHP, JavaScript, or Python) but I believe at least some versions of Visual Studio do provide built-in profiling tools -- see "Analyzing Application Performance by Using Profiling Tools" on MSDN.

What is your take on optimization in game engines on modern hardware? Would it be alright to write code like #2 except in performance-critical areas?

Disclaimer: I mostly write business software, and small hobby games for family and friends, although I do intend to publicly release a couple of casual games later this year. I feel that my approach should scale nicely, but it may or may not gel with the common practices of AAA developers.

Unless I'm already aware of a possible performance pitfall from prior experience, know that it's definitely still applicable, and know a good method for mitigating it, I always start with the clearest, cleanest code I'm able to write, so that it clearly communicates my intent and I can be confident it's a correct solution to the problem at hand. If there are performance problems when I'm done, I profile the code and optimise bottlenecks until my performance target is reached.

You should never simply assume that one method will be more efficient than another.

Hope that helps!

- Jason Astle-Adams

For testing a single function or a few functions, a high-resolution timer can do the job; for a more general approach, a profiler is the way to go, as jbadam pointed out.
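
For reference, a minimal Stopwatch micro-benchmark might look like the sketch below. The Camera class and the method names ForwardRefOut/ForwardOperators are placeholders for your own code, and the iteration count is illustrative:

using System;
using System.Diagnostics;

static class MicroBench
{
    // Times a delegate over many iterations and reports the average cost.
    static void Measure(string label, Action action, int iterations)
    {
        action(); // warm up so the JIT compiles the method before timing

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            action();
        sw.Stop();

        Console.WriteLine("{0}: {1} ms total, {2:F1} ns/call",
            label, sw.ElapsedMilliseconds,
            sw.Elapsed.TotalMilliseconds * 1000000.0 / iterations);
    }

    static void Main()
    {
        const int N = 1000000;
        Camera cam = new Camera(); // hypothetical class holding both versions

        Measure("#1 ref/out  ", () => cam.ForwardRefOut(), N);
        Measure("#2 operators", () => cam.ForwardOperators(), N);
    }
}

Bear in mind the delegate call adds a fixed overhead to both measurements, so treat the numbers as relative rather than absolute.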

It's likely the JIT will produce equivalent code for both cases, assuming your operator implementations aren't braindead. Especially if your vector is defined as a struct, since it will be allocated on the stack and will fall out of scope automatically, without even invoking the GC. So, use the operator version. Much more readable => less potential for bugs + more easily maintained => more time for other, more relevant optimizations, such as better algorithms, or even more features.

It's also recommended that vector objects such as these be immutable, especially if they are structs, as the C# struct memory model causes mutable structs to behave in unintuitive ways at times. Of course mutable objects are often required for performance, but for primitives like vectors I doubt it's worth the hit in readability.
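
As a sketch of what an immutable vector might look like (the type below is illustrative, not XNA's or SharpDX's Vector3, which keep mutable fields for performance):

public struct Vec3 // illustrative immutable vector
{
    public readonly float X, Y, Z;

    public Vec3(float x, float y, float z)
    {
        X = x; Y = y; Z = z;
    }

    // Operators return new values rather than mutating in place,
    // sidestepping the usual mutable-struct surprises.
    public static Vec3 operator +(Vec3 a, Vec3 b)
    {
        return new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    }

    public static Vec3 operator *(Vec3 v, float s)
    {
        return new Vec3(v.X * s, v.Y * s, v.Z * s);
    }
}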

In any case, for tiny functions like this, timing is usually not the solution, as the timings will be very dependent on context (read: your benchmarks will be meaningless). You should instead look at the output IL. There are tools to disassemble C# code into IL, such as ildasm or ILSpy, so you can see what both functions map to and which one takes the fewest instructions, memory copies, etc.
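
If you just want a quick size comparison from code, reflection can hand you the raw IL bytes. This is only a rough proxy (it's the IL before the JIT runs, and byte count isn't cost), and Camera plus the method names are again placeholders for your own class:

using System;
using System.Reflection;

static class IlSize
{
    static void Main()
    {
        // 'Camera' and the method names stand in for your own class.
        Type t = typeof(Camera);
        foreach (string name in new[] { "ForwardRefOut", "ForwardOperators" })
        {
            MethodBody body = t.GetMethod(name).GetMethodBody();
            Console.WriteLine("{0}: {1} bytes of IL",
                name, body.GetILAsByteArray().Length);
        }
    }
}

For a readable, instruction-by-instruction view, a disassembler's output is far more informative than the raw byte count.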

And honestly I doubt such a micro-optimization on a function that's called maybe a handful of times per frame would even make a detectable blip on your profiler. Don't optimize unless there is a problem, because it's usually a performance/maintainability tradeoff and you ideally want to stay on the maintainable side as much as possible.

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

The answer to your question is simple. Follow the golden rule of programming:

"Make it work, make it right, make it fast"

This means that your first focus should be to make your code work. After your code is properly tested, you'll probably need some refactoring to clean up the design.

At this point you should have stable, working code. Only then should you look at optimizing.

My experience tells me that premature optimization, especially at the cost of code readability or simplicity, always comes back to bite you. The reason is that of the 10,000 lines of code you write, probably only a few hundred will take up 90% of the processing time. Only optimize where you need to optimize.

Sidenote: I am not saying you should ignore algorithmic complexity until the very end.

Don't play games, create them! - The Wizards
Are you measuring it to prevent performance issues, or do you have an actual issue/bottleneck you're trying to break down?

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

