
Good way to test a function's performance in VS



#1 hl   Members   -  Reputation: 177


Posted 11 February 2014 - 11:26 PM

Hello, what is a good way to measure the performance of a particular function in Visual Studio (2013)? I've read some articles about using a stopwatch and looping the function a million times, but I want to know if VS or some plugin does this for you.
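The stopwatch-and-loop approach from those articles can be sketched like this (a minimal sketch only; the `MicroBench` name, the single warm-up call, and the iteration count are illustrative choices, not a rigorous benchmark harness):

```csharp
using System;
using System.Diagnostics;

static class MicroBench
{
    // Times `action` over `iterations` calls and returns the average
    // cost in microseconds per call. A warm-up call runs first so the
    // JIT compiles the method before the timed loop starts.
    public static double Measure(Action action, int iterations = 1_000_000)
    {
        action(); // warm-up: trigger JIT compilation

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            action();
        sw.Stop();

        return sw.Elapsed.TotalMilliseconds * 1000.0 / iterations;
    }
}
```

Usage would look something like `double us = MicroBench.Measure(camera.Forward);` (where `camera` is a hypothetical instance of the class being tested). Note that results for such tiny methods are very noisy and heavily affected by inlining.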

 

The code I'm testing is part of a first-person camera class:

 

#1: Using references

public void Forward()
{
    Vector3 tempdir;
    Vector3.Multiply(ref direction, Speed, out tempdir);
    Vector3.Multiply(ref tempdir, dt, out tempdir);
    Vector3.Add(ref position, ref tempdir, out position);
}

vs. #2: Copying (for lack of a better term)

public void Forward()
{
    position += direction * Speed * dt;
}

I would assume that #1 is faster than #2 by a negligible amount at 60 FPS, so I may want to write it like #2 for readability. What is your take on optimization in game engines on modern hardware? Would it be alright to write code like #2 everywhere except performance-critical areas?




#2 jbadams   Senior Staff   -  Reputation: 17989


Posted 12 February 2014 - 12:26 AM

The tool you're looking for is a profiler such as SlimTune or AMD's CodeAnalyst.

 

I've never used it personally (the majority of my own code is PHP, JavaScript, or Python) but I believe at least some versions of Visual Studio do provide built-in profiling tools -- see "Analysing Application Performance by Using Profiling Tools" on MSDN.

 

 

 

What is your take on optimization in game engines on modern hardware? Would it be alright to write code like #2 everywhere except performance-critical areas?

 

Disclaimer: I mostly write business software, and small hobby games for family and friends, although I do intend to publicly release a couple of casual games later this year.  I feel that my approach should scale nicely, but it may or may not gel with the common practices of AAA developers.

 

Unless I'm already aware of a possible performance pitfall from prior experience, know that it's definitely still applicable, and know a good method for mitigating it, I always start with the clearest, cleanest code I'm able to write, so that it clearly communicates my intent and I can be confident it is a correct solution to the problem at hand. If there are performance problems when I'm done, I profile the code and optimise bottlenecks until my performance target is reached.

 

You should never simply assume that one method will be more efficient than another.

 

 

Hope that helps!



#3 Vortez   Crossbones+   -  Reputation: 2697


Posted 12 February 2014 - 12:29 AM

For testing a single function or a few functions, a high-resolution timer can do the job; for a more general approach, a profiler is the way to go, as jbadams pointed out.



#4 Bacterius   Crossbones+   -  Reputation: 8530


Posted 12 February 2014 - 02:51 AM

It's likely the JIT will produce equivalent code for both cases, assuming your operator implementations aren't braindead. Especially if your vector is defined as a struct, since it will be allocated on the stack, and will fall out of scope automatically, without even invoking the GC. So, use the operator version. Much more readable => less potential for bugs + more easily maintained => more time for other, more relevant optimizations, such as better algorithms, or even more features. It's also recommended that vector objects such as these be immutable, especially if they are structs, as the C# struct memory model causes mutable structs to behave in unintuitive ways at times. Of course mutable objects are often required for performance, but for primitives like vectors I doubt it's worth the hit in readability.
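The immutable-struct pattern described above can be sketched roughly as follows (the `Vec3` name and its members are made up for illustration, not the poster's actual class): readonly fields, and operators that return new values instead of mutating `this`.

```csharp
using System;

// Minimal immutable vector struct: readonly fields, and operators
// that produce new values rather than mutating in place.
public struct Vec3
{
    public readonly float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public static Vec3 operator +(Vec3 a, Vec3 b)
        => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);

    public static Vec3 operator *(Vec3 v, float s)
        => new Vec3(v.X * s, v.Y * s, v.Z * s);
}
```

With this shape, `position += direction * Speed * dt;` cannot accidentally mutate a shared vector, which sidesteps the surprising copy semantics mutable structs have in C#.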

 

In any case, for tiny functions like this, timing is usually not the solution, as the timings will be very dependent on context (read: your benchmarks will be meaningless). You should instead look at the output IL. There are tools to disassemble C# code into IL, so you can see what both functions map to and which one takes the fewest instructions, memory copies, and so on.
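As a rough programmatic alternative to a full disassembler, the raw IL bytes of a method can be pulled out with reflection (a sketch; the `IlPeek` and `Add` names are made up, and a tool such as ildasm or ILSpy is still needed to decode the actual instructions):

```csharp
using System;
using System.Reflection;

// MethodBody.GetILAsByteArray returns the raw IL bytes of a method.
// The byte count gives only a crude sense of method size; use a
// disassembler to inspect the individual instructions.
class IlPeek
{
    static int Add(int a, int b) => a + b;

    static void Main()
    {
        MethodInfo mi = typeof(IlPeek).GetMethod("Add",
            BindingFlags.NonPublic | BindingFlags.Static);
        byte[] il = mi.GetMethodBody().GetILAsByteArray();
        Console.WriteLine($"Add compiles to {il.Length} bytes of IL");
    }
}
```

Keep in mind this shows the IL the C# compiler emitted, not the machine code the JIT ultimately produces, so it can only hint at relative cost.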

 

And honestly I doubt such a micro-optimization on a function that's called maybe a handful of times per frame would even make a detectable blip on your profiler. Don't optimize unless there is a problem, because it's usually a performance/maintainability tradeoff and you ideally want to stay on the maintainable side as much as possible.


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#5 MHGameWork   Members   -  Reputation: 151


Posted 12 February 2014 - 06:35 AM

The answer to your question is simple. Follow the golden rule of programming:

 

"Make it work, make it right, make it fast"

 

This means that your first focus should be to make your code work. After your code is properly tested, you will probably need some refactoring to clean up the design.

At that point you should have stable, working code, and only then should you look at optimizing.

 

My experience tells me that premature optimization, especially at the cost of code readability or simplicity, always comes back to bite you. The reason is that of the 10,000 lines of code you write, probably only a few hundred will take up 90% of the processing time. Only optimize where you need to optimize.

 

Side note: I am not saying you should ignore algorithmic complexity until the very end.


Don't play games, create them! - The Wizards

#6 cozzie   Members   -  Reputation: 1585


Posted 12 February 2014 - 01:41 PM

Are you measuring it to prevent performance issues, or do you have an actual issue or bottleneck you're trying to break down?



