Neither is more efficient. For a function that small, a good compiler will optimize both into roughly equivalent code.
But you can't tell for sure: one may be more efficient on one system, and the other more efficient on another. Instead, we let the compiler optimize our code for us at the very lowest levels, because compilers are custom-tailored for whatever operating system and CPU architecture (x86, x64, ARM, PowerPC, etc.) they are targeting.
Any changes we make at that level might only save us a speck of processing power here or there. Optimization matters far more at higher levels. If a function is called 10,000 times a frame and takes 2 microseconds per call, you might be able to trim it down to 1.75 microseconds per call for a modest gain (saving 2,500 microseconds a frame). However, if you could instead optimize at a higher level so the function only needs to be called 8,000 times instead of 10,000, you'd save 4,000 microseconds a frame without touching the function itself.
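To illustrate the "call it fewer times" idea, here's a hypothetical C++ sketch (the `Entity` struct and all its members are made-up names, not from any real engine): cache the result and only redo the expensive work when the input actually changed, so most of those 10,000 calls become a cheap early-out.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical example: an expensive per-entity computation.
// Instead of recomputing every frame, cache the result and mark
// it 'dirty' only when the input changes ("dirty flag" pattern).
struct Entity {
    float input  = 0.0f;
    float cached = 0.0f;
    bool  dirty  = true;  // set whenever 'input' changes

    void setInput(float v) {
        if (v != input) { input = v; dirty = true; }
    }

    float expensiveResult() {
        if (dirty) {
            // Stand-in for the real expensive work.
            cached = std::sqrt(input) * 3.0f;
            dirty = false;
        }
        return cached;  // unchanged input: just a cheap read
    }
};
```

Calling `expensiveResult()` 10,000 times a frame now only pays the full cost for entities whose input actually changed that frame.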
But realize this: your users won't care how fast it goes if you're optimizing parts that were already running fast enough. Instead, you need to optimize just the parts of your code that run when a lot of other code is also running, which together cause the program to lag.
Once your program is close to completion, you run what's called a 'profiler' on your code. The profiler measures the performance of different parts of your code and tells you where your program is spending the most time. Then you can go to bat on that specific piece of code and optimize it until it runs fast enough to please your users.
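If you don't have a full profiler handy, you can get a rough first impression with a hand-rolled timer. Here's a minimal sketch (my own made-up names; real profilers do this far better and with far less overhead):

```cpp
#include <chrono>
#include <cstdio>

// Minimal scope timer sketch: times everything between its
// construction and destruction and prints the result.
struct ScopedTimer {
    const char* label;
    long long*  out;  // optional: also receive elapsed microseconds
    std::chrono::steady_clock::time_point start;

    explicit ScopedTimer(const char* l, long long* o = nullptr)
        : label(l), out(o), start(std::chrono::steady_clock::now()) {}

    ~ScopedTimer() {
        auto end = std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<std::chrono::microseconds>(
                           end - start).count();
        if (out) *out = us;
        std::printf("%s took %lld microseconds\n", label, us);
    }
};
```

Usage is just wrapping the suspect code in a scope: `{ ScopedTimer t("updateParticles"); updateParticles(); }`. It's crude, but it's often enough to confirm which chunk of a frame is actually eating the time.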
Honestly, even if something is running 10,000 times a frame, the CPU is usually fast enough to handle it. When I was optimizing a piece of my code that was lagging badly (it was doing a huge amount of work every so often, but not every frame), one function was being called over a quarter of a million times (>250,000). The minor optimizations I made helped a little (even 1 microsecond adds up when multiplied by 250,000), but the real gains came when I figured out I could skip the 250,000 function calls entirely. Even more speed came when I started pre-optimizing the data the code was working on, so the code itself had less work to do.
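As a hypothetical sketch of what "pre-optimizing the data" can look like: build a lookup table once up front, so the hot code does a cheap array read instead of a real computation on every call. All the names here are made up for illustration:

```cpp
#include <array>
#include <cmath>

// Build a sine lookup table once, outside the hot loop.
constexpr int kTableSize = 256;

std::array<float, kTableSize> buildSinTable() {
    std::array<float, kTableSize> table{};
    for (int i = 0; i < kTableSize; ++i)
        table[i] = std::sin(i * 2.0f * 3.14159265f / kTableSize);
    return table;
}

// Hot path: one index computation and one load,
// instead of a transcendental function call per invocation.
float fastSin(const std::array<float, kTableSize>& table, float radians) {
    int idx = (int)(radians / (2.0f * 3.14159265f) * kTableSize) % kTableSize;
    if (idx < 0) idx += kTableSize;
    return table[idx];
}
```

The accuracy is limited by the table resolution, which is exactly the trade-off: you pre-pay some work and some memory to make the per-call cost almost nothing.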
The best optimizations come from figuring out how you can avoid having to run any code at all, not from figuring out how to get existing code to run faster. Leastways, that's what my own personal limited experience has taught me.
Edited by Servant of the Lord, 30 December 2012 - 03:54 PM.