Does using libraries impair compiler optimization?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

7 replies to this topic

#1 mrheisenberg   Members   -  Reputation: 356


Posted 06 January 2013 - 04:20 PM

Is there any place where it's explained how (if it actually does) using a pre-compiled engine impairs inter-procedural optimization? To do that, the compiler needs the entire source code intact, right? So it seems logical that using DLLs would impair optimization even more. Correct me if I'm wrong, please.




#2 Álvaro   Crossbones+   -  Reputation: 13933


Posted 06 January 2013 - 04:41 PM

That sounds right. But it shouldn't matter for any function calls that do a significant amount of work.



#3 SiCrane   Moderators   -  Reputation: 9673


Posted 06 January 2013 - 04:54 PM

With link-time code generation, a compiler/linker like recent versions of MSVC can inline code from a static library. Inlining code from a DLL is something a linker generally won't do, since different computers may have different versions of the DLLs, or DLLs may be replaced with patched versions containing bug fixes.
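As a rough sketch of how that link-time inlining gets switched on (file names here are hypothetical, and exact flags vary by toolchain version):

```shell
# GCC/Clang: compile both the library and the game with -flto so the
# linker can inline across the static-library boundary. Depending on the
# toolchain, you may need gcc-ar instead of plain ar for LTO objects.
g++ -O2 -flto -c engine.cpp -o engine.o
ar rcs libengine.a engine.o
g++ -O2 -flto game.cpp -L. -lengine -o game

# MSVC equivalent: /GL enables whole-program code generation at compile
# time, and /LTCG tells the linker to do the cross-module codegen.
cl /O2 /GL /c engine.cpp
lib /LTCG engine.obj /OUT:engine.lib
cl /O2 /GL game.cpp engine.lib /link /LTCG
```

Nothing comparable exists for a DLL: by design its code is resolved at load time, so the linker never sees its bodies to inline them.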

#4 HappyCoder   Members   -  Reputation: 2884


Posted 06 January 2013 - 05:09 PM

I wouldn't worry about micro-optimizing by avoiding the use of external libraries. Even if it were slightly slower, you shouldn't worry about that when designing your program. When dealing with performance you want to choose efficient algorithms from the beginning, but don't worry too much about having the fastest algorithm possible. Just pick one that will have reasonable running time and, possibly more importantly, is simple to implement. If your code is designed well, you should be able to easily change the algorithm later if you find you need a faster one. When choosing which algorithm is faster, the most important factor is how well it scales, meaning how it will handle 1, 10, 100, 1000, etc. items to process. This is known as an algorithm's complexity class.
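A toy illustration of that scaling point (not from the thread): counting the comparisons two search strategies make for the same worst-case lookup. The complexity class, not per-call overhead, is what dominates as the item count grows.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Comparisons a linear scan makes before finding `target`: O(n).
std::size_t linear_search_steps(const std::vector<int>& v, int target) {
    std::size_t steps = 0;
    for (int x : v) {
        ++steps;
        if (x == target) break;
    }
    return steps;
}

// Comparisons a binary search makes on sorted data: O(log n).
std::size_t binary_search_steps(const std::vector<int>& v, int target) {
    std::size_t steps = 0;
    std::size_t lo = 0, hi = v.size();
    while (lo < hi) {
        ++steps;
        std::size_t mid = lo + (hi - lo) / 2;
        if (v[mid] == target) break;
        if (v[mid] < target) lo = mid + 1;
        else hi = mid;
    }
    return steps;
}
```

For 1000 sorted items with the target at the end, the linear scan needs 1000 comparisons while the binary search needs around 10; no amount of micro-tuning the linear version closes that gap at larger sizes.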

Once you have your code working, if you find it isn't running as fast as you need it to, then you profile the code. Find out where your program spends most of its time and decide how to speed that up. This may involve changing the algorithm you used, or changing the way memory is used and the amount used in order to reduce cache misses. If you find there is a lot of time spent in a single piece of code, you may find ways to call that code less often to reduce the amount of work done. Micro-optimizing pieces of code to get the most out of each clock cycle, or to reduce the number of clock cycles, will only make a noticeable difference if the code being optimized is executed very frequently.

So my advice to you: don't worry about the speed impact of using a dynamic library. Write your code in a way that makes sense and is clean and easy to manage. Once you are done, then you optimize if needed. If you worry about optimizing things too early, you may find your project harder to manage and things may end up more obscure. This means it takes longer for your project to be done, there will likely be more bugs, and in the end the optimizations may not have made any significant difference to how fast the code runs anyway. To sum things up: your time is more precious than CPU time.

#5 AAA   Members   -  Reputation: 182


Posted 06 January 2013 - 07:16 PM

Any optimization you gain is not worth the effort. Assume (correctly) that each component (DLL) is built in release mode with the proper optimizations. This is all you really need, even the most demanding games approach things in this fashion.



#6 Dan Mayor   Crossbones+   -  Reputation: 1714


Posted 06 January 2013 - 09:08 PM

I personally believe that developers, especially programmers, need to pick a side of the fence and stick to it.  If you're making the engine yourself, then you would want to consider and discover the most optimized and most performance-oriented methods of doing everything yourself (outside of the DirectX / OpenGL hardware-level API).  At this level you should focus more on building and benchmarking the engine itself and not so much on building a game (as it would distract you from making a good engine).

 

On the flip side of this same conversation, if you are developing a game, stop worrying so much about optimization of the engine itself and just learn to use it as it was intended.  The more time you spend worrying about extending, porting, and otherwise trying to improve the engine itself, the more your game design hurts for it.  You should be focusing more on understanding how the engine works and how to use it properly, according to the documentation and the "proper" techniques set forth by the engine's coding team.  (Just do it like they tell you for optimal performance and you're good.)

 

But on to the actual question: I have to agree with AAA on this one.  The micro-optimizations and minor performance gains that may come from tuning your compilation are negligible at best.  The only worthwhile optimizations are when you optimize your code to specific hardware requirements, which in turn drastically limits your potential audience.  Most C++ compilers come ready to produce well-optimized cross-hardware builds as is, and you're not going to get much better performance by altering what the compiler does by default.

 

If you want noticeable gains inside your game, you want to focus more on YOUR high-level code and design.  Run tests and benchmarks; optimize your memory usage, disk reads, and caching; and use event-based logic triggers to minimize iterations over idle nodes and objects.  In plainer English: make your logic calculate only the necessary snippets, limit the amount of stuff in RAM and use pointers whenever possible, use dynamic memory and release it when you're done, cache things from disk that you may need to read often, and try to minimize writes on your logic thread.
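The "event-based logic triggers" idea above can be sketched like this (a minimal, hypothetical example, not from any particular engine): instead of iterating every object every frame, keep a wake set of objects that actually received an event and update only those.

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_set>
#include <vector>

// Hypothetical world where most objects are idle on most frames.
struct World {
    std::vector<int> state;                 // one value per object
    std::unordered_set<std::size_t> awake;  // objects with pending work

    explicit World(std::size_t n) : state(n, 0) {}

    // An event marks an object as needing an update next frame.
    void post_event(std::size_t id) { awake.insert(id); }

    // Update only the awake objects; returns how many were touched,
    // instead of paying for a full pass over state.size() objects.
    std::size_t update() {
        std::size_t touched = 0;
        for (std::size_t id : awake) {
            ++state[id];  // stand-in for real per-object logic
            ++touched;
        }
        awake.clear();
        return touched;
    }
};
```

With 10,000 objects and two events posted, a frame touches 2 objects rather than 10,000; that kind of high-level change dwarfs anything a compiler flag can buy you.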


Digivance Game Studios Founder:

Dan Mayor - Dan@Digivance.com
 www.Digivance.com


#7 Ravyne   GDNet+   -  Reputation: 8191


Posted 06 January 2013 - 09:20 PM

Game engine functions ought to be doing something non-trivial, so any overhead that *could* have been removed by such optimizations is such a tiny part of the cost of the function that it doesn't really matter.

 

For those writing an engine, this boundary crossing is something they do have to think about (e.g. they shouldn't provide a vector library that traverses into DLL code every time you add two vectors, but it would be appropriate for it to be provided as source or as a static-library portion of the engine's SDK), but you just need to worry about optimizing your own code. In most cases, this means doing things in algorithmically sound ways and getting the most out of the engine by using it in the manner prescribed by its developers. If they've done a reasonable job, your performance should be good unless you do something in a really strange way they didn't anticipate. In short, worry about optimizing *your use of the engine* rather than optimizing the engine itself.
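The vector-library point can be made concrete with a sketch (a hypothetical SDK header, not any real engine's API): because the whole type lives in the header, every call site can inline the addition, and no call ever crosses a DLL boundary.

```cpp
#include <cassert>

// Hypothetical engine math header shipped as source with the SDK.
struct Vec3 {
    float x, y, z;
};

// `inline` and header-only: the compiler sees the body at every call
// site, so adding two vectors compiles down to a few instructions
// instead of a function call into engine code.
inline Vec3 operator+(const Vec3& a, const Vec3& b) {
    return Vec3{a.x + b.x, a.y + b.y, a.z + b.z};
}
```

Had the same operator been exported from a DLL instead, every `a + b` would pay a call through the import table that no linker could remove.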



#8 mhagain   Crossbones+   -  Reputation: 8281


Posted 07 January 2013 - 08:05 AM

This is a clear tradeoff.  In exchange for a hypothetical (small) performance loss, you get to use a bunch of pre-optimized, debugged, tested code that is known-good and has been used in the wild.  So -- weigh concerns over a more-than-likely unnecessary micro-optimization against the gains from being able to use that body of good code, and decide which you prefer.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.




