Win32 vs C++ Runtime

Recommended Posts

If I use a function like memset, it is implemented using Win32's FillMemory. Thus, if I want to avoid the indirection, and I don't care about portability, I should always use Win32 directly. Correct?

Quote:
Original post by Terefere
If I use a function like memset, it is implemented using Win32's FillMemory. Thus, if I want to avoid the indirection, and I don't care about portability, I should always use Win32 directly. Correct?
There's absolutely no reason to do that - the Win32 version might not be used at all; for instance, memcpy() uses some assembly to do the copy.

If anything I wouldn't be surprised if it were the other way around - FillMemory being little more than a macro over memset - or at least that the whole thing optimises down to a single call.

"Avoiding indirection" is a waste of time here. The only thing I can think that you might be considering is the performance implications, but those are completely trivial and will make no difference on the measurable performance of your code.

No - functions like memset, memcpy and friends are, in most compilers (MSVC included), not real function calls but compiler intrinsics; they're built into the compiler. The code is inserted inline, which is more efficient than calling a Win32 API function (see the sketch below).

'Normal' functions like fopen, fwrite and so on might be a little slower than the Win32 functions, but the performance gain is tiny or nothing, depending on the implementation. And you should always use the standard library functions if you can: Win32 functions can become deprecated and can change their behaviour in future versions, whereas the C/C++ standard library hasn't changed for a long time and isn't likely to change soon.
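For illustration, a minimal sketch (assuming MSVC, where #pragma intrinsic is available) of what that means in practice; with /O2 the compiler usually does this on its own, and the call never goes anywhere near FillMemory:

#include <string.h>

#ifdef _MSC_VER
#pragma intrinsic(memset)   // request inline expansion instead of a call into the CRT
#endif

void ClearBuffer(char* buffer, size_t size)
{
    // With the intrinsic enabled this typically compiles down to inline store
    // instructions (rep stos or vectorised stores) - no CRT call involved.
    memset(buffer, 0, size);
}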

Aren't the C++ standard libraries implemented using the Windows API? I thought that when I use, for example, fopen, it is implemented using Win32. Is this true?

Edit:
So for memory - C++, for file I/O - Win32.

Quote:

Aren't the C++ standard libraries implemented using the Windows API? I thought that when I use, for example, fopen, it is implemented using Win32. Is this true?

In general, yes. The OS controls the platform, so all the standard functionality you get from C++ (or any other language) is implemented either 'from scratch' (as memcpy generally is) or in terms of some OS primitive (as fopen is in terms of CreateFile).
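As a rough illustration of that layering, here is the same 'open a file and read it' written once against the C runtime and once against Win32 directly. This is only a sketch - the exact CRT internals vary by version - but on Windows the fopen path is ultimately serviced by calls like CreateFile/ReadFile:

#include <windows.h>
#include <cstdio>

// Portable C runtime version - works anywhere the CRT does.
void ReadWithCrt(const char* path, char* buffer, size_t size)
{
    if (FILE* f = std::fopen(path, "rb"))
    {
        std::fread(buffer, 1, size, f);
        std::fclose(f);
    }
}

// Win32 version - roughly what the CRT ends up doing for you under the hood.
void ReadWithWin32(const char* path, char* buffer, DWORD size)
{
    HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file != INVALID_HANDLE_VALUE)
    {
        DWORD bytesRead = 0;
        ReadFile(file, buffer, size, &bytesRead, nullptr);
        CloseHandle(file);
    }
}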

In general, when there's a portable solution, you're much better off choosing the portable solution. You never know when today's requirements might change tomorrow. The only times when you should go with a non-portable solution is if it's proven (not theorized) that the non-portable solution is significantly faster (a 2% difference is not significant) or much easier to implement. Even then, I suggest wrapping OS-specific things in a more portable layer.

I've seen it happen, so far, at every company I've worked at: the assumption was made that something only needed to work on a specific platform, and later on management realized that another platform was viable. In some cases the port was relatively easy, because the platforms were similar enough. In one case it was a two-man, one-year porting effort (no joke), which is an insane expense that the company wasn't aware it had incurred until it happened; the head of the division, a former software developer, was utterly shocked that the development team had made such a poor decision. In another case it was determined that the code was so tied into Windows that there wasn't the manpower to move it to a different platform, and the company missed what appeared to be a golden opportunity that likely would have made it a lot of money. This is the reality of life and software development.

In summary, be extremely careful about going non-portable, unless you have good justification for doing so. When you do go non-portable, do your best to isolate the non-portable pieces.
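As a small sketch of what isolating the non-portable pieces can look like in practice (the function name and file layout here are purely illustrative, not from any existing library):

// portable_file.h - the interface stays portable
#include <cstdint>
bool QueryFileSize(const char* path, std::uint64_t& sizeOut);

// portable_file.cpp - only this file knows which OS it's on
#if defined(_WIN32)
    #include <windows.h>
    bool QueryFileSize(const char* path, std::uint64_t& sizeOut)
    {
        WIN32_FILE_ATTRIBUTE_DATA data;
        if (!GetFileAttributesExA(path, GetFileExInfoStandard, &data))
            return false;
        sizeOut = (std::uint64_t(data.nFileSizeHigh) << 32) | data.nFileSizeLow;
        return true;
    }
#else
    #include <sys/stat.h>
    bool QueryFileSize(const char* path, std::uint64_t& sizeOut)
    {
        struct stat st;
        if (stat(path, &st) != 0)
            return false;
        sizeOut = std::uint64_t(st.st_size);
        return true;
    }
#endif

Callers just include portable_file.h and never see a Win32 header; if a new platform turns up later, only the .cpp grows another branch.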

How about allocations, considering speed on Windows only. HeapAlloc or malloc?

Quote:
Original post by Rydinare
The only times when you should go with a non-portable solution is if it's proven (not theorized) that the non-portable solution is significantly faster (a 2% difference is not significant) or much easier to implement. Even then, I suggest wrapping OS-specific things in a more portable layer.


Or when the portable one can't do what the OS-specific one can; for example, asynchronous I/O can't be done with the portable C or C++ I/O functions.

I do second the advice that it's often worthwhile putting a portable layer around such things; in general you'll want to add abstraction in software development, not take it away [smile]
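For instance, here's a rough sketch of the kind of overlapped (asynchronous) read Win32 offers and the standard C/C++ I/O functions can't express. Error handling is trimmed for brevity:

#include <windows.h>

bool ReadAsync(const char* path, char* buffer, DWORD size, DWORD& bytesRead)
{
    // FILE_FLAG_OVERLAPPED asks for asynchronous behaviour on this handle.
    HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    if (file == INVALID_HANDLE_VALUE)
        return false;

    OVERLAPPED overlapped = {};
    overlapped.hEvent = CreateEventA(nullptr, TRUE, FALSE, nullptr);

    // ReadFile returns immediately; the read completes in the background.
    if (!ReadFile(file, buffer, size, nullptr, &overlapped) &&
        GetLastError() != ERROR_IO_PENDING)
    {
        CloseHandle(overlapped.hEvent);
        CloseHandle(file);
        return false;
    }

    // ... do other useful work here while the read is in flight ...

    // Block until the operation finishes and collect the byte count.
    BOOL ok = GetOverlappedResult(file, &overlapped, &bytesRead, TRUE);
    CloseHandle(overlapped.hEvent);
    CloseHandle(file);
    return ok == TRUE;
}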

Quote:
Original post by Terefere
How about allocations, considering speed on Windows only. HeapAlloc or malloc?


The real question to ask is: assuming there were a significant speed difference, why wouldn't the implementation make the substitution for you? It would be trivial to do, so you can assume that either MSVC *does* make that substitution or there is no significant speed difference between the two.

Anyway, the work a heap allocation does is extremely slow compared to the cost of one extra function call, so any wrapper overhead won't matter much anyway.
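If you want numbers rather than theory, a tiny benchmark sketch along these lines will settle it for your own workload. The loop count and allocation size are arbitrary, and an aggressive optimiser may elide the paired malloc/free, so treat it only as a starting point:

#include <windows.h>
#include <cstdlib>
#include <cstdio>

static double Seconds(LARGE_INTEGER start, LARGE_INTEGER end, LARGE_INTEGER freq)
{
    return double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
}

int main()
{
    const int count = 100000;
    const size_t size = 64;

    LARGE_INTEGER freq, t0, t1, t2;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    for (int i = 0; i < count; ++i)
        std::free(std::malloc(size));                  // CRT allocator

    QueryPerformanceCounter(&t1);
    HANDLE heap = GetProcessHeap();
    for (int i = 0; i < count; ++i)
        HeapFree(heap, 0, HeapAlloc(heap, 0, size));   // Win32 heap directly

    QueryPerformanceCounter(&t2);
    std::printf("malloc/free:        %f s\n", Seconds(t0, t1, freq));
    std::printf("HeapAlloc/HeapFree: %f s\n", Seconds(t1, t2, freq));
    return 0;
}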

Quote:
Original post by Terefere
How about allocations, considering speed on Windows only. HeapAlloc or malloc?
Considering malloc has substantially more debugging support available for a minute speed hit, I really don't think HeapAlloc is worth it.

You're really not putting things in perspective here - you're gaining a few cycles, a few tens or hundreds at best. That's going to save you a fraction of a microsecond, which really isn't worth the portability and debugging support you'll lose. What happens when you find you want to track all allocations to catch memory leaks in debug builds? You'll have to write your own memory manager, which will do the exact same job as malloc.

If you do insist on doing this, then why stop there? Why not write a driver to bypass the OS memory allocation functions altogether?
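For what it's worth, here's a bare-bones sketch of the kind of tracking that means writing yourself - global operator new/delete replacements that merely count live allocations (a real leak tracker would also record file/line or call stacks, which is roughly what the CRT debug heap already gives you):

#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::atomic<long> g_liveAllocations(0);

void* operator new(std::size_t size)
{
    ++g_liveAllocations;                 // record the allocation
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    if (p)
    {
        --g_liveAllocations;             // and the matching free
        std::free(p);
    }
}

void ReportLiveAllocations()
{
    std::printf("live allocations: %ld\n", g_liveAllocations.load());
}

(The array forms, operator new[] and operator delete[], would need the same treatment.)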

Quote:
Original post by phantom
Quote:
Original post by Rydinare
The only times when you should go with a non-portable solution is if it's proven (not theorized) that the non-portable solution is significantly faster (a 2% difference is not significant) or much easier to implement. Even then, I suggest wrapping OS-specific things in a more portable layer.


Or when the portable one can't do what the OS-specific one can; for example, asynchronous I/O can't be done with the portable C or C++ I/O functions.

I do second the advice that it's often worthwhile putting a portable layer around such things; in general you'll want to add abstraction in software development, not take it away [smile]


True on the asynchronous I/O. That's a good point about functionality that just isn't provided in a standard way, so that should be on my list too. Overall, glad we're in agreement. [smile]

Quote:
Original post by Evil Steve
What happens when you find you want to track all allocations to catch memory leaks in debug builds? You'll have to write your own memory manager, which will do the exact same job as malloc.

If you do insist on doing this, then why stop there? Why not write a driver to bypass the OS memory allocation functions altogether?


LOL, I already wrote a memory manager which does just that! Now, what kind of alloc call should I use in my overloaded "new" operator for max speed?

Thx. Rating up. :) Could you pls explain why not HeapAlloc? Isn't HeapAlloc "closer to the metal"? Btw, I plan to keep this MM in release build.

Quote:
Original post by Terefere
Thx. Rating up. :) Could you pls explain why not HeapAlloc? Isn't HeapAlloc "closer to the metal"?
malloc() gives you more safety and debugging help (like allocation hooks, etc.). There's just no reason to use HeapAlloc.
Besides, if you use HeapAlloc you'll need to create your own heap in addition to the default heap, which will be (slightly) less efficient.

Quote:
Original post by Terefere
Btw, I plan to keep this MM in release build.
Why? Are you absolutely sure it is (or will be) faster? Unless you've been doing low-level coding for a long time, it's extremely unlikely that overloading the global memory allocation functions will buy you any performance. If you must, overload operator new for specific classes, or create custom allocation functions for particular kinds of memory; then you can use e.g. a pool allocator.
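A small sketch of that per-class route - a fixed-size free list behind a class-specific operator new/delete. The class and its members are invented purely for illustration:

#include <cstddef>
#include <new>

class Particle
{
public:
    static void* operator new(std::size_t size)
    {
        // Reuse a previously freed block if one is available.
        if (size == sizeof(Particle) && s_freeList)
        {
            void* p = s_freeList;
            s_freeList = s_freeList->next;
            return p;
        }
        return ::operator new(size);     // otherwise fall back to the global allocator
    }

    static void operator delete(void* p, std::size_t size) noexcept
    {
        // Push the block onto the free list instead of returning it to the system.
        if (p && size == sizeof(Particle))
        {
            Node* node = static_cast<Node*>(p);
            node->next = s_freeList;
            s_freeList = node;
        }
        else
        {
            ::operator delete(p);
        }
    }

private:
    struct Node { Node* next; };
    static Node* s_freeList;

    float position[3];
    float velocity[3];
};

Particle::Node* Particle::s_freeList = nullptr;

Note that this keeps freed blocks around for reuse rather than releasing them, which is the usual pool trade-off: allocation becomes a couple of pointer operations, at the cost of holding on to memory.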

Quote:
Original post by Terefere
Thx. Rating up. :) Could you pls explain why not HeapAlloc? Isn't HeapAlloc "closer to the metal"? Btw, I plan to keep this MM in release build.


Be careful with the idea of being "closer to the metal". When something is faster because it's "closer to the metal", it's often because you're losing something that the higher-level facility was giving you. You might run into problems three months down the line, wind up manually implementing something that malloc was already providing, and end up slower in the long run.

Not to mention that, as mentioned earlier, it's unlikely your memory manager will improve on the default C++ allocator.
