BiGF00T

why do they use caps for the return values?


Hey, this might be a rather stupid question, but I've been wondering about it for a while... I just read the D3D tutorial on the MSDN page, which is pretty easy to understand, BUT: I noticed that all the return values are VOID, INT, etc. When I look them up, it says "#define VOID void" or "typedef int INT". Why? Is there a good reason to do this? I see no point or advantage in defining something as the same thing, except if one enjoys writing caps (which I don't!). I can understand people who define FALSE as 0 and TRUE as 1, but the "typedef int INT" thing isn't clear to me. It doesn't help you understand anything better, and pressing the shift key while typing doesn't make anything easier... Can anyone enlighten me? Thanks in advance :) BiGF00T

As far as I know, it's for future compatibility: if they one day decide to define INT as, say, a 64-bit int instead of the standard 32-bit one, it gives them the option to change the underlying data types without breaking existing code.

They're supposed to be redefinable for compatibility with future systems and various other hacks.
INT and VOID may be pretty extreme examples (especially VOID; why on earth would you ever want to redefine that?), but I suppose Microsoft prefers to be consistent and wrap everything in its own abstraction layer.

Quite a few types have already changed between Win16, Win32, and Win64. Microsoft's decision to keep ints 32-bit on 64-bit platforms has probably clashed with other compilers already, so keeping INT as an explicit 32-bit type in the public headers should help ensure binary compatibility.

Typically, #define'd macros and "constants" are written in all caps, which helps identify them as preprocessor definitions. Many of the types used in the Microsoft APIs are aliases created with #define or typedef. This means there is a single place to modify when one of these needs to change. It also allows them to hide some of the Unicode/ANSI differences.

ok, at least the INT thing makes sense now, and as you say, they want to keep it consistent. otherwise people would complain "why do i have to write >INT< but >void<? that makes no sense!!"
thanks for your help ;) now i can sleep peacefully again :P

so if I just use the uppercase thingies in the future, I'll certainly have no problems, and if I ever find a machine where the macros are not defined, I can redefine them...

Quote:
Original post by BiGF00T
ok, at least the INT thing makes sense now, and as you say, they want to keep it consistent. otherwise people would complain "why do i have to write >INT< but >void<? that makes no sense!!"
Well, someone probably reasoned: "Why should we define every single built-in type except void? That's just so inconsistent!!"

yes, better to make them all look the same.
what would your advice be? use the stuff I'm used to, or switch to the fancy new caps redefines? or might there be situations where I should prefer one over the other?

Quote:
Original post by BiGF00T
what would your advice be? use the stuff I'm used to, or switch to the fancy new caps redefines? or might there be times when I should prefer one over the other?
I'd say stay away from the Windows types unless you're interfacing directly with one of its APIs. There's really no guarantee about what will happen to the Windows types in the future, and they are so generic that they're virtually guaranteed to clash with other libraries/systems.

If you *really* need compatibility at that level (which you almost certainly don't, at least not for the majority of your code), then I advise you to create your own set of types with some fairly unique prefix.

For most of my own projects I define a set of types with exact sizes (you can base them on C99's stdint.h if you're lucky enough to have it) for various I/O, and for when I need 64-bit integers. For the rest I simply assume at least 32-bit ints (you won't encounter anything less on modern systems) and work with the built-in types. Oh, and size_t/ptrdiff_t/intptr_t can also do wonders for compatibility.

Binary compatibility for libraries is a whole topic on its own, and using fixed-size types is just one of many messy things involved.

For your own stuff, I would recommend using typedef instead of #define. It's a lot cleaner, and it's actually handled by the compiler rather than the preprocessor.

Also, I would only use the Microsoft-named types when you absolutely need to. Otherwise, if you know what you have is the same as theirs, just cast to it.

thanks to all of you. I never liked typing caps, so it'll be no problem to return to my usual habit of using the "original" lowercase types. If I ever encounter problems, I'll know what might be causing them.
