Differentiating between original and typedef'd types

16 comments, last by SiCrane 17 years, 8 months ago
Is there any way to differentiate between a basic type and a type that has been typedef'd to it? I want to write two versions of a function, one for the 'int' type, and one for the 'BOOL' type - where BOOL is defined in the windows headers via "typedef int BOOL;". Is there any way of wiring it up to work without giving the two functions different names?

Richard "Superpig" Fine - saving pigs from untimely fates - Microsoft DirectX MVP 2006/2007/2008/2009
"Shaders are not meant to do everything. Of course you can try to use it for everything, but it's like playing football using cabbage." - MickeyMouse

I suspect you can't use bool instead of BOOL because of the Win API? Is this correct?
No. A typedef is simply another name for an existing type, not a new type. You cannot tell it apart from the original type except by parsing the source code.
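To illustrate the point, here is a minimal sketch (note that std::is_same and static_assert are C++11 features, newer than this thread, but they make the alias relationship explicit):

```cpp
#include <type_traits>

typedef int BOOL;  // as in the Windows headers

// A typedef introduces an alias, not a new type: the compiler sees
// BOOL and int as literally the same type.
static_assert(std::is_same<BOOL, int>::value,
              "BOOL is just another name for int");

void process(int /*x*/) { /* ... */ }
// void process(BOOL x) { }  // error: redefinition of 'void process(int)'
```

Because the two declarations of process have identical signatures, the second one is a redefinition, not an overload.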

EDIT: Depending on the situation you might be able to create your own type with implicit conversions to and from BOOL and use that instead of BOOL in your code.

Σnigma
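A rough sketch of the wrapper Enigma is suggesting (the Bool32 name and describe function are hypothetical, for illustration only): a distinct class type that converts implicitly to and from BOOL, so it can participate in overload resolution separately from plain int.

```cpp
#include <string>

typedef int BOOL;  // stand-in for the Windows typedef

// Hypothetical wrapper: implicitly convertible to and from BOOL,
// but a genuinely distinct type as far as the compiler is concerned.
struct Bool32 {
    BOOL value;
    Bool32(BOOL v = 0) : value(v) {}        // implicit conversion from BOOL
    operator BOOL() const { return value; } // implicit conversion back to BOOL
};

// Overloading now works, because Bool32 really is a different type:
const char* describe(int)    { return "int"; }
const char* describe(Bool32) { return "Bool32"; }
```

Since Bool32 is a struct whose only member is a BOOL, in practice it has the same size and stride as BOOL, though the standard doesn't strictly guarantee that a BOOL* can be reinterpreted as a Bool32*. Also beware that having implicit conversions in both directions can create ambiguous overloads in other contexts.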
Rats. Thanks anyway.

Quote:Original post by dmail
I suspect you can't use bool instead of BOOL because of the Win API? Is this correct?
Yep. Ordinarily, converting from bool to BOOL isn't a problem, but I'm dealing with a situation involving an array of BOOLs, so conversion involves changing the stride of the array.

Richard "Superpig" Fine - saving pigs from untimely fates - Microsoft DirectX MVP 2006/2007/2008/2009
"Shaders are not meant to do everything. Of course you can try to use it for everything, but it's like playing football using cabbage." - MickeyMouse

Did you see my edit?

Σnigma
Maybe somebody could answer this related question, which I have never understood.
Why on earth did Microsoft decide that a BOOL, which has only two values, should occupy a 32-bit int instead of an unsigned char?
Quote:Original post by dmail
Maybe somebody could answer this related question, which I have never understood.
Why on earth did Microsoft decide that a BOOL, which has only two values, should occupy a 32-bit int instead of an unsigned char?


I guess it's all about what's faster for the CPU. 32-bit CPUs work faster when things are 32-bit-aligned, and we seem to have a lot of RAM but never enough speed, so that could explain their decision...
So would you expect a BOOL to be quicker than a bool?
And does a BOOL on a 64-bit CPU get typedef'd as a 64-bit int?
ints are typically 32-bit even with 64-bit compilers and OSes. This is true for both Windows and Linux; it may not be true for other compilers and/or OSes.

If you look around a bit you'll see endless debates on this forum about whether or not "typedef int BOOL" is better than "typedef char BOOL" for whatever definition of "better" the poster cares about. Whether or not it actually is depends heavily on the details of the scenarios you care about.

superpig - just think of it as "typedef" being pretty much the worst name in the entire C/C++ language since it doesn't actually define a type. It defines an alias for a type.
-Mike
Quote:Original post by dmail
Maybe somebody could answer this related question, which I have never understood.
Why on earth did Microsoft decide that a BOOL, which has only two values, should occupy a 32-bit int instead of an unsigned char?


For a start, Microsoft's BOOL type has at least 3 distinct meaningful values. It's not a boolean type, it's an integral type.

Stephen M. Webb
Professional Free Software Developer
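A concrete case of BOOL's third value: Win32's GetMessage is declared to return BOOL, yet its documentation distinguishes nonzero (a message was retrieved), 0 (WM_QUIT), and -1 (error), so a naive `while (GetMessage(...))` loop treats the error value as true. A sketch, using a hypothetical classify function rather than the real Windows headers:

```cpp
typedef int BOOL;       // as in the Windows headers
const BOOL FALSE_ = 0;  // FALSE/TRUE are macros in <windows.h>; renamed here

// Hypothetical classifier for GetMessage-style return values,
// illustrating the three meaningful states packed into a "boolean".
enum Result { GotMessage, Quit, Error };

Result classify(BOOL r) {
    if (r == -1)     return Error;      // GetMessage's error case
    if (r == FALSE_) return Quit;       // WM_QUIT received
    return GotMessage;                  // a message was retrieved
}
```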

