Forcing certain sized ints and floats

Started by colinisinhere
28 comments, last by Rattrap 18 years, 6 months ago
so, for sure, gcc uses 16-bit shorts and 32-bit ints? No matter what?
Not no matter what...they might (probably will) change sometime in the future...
JohnE, Chief Architect and Senior Programmer, Twilight Dragon Media (GCC/MinGW | Code::Blocks IDE | wxWidgets Cross-Platform Native UI Framework)
Do you mean, just in future versions? But right now they are standard?
what happens if you compile the code on a 64bit platform


can you still guarantee the sizes then?
I believe the official word from Microsoft was that an int was going to stay 32-bit and that long long will be used for 64-bit ints in Visual Studio. (This could possibly change if the C++ standard changes to officially incorporate 64-bit variables).

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]

"if the C++ standard changes to officially incorporate 64-bit variables"

The C++ Standard does not put many restrictions on the sizes of variables. It doesn't even assume an 8-bit byte; you could have a 10-bit char and a 50-bit int. So C++ already has all of the 64-bit support it needs. There are only a few simple rules: an int must be at least as big as a char, and its size must be a whole multiple of the char size, so you could even have sizeof(int) == sizeof(char).
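To make the point concrete, here is a small sketch (the constant names are invented for illustration) showing that what the standard actually promises is orderings and minimums, never exact bit widths:

```cpp
#include <climits>

// What the standard actually guarantees, checked rather than assumed.
// Note that only orderings are promised -- never exact widths.
const bool char_is_unit  = (sizeof(char) == 1);           // true by definition
const bool sizes_ordered = (sizeof(short) <= sizeof(int)  // short fits in int,
                         && sizeof(int) <= sizeof(long)); // int fits in long
```

Every conforming compiler will make both of these true, whether int is 16, 32, or 64 bits.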

Personally I think that int, float and char should be templates with typedefs supplied by the vendor.

so these two would be the same type:
integer<32,signed> i;
int j;

you could do this:
integer<64,unsigned> u;
For gcc, you can include the header stdint.h, which defines types such as int32_t, uint32_t, uint8_t, etc. If you can't (or don't want to) do that, all you really have to do is declare your types in one header file with #ifdefs for each platform you plan to compile on. Later, if you port to other platforms, just add to that header file.
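A minimal sketch of such a header might look like the following. The file name and the typedef names (int32, uint32) are this sketch's own invention; __int32 is MSVC's built-in sized type, and everything else falls back to C99's stdint.h:

```cpp
// Hypothetical "portable_types.h": one place to pin down fixed-width
// typedefs per compiler, as the post suggests.
#if defined(_MSC_VER)
    typedef __int32          int32;   // MSVC's built-in sized type
    typedef unsigned __int32 uint32;
#else
    #include <stdint.h>               // gcc and friends ship C99 stdint.h
    typedef int32_t          int32;
    typedef uint32_t         uint32;
#endif
```

Porting to a new compiler then means touching only this one file.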
Quote:Original post by Glak
Personally I think that int, float and char should be templates with typedefs supplied by the vendor.

so these two would be the same type:
integer<32,signed> i;
int j;

you could do this:
integer<64,unsigned> u;

Seems like it shouldn't be too hard to do. Provide basic big-int functionality by default, and provide explicit specializations for the common widths [16/32/64]. I might dabble with that over the next couple days if I get bored, sounds like an interesting way to spend an afternoon.

CM
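A minimal pre-C++11 sketch of the templated-integer idea quoted above could map each width to the stdint.h typedefs. Note that the keywords signed/unsigned can't actually be template arguments, so a bool stands in for them here; every name in this sketch is invented for illustration:

```cpp
#include <stdint.h>

// Sketch of the integer<bits, signedness> idea: only the specialised
// widths compile, so asking for integer<50, true> is a compile error.
template <int Bits, bool IsSigned> struct integer; // deliberately undefined

template <> struct integer<16, true>  { typedef int16_t  type; };
template <> struct integer<32, true>  { typedef int32_t  type; };
template <> struct integer<64, true>  { typedef int64_t  type; };
template <> struct integer<16, false> { typedef uint16_t type; };
template <> struct integer<32, false> { typedef uint32_t type; };
template <> struct integer<64, false> { typedef uint64_t type; };
```

Usage then mirrors the post: integer<32, true>::type i; declares the same underlying type as int32_t.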
Quote:Original post by colinisinhere
so, for sure, gcc uses 16-bit shorts and 32-bit ints? No matter what?

Nope. GCC compiles for far more than just the i386 architecture.
Quote:Original post by Glak
The C++ Standard does not put many restrictions on the sizes of variables. It doesn't even assume an 8-bit byte; you could have a 10-bit char and a 50-bit int. So C++ already has all of the 64-bit support it needs. There are only a few simple rules: an int must be at least as big as a char, and its size must be a whole multiple of the char size, so you could even have sizeof(int) == sizeof(char).


The standard does not in fact define a char as an 8-bit byte. It's defined as the smallest unit of addressable memory. sizeof(char) must be equal to 1. The x86 architecture happens to address 8-bit bytes. If C++ were implemented on the Symbolics 3600 series, it would have a 36-bit char.
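Both of those facts can be inspected directly; CHAR_BIT from the standard header climits reports how many bits the addressable unit holds (the constant names below are invented for illustration):

```cpp
#include <climits>

// sizeof(char) is 1 by definition; CHAR_BIT says how many bits that
// addressable unit has.  The standard only requires CHAR_BIT >= 8 --
// it is 8 on x86, but could be larger on word-addressed hardware like
// the 36-bit machines mentioned above.
const int  bits_per_char = CHAR_BIT;
const bool char_is_unit  = (sizeof(char) == 1);
```

So "char is 8 bits" is a property of the platform, not of the language.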

This topic is closed to new replies.
