agottem

In C#, is an 'int' really always 32 bits?

I'm trying to determine the minimum valid ranges of a C# int, and all sources seem to say an int is always 32 bits, no matter what. Is this really the case? It seems rather short-sighted of Microsoft to force all architectures supporting C# to virtualize the behavior of an int to be 32 bits. Also, if the size is forced to be 32 bits, is the endianness of the type also forced to be little endian? I really hope this isn't the case, since this seems like it'd be problematic for architectures where the ideal int size is, say, 31 bits.

Quote:

I'm trying to determine the minimum valid ranges of a C# int, and all sources seem to say an int is always 32 bits, no matter what. Is this really the case?

The C# 'int' is an alias for the CLR type System.Int32, which, as the name implies, is always 32 bits.
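For what it's worth, you can see this directly in code. A minimal sketch (standard library only; the class name is made up):

    using System;

    class IntDemo
    {
        static void Main()
        {
            int a = 42;
            Int32 b = a;                       // 'int' and System.Int32 are the same type, so this is not a conversion
            Console.WriteLine(a == b);         // True
            Console.WriteLine(sizeof(int));    // 4 bytes, on every platform the CLR runs on
            Console.WriteLine(int.MinValue);   // -2147483648
            Console.WriteLine(int.MaxValue);   // 2147483647
        }
    }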

Quote:

It seems rather short-sighted of Microsoft to force all architectures supporting C# to virtualize the behavior of an int to be 32 bits.

Why do you think that? The CLR is a virtualized runtime environment. It makes no difference if the defined size of a type is 32 bits or 76 bits. It's basically just as easy to implement.

Quote:

I really hope this isn't the case, since this seems like it'd be problematic for architectures where the ideal int size is, say, 31 bits.

"int" is a name. It's not a fundamental aspect of a computer architecture. The closest you get to something like an "ideal int size" is probably the width of the GP registers on the chip, and dealing with a mismatch between the size of a fundamental type in the language and the size of a register is a well-known and more-or-less trivially solvable problem (and has been for some time now). In C or C++, for example, the size of the 'int' type doesn't always match the processor's register size, and this causes very little trouble.

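A quick illustration of that register-size point, in C# terms (a sketch, standard library only): the size the CLR gives 'int' is independent of the native word size the process happens to run with.

    using System;

    class WordSizeDemo
    {
        static void Main()
        {
            // The defined size of int never changes...
            Console.WriteLine(sizeof(int));   // always 4
            // ...while the native pointer size tracks the platform: 4 in a 32-bit process, 8 in a 64-bit one.
            Console.WriteLine(IntPtr.Size);
        }
    }
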
Quote:
Original post by jpetrie
Why do you think that? The CLR is a virtualized runtime environment. It makes no difference if the defined size of a type is 32 bits or 76 bits. It's basically just as easy to implement.


Performance-wise, it does make a difference. If I'm running on an architecture that doesn't support a native 32-bit type, I now need to emulate 32-bit behavior such as overflow and underflow.

Why not just take the C approach and define guaranteed minimums for these types? This would allow the compiler to use a more naturally sized int, so long as it meets the minimum range requirements.

Yes, the System.Int32 data type is always represented by 32 bits. Imagine that. It's actually amazingly useful to be able to depend on sizes of the built-in data types. No, it's not forced to any sort of "endianness". The two concepts aren't really related.

If you manage to find an architecture that likes integers to be sized to 31 bits, you're not going to be running common and widely used operating systems and APIs on it anyway, so it's a moot point.
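A quick sketch of the "size and endianness are separate things" point (standard library only; the class name is made up):

    using System;

    class EndianDemo
    {
        static void Main()
        {
            int value = 0x01020304;            // always 32 bits, by definition
            byte[] bytes = BitConverter.GetBytes(value);

            // The byte order you observe is a property of the platform the runtime is executing on,
            // not something the C# spec pins down for the type itself.
            Console.WriteLine(BitConverter.IsLittleEndian);    // True on x86/x64
            Console.WriteLine(BitConverter.ToString(bytes));   // "04-03-02-01" on a little-endian machine
        }
    }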

C/C++ is a systems-level programming language, and it sometimes helps for the data types to have different sizes so they better fit the actual hardware platform the compiler is targeting.

C# is an application-level programming language aimed at rapid development. It runs on a virtual machine, and it coexists and interoperates with all the .NET languages under the Common Language Runtime. It's nice to be able to know things like type sizes for sure.

C/C++ is a portable assembly language meant to run right on top of the hardware platform.

C# is part of the .NET stack. It runs on top of the CLR, not the hardware. You shouldn't have to care about these details.

--edit NINJA'D x 100

Quote:
Original post by Mike.Popoloski
Yes, the System.Int32 data type is always represented by 32 bits. Imagine that. It's actually amazingly useful to be able to depend on sizes of the built-in data types. No, it's not forced to any sort of "endianness". The two concepts aren't really related.

If you manage to find an architecture that likes integers to be sized to 31 bits, you're not going to be running common and widely used operating systems and APIs on it anyway, so it's a moot point.


You can have the same benefits of 'dependable sizes' by using minimum range guarantees. This provides the compiler with much greater flexibility.

Quote:

Performance-wise, it does make a difference. If I'm running on an architecture that doesn't support a native 32-bit type, I now need to emulate 32-bit behavior such as overflow and underflow.

What you're failing to grasp is that processors don't have "int types"; they have register sizes. Generally very few register sizes -- 32- or 64-bit GP registers and 80-bit floating-point registers, for example. Providing overflow/underflow behavior that the processor wouldn't supply naturally is something that almost all language compilers need to do anyway. It's not any extra effort.

Quote:

You can have the same benefits of 'dependable sizes' by using minimum range guarantees. This provides the compiler with much greater flexibility.

No, minimum sizes do not afford the same benefits. Minimum sizes come at the expense of much more difficult interop (which is a critical aspect of the CLR) and more headaches on the part of the developer because the size is not assured.
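For instance, here's a rough sketch of the interop angle ('NativePoint' and its layout are invented for illustration): because int is always 4 bytes, a managed struct can be laid out to match a native one exactly.

    using System;
    using System.Runtime.InteropServices;

    // Mirrors a hypothetical C struct: struct NativePoint { int x; int y; };
    [StructLayout(LayoutKind.Sequential)]
    struct NativePoint
    {
        public int X;   // always 4 bytes, so the field offsets are predictable on both sides
        public int Y;
    }

    class InteropDemo
    {
        static void Main()
        {
            // 8 bytes on every platform, which is what makes P/Invoke declarations like this portable.
            Console.WriteLine(Marshal.SizeOf(typeof(NativePoint)));
        }
    }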

Quote:
Original post by agottem
Quote:
Original post by Mike.Popoloski
Yes, the System.Int32 data type is always represented by 32 bits. Imagine that. It's actually amazingly useful to be able to depend on sizes of the built-in data types. No, it's not forced to any sort of "endianness". The two concepts aren't really related.

If you manage to find an architecture that likes integers to be sized to 31 bits, you're not going to be running common and widely used operating systems and APIs on it anyway, so it's a moot point.


You can have the same benefits of 'dependable sizes' by using minimum range guarantees. This provides the compiler with much greater flexibility.


No, you can't. Any situation where you're trying to port or interop with a different bit of code that requires you to match data type sizes is going to require a lot of extra work to ensure that you're using the right sizes as intended by the original writer of that system. For example, a lot of the lesser-known file formats are documented by a simple C structure in source form. How exactly do you ensure that you are loading the right-sized data from the file when your reference gives things in ints and longs, which can have ANY size?
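As a sketch of that scenario (the file name and layout are made up): if a format's header is documented as 'int version; int count;', the C# side can read it without any guesswork about field widths.

    using System;
    using System.IO;

    class HeaderDemo
    {
        static void Main()
        {
            // "header.bin" is a made-up file whose documented layout is: int version; int count;
            using (var reader = new BinaryReader(File.OpenRead("header.bin")))
            {
                int version = reader.ReadInt32();   // exactly 4 bytes -- no question about what "int" meant
                int count   = reader.ReadInt32();
                Console.WriteLine("version=" + version + ", count=" + count);
            }
        }
    }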

Quote:
Original post by jpetrie
What you're failing to grasp is that processors don't have "int types"; they have register sizes. Generally very few register sizes -- 32- or 64-bit GP registers and 80-bit floating-point registers, for example. Providing overflow/underflow behavior that the processor wouldn't supply naturally is something that almost all language compilers need to do anyway. It's not any extra effort.


I'm well aware of the low-level details. Ideally, an int would map to the architecture's ideal size for integer operations.

Are C# floats also set to a specific size?

Also, most C compilers do not need to provide virtualized overflow/underflow behavior.
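A small sketch of how the defined 32-bit overflow behavior surfaces at the language level, with the size of float thrown in for good measure (standard library only; the class name is made up):

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            int x = int.MaxValue;

            // In an unchecked context (the default for non-constant expressions), Int32 arithmetic wraps,
            // and it wraps the same way on every platform the CLR runs on.
            unchecked { Console.WriteLine(x + 1); }   // -2147483648

            // Opting into checked arithmetic turns the same overflow into an exception instead.
            try { checked { Console.WriteLine(x + 1); } }
            catch (OverflowException) { Console.WriteLine("overflow"); }

            // The floating-point types are pinned down as well: float is the 32-bit System.Single.
            Console.WriteLine(sizeof(float));   // 4
        }
    }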

Quote:
Original post by Mike.Popoloski
No, you can't. Any situation where you're trying to port or interop with a different bit of code that requires you to match data type sizes is going to require a lot of extra work to ensure that you're using the right sizes as intended by the original writer of that system. For example, a lot of the lesser-known file formats are documented by a simple C structure in source form. How exactly do you ensure that you are loading the right-sized data from the file when your reference gives things in ints and longs, which can have ANY size?


If the advantage of forcing the size to be 32 bits is that you can now interact with files without having to worry about the sizes of integers that were stored, wouldn't the C# spec also need to define the endianness of the types?
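For what it's worth, the usual way this plays out in code is that byte order gets handled explicitly at the I/O layer when a format calls for it; a sketch (the big-endian byte values are invented):

    using System;

    class FileEndianDemo
    {
        static void Main()
        {
            // Four bytes as they might appear in a big-endian file format (invented data for 0x00000102).
            byte[] fromFile = { 0x00, 0x00, 0x01, 0x02 };

            // BitConverter follows the machine's byte order, so a big-endian format gets swapped
            // explicitly on little-endian hosts -- the int itself is 32 bits either way.
            if (BitConverter.IsLittleEndian)
                Array.Reverse(fromFile);

            Console.WriteLine(BitConverter.ToInt32(fromFile, 0));   // 258
        }
    }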
