WORD and DWORD

Started by
15 comments, last by sakky 19 years, 7 months ago
hey AP, you are 100% wrong ... a long is NEVER EVER shorter than an int ... so on a 64 bit platform an int can be 32 or 64 bits, and a long can be 32 or 64 bits ... but a long will always be at least as long as an int ...

the BYTE, WORD, DWORD, and QWORD macros are ALWAYS 8, 16, 32, 64 bits respectively ... which is why they exist ...

if you want to write code that uses the natural size of the proc, use "int", if you always want to use 32 bits, use DWORD ... (for windows programming only)
Quote:Original post by Kwizatz
... the __int64 typedef is a bit misleading too, as it is really 32 bits in 32 bit architectures.

Not true. I use __int64 on a 32-bit system and it is definitely 64 bits.

Quote:Original post by Anonymous Poster
... Actually Kwizatz, an integer is the size of the processor ...

Actually AP, the sizes of the types are up to the compiler. Different compilers might (and do) have different sizes for the same types on the same platform.

Quote:Original post by Anonymous Poster
... There is a 64 bit variable, but it is not a structure of two DWORDs ...

The 64-bit type that Kwizatz is talking about is LARGE_INTEGER. It is the union of a 64-bit value and a struct with two 32-bit values.
John Bolton, Locomotive Games (THQ). Current Project: Destroy All Humans (Wii). IN STORES NOW!
Quote:Post by Kwizatz
int and long are the same in 32 bit processors, they are both 32 bit and hence Double Words (DWORD), unsigned short is 16 bit, a single word (WORD). I think there is a quad word (64 bit), but I think it's a struct of 2 DWORDs. The __int64 typedef is a bit misleading too, as it is really 32 bits in 32 bit architectures.


Actually, the __int64 is 64 bits, hence its name. But the processor can only read 32 bits per cycle, so the 64 bits are read in two 32-bit cycles.

Quote:Post by Xia
hey AP, you are 100% wrong ... a long is NEVER EVER shorter than an int ... so on a 64 bit platform an int can be 32 or 64 bits, and a long can be 32 or 64 bits ... but a long will always be at least as long as an int ...

the BYTE, WORD, DWORD, and QWORD macros are ALWAYS 8, 16, 32, 64 bits respectively ... which is why they exist ...

if you want to write code that uses the natural size of the proc, use "int", if you always want to use 32 bits, use DWORD ... (for windows programming only)


Hey, the AP was me, I said! And I am not wrong. A long is shorter than an int on a 64 bit processor. Look it up on Intel's and AMD's sites if you don't believe me. A long is always 4 bytes! In C/C++, the int is the data type that has a variant size depending on the processor's bus. A data type never changes sizes!

And BYTE, WORD, and DWORD are not macros; they are type definitions! QWORD is a macro though. Look it up on MSDN. Also, you sort of contradicted yourself there in your last statement too :)

Quote:Original post by JohnBolton
Quote:Original post by Kwizatz
... the __int64 typedef is a bit misleading too, as it is really 32 bits in 32 bit architectures.

Not true. I use __int64 on a 32-bit system and it is definitely 64 bits.

Quote:Original post by Anonymous Poster
... Actually Kwizatz, an integer is the size of the processor ...

Actually AP, the sizes of the types are up to the compiler. Different compilers might (and do) have different sizes for the same types on the same platform.

Quote:Original post by Anonymous Poster
... There is a 64 bit variable, but it is not a structure of two DWORDs ...

The 64-bit type that Kwizatz is talking about is LARGE_INTEGER. It is the union of a 64-bit value and a struct with two 32-bit values.


Umm dude, I was the AP. Yes, I had a thought that Kwizatz was talking about the LARGE_INTEGER because he said QuadPart. The sizes of a variable are dependent on the language, true. C and C++ use the register extensions to get the size in bytes. AH, AX, and EAX are 8, 16, and 32 bits respectively. Technically, the CPU does not know what a long, short or an int is. So you are very much right on that one. It is the language that defines the data types' ranges and what their sizes should be. It is the compiler's code generation phase that dictates what the variable sizes will be, based on the output of the previous phases. But hey, I would like to know which compilers mess with the types I use? Would it be safe to assume the ASCII compliant and non-ASCII compliant ones do?

Reference Material
Guide: About the 80386 Architecture
Some FAQ I found on CPU stuff ??
AMD devSource article on Athlon 64 FX
Intel’s Micro-Pro articles
AMD's Develop with AMD tutorials (http://www.amd.com/us-en/Processors/DevelopWithAMD/0,,30_2252,00.html)
Take back the internet with the most awesome browser around, FireFox
Quote:Original post by sakky
Hey, the AP was me, I said! And I am not wrong. A long is shorter than an int on a 64 bit processor. Look it up on Intel's and AMD's sites if you don't believe me. A long is always 4 bytes!

Again, we are telling you that you are wrong. The C++ standard guarantees that the long type is never smaller than the int type. By the very definition of long, it has to be at least as large as an int. I can only assume you are confused due to parallel terms outside of the language. Someone might refer to a long int outside of the C++ language and have it be a certain size, but that has no effect on the corresponding C++ long type, similar to how a C++ byte isn't necessarily the size of a system's byte.
The int is the same size as a long or larger on a 64 bit processor. Write a program that prints the sizes of an int and a long, and you will see. The int is larger; however, it cannot hold the values that a long can. It can only hold the values that a 16 bit short would. I can run this program

#include <stdio.h>

int main( )
{
	printf( "An int is %u bytes\n", (unsigned)sizeof( int ) );
	printf( "A long is %u bytes\n", (unsigned)sizeof( long ) );
	return 0;
}


on my Athlon and Pentium and get the same answers, because they are both 32 bit processors. The program states that an int and a long are the same size. On my 386, the int is 2 bytes and the long is still 4 bytes. And on my friend's Athlon 64 FX, the int is 8 bytes and the long is still 4 bytes.

The int may be bigger than a long, but it can never hold the value a long can, or it will cause an overflow. In C++ an int is smaller than a long in terms of storage capacity; the amount of data you may have in the variable. But the int is just like the short, except its size varies between systems. I even have C++ books that explain this. The int just can't hold the value that a long could, even though on a 32 or 64 bit processor it could. But it is just a crappy variable that doesn't have a set size; whereas the byte, short, and long always have the same size.

In C/C++, the int is smaller because it cannot hold as much information as a long. In fact it holds the same range as a short. But the space it takes up is what I'm referring to, and the int is bigger than a long on a 64 bit processor, or the same size on a 32 bit processor.
Quote:on my friend's Athlon 64 FX, the int is 8 bytes and the long is still 4 bytes.


Unless my interpretation of the C++ standard is incorrect, that makes your compiler non-compliant to the standard. Here's my reasoning, taken directly from the standard:

1.7 -1- Quote:"The fundamental storage unit in the C++ memory model is the byte." ...

5.3.3 -1- Quote:"The sizeof operator yields the number of bytes in the object representation of its operand." ...

3.9.1 -2- Quote:"There are four signed integer types: ``signed char'', ``short int'', ``int'', and ``long int.'' In this list, each type provides at least as much storage as those preceding it in the list." ...

Since 1.7 declares the fundamental storage unit to be a C++ byte, and 5.3.3 declares the sizeof operator to yield the number of bytes (amount of storage) that an instantiation of a type occupies, and 3.9.1 declares that each type in the mentioned list provides at least as much storage (defined in 1.7 as bytes) as the previous one in the list, then that means that sizeof long would always have to yield a value at least as large as sizeof int.

Also, your statements:
Quote:The short and long data types are the same size on all processors.


Quote:But it is just a crappy variable that doesn't have a set size; whereas the byte, short, and long always have the same size.

short and long (there is no type called byte), just like all of the other fundamental integral types (excluding char, signed char and unsigned char) do not always have the same size in bytes between compilers. The standard just provides the general requirements I mentioned earlier.

Edit: I just realized how horribly off topic from the original post we are getting with this. If we continue, let's take it to a new thread.

[Edited by - Polymorphic OOP on September 20, 2004 11:44:32 AM]
I created a new thread for this: go here

This topic is closed to new replies.
