WORD and DWORD


Hi everyone, I had a look at Win32 programming and now I am confused about the WORD and DWORD types. Which built-in type(s) is/are behind them? And what exactly is a low/high word, such as you can find in the wParam/lParam parameters? thx Quak

WORD and DWORD are Win32-only. WORD is a 2-byte unsigned value (unsigned short) and DWORD is a 4-byte unsigned value (unsigned int).

There are two WORDs in a DWORD. The higher half is what HIWORD() returns and the lower half is what LOWORD() returns.
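
For illustration, here is a minimal sketch (assuming the usual Win32 headers; the value 0x12345678 is just an example) showing HIWORD()/LOWORD() next to the plain shifts and masks they boil down to:

#include <windows.h>
#include <stdio.h>

int main()
{
    DWORD packed = 0x12345678;

    /* HIWORD/LOWORD pull out the upper and lower 16 bits. */
    WORD hi = HIWORD(packed);   /* 0x1234 */
    WORD lo = LOWORD(packed);   /* 0x5678 */

    /* The same result with ordinary bit operations. */
    WORD hi2 = (WORD)((packed >> 16) & 0xFFFF);
    WORD lo2 = (WORD)(packed & 0xFFFF);

    printf("HIWORD = 0x%04X, LOWORD = 0x%04X\n", (unsigned)hi, (unsigned)lo);
    printf("shifts = 0x%04X, 0x%04X\n", (unsigned)hi2, (unsigned)lo2);
    return 0;
}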

Quote:
Original post by JohnBolton
WORD and DWORD are Win32-only. WORD is a 2-byte unsigned value (unsigned short) and DWORD is a 4-byte unsigned value (unsigned int).

There are two WORDs in a DWORD. The higher half is what HIWORD() returns and the lower half is what LOWORD() returns.

Keep in mind that you can still use a DWORD as a single value. The HIWORD/LOWORD functions just allow you to store and retrieve two WORD values.
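
Going the other way, a short sketch (MAKELONG is the standard Win32 packing macro; the width/height values are made up for the example) of packing two WORDs into one DWORD and still treating the result as a single value:

#include <windows.h>
#include <stdio.h>

int main()
{
    WORD width  = 640;
    WORD height = 480;

    /* MAKELONG(low, high) packs two WORDs into one 32-bit value. */
    DWORD packed = (DWORD)MAKELONG(width, height);

    /* Still usable as one ordinary number... */
    printf("packed = 0x%08lX\n", (unsigned long)packed);

    /* ...and the two halves can be recovered at any time. */
    printf("width = %u, height = %u\n", (unsigned)LOWORD(packed), (unsigned)HIWORD(packed));
    return 0;
}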

If you're using Visual Studio, just right click on WORD or DWORD and click "Go to Definition". It'll show you exactly what it is, and this is handy in the future when you encounter new types.

Quote:
Original post by GroZZleR
If you're using Visual Studio, just right click on WORD or DWORD and click "Go to Definition".

Or even easier, just hover the mouse cursor over "WORD" for a second or two and it will pop up with its definition.

Quote:
Original post by JohnBolton
WORD and DWORD are Win32-only. WORD is a 2-byte unsigned value (unsigned short) and DWORD is a 4-byte unsigned value (unsigned int).

There are two WORDs in a DWORD. The higher half is what HIWORD() returns and the lower half is what LOWORD() returns.


A DWORD is an unsigned long, not an unsigned int. A WORD is 16 bits and a DWORD is 32 bits. The D prefix in DWORD stands for double. In other words, DWORD means double word, hence 32 bits. If you read any old assembly FAQ, you will find this information. You probably know all this already though.

Quak:
The WORD and DWORD are just type definitions of the built-in primitive data types. If you look in the WinDef.h file, you will find the WORD and DWORD type definitions, among other type definitions.

Don't get confused by all those macros that Microsoft uses. They are just bit-shift macros. LOWORD and HIWORD retrieve the low- and high-order words of a DWORD. Hence, LOWORD and HIWORD return the WORD located in either the front or the back of a DWORD, so to speak. It's just a clever way of compressing information into one variable.
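
For reference, this is roughly what those definitions look like; a simplified sketch rather than the exact SDK header text (newer headers spell the macros with DWORD_PTR, for example), with a tiny main() so it stands alone:

typedef unsigned char  BYTE;    /* 8 bits  */
typedef unsigned short WORD;    /* 16 bits */
typedef unsigned long  DWORD;   /* 32 bits on Win32 */

/* Extract the low or high 16 bits of a 32-bit value. */
#define LOWORD(l) ((WORD)((DWORD)(l) & 0xFFFF))
#define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF))

int main()
{
    DWORD d = 0xAABBCCDD;
    return (HIWORD(d) == 0xAABB && LOWORD(d) == 0xCCDD) ? 0 : 1;
}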

Quote:
Original post by sakky
Quote:
Original post by JohnBolton
WORD and DWORD are Win32-only. WORD is a 2-byte unsigned value (unsigned short) and DWORD is a 4-byte unsigned value (unsigned int).

There are two WORDs in a DWORD. The higher half is what HIWORD() returns and the lower half is what LOWORD() returns.


A DWORD is an unsigned long, not an unsigned int. A WORD is 16 bits and a DWORD is 32 bits. The D prefix in DWORD stands for double. In other words, DWORD means double word, hence 32 bits. If you read any old assembly FAQ, you will find this information. You probably know all this already though.

Quak:
The WORD and DWORD are just type definitions of the built-in primitive data types. If you look in the WinDef.h file, you will find the WORD and DWORD type definitions, among other type definitions.

Don't get confused by all those macros that Microsoft uses. They are just bit-shift macros. LOWORD and HIWORD retrieve the low- and high-order words of a DWORD. Hence, LOWORD and HIWORD return the WORD located in either the front or the back of a DWORD, so to speak. It's just a clever way of compressing information into one variable.


int and long are the same on 32-bit processors: they are both 32 bits and hence double words (DWORD); unsigned short is 16 bits, a single word (WORD). I think there is a quad word (64 bits), but I think it's a struct of 2 DWORDs. The __int64 typedef is a bit misleading too, as it is really 32 bits on 32-bit architectures.

Cheers!

Guest Anonymous Poster
Quote:
Original post by Kwizatz
int and long are the same on 32-bit processors: they are both 32 bits and hence double words (DWORD); unsigned short is 16 bits, a single word (WORD). I think there is a quad word (64 bits), but I think it's a struct of 2 DWORDs. The __int64 typedef is a bit misleading too, as it is really 32 bits on 32-bit architectures.

Cheers!


Actually Kwizatz, an integer is the size of the processor. Hence, an integer on an 8-bit processor is 8 bits, an integer on a 16-bit processor is 16 bits, and an integer on a 32-bit processor is 32 bits. Likewise, an integer on a 64-bit processor is 64 bits. The short and long data types are the same size on all processors.

The bad thing about using integers is that their size is dependent on the system. One way this would affect an application is if the application were to use integers with file routines: one machine sees an integer as a 64-bit data type and reads 64 bits from the file stream, whereas the host system the application was designed on sees an integer as a 32-bit data type and will only read 32 bits of data. So, bottom line: never use an int unless you know you can get away with it.

Yes, Kwizatz, you are right! There is a 64-bit variable, but it is not a structure of two DWORDs. The Microsoft-specific version of this variable is __int64. The non-Microsoft, or GCC, definition is long long. Either of the two may be signed or unsigned. You will find that the size_t type found in most standard C/C++ libraries is in fact a 64-bit variable. A 64-bit integer is HUGE!
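
To illustrate the file-portability point above, a small sketch (the file name and the record_count field are made up for the example): writing a fixed-width type instead of a raw int keeps the on-disk format the same no matter what size the compiler picks for int. On compilers without <stdint.h>, a DWORD or an explicit typedef serves the same purpose.

#include <stdio.h>
#include <stdint.h>

int main()
{
    uint32_t record_count = 1234;           /* always exactly 32 bits */

    FILE* f = fopen("example.dat", "wb");   /* hypothetical file name */
    if (f != NULL)
    {
        /* Exactly 4 bytes go to disk on every platform, so a build where
           int happens to be a different size still reads the file correctly. */
        fwrite(&record_count, sizeof(record_count), 1, f);
        fclose(f);
    }
    return 0;
}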

hey AP, you are 100% wrong ... a long is NEVER EVER shorter than an int ... so on a 64-bit platform an int can be 32 or 64 bits, and a long can be 32 or 64 bits ... but a long will always be at least as long as an int ...

the BYTE, WORD, DWORD, and QWORD macros are ALWAYS 8, 16, 32, 64 bits respectively ... which is why they exist ...

if you want to write code that uses the natural size of the proc, use "int", if you always want to use 32 bits, use DWORD ... (for windows programming only)
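
A quick way to let the compiler confirm those fixed sizes; a sketch using C_ASSERT from WinNT.h, which refuses to compile if any of the sizes is ever wrong:

#include <windows.h>

/* Each line fails the build if the size is not what we expect. */
C_ASSERT(sizeof(BYTE)  == 1);
C_ASSERT(sizeof(WORD)  == 2);
C_ASSERT(sizeof(DWORD) == 4);

int main()
{
    return 0;
}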

Quote:
Original post by Kwizatz
... the __int64 typedef is a bit misleading too, as it is really 32 bits on 32-bit architectures.

Not true. I use __int64 on a 32-bit system and it is definitely 64 bits.

Quote:
Original post by Anonymous Poster
... Actually Kwizatz, an integer is the size of the processor ...

Actually AP, the sizes of the types are up to the compiler. Different compilers might (and do) have different sizes for the same types on the same platform.

Quote:
Original post by Anonymous Poster
... There is a 64 bit variable, but it is not a structure of two DWORDs ...

The 64-bit type that Kwizatz is talking about is LARGE_INTEGER. It is the union of a 64-bit value and a struct with two 32-bit values.
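
A short sketch of how that union is used, setting it as one 64-bit value and then reading back the two 32-bit halves:

#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER li;

    /* Assign the whole 64-bit value in one go... */
    li.QuadPart = 0x0000000100000002LL;   /* high half = 1, low half = 2 */

    /* ...then read the two 32-bit halves through the union members. */
    printf("HighPart = %ld, LowPart = %lu\n", li.HighPart, li.LowPart);
    return 0;
}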

Quote:
Original post by Kwizatz
int and long are the same on 32-bit processors: they are both 32 bits and hence double words (DWORD); unsigned short is 16 bits, a single word (WORD). I think there is a quad word (64 bits), but I think it's a struct of 2 DWORDs. The __int64 typedef is a bit misleading too, as it is really 32 bits on 32-bit architectures.


Actually, __int64 is 64 bits, hence its name. But the processor can only read 32 bits per cycle, so the 64 bits are broken down into two 32-bit reads.

Quote:
Original post by Xia
hey AP, you are 100% wrong ... a long is NEVER EVER shorter than an int ... so on a 64-bit platform an int can be 32 or 64 bits, and a long can be 32 or 64 bits ... but a long will always be at least as long as an int ...

the BYTE, WORD, DWORD, and QWORD macros are ALWAYS 8, 16, 32, 64 bits respectively ... which is why they exist ...

if you want to write code that uses the natural size of the proc, use "int", if you always want to use 32 bits, use DWORD ... (for windows programming only)


Hey, the AP was me, I said! And I am not wrong. A long is shorter than an int on a 64-bit processor. Look it up on Intel's and AMD's sites if you don't believe me. A long is always 4 bytes! In C/C++, the int is the data type whose size varies depending on the processor's bus. A data type never changes sizes!

And BYTE, WORD, and DWORD are not macros; they are type definitions! QWORD is a macro though. Look it up on MSDN. Also, you sort of contradicted yourself with your last statement too :)

Quote:
Original post by JohnBolton
Quote:
Original post by Kwizatz
... the __int64 typedef is a bit misleading too, as it is really 32 bits on 32-bit architectures.

Not true. I use __int64 on a 32-bit system and it is definitely 64 bits.

Quote:
Original post by Anonymous Poster
... Actually Kwizatz, an integer is the size of the processor ...

Actually AP, the sizes of the types are up to the compiler. Different compilers might (and do) have different sizes for the same types on the same platform.

Quote:
Original post by Anonymous Poster
... There is a 64 bit variable, but it is not a structure of two DWORDs ...

The 64-bit type that Kwizatz is talking about is LARGE_INTEGER. It is the union of a 64-bit value and a struct with two 32-bit values.


Umm dude, I was the AP. Yes, I had a thought that Kwizatz was talking about LARGE_INTEGER because he said QuadPart. The sizes of variables are dependent on the language, true. C and C++ use the register extensions to get the size in bytes: AH, AX, and EAX are 8, 16, and 32 bits respectively. Technically, the CPU does not know what a long, short, or int is, so you are very much right on that one. It is the language that defines the data types' ranges and what their sizes should be. The compiler's code generation phase determines what the variable sizes will be, based on the output of the previous phases. But hey, I would like to know which compilers mess with the types I use. Would it be safe to assume the ANSI-compliant and non-ANSI-compliant ones do?

Reference Material
Guide: About the 80386 Architecture
Some FAQ I found on CPU stuff ??
AMD devSource article on Athlon 64 FX
Intel’s Micro-Pro articles
Quote:
Original post by sakky
Hey, the AP was me, I said! And I am not wrong. A long is shorter than an int on a 64-bit processor. Look it up on Intel's and AMD's sites if you don't believe me. A long is always 4 bytes!

Again, we are telling you that you are wrong. The C++ standard guarantees that the long type is never smaller than the int type. By the very definition of long, it has to be at least as large as an int. I can only assume you are confused due to parallel terms outside of the language. Someone might refer to a long int outside of the C++ language and have it be a certain size, but that has no effect on the corresponding C++ long type, similar to how a C++ byte isn't necessarily the size of a system's byte.
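
If you want the compiler itself to vouch for that, a one-line check does it; a sketch using static_assert, which assumes a C++11 compiler (much newer than anything discussed in this thread), though the old negative-sized-array trick expresses the same thing:

// Compile-time checks of the standard's ordering guarantee: each type in the
// list provides at least as much storage as the one before it.
static_assert(sizeof(short) >= sizeof(signed char), "short must be at least as large as signed char");
static_assert(sizeof(int)   >= sizeof(short),       "int must be at least as large as short");
static_assert(sizeof(long)  >= sizeof(int),         "long must be at least as large as int");

int main()
{
    return 0;
}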

The int is the same size as a long, or larger, on a 64-bit processor. Write a program that prints the sizes of an int and a long, and you will see. The int is larger; however, it cannot hold the values that a long can. It can only hold the values that a 16-bit short would. I can run this program


#include <stdio.h>

int main( )
{
    printf( "An int is %d bytes\n", (int)sizeof( int ) );
    printf( "A long is %d bytes\n", (int)sizeof( long ) );
    return 0;
}



on my Athlon and my Pentium and get the same answers, because they are both 32-bit processors. The program states that an int and a long are the same size. On my 386, the int is 2 bytes and the long is still 4 bytes. And on my friend's Athlon 64 FX, the int is 8 bytes and the long is still 4 bytes.

The int may be bigger than a long, but it can never hold the value a long can, or it will cause an overflow. In C++ an int is smaller than a long in terms of storage capacity, the amount of data you may have in the variable. But the int is just like the short, except its size varies between systems. I even have C++ books that explain this. The int just can't hold the value that a long could, even though on a 32- or 64-bit processor it could. But it is just a crappy variable that doesn't have a set size, whereas the byte, short, and long always have the same size.

In C/C++, the int is smaller because it cannot hold as much information as a long. In fact it holds the same as a short. But the space it takes up is what I'm referring to, and the int is bigger than a long on a 64-bit processor, or the same size on a 32-bit processor.

Quote:
on my friend's Athlon 64 FX, the int is 8 bytes and the long is still 4 bytes.


Unless my interpretation of the C++ standard is incorrect, that makes your compiler non-compliant with the standard. Here's my reasoning, taken directly from the standard:




1.7 -1-
Quote:
"The fundamental storage unit in the C++ memory model is the byte." ...

5.3.3 -1-
Quote:
"The sizeof operator yields the number of bytes in the object representation of its operand." ...

3.9.1 -2-
Quote:
"There are four signed integer types: ``signed char'', ``short int'', ``int'', and ``long int.'' In this list, each type provides at least as much storage as those preceding it in the list." ...

Since 1.7 declares the fundamental storage unit to be a C++ byte, 5.3.3 declares the sizeof operator to yield the number of bytes (amount of storage) that an instantiation of a type occupies, and 3.9.1 declares that each type in the mentioned list provides at least as much storage (defined in 1.7 as bytes) as the previous one in the list, it follows that sizeof(long) must always yield a value at least as large as sizeof(int).

Also, your statements:
Quote:
The short and long data types are the same size on all processors.

Quote:
But it is just a crappy variable that doesn't have a set size, whereas the byte, short, and long always have the same size.

short and long (there is no type called byte), just like all of the other fundamental integral types (excluding char, signed char and unsigned char), do not always have the same size in bytes between compilers. The standard just provides the general requirements I mentioned earlier.

Edit: I just realized how horribly off topic from the original post we are getting with this. If we continue, let's take it to a new thread.

[Edited by - Polymorphic OOP on September 20, 2004 11:44:32 AM]

