DWORD vs. unsigned int

Hi! Is there a good reason why I would prefer using DWORD instead of unsigned int (except that it's shorter and easier to write)? The Win32 headers have lots of typedefs for standard types like int (INT), float (FLOAT), const char* (LPCSTR) and so on. Why should I use the Win32 typedefs? To make it even worse, OpenGL has its own typedefs for standard types as well. Standard types like float have been typedef'd to GLfloat for some reason. What's the reason for doing this? Is it for compatibility reasons, or what? -René

Real programmers don't document; if it was hard to write, it should be hard to understand.

An int on one architecture is not necessarily the same size as an int on another. Most architectures are 32-bit today, at least in the PC world, but there are architectures like the Alpha that have 64-bit ints. So when you use "int" you have no guarantee of how big it is. Not only that, the definition of a "word" differs between platforms too. In the Windows world a WORD is a 16-bit value, a DWORD is 32 bits and a QWORD is 64, but there are plenty of architectures that call 16-bit values halfwords and 32-bit values words (in fact, every platform I've worked with other than Wintel does). To get around this, there are preferred typedefs for the various types that are guaranteed to be a given size. ALWAYS use the typedefs: even though you know they're just an int now, they won't be when the rest of the world moves on to 64-bit architectures.
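Here's a minimal sketch of what that buys you, assuming a compiler that has the C99/C++11 fixed-width headers (<stdint.h> / <cstdint>) rather than the Win32 typedefs:

#include <cstdint>   // portable fixed-width integer types
#include <cstdio>

int main()
{
    // Plain "int" only promises to be at least 16 bits; its real size is
    // whatever the platform and compiler decide.
    std::printf("sizeof(int)      = %u bytes\n", (unsigned)sizeof(int));

    // These typedefs are guaranteed to be exactly the width their name says,
    // which is the same guarantee DWORD (32-bit) gives you on Win32.
    std::printf("sizeof(uint16_t) = %u bytes\n", (unsigned)sizeof(std::uint16_t));
    std::printf("sizeof(uint32_t) = %u bytes\n", (unsigned)sizeof(std::uint32_t));
    std::printf("sizeof(uint64_t) = %u bytes\n", (unsigned)sizeof(std::uint64_t));
    return 0;
}

On a typical 32-bit x86 build this prints 4, 2, 4 and 8, but only the last three are promises the language makes on every platform.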
One good example is NULL: to a C compiler NULL is typically defined as a pointer to void, ((void *)0), but to a C++ compiler it's simply 0.
I don't fully understand why integers need to be redefined. These seem to work:

__int8
__int16
__int32
__int64

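Those __intN names are a Microsoft compiler extension rather than standard C/C++, which is why they "seem to work" under MSVC but won't port elsewhere. Here's a small sketch of the distinction, assuming an MSVC build where both the extension and C++11's <cstdint> and static_assert are available:

#include <cstdint>   // portable fixed-width types (C99 <stdint.h> / C++11 <cstdint>)

// The __intN names below are a Microsoft compiler extension, so this part is MSVC-only.
typedef unsigned __int32   dword_msvc;   // always 32 bits, but only on Microsoft compilers
typedef std::uint32_t      dword_std;    // always 32 bits on any conforming compiler

// Both are the same width as a Win32 DWORD; only the second is portable.
static_assert(sizeof(dword_msvc) == 4, "expected 4 bytes");
static_assert(sizeof(dword_std)  == 4, "expected 4 bytes");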
