This topic is now archived and is closed to further replies.

DWORD vs. unsigned int


Hi! Is there a good reason why I would prefer using DWORD instead of unsigned int (other than that it's shorter and easier to write)? The Win32 headers have lots of typedefs for standard types, like int (INT), float (FLOAT), const char* (LPCSTR), and so on. Why should I use the Win32 typedefs? To make it even worse, OpenGL has its own typedefs for standard types as well; float, for example, has been typedef'd to GLfloat for some reason. What's the reason for doing this? Is it for compatibility, or what? -René

An int on one architecture is not necessarily the same size as an int on another. Most architectures are 32-bit today, at least in the PC world, but there are 64-bit architectures like the Alpha, where long and pointer types are 64 bits wide. So when you use "int" you have no guarantee how big it is. Not only that, but the definition of a "word" differs across platforms too. In the Windows world, a WORD is a 16-bit value, a DWORD is 32, and a QWORD is 64, but many architectures call 16-bit values halfwords and 32-bit values words (in fact, every platform I've worked with other than Wintel does). To get around this, there are preferred typedefs for the various types that are guaranteed to be a given size. ALWAYS use the typedefs: even though you know they're just an int now, they won't be when the rest of the world moves on to 64-bit architectures.

Guest Anonymous Poster
One good example is NULL: to a C compiler, NULL is typically defined as a pointer to void, ((void *)0), but to a C++ compiler it's simply 0.
