Data types and #define questions

Started by Moot
4 comments, last by Moot 23 years, 4 months ago
Why does so much code I see use data types such as BOOL, DWORD and HRESULT? Aren't these MFC/OLE data types? Why use BOOL instead of 'bool', which is a standard C++ type? Is it because the DirectX libraries use them? I've seen them used in people's own classes and functions, but surely this reduces the portability of someone's code, doesn't it? Or do most people accept that their code is always going to run under Windows and just use the types for convenience? Even if that's the case, I still don't understand the use of BOOL instead of 'bool'.

Another thing I've often seen is #define WIN32_LEAN_AND_MEAN. I've also seen WIN32_EXTRA_LEAN. I've no idea what these do. Do they just stop unnecessary parts of the Windows header files being compiled? What's the difference between the two? Will leaving them out result in a larger executable, or just a longer compile time?

Sorry if these are basic questions, but I'm writing my own DirectDraw wrapper at the moment and am copying some code from the DX samples. I'd like to fully understand what it does.

Thanks, Moot
The redefinition of the standard types (BOOL, CHAR, etc.) probably comes from the fact that Micro$oft got bitten in the arse when Windose changed from being 16-bit to 32-bit.

The big problem with using the standard type "int" is that ANSI C only guarantees it is at least 16 bits (and doesn't pin down its exact size).

So by defining their own types, MS can control exactly how many bytes each type takes, so structures passed to the OS stay identical even if sizeof(int) changes (again) - one day Windose may catch up with the latest game consoles, which use 64- and 128-bit ints.


The lean and mean directive stops windows.h including the shite associated with OLE, atoms, clipboards, etc... Take a look at your windows.h in the DevStudio\include directory.

I've never heard of extra lean, but I'd guess it's just an extension of the above.
BOOL, DWORD and HRESULT are all standard Windows defines. If you are porting, you could always just typedef them yourself (they are all 32-bit unsigned ints, I think).

bool is a C++ type, and a lot of the sample code is designed for C; that's why BOOL is used.

And yes, the LEAN_AND_MEAN defines remove some stuff from the Windows headers. Just search for them in the header files if you want to know exactly what is happening.
Wow, those replies were quick!

Cheers!
To answer your first question, BOOL, DWORD and HRESULT are defined for the Win32 API, not MFC (and neither is directly related to OLE). This is an important difference. The Win32 API is a C API, so it doesn't know about the bool type. MFC is written in C++, so it could use bool, but it chooses to stick with the Win32 BOOL instead. They both reduce to the same type, though (a bool takes up at least one byte in memory, not one bit as many assume).

This doesn't really reduce code portability if you're writing Windows programs. If you're writing a console app that doesn't use the Win32 API--i.e. you're just including windows.h for the type definitions--then that's definitely reducing portability. But I'd say it's portable to use any of these types in any Windows program.

For your second question, if you don't define WIN32_LEAN_AND_MEAN, the code within these checks in windows.h is included:
  #ifndef WIN32_LEAN_AND_MEAN
  #include <cderr.h>
  #include <dde.h>
  #include <ddeml.h>
  #include <dlgs.h>
  #ifndef _MAC
  #include <lzexpand.h>
  #include <mmsystem.h>
  #include <nb30.h>
  #include <rpc.h>
  #endif
  #include <shellapi.h>
  #ifndef _MAC
  #include <winperf.h>
  #if(_WIN32_WINNT >= 0x0400)
  #include <winsock2.h>
  #include <mswsock.h>
  #else
  #include <winsock.h>
  #endif /* _WIN32_WINNT >= 0x0400 */
  #endif
  #ifndef NOCRYPT
  #include <wincrypt.h>
  #endif
  #ifndef NOGDI
  #include <commdlg.h>
  #ifndef _MAC
  #include <winspool.h>
  #ifdef INC_OLE1
  #include <ole.h>
  #else
  #include <ole2.h>
  #endif /* !INC_OLE1 */
  #endif /* !MAC */
  #endif /* !NOGDI */
  #endif /* WIN32_LEAN_AND_MEAN */



"int" is defined to be a signed integral type whose size is greater than or equal to a "short" and less than or equal to a "long". ANSI doesn't say how big it is, but it does say that it is signed.

Also, BOOL does not reduce to bool. Well, it does, but at a cost. I don't know if any standard says how big a bool is, but it does say that its value can only be 0 or 1 when it is converted to an integral type. Ensuring that it is 0 or 1 takes a bit of extra work by the compiler.

BOOL actually has slightly different semantics than bool. A BOOL is either 0 or non-zero; a bool is either 0 or 1.

-Mike

