cannot convert char* to LPCWSTR
Banned - Reputation: 100
Posted 12 July 2006 - 07:39 PM
Members - Reputation: 568
Posted 12 July 2006 - 07:44 PM
Crossbones+ - Reputation: 4783
Posted 12 July 2006 - 07:44 PM
I guess you're using Visual Studio 2005, which defaults all projects to Unicode. You can change that setting in the project's properties to Multi-Byte Character Set (MBCS) to use char* again.
You can also learn something new and use Unicode. In that case you have to use WCHAR* (or, better for Windows programming, TCHAR*).
To get a string literal into Unicode encoding you have to use the L prefix:
const WCHAR* WindowCaption = L"Main Window";
To make your code usable in both the Unicode and the MBCS setting, you can use TCHAR* and the _T macro:
const TCHAR* WindowCaption = _T("Main Window");
Do NOT simply cast; as you noticed, that doesn't work.
Members - Reputation: 204
Posted 13 July 2006 - 02:58 AM
But here is the (awesome) answer that I got (from Pat) which helped me understand what was going on and why to use UNICODE.
As a side note, let me explain the issue in a little more detail so you understand what's really going on.
The Win32 API headers define a macro for each API function that has one or more string arguments or returns a character pointer. The actual implementation exists in two versions: one ANSI and one Unicode. These versions are suffixed by a single letter to indicate the string type.
MessageBox -> the macro you are used to
MessageBoxA -> the ANSI version that you probably know and use; takes char* arguments, e.g. char const * title = "Some Text"
MessageBoxW -> the UNICODE version; same as MessageBoxA, but expects UNICODE input, e.g. wchar_t const * title = L"Some Text"
The header that defines the macro then looks at the character set in use by examining the UNICODE macro (or _UNICODE for the C runtime headers) and maps the corresponding function:
#ifdef UNICODE
# define MessageBox MessageBoxW
#else
# define MessageBox MessageBoxA
#endif
The same is done for any struct that contains C-strings.
So far so good. Now why should you prefer Unicode? First of all, since the introduction of Windows NT (which Windows 2000, XP, 2003, and later are built on), the kernel uses Unicode internally. This means that the ANSI versions convert the input to Unicode at some point anyway, so you might as well use it, too.
Secondly, Unicode avoids any problems with characters that are outside of the default ASCII set (i.e. character values above 127), which are otherwise mapped to whatever charset the user has installed. This can lead to problems on non-English OS versions, as the extended range (128-255) differs from region to region.
Last but not least, the Hungarian Notation used by the Win32 API indicates the type of the data as well. So if the compiler complains about a missing cast, you can easily spot the requested type by looking at its name. For example, LPCSTR stands for "Long Pointer [to] Const STRing". The "Long" part is an artifact from the 16-bit era and can safely be ignored. The "Const STRing" part tells you that the narrow character set is used and that the data will not be modified (i.e. const char *).
LPCWSTR, however, is almost the same, with the exception that it's a "Wide STRing", i.e. Unicode.
If you are still having problems take a look at my thread about it and see if any of those responses help.
GL to ya,