# Beginner Windows Question


## Recommended Posts

I have recently begun my journey of learning to program with the Windows API. I was doing alright under Dev-C++, but I recently switched over to Visual Studio 2005, and now I'm having problems just compiling the simplest of programs I was using as a test.
// Windowstest.cpp
//

#include "stdafx.h"
#include <windows.h>

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    MessageBox(NULL, "Hello World!", "Message", MB_OK);
    return 0;
}


And I get this error:

Windowstest.cpp(9) : error C2664: 'MessageBoxW' : cannot convert parameter 2 from 'const char [13]' to 'LPCWSTR'
        Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

Now, I can cast the two strings to LPCWSTR and it will compile fine, but it prints out gibberish in the actual program. So I guess I just have no idea why it isn't working as is... whether it's something specific to Visual Studio or what. I have tried to search for an answer but can't seem to find anything useful, so any advice would be GREATLY appreciated... Thanks!
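(For reference, here's the cast I tried. My guess is it just relabels the raw char bytes as wide characters instead of actually converting the string, which would explain the gibberish:)

// Compiles, but prints gibberish: the cast doesn't convert anything,
// it just reinterprets pairs of 1-byte chars as bogus 2-byte characters.
MessageBox(NULL, (LPCWSTR)"Hello World!", (LPCWSTR)"Message", MB_OK);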

##### Share on other sites
Oh okay, never mind, I think I found my solution...

Something about defaulting to Unicode standards...?

##### Share on other sites
VS 2005 defaults to Unicode, therefore you have to prefix simple text strings with L:

MessageBox( NULL, L"Hello World!", L"Message", MB_OK );

Either that, or you turn off Unicode in your project settings.
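If you want code that builds no matter which setting the project uses, the usual trick is the TEXT() macro (or _T() from &lt;tchar.h&gt;), which expands to a wide literal in Unicode builds and a plain one otherwise. A rough sketch of the original program using it:

#include &lt;windows.h&gt;
#include &lt;tchar.h&gt;

// TEXT("...") becomes L"..." when UNICODE is defined, "..." otherwise,
// so this compiles with the Character Set option either way.
int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    MessageBox(NULL, TEXT("Hello World!"), TEXT("Message"), MB_OK);
    return 0;
}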

##### Share on other sites
Yeah, that's what I read in another post... I ended up just turning Unicode off for now. But I was curious whether there's any benefit to using Unicode; this is the first I've heard of it, and if I'd be better off using it from now on, I will do so.

##### Share on other sites
I hope I am not too late to reply...

Unicode is basically what the name says: universal character codes. Unicode has letters ranging from the Latin to the Arabic alphabet, so practically every letter/symbol/number known to man is defined somewhere in Unicode. The thing is, Unicode defines tens of thousands of characters, far more than the 256 values that fit in an 8-bit char. That is why you have to use the L prefix: unlike 8-bit ANSI characters, Windows Unicode strings use 2 bytes per character (UTF-16) to cover that massive library. The downside is that not a lot of programmers use Unicode, so it is still an unfamiliar term to some. Was that of any help?
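If it helps to see what's going on under the hood: MessageBox itself is a macro that expands to either MessageBoxA (ANSI, 1-byte chars) or MessageBoxW (wide, 2-byte chars) depending on whether UNICODE is defined. You can call either one explicitly, which is a quick way to see the difference:

#include &lt;windows.h&gt;

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    // Explicit ANSI version: takes 1-byte char strings.
    MessageBoxA(NULL, "Hello World!", "Message", MB_OK);

    // Explicit wide version: takes 2-byte wchar_t strings (note the L prefix).
    MessageBoxW(NULL, L"Hello World!", L"Message", MB_OK);
    return 0;
}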

Good question core-nuts [smile]. rating++

##### Share on other sites
Yes! Very helpful. Thank you so much.

Though I'm stuck on something else now. One of these days I'll be able to actually START coding under Visual Studio... ugghhh