Unicode with C++ String

Started by
6 comments, last by Krohm 12 years, 7 months ago
I'd like to support Unicode text encoding using STL strings; however, it sounds like the wide-char datatype differs across platforms. For example, a wchar_t is 2 bytes on Windows while it's 4 on Linux/Unix/Mac. If I were to save my strings to a binary file with STL strings in it on a Windows machine and read it on a Unix-based machine, there would be marshaling involved.


Any ideas for handling Unicode data? My goal is to make it cross-platform, because the text data will be stored in game configuration files, but I'd also like to be able to convert it to NSString for iOS devices.
This page might help get you started. Unicode is a big world :-)

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Use UTF-8. Tada, problem solved.

Operating systems like Linux already use UTF-8 in their APIs, and you can convert a UTF-8 string to UTF-16 in Windows before passing it to the Win32 API.
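As a rough sketch of that conversion (the function name is my own; production code should use MultiByteToWideChar on Windows or a library like ICU, and must also handle surrogate pairs, over-long sequences, and other invalid input), a minimal BMP-only UTF-8 to UTF-16 converter might look like:

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Minimal UTF-8 -> UTF-16 converter (sketch: handles code points up to
// U+FFFF only; a full converter must also emit surrogate pairs for
// 4-byte sequences and reject over-long or invalid encodings).
std::u16string Utf8ToUtf16Bmp(const std::string& in) {
    std::u16string out;
    for (std::size_t i = 0; i < in.size();) {
        unsigned char b = in[i];
        char32_t cp;
        std::size_t len;
        if (b < 0x80)      { cp = b;        len = 1; } // ASCII
        else if (b < 0xE0) { cp = b & 0x1F; len = 2; } // 2-byte sequence
        else if (b < 0xF0) { cp = b & 0x0F; len = 3; } // 3-byte sequence
        else throw std::runtime_error("non-BMP or invalid lead byte");
        if (i + len > in.size())
            throw std::runtime_error("truncated sequence");
        for (std::size_t k = 1; k < len; ++k)
            cp = (cp << 6) | (in[i + k] & 0x3F); // fold in continuation bits
        out.push_back(static_cast<char16_t>(cp));
        i += len;
    }
    return out;
}
```

For example, "h\xC3\xA9llo" (the UTF-8 bytes of "héllo") is 6 bytes in, 5 UTF-16 code units out.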
Except C++ does not understand UTF-8, and so things like std::string("some_unicode_magic").length() can lie to you, due to variable-length encoding in UTF-8. The encoding has many subtle pitfalls and gotchas, and is not a drop-in replacement for the 7-bit or 8-bit char encodings that C++ was designed around. Suggesting otherwise is misguided and just plain bad advice.
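A minimal illustration of the point (the literal below spells out "héllo" in UTF-8 bytes):

```cpp
#include <string>

// "héllo" encoded as UTF-8: 'é' (U+00E9) occupies two bytes, 0xC3 0xA9,
// so the string holds 6 char code units even though a reader perceives
// only 5 characters. length() reports the former, not the latter.
const std::string kHello = "h\xC3\xA9llo";
```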

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]


Except C++ does not understand UTF-8, and so things like std::string("some_unicode_magic").length() can lie to you, due to variable-length encoding in UTF-8. The encoding has many subtle pitfalls and gotchas, and is not a drop-in replacement for the 7-bit or 8-bit char encodings that C++ was designed around. Suggesting otherwise is misguided and just plain bad advice.


Thanks for the negative rating (whoever), but unfortunately you are wrong. C++ doesn't understand anything; it's just a programming language. string::length returns the number of code units, not the number of glyphs. The same goes for wstring when you are working with UTF-16. You might not know it, but UTF-16 is variable-length too. Not all code points fit into one wchar_t. So nothing is different about using UTF-8 with std::string vs. using UTF-16 with std::wstring.

If you really need to know how many glyphs are in a string (a relatively rare thing to need) vs. how many code points are in it, there are ways of doing that, and it's the same for UTF-16.

I've managed to use it as a "drop in" replacement in all my applications.

Perhaps you are misguided.
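For what it's worth, counting code points in a UTF-8 string is straightforward, because continuation bytes are recognizable by their 10xxxxxx bit pattern. A rough sketch (the helper name is my own):

```cpp
#include <string>

// Counts Unicode code points in a UTF-8 string by counting every byte
// that is NOT a continuation byte (continuation bytes look like 10xxxxxx,
// i.e. their top two bits are 10).
std::size_t CountCodePoints(const std::string& utf8) {
    std::size_t count = 0;
    for (unsigned char c : utf8)
        if ((c & 0xC0) != 0x80) // lead byte or ASCII => new code point
            ++count;
    return count;
}
```

Note this counts code points, not glyphs; combining marks still count separately.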
If your target platforms all have the relevant C++11 language support, you can try the new u16string. Otherwise, I recommend grabbing a library like ICU. Actually, I'd still give ICU a look even if you can count on C++11 support.
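A quick illustration of u16string, which also shows that UTF-16 is variable-length (this assumes a C++11 compiler):

```cpp
#include <string>

// std::u16string stores UTF-16 code units in char16_t. A code point
// outside the Basic Multilingual Plane, such as U+1D11E (MUSICAL SYMBOL
// G CLEF), is stored as a surrogate pair -- two 16-bit code units.
const std::u16string kClef = u"\U0001D11E";
// All-BMP text uses one code unit per code point:
const std::u16string kGreeting = u"h\u00E9llo";
```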

Thanks for the negative rating (whoever), but unfortunately you are wrong. C++ doesn't understand anything; it's just a programming language. string::length returns the number of code units, not the number of glyphs. The same goes for wstring when you are working with UTF-16. You might not know it, but UTF-16 is variable-length too. Not all code points fit into one wchar_t. So nothing is different about using UTF-8 with std::string vs. using UTF-16 with std::wstring.

If you really need to know how many glyphs are in a string (a relatively rare thing to need) vs. how many code points are in it, there are ways of doing that, and it's the same for UTF-16.

I've managed to use it as a "drop in" replacement in all my applications.

Perhaps you are misguided.


If you do any kind of textual processing on your strings, you can royally screw up Unicode by assuming that UTF-8 is some kind of magic that just makes std::string support Unicode. Even Microsoft crapped that up royally in the .Net string handling (cf. all the normalization fiascos).

I didn't say anything about UTF-16 being magic, either, so I don't know why you're railing on UTF-16.

Sites like http://www.unicode.org/faq/ describe all kinds of pitfalls in handling Unicode strings, including UTF-8 encoded strings. If you haven't deeply considered and understood this stuff, you're probably going to break something subtle.


And for the record, I never claimed you weren't "successful" at using UTF-8. My point is that just because you might have done it right, doesn't mean that someone else doing it right is as easy as "use UTF-8, tada."

Suggesting that anything Unicode related is as easy as "foo, tada" is the bad advice I was referring to.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

...it sounds like the wide-char datatype is different across different platforms. For example, a wchar_t is 2 bytes on Windows while it's 4 on Linux/Unix/Mac. If I were to save my strings to a file...
Key point here: persistent encoding != runtime encoding. Choose whichever runtime encoding fits you better. When saving, pick an encoding and make sure you go through the proper conversion steps.
Would you just write a binary size_t to a file, knowing it can be 4 or 8 bytes wide depending on the build? Of course not! You would save a 16-, 32-, or 64-bit blob in a given byte and bit order, possibly using a variable-length encoding if needed...
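A sketch of that idea (the helper name is hypothetical): serialize integers at a fixed width and byte order instead of dumping their in-memory representation:

```cpp
#include <cstdint>
#include <vector>

// Writes a 32-bit value in a fixed little-endian byte order, so the
// file layout is identical regardless of the host platform's
// endianness or integer sizes.
void WriteU32LE(std::vector<unsigned char>& out, std::uint32_t v) {
    out.push_back(static_cast<unsigned char>(v & 0xFF));
    out.push_back(static_cast<unsigned char>((v >> 8) & 0xFF));
    out.push_back(static_cast<unsigned char>((v >> 16) & 0xFF));
    out.push_back(static_cast<unsigned char>((v >> 24) & 0xFF));
}
```

The same principle applies to strings: pick a persistent encoding (say, UTF-8 with a length prefix) and convert to and from your runtime encoding at the file boundary.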

I currently use utf-16 for encoding. It's ok, really.

Not using wchar_t in C/C++ has, in my opinion, huge error potential. Newer C++ versions not included.

Previously "Krohm"
