Unicode with C++ String


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
7 replies to this topic

#1 Vincent_M   Members   -  Reputation: 744


Posted 21 September 2011 - 03:50 PM

I'd like to support Unicode text encoding using STL strings; however, it sounds like the wide-char datatype differs across platforms. For example, wchar_t is 2 bytes on Windows while it's 4 on Linux/Unix/Mac. If I were to save my strings to a binary file on a Windows machine and read it on a Unix-based machine, there would be marshaling involved.


Any ideas for handling Unicode data? My goal is to make it cross-platform, because the text data will be stored in game configuration files, but I'd also like to be able to convert it to NSString for iOS devices.


#2 ApochPiQ   Moderators   -  Reputation: 16397


Posted 21 September 2011 - 04:17 PM

This page might help get you started. Unicode is a big world :-)

#3 Chris_F   Members   -  Reputation: 2461


Posted 21 September 2011 - 04:17 PM

Use UTF-8. Tada, problem solved.

Operating systems like Linux already use UTF-8 in their APIs, and you can convert a UTF-8 string to UTF-16 in Windows before passing it to the Win32 API.
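On Windows the conversion Chris_F describes is usually done with the Win32 MultiByteToWideChar function. As a platform-neutral illustration of what that conversion actually involves, here is a minimal hand-rolled sketch (my own illustrative code, not from this thread; it assumes well-formed UTF-8 input and does no error handling):

```cpp
#include <cstdint>
#include <string>

// Convert a well-formed UTF-8 byte sequence to UTF-16 code units.
// Illustrative sketch only: invalid input is not detected.
std::u16string utf8_to_utf16(const std::string& in) {
    std::u16string out;
    for (std::size_t i = 0; i < in.size(); ) {
        unsigned char b = static_cast<unsigned char>(in[i]);
        std::uint32_t cp;
        std::size_t len;
        if      (b < 0x80) { cp = b;        len = 1; }  // ASCII
        else if (b < 0xE0) { cp = b & 0x1F; len = 2; }  // 2-byte sequence
        else if (b < 0xF0) { cp = b & 0x0F; len = 3; }  // 3-byte sequence
        else               { cp = b & 0x07; len = 4; }  // 4-byte sequence
        for (std::size_t j = 1; j < len; ++j)           // fold in continuation bytes
            cp = (cp << 6) | (static_cast<unsigned char>(in[i + j]) & 0x3F);
        i += len;
        if (cp < 0x10000) {
            out.push_back(static_cast<char16_t>(cp));
        } else {                                        // outside the BMP: surrogate pair
            cp -= 0x10000;
            out.push_back(static_cast<char16_t>(0xD800 | (cp >> 10)));
            out.push_back(static_cast<char16_t>(0xDC00 | (cp & 0x3FF)));
        }
    }
    return out;
}
```

In production code you would prefer the platform's own routine (MultiByteToWideChar on Windows, or a library such as ICU) since those also validate the input.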

#4 ApochPiQ   Moderators   -  Reputation: 16397


Posted 21 September 2011 - 04:21 PM

Except C++ does not understand UTF-8, and so things like std::string("some_unicode_magic").length() can lie to you, due to variable-length encoding in UTF-8. The encoding has many subtle pitfalls and gotchas, and is not a drop-in replacement for the 7-bit or 8-bit char encodings that C++ was designed around. Suggesting otherwise is misguided and just plain bad advice.

#5 Chris_F   Members   -  Reputation: 2461


Posted 21 September 2011 - 04:32 PM

Except C++ does not understand UTF-8, and so things like std::string("some_unicode_magic").length() can lie to you, due to variable-length encoding in UTF-8. The encoding has many subtle pitfalls and gotchas, and is not a drop-in replacement for the 7-bit or 8-bit char encodings that C++ was designed around. Suggesting otherwise is misguided and just plain bad advice.


Thanks for the negative rating (whoever), but unfortunately you are wrong. C++ doesn't understand anything; it's just a programming language. string::length returns the number of code units, not the number of glyphs. The same goes for wstring when you are working with UTF-16. You might not know it, but UTF-16 is variable-length too: not every code point fits into one 16-bit wchar_t. So nothing is different about using UTF-8 with std::string vs. using UTF-16 with std::wstring.

If you really need to know how many glyphs are in a string (a relatively rare need) as opposed to how many code units, there are ways of doing that, and the same applies to UTF-16.
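One of those "ways of doing that" can be sketched as follows (my own illustration): in UTF-8, continuation bytes always have the bit pattern 10xxxxxx, so counting only the non-continuation bytes yields the number of code points. Note that code points are still not the same as user-perceived characters (combining marks, etc.), which is where a library like ICU comes in.

```cpp
#include <cstddef>
#include <string>

// Count Unicode code points in a well-formed UTF-8 string by
// skipping continuation bytes (those matching 10xxxxxx).
std::size_t count_code_points(const std::string& utf8) {
    std::size_t n = 0;
    for (unsigned char c : utf8)
        if ((c & 0xC0) != 0x80)  // not a continuation byte
            ++n;
    return n;
}
```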

I've managed to use it as a "drop in" replacement in all my applications.

Perhaps you are misguided.

#6 SiCrane   Moderators   -  Reputation: 9669


Posted 21 September 2011 - 04:32 PM

If your target platforms all have the relevant C++11 language support you can try using the new u16string. Otherwise, I recommend grabbing a library like ICU. Actually, I'd still give it a look even if you can count on that.

#7 ApochPiQ   Moderators   -  Reputation: 16397


Posted 21 September 2011 - 04:54 PM

Thanks for the negative rating (whoever), but unfortunately you are wrong. C++ doesn't understand anything; it's just a programming language. string::length returns the number of code units, not the number of glyphs. The same goes for wstring when you are working with UTF-16. You might not know it, but UTF-16 is variable-length too: not every code point fits into one 16-bit wchar_t. So nothing is different about using UTF-8 with std::string vs. using UTF-16 with std::wstring.

If you really need to know how many glyphs are in a string (a relatively rare need) as opposed to how many code units, there are ways of doing that, and the same applies to UTF-16.

I've managed to use it as a "drop in" replacement in all my applications.

Perhaps you are misguided.


If you do any kind of textual processing on your strings, you can royally screw up Unicode by assuming that UTF-8 is some kind of magic that just makes std::string support Unicode. Even Microsoft crapped this up royally in the .NET string handling (cf. all the normalization fiascos).

I didn't say anything about UTF-16 being magic, either, so I don't know why you're railing on UTF-16.

Sites like http://www.unicode.org/faq/ describe all kinds of pitfalls in handling Unicode strings, including UTF-8 encoded strings. If you haven't deeply considered and understood this stuff, you're probably going to break something subtle.


And for the record, I never claimed you weren't "successful" at using UTF-8. My point is that just because you might have done it right doesn't mean it's as easy for someone else as "use UTF-8, tada."

Suggesting that anything Unicode related is as easy as "foo, tada" is the bad advice I was referring to.

#8 Krohm   Crossbones+   -  Reputation: 3249


Posted 22 September 2011 - 12:28 AM

...it sounds like the wide-char datatype is different across different platforms. For example, a wchar_t is 2 bytes on Windows while it's 4 on Linux/Unix/Mac. If I were to save my strings to a file...

Key point here: persistent encoding != runtime encoding. Choose whichever runtime encoding fits you best. When saving, pick a persistent encoding and make sure to go through the proper conversion steps.
Would you just write a binary size_t to a file, knowing it can be 4 or 8 bytes wide depending on the build? Of course not! You would save a 16-, 32-, or 64-bit blob in a defined byte and bit order, possibly using a variable-length encoding if needed.

I currently use UTF-16 as my runtime encoding. It works fine, really.

In my opinion, not using wchar_t in C/C++ carries huge error potential, at least prior to the newer C++ standards.



