mixing unicode and non-unicode

Started by
1 comment, last by dmatter 10 years, 9 months ago

Hi everyone,

I'm trying to figure out whether I can use both kinds of strings in the same project. I just need to call common functions with different string types, like the following:

char* a = "raw string";

wchar_t* b = L"unicode string";

and call the same function on each:

stringLength(a), stringLength(b)

I'm looking for template functions that can handle both string types, maybe something in std? I'm not sure whether find, replace, etc. in <algorithm> can handle this problem, any ideas? I also need a lot of string utility functions, not just the length.

Thanks,

J


Oh, Windows. wchar_t, wide strings, and so on are NOT really the only way, or even the best way, to "do Unicode". In general you should expect a Unicode-aware strlen-type function to step through every character of the string and compute the length as it goes; even with wide characters, some code points take multiple code units. The obvious and least painful solution is just to avoid taking string lengths, which is particularly sensible because some characters may have length one or two depending on how you look at them. Another option is to convert everything to massive 32-bit strings, take the length there, and convert back when you are done.

In general I use Unicode mode in Windows just for the extra static checking, but I IMMEDIATELY convert everything from wchar_t to UTF-8 narrow strings for use inside my code. On recent versions of Visual Studio you can use std::wstring_convert to do the narrowing/widening; if you need to use GCC you are going to have to fall back on the C conversion functions. What follows is my implementation of widen and narrow; the commented-out code is the same function written with std::wstring_convert, likely with fewer bugs. The C version may well still have bugs of its own, but at least it ACTUALLY narrows/widens, unlike most examples on the net (it needs <string>, <vector> and <cwchar>):


    inline std::string narrow(const std::wstring& wstr) {
        std::mbstate_t state = std::mbstate_t();
        const wchar_t* buffer = wstr.c_str();
        std::size_t len = std::wcsrtombs(nullptr, &buffer, 0, &state);
        if (len == static_cast<std::size_t>(-1))
            return std::string(); // not convertible in the current locale
        std::vector<char> nstrbuf(len + 1);
        std::wcsrtombs(nstrbuf.data(), &buffer, nstrbuf.size(), &state);
        return std::string(nstrbuf.data());
        //this stuff does not work in GCC, FML
        //std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
        //return converter.to_bytes(wstr);
    }
    inline std::wstring widen(const std::string& nstr) {
        std::mbstate_t state = std::mbstate_t();
        const char* buffer = nstr.c_str();
        std::size_t len = std::mbsrtowcs(nullptr, &buffer, 0, &state);
        if (len == static_cast<std::size_t>(-1))
            return std::wstring(); // invalid multibyte sequence
        std::vector<wchar_t> wstrbuf(len + 1);
        std::mbsrtowcs(wstrbuf.data(), &buffer, wstrbuf.size(), &state);
        return std::wstring(wstrbuf.data());
        //again does not work on GCC because reasons
        //std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
        //return converter.from_bytes(nstr);
    }

See http://utf8everywhere.org/ for how to handle text and an explanation of how weird Windows is.

Edit: I think the best bet for actually getting character counts is to go ahead and widen your strings and then take the length of the wide string. Yes, that's a lot of slow conversions, but strings are slow /anyways/. (And note this only really works where wchar_t is 32-bit; on Windows wide strings are UTF-16, so surrogate pairs still count as two units.)

wchar_t is just a wide (generally 2- or 4-byte) character type. It hasn't really got a lot to do with Unicode and was added to C before Unicode took off.

Unicode itself is a character set; its encodings (UTF-8, UTF-16 and UTF-32) are algorithms that require parsing and interpreting bytes.

UTF-8, for example, can be handled with regular chars. Some Unicode characters require up to 4 bytes; the most common are 1 or 2.

Obviously ASCII allowed us to use a fixed-width system: 1 character == 1 char == 1 byte. Not so with Unicode, although that is pretty much what you get with UTF-32. UTF-8 and UTF-16 are variable-width encodings; different characters require different numbers of bytes.

char is fine for UTF-8.

But of course, with Unicode, 1 char (or wchar_t) does not necessarily equal 1 character (code point).

As for string lengths, as Demos Sema said, you have to run the decoding algorithm, which means parsing the bytes: really a sequential scan through the string. Similarly, you cannot just random-access a character at a specific index. Of course, if you can make some assumptions about the text (e.g. you're using UTF-8 but you know there won't be any multi-byte code points, which is practically equivalent to saying your string is ASCII-only), then you can just count the number of char elements and index randomly into the string.

There are no functions in the C++ Standard Library that are Unicode aware. Any strlen type functions, for example, will just be counting the number of chars in the array, which is going to yield too large a number if that string is a Unicode string.

The width of wchar_t is implementation-defined, so it isn't really useful for Unicode unless you know something about the implementation. On Windows it's 2 bytes and holds UTF-16 code units; on most Unix-like systems it's 4 bytes.

C++11 introduced char16_t and char32_t, which have well-defined, fixed sizes. So if you're using UTF-16 and C++11, you would be better off using char16_t instead of wchar_t.

Overall I would leave all of that alone; if you're serious about using Unicode in C++ then instead take a look at the de facto standard ICU library. Then there's also boost::locale.
