From: Sunanda C. <cho...@ad...> - 2013-02-06 06:43:03
Hello,

I have a wide-character string containing Unicode characters which are UTF-8 encoded. I want to convert it to char* since the file-system API (Linux-specific) requires the path as char*. The current locale of the Linux machine is set to "en_US.UTF-8". I used wcstombs() to do the conversion, but I get garbled output.

For example: the wchar_t* contains <U+00DB> <U+0081> (UTF-8), which when converted to char* results in C3 9B C2 81. fopen() then fails to locate the file, since the file name is stored on disk as the bytes DB 81 while my char* path is C3 9B C2 81. Since the input encoding is UTF-8, I expected the output to be a byte-by-byte copy of the input string (DB 81). This output confused me.

Can I use the ICU library to perform this conversion?