$ cat test.c
#include <stdio.h>
#include <wchar.h>
#include <utf8proc.h>

int main() {
    wint_t c = 0x2630;
    printf("wcwidth returns %d\n", wcwidth(c));
    printf("utf8proc_charwidth returns %d\n", utf8proc_charwidth(c));
    return 0;
}
$ ./test
wcwidth returns -1
utf8proc_charwidth returns 2

So glibc's wcwidth() should return 2 for the Unicode character 0x2630.

More info: https://github.com/source-foundry/Hack/pull/236#issuecomment-345040104
If I add a call to setlocale(LC_ALL, "") with a proper LANG environment variable and installed locale data, I get the expected result (which is 1, and agrees with modern utf8proc):

[schwarzgerat](0) $ cat test.c
#include <stdio.h>
#include <locale.h>
#include <wchar.h>
#include <utf8proc.h>

int main() {
    setlocale(LC_ALL, "");
    wint_t c = 0x2630;
    printf("wcwidth returns %d\n", wcwidth(c));
    printf("utf8proc_charwidth returns %d\n", utf8proc_charwidth(c));
    return 0;
}
[schwarzgerat](0) $ gcc -D_XOPEN_SOURCE test.c -lutf8proc
[schwarzgerat](0) $ ./a.out
wcwidth returns 1
utf8proc_charwidth returns 1
[schwarzgerat](0) $

I believe this bug can be closed. The test program provided requires a setlocale() call to produce the expected result.
Without a setlocale() call, you are running in the default C locale, whose codeset is ASCII, so wcwidth() treats U+2630 as non-printable.