Unicode in Microsoft Windows

This is an old revision of this page, as edited by Spitzak (talk | contribs) at 18:53, 7 May 2018 (UTF-8: No chcp did not work. Conversely, remove scare tactics, funcitons that think characters have only two bytes do not "fail" when handed the prefix of a UTF-8 character). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Microsoft started to consistently implement Unicode in their products quite early.[clarification needed] Windows NT was the first operating system to use "wide characters" in system calls. It used the UCS-2 encoding scheme at first and was upgraded to UTF-16 starting with Windows 2000, allowing representation of the additional planes with surrogate pairs.

In various Windows families

Windows NT based systems

Windows NT-based systems, from Windows NT 3.x and 4.0 through Windows 2000 and the modern Windows XP and Windows Server 2003, ship with system libraries that support two types of string encoding: UTF-16 (often called "Unicode" in Windows documentation) and a locale-dependent (sometimes multibyte) encoding called the "code page" (often, somewhat inaccurately, referred to as the "ANSI" code page). UTF-16 functions have names suffixed with -W (for "wide"); code-page-oriented functions use the suffix -A (for "ANSI"). This split was necessary because many languages, including C, did not provide a clean way to pass both 8-bit and 16-bit strings to the same function. Most such 'A' functions are implemented as wrappers that translate the string from the code page to UTF-16 and call the corresponding 'W' function.

Microsoft attempted to support Unicode "portably" by providing a "UNICODE" compiler switch, which maps unsuffixed "generic" calls from the 'A' to the 'W' interface and converts all string constants to "wide" UTF-16 versions.[1][2] This does not actually work, because it does not translate UTF-8 strings that are not constants; code that, for example, tries to open a file named by a UTF-8 string either fails to compile or accidentally calls the 'A' version anyway.

Earlier, and independently of the "UNICODE" switch, Windows also provided the "MBCS" API switch.[3] This switch enables a set of C functions prefixed with _mbs, and selects the 'A' functions for the current locale.[4]

Windows CE

In Windows CE UTF-16 was used almost exclusively, with the 'A' API mostly missing.[5] A limited set of ANSI API is available in Windows CE 5.0, for use on a reduced set of locales that may be selectively built onto the runtime image.[6]

Windows 9x

In 2001, Microsoft released a special supplement for its old Windows 9x systems. It includes a dynamic-link library, unicows.dll (only 240 KB), containing the 16-bit flavors (the ones with the letter W on the end) of all the basic Windows API functions.

UTF-8

Despite Microsoft being one of the earliest proponents of Unicode, it can be claimed that Windows does not support Unicode in portable programs. This is because, due to a number of odd decisions, the file-system API used by the standard interfaces in the C and C++ libraries cannot be convinced to take UTF-8, which is the standard method of providing them with Unicode. Microsoft Windows has a code page designated for UTF-8, code page 65001, but until recently it was impossible to set the locale code page to 65001; that code page was available only to explicit conversion functions such as MultiByteToWideChar. If it were possible, it would be possible to write code that opens a file using a UTF-8 string. There are (or were) also serious problems getting Microsoft compilers to produce UTF-8 string constants. The most reliable method is to turn off UNICODE, not mark the input file as being UTF-8, and arrange for the string constants to contain the UTF-8 bytes (perhaps using an editor that edits UTF-8 but does not put a UTF byte order mark at the start of the saved file).

There are (or were) proposals to add new APIs to portable libraries such as Boost to do the necessary conversion, with new functions for opening and renaming files that take UTF-8. These functions would pass filenames through unchanged on Unix, but translate them to UTF-16 on Windows.[7]

Many applications inevitably have to support UTF-8 because it is the most-used Unicode encoding scheme in network protocols, including the Internet protocol suite. An application that has to pass UTF-8 to or from a 'W' Windows API should call the functions MultiByteToWideChar and WideCharToMultiByte.[8] To get predictable handling of errors and surrogate halves, and to ensure UTF-8 is used, it is more common for software to implement its own versions of these functions.

Since insider build 17035 and the April 2018 update (nominal build 17134) for Windows 10,[9] a "Beta: Use Unicode UTF-8 for worldwide language support" checkbox is available for setting the locale code page to UTF-8.[a] Assuming a process can simply force this state on itself at startup, and that the compilers have been improved not to translate bytes from the source when making string constants, it can be claimed that Windows has solved this problem and now fully supports Unicode.

Notes

  1. ^ Found under Control Panel, "Region" entry, "Administrative" tab, "Change system locale" button.

References

  1. ^ "Unicode in the Windows API". Retrieved 7 May 2018.
  2. ^ "Conventions for Function Prototypes (Windows)". MSDN. Retrieved 7 May 2018.
  3. ^ "Support for Multibyte Character Sets (MBCSs)".
  4. ^ "Double-byte Character Sets". MSDN. Retrieved 7 May 2018. our applications use DBCS Windows code pages with the "A" versions of Windows functions.
  5. ^ "Differences Between the Windows CE and Windows NT Implementations of TAPI". MSDN. Retrieved 7 May 2018. Windows CE is Unicode-based. You might have to recompile source code that was written for a Windows NT-based application.
  6. ^ "Code Pages (Windows CE 5.0)". Microsoft Docs. Retrieved 7 May 2018.
  7. ^ "Boost.Nowide".
  8. ^ "UTF-8 in Windows". Stack Overflow. Retrieved 1 July 2011.
  9. ^ "Windows10 Insider Preview Build 17035 Supports UTF-8 as ANSI". Hacker News. Retrieved 7 May 2018.