Unicode in Microsoft Windows

Microsoft was one of the first companies to implement Unicode in its products. Windows NT was the first operating system to use "wide characters" in system calls. It originally used the UCS-2 encoding scheme, which was upgraded to UTF-16 starting with Windows 2000, allowing representation of the additional planes with surrogate pairs.

In various Windows families

Windows NT-based systems

Modern Windows versions such as Windows XP and Windows Server 2003, and before them Windows NT (3.x, 4.0) and Windows 2000, ship with system libraries that support two types of string encoding: UTF-16 (often called "Unicode" in Windows documentation) and a locale-dependent (sometimes multibyte) encoding called the "code page" (also, incorrectly, referred to as the ANSI code page). UTF-16 functions have names suffixed with -W (from "wide"); code-page-oriented functions use the suffix -A for "ANSI". This split was necessary because many languages, including C, did not provide a clean way to pass both 8-bit and 16-bit strings to the same function. Most of the 'A' functions are implemented as wrappers that translate their string arguments from the code page to UTF-16 and then call the corresponding 'W' function.
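
The relationship between the two interfaces can be illustrated with a sketch of such a wrapper. The function name MyMessageBoxA and the fixed-size buffers are purely illustrative; the real system wrappers handle buffer sizes and errors more carefully.

    #include <windows.h>

    // Illustrative sketch only (not the actual system code): an "A"-style
    // entry point that converts its code-page string arguments to UTF-16
    // and forwards them to the corresponding "W" function.
    int MyMessageBoxA(HWND hwnd, const char *text, const char *caption, UINT type)
    {
        wchar_t wtext[1024], wcaption[1024];
        // CP_ACP selects the process's current "ANSI" code page.
        MultiByteToWideChar(CP_ACP, 0, text, -1, wtext, 1024);
        MultiByteToWideChar(CP_ACP, 0, caption, -1, wcaption, 1024);
        return MessageBoxW(hwnd, wtext, wcaption, type);
    }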

Microsoft attempted to support Unicode "portably" by providing a "UNICODE" switch to the compiler, which switches unsuffixed "generic" calls from the 'A' to the 'W' interface and converts all string constants to "wide" UTF-16 versions.[1][2] This does not actually achieve portability, because the switch translates nothing outside of string constants: code that, for example, tries to open a file using an 8-bit (such as UTF-8) filename simply fails to compile against the 'W' interface.
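
The generic interface is driven by the TCHAR type and the TEXT() macro, as in the minimal sketch below; its behavior depends on whether UNICODE and _UNICODE are defined when it is compiled.

    #include <windows.h>
    #include <tchar.h>

    int main()
    {
        // With UNICODE/_UNICODE defined, TCHAR is wchar_t, TEXT() produces
        // L"..." literals, and MessageBox expands to MessageBoxW; without
        // them, TCHAR is char and MessageBoxA is called instead.
        const TCHAR *msg = TEXT("Hello, world");
        MessageBox(NULL, msg, TEXT("Example"), MB_OK);

        // A plain 8-bit string no longer compiles against the generic
        // interface once UNICODE is defined:
        // MessageBox(NULL, "Hello", "Example", MB_OK);  // type error
        return 0;
    }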

Earlier, and independently of the "UNICODE" switch, Windows also provided the "MBCS" API switch.[3] This switch turns on a set of C functions prefixed with _mbs and selects the 'A' functions, which interpret strings using the code page of the current locale.[4]
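
As a sketch of the difference this makes, the _mbs functions count characters in the current multibyte code page rather than bytes; the byte values and the explicit _setmbcp(932) call below are purely illustrative.

    #include <mbstring.h>   // _mbs* functions, enabled with the MBCS switch
    #include <mbctype.h>    // _setmbcp
    #include <string.h>
    #include <stdio.h>

    int main()
    {
        // Two katakana characters encoded in the double-byte code page 932
        // (Shift JIS): four bytes, but only two characters.
        const unsigned char text[] = { 0x83, 0x41, 0x83, 0x42, 0x00 };
        _setmbcp(932);  // select the multibyte code page used by _mbs*
        printf("bytes: %zu, characters: %zu\n",
               strlen((const char *)text), _mbslen(text));
        return 0;
    }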

Windows CE

In Windows CE, UTF-16 was used almost exclusively, with the 'A' API mostly missing.[5] A limited set of ANSI APIs is available in Windows CE 5.0, for use with a reduced set of locales that may be selectively built into the runtime image.[6]

Windows 9x

In 2001, Microsoft released a special supplement to its old Windows 9x systems. It includes a dynamic-link library, unicows.dll (only 240 KB), containing the 16-bit ("wide", W-suffixed) versions of all the basic Windows API functions.

UTF-8

Microsoft Windows has a code page designated for UTF-8, code page 65001. Prior to Windows 10 Insider build 17035 (November 2017),[7] it was impossible to set the locale code page to 65001, leaving this code page available only for:

  • Explicit conversion functions such as MultiByteToWideChar
  • A manual "chcp 65001" command, which changes the code page only for the console used by the current program (see the sketch after this list).
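
The locale ("ANSI") code page and the console code page changed by chcp are reported separately by the Windows API; a minimal sketch to observe both:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        // GetACP() reports the locale ("ANSI") code page used by the 'A'
        // functions; GetConsoleOutputCP() reports the console's output code
        // page, which is what a manual "chcp 65001" changes.
        printf("ANSI code page:    %u\n", GetACP());
        printf("Console code page: %u\n", GetConsoleOutputCP());
        return 0;
    }

Running chcp 65001 before this program changes only the second number; the first reflects the system locale setting described below.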

Since Insider build 17035 and the April 2018 update (nominal build 17134) for Windows 10, a "Beta: Use Unicode UTF-8 for worldwide language support" checkbox is available for setting the locale code page to UTF-8.[a] However, this option can break "narrow" functions such as fopen() in legacy applications, as many of them assume that all MBCS code pages are DBCS (at most double-byte). In fact, all the "locales" available from the control panel dropdown are at most double-byte, and code pages that use more bytes per character, such as GB 18030 (code page 54936) and UTF-8, have been unavailable for selection from the dropdown list.

There are proposals to add an API to portable libraries such as Boost to do the necessary conversion, by adding new functions for opening and renaming files. These functions would pass filenames through unchanged on Unix, but translate them to UTF-16 on Windows.[9]
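
The Boost.Nowide library cited above takes this approach; a minimal usage sketch, based on its documented interface (header paths and behavior may differ between versions), might look like:

    #include <boost/nowide/cstdio.hpp>
    #include <boost/nowide/fstream.hpp>
    #include <cstdio>
    #include <string>

    int main()
    {
        // "résumé.txt" spelled out as UTF-8 bytes, e.g. as received from a
        // network protocol that mandates UTF-8.
        std::string name = "r\xC3\xA9sum\xC3\xA9.txt";

        // The narrow filename is treated as UTF-8: converted to UTF-16 for
        // the "W" API on Windows, passed through unchanged on Unix.
        if (std::FILE *f = boost::nowide::fopen(name.c_str(), "rb"))
            std::fclose(f);

        boost::nowide::ifstream in(name.c_str());  // stream equivalent
        return 0;
    }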

Many applications have to support UTF-8, since it is the most-used Unicode encoding in network protocols, including the Internet protocol suite. An application that has to pass UTF-8 to or from a 'W' Windows API should call the functions MultiByteToWideChar and WideCharToMultiByte.[10] To get predictable handling of errors and surrogate halves, it is however more common for software to implement its own versions of these conversions.
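
A typical pair of conversion helpers written against these two functions might look like the sketch below. The helper names Utf8ToUtf16 and Utf16ToUtf8 are illustrative only; real code would add fuller error handling and possibly the MB_ERR_INVALID_CHARS flag for stricter validation.

    #include <windows.h>
    #include <string>

    // Convert a NUL-terminated UTF-8 string to UTF-16 for passing to a 'W' API.
    std::wstring Utf8ToUtf16(const std::string &utf8)
    {
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, NULL, 0);
        if (len <= 0)
            return std::wstring();
        std::wstring wide(len, L'\0');
        MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, &wide[0], len);
        wide.resize(len - 1);  // drop the terminating NUL written by the API
        return wide;
    }

    // Convert a UTF-16 string returned by a 'W' API back to UTF-8.
    std::string Utf16ToUtf8(const std::wstring &wide)
    {
        int len = WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), -1,
                                      NULL, 0, NULL, NULL);
        if (len <= 0)
            return std::string();
        std::string utf8(len, '\0');
        WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), -1,
                            &utf8[0], len, NULL, NULL);
        utf8.resize(len - 1);
        return utf8;
    }

A UTF-8 filename received over the network could then be opened with, for example, CreateFileW(Utf8ToUtf16(name).c_str(), ...).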

Notes

  1. ^ Found under Control Panel, "Region" entry, "Administrative" tab, "Change system locale" button.

References

  1. ^ "Unicode in the Windows API". Retrieved 7 May 2018.
  2. ^ "Conventions for Function Prototypes (Windows)". MSDN. Retrieved 7 May 2018.
  3. ^ "Support for Multibyte Character Sets (MBCSs)".
  4. ^ "Double-byte Character Sets". MSDN. Retrieved 7 May 2018. our applications use DBCS Windows code pages with the "A" versions of Windows functions.
  5. ^ "Differences Between the Windows CE and Windows NT Implementations of TAPI". MSDN. Retrieved 7 May 2018. Windows CE is Unicode-based. You might have to recompile source code that was written for a Windows NT-based application.
  6. ^ "Code Pages (Windows CE 5.0)". Microsoft Docs. Retrieved 7 May 2018.
  7. ^ "Windows10 Insider Preview Build 17035 Supports UTF-8 as ANSI". Hacker News. Retrieved 7 May 2018.
  8. ^ "Unknown encoding: 65001 · Issue #2009 · Microsoft/WSL". GitHub. Retrieved 9 May 2018.
  9. ^ "Boost.Nowide".
  10. ^ "UTF-8 in Windows". Stack Overflow. Retrieved July 1, 2011.