Archive for the ‘UTF-16’ Category
Posted by jpluimers on 2022/02/09
Nowadays, some 35 years after the first Unicode ideas got drafted and 30+ years after the Unicode Consortium saw the light, UTF-8 is served by more than 95% of the web, as shown in yesterday’s post UTF-8 web adoption is huge, closing 100%, but only soared up since around 2006.
I mentioned this:
It means that nowadays there is a very small chance you will see mangled characters (what the Japanese call mojibake) when you’re surfing the web.
Serving UTF-8 does not mean there are no Unicode problems.
Below are some issues that happened not too long ago and still happen. I have reported them to all parties involved through web-care, but got no response whatsoever, and that is bad: Unicode support beyond basic ASCII is still broken in the systems below, even for relatively simple non-ASCII characters based on diacritics decorating a standard ASCII character.
Yes, I know the realm of encoding and code pages is a mess, especially when handling data in multiple layers of an application stack. That’s why I wrote this post in the first place, and have a whole encoding category of blog posts plus a Mojibake subset.
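To make this concrete: mojibake is what you get when one layer in the stack encodes text as UTF-8 and another layer decodes those bytes as a legacy code page. A minimal Python illustration (my own sketch, not taken from any of the systems involved):

text = "café"  # simple non-ASCII: an ASCII 'e' decorated with a diacritic

utf8_bytes = text.encode("utf-8")            # what one layer writes
mangled = utf8_bytes.decode("windows-1252")  # what a confused layer reads

print(mangled)  # cafÃ© : the two UTF-8 bytes of 'é' show up as 'Ã©'

# Round-tripping the mistake recovers the original, which is why
# mojibake sometimes survives several layers before anyone notices:
print(mangled.encode("windows-1252").decode("utf-8"))  # café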
Read the rest of this entry »
Posted in Communications Development, CP850, Dark Pattern, Development, Encoding, ISO-8859, ISO8859, Mojibake, Software Development, Unicode, User Experience (ux), UTF-16, UTF-8, Windows-1252 | Leave a Comment »
Posted by jpluimers on 2022/02/09
Note: Notepad cannot correctly guess the encoding; see the “old new thing”: [Wayback] Some files come up strange in Notepad | The Old New Thing (talking about ANSI a.k.a. Windows-1252, UTF-16LE, UTF-16BE, UTF-8 and UTF-7, some with and some without a BOM, as Notepad does not understand all permutations)
David Cumps discovered that certain text files come up strange in Notepad. The reason is that Notepad has to edit files in a variety of encodings, and when its back is against the wall, sometimes it’s forced to guess.
[Wayback] C# Effective way to find any file’s Encoding – Stack Overflow shows how to detect various byte order marks in C#.
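The linked answer is C#; the same BOM-sniffing idea looks like this in Python. A minimal sketch: it only detects the encodings that actually have a BOM, so BOM-less files still leave you guessing, just like Notepad:

import codecs

# Map of byte order marks to encoding names; order matters:
# the UTF-32 BOMs start with the UTF-16 ones, so check them first.
BOMS = [
    (codecs.BOM_UTF32_LE, "utf-32-le"),
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF8, "utf-8-sig"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]

def sniff_bom(path):
    with open(path, "rb") as f:
        prefix = f.read(4)  # the longest BOM is 4 bytes
    for bom, encoding in BOMS:
        if prefix.startswith(bom):
            return encoding
    return None  # no BOM: you have to guess, just like Notepad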
–jeroen
Posted in ASCII, Development, Encoding, Software Development, Unicode, UTF-16, UTF-32, UTF-8, UTF16, UTF32, UTF8 | Leave a Comment »
Posted by jpluimers on 2021/09/29
The second one below will fail in a script, but both work from the PowerShell prompt:
Success
Get-NetFirewallRule -DisplayGroup "File and Printer Sharing" | ForEach-Object { Write-Host $_.DisplayName ; Get-NetFirewallAddressFilter -AssociatedNetFirewallRule $_ }
Failure
Get-NetFirewallRule –DisplayGroup "File and Printer Sharing" | ForEach-Object { Write-Host $_.DisplayName ; Get-NetFirewallAddressFilter -AssociatedNetFirewallRule $_ }
The error you get is this:
At C:\bin\Show-File-and-Printer-Sharing-firewall-rules.ps1:5 char:52
+ ... -TCP-NoScope" | ForEach-Object { Write-Host $_.DisplayName ; Get-NetF ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The string is missing the terminator: ".
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : TerminatorExpectedAtEndOfString
Via [WayBack] script file ‘The string is missing the terminator: “.’ – Google Search, I quickly found a few links that stood out:
Cause and solution
Before DisplayGroup, the first line has a minus sign and the second line an en-dash. You can see this via [WayBack] What Unicode character is this ?.
Apparently, when tokenizing, PowerShell does not care whether your dash character is a minus sign (-), en-dash (–), em-dash (—) or horizontal bar (―). You can see this in [WayBack] tokenizer.cs at function [WayBack] NextToken and in [WayBack] CharTraits.cs at function [WayBack] IsChar.
When saving to a non-Unicode file, it does matter, even though the dash does not display as garbage in the error message.
Similarly, PowerShell has support for these special characters:
internal static class SpecialChars
{
    // Uncommon whitespace
    internal const char NoBreakSpace = (char)0x00a0;
    internal const char NextLine = (char)0x0085;

    // Special dashes
    internal const char EnDash = (char)0x2013;
    internal const char EmDash = (char)0x2014;
    internal const char HorizontalBar = (char)0x2015;

    // Special quotes
    internal const char QuoteSingleLeft = (char)0x2018; // left single quotation mark
    internal const char QuoteSingleRight = (char)0x2019; // right single quotation mark
    internal const char QuoteSingleBase = (char)0x201a; // single low-9 quotation mark
    internal const char QuoteReversed = (char)0x201b; // single high-reversed-9 quotation mark
    internal const char QuoteDoubleLeft = (char)0x201c; // left double quotation mark
    internal const char QuoteDoubleRight = (char)0x201d; // right double quotation mark
    internal const char QuoteLowDoubleLeft = (char)0x201E; // low double left quote used in german.
}
The easiest solution is to use minus signs everywhere.
Another solution is to save files as Unicode UTF-8 encoding (preferred) or UTF-16 encoding (which I dislike).
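To find these troublemakers before PowerShell trips over them, you can scan a script for exactly the characters in that SpecialChars list. A minimal Python sketch (my own helper; the file name comes from the error message above, and the default encoding is an assumption, so pass for instance encoding="cp1252" for an ANSI file):

# Scan a script for the special dash/quote characters PowerShell tokenizes.
SPECIAL_CHARS = {
    "\u2013": "en-dash",
    "\u2014": "em-dash",
    "\u2015": "horizontal bar",
    "\u2018": "left single quotation mark",
    "\u2019": "right single quotation mark",
    "\u201a": "single low-9 quotation mark",
    "\u201b": "single high-reversed-9 quotation mark",
    "\u201c": "left double quotation mark",
    "\u201d": "right double quotation mark",
    "\u201e": "double low-9 quotation mark",
}

def report_special_chars(path, encoding="utf-8-sig"):
    with open(path, encoding=encoding) as f:
        for line_number, line in enumerate(f, start=1):
            for char in line:
                if char in SPECIAL_CHARS:
                    print(f"{path}:{line_number}: U+{ord(char):04X} "
                          f"{SPECIAL_CHARS[char]}")

report_special_chars(r"C:\bin\Show-File-and-Printer-Sharing-firewall-rules.ps1")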
–jeroen
Posted in .NET, CommandLine, Development, Encoding, PowerShell, Scripting, Software Development, Unicode, UTF-16, UTF-8, UTF16, UTF8 | Leave a Comment »
Posted by jpluimers on 2019/12/31
A while back there were a few G+ threads, spawned by David Heffernan, on decoding big files into strings split at line endings:
Code comparison:
Python:
with open(filename, 'r', encoding='utf-16-le') as f:
    for line in f:
        pass
Delphi:
for Line in TLineReader.FromFile(filename, TEncoding.Unicode) do
  ;
This spurred some nice observations and unfounded statements on which encodings should be used, so I posted a bit of history that is included below.
Some tips and observations from the links:
- Good old text files are not “good” with Unicode support, and neither are TextFile Device Drivers; nobody has written a driver supporting a wide range of encodings yet.
- Good old text files are slow as well, even with a bigger buffer set via SetTextBuf.
- When using the TStreamReader, the decoding takes much more time than the actual reading, which means that [WayBack] Faster FileStream with TBufferedFileStream • DelphiABall does not help much
- TStringList.LoadFromFile, though fast, is a memory allocation dork and has limits on string size
- Delphi RTL code is not what it used to be: the RTL code from before Delphi went Unicode is of far better quality than the RTL code in Delphi 2009 and up
- Supporting various encodings is important
- EBCDIC days: three kinds of spaces, two kinds of hyphens, multiple codepages
- Strings are just that: strings. It’s about the encoding from/to the file that needs to be optimal.
- When processing large files, caching only makes sense when the file fits in memory. Otherwise caching just adds overhead.
- On Windows, if you read a big text file into memory, open the file in “sequential read” mode by passing the FILE_FLAG_SEQUENTIAL_SCAN flag: the cache manager then reads ahead and quickly discards the pages behind you, as explained at [WayBack] How do FILE_FLAG_SEQUENTIAL_SCAN and FILE_FLAG_RANDOM_ACCESS affect how the operating system treats my file? – The Old New Thing
- Python string reading depends on the way you read files (ASCII or Unicode); see [WayBack] unicode – Python codecs line ending – Stack Overflow. A small timing sketch follows right below this list.
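To reproduce the observation that decoding takes much more time than reading, here is a minimal Python benchmark sketch (big.txt is a hypothetical large UTF-16-LE file; the sketch times the raw byte reads separately from the decode):

import codecs
import time

FILENAME = "big.txt"  # hypothetical big UTF-16-LE file

# 1. Raw reading only, in 1 MiB chunks (kept in memory for step 2).
start = time.perf_counter()
with open(FILENAME, "rb") as f:
    chunks = list(iter(lambda: f.read(1 << 20), b""))
read_seconds = time.perf_counter() - start

# 2. Decoding only, on the bytes already in memory.
decoder = codecs.getincrementaldecoder("utf-16-le")()
start = time.perf_counter()
for chunk in chunks:
    decoder.decode(chunk)
decoder.decode(b"", final=True)
decode_seconds = time.perf_counter() - start

print(f"reading:  {read_seconds:.3f} s")
print(f"decoding: {decode_seconds:.3f} s")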
Though TLineReader is not part of the RTL, I think it is from [WayBack] For-in Enumeration – ADUG.
Encodings in use
It doesn’t help that on the Windows Console, various encodings are used:
Good reading here is [WayBack] c++ – What unicode encoding (UTF-8, UTF-16, other) does Windows use for its Unicode data types? – Stack Overflow
Encoding history
+A. Bouchez I’m with +David Heffernan here:
At its release in 1993, Windows NT was very early in supporting Unicode. Development of Windows NT started in 1990, when they opted for UCS-2, which has 2 bytes per character; UTF-1 never made it beyond a non-required annex of ISO 10646.
UTF-1, which later evolved into UTF-8, did not even exist at that time. Even UCS-2 was still young: it was designed in 1989. UTF-8 was outlined in late 1992 and became a standard in 1993.
Some references:
–jeroen
Read the rest of this entry »
Posted in Delphi, Development, Encoding, PowerShell, Python, Scripting, Software Development, The Old New Thing, Unicode, UTF-16, UTF-8, Windows Development | Leave a Comment »
Posted by jpluimers on 2018/12/04
Uh-oh: [WayBack] Unicode in Microsoft Windows: UTF-8 – Wikipedia:
Microsoft Windows has a code page designated for UTF-8, code page 65001. Prior to Windows 10 insider build 17035 (November 2017),[7] it was impossible to set the locale code page to 65001, leaving this code page available only for:
- Explicit conversion functions such as MultiByteToWideChar
- The Win32 console command chcp 65001, which translates stdin/stdout between UTF-8 and UTF-16
This means that “narrow” functions, in particular fopen, cannot be called with UTF-8 strings, and in fact there is no way to open all possible files using fopen no matter what the locale is set to and/or what bytes are put in the string, as none of the available locales can produce all possible UTF-16 characters.
On all modern non-Windows platforms, the string passed to fopen is effectively UTF-8. This produces an incompatibility between other platforms and Windows. The normal work-around is to add Windows-specific code to convert UTF-8 to UTF-16 using MultiByteToWideChar and call the “wide” function.[8] Conversion is also needed even for Windows-specific APIs such as SetWindowText, since many applications inherently have to use UTF-8 due to its use in file formats, internet protocols, and its ability to interoperate with raw arrays of bytes.
There were proposals to add new API to portable libraries such as Boost to do the necessary conversion, by adding new functions for opening and renaming files. These functions would pass filenames through unchanged on Unix, but translate them to UTF-16 on Windows.[9] This would allow code to be “portable”, but required just as many code changes as calling the wide functions.
With insider build 17035 and the April 2018 update (nominal build 17134) for Windows 10, a “Beta: Use Unicode UTF-8 for worldwide language support” checkbox appeared for setting the locale code page to UTF-8.[a] This allows for calling “narrow” functions, including fopen and SetWindowTextA, with UTF-8 strings. Microsoft claims this option might break some functions (a possible example is _mbsrev[10]) as they were written to assume multibyte encodings used no more than 2 bytes per character, thus until now code pages with more bytes such as GB 18030 (cp54936) and UTF-8 could not be set as the locale.[11]
- [WayBack] “UTF-8 in Windows”. Stack Overflow. Retrieved July 1, 2011.
- [WayBack] “Boost.Nowide”.
- [WayBack] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/strrev-wcsrev-mbsrev-mbsrev-l
- [WayBack] “Code Page Identifiers (Windows)”. msdn.microsoft.com.
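The MultiByteToWideChar work-around the quote mentions follows a two-call pattern: first ask for the required buffer size, then convert. A minimal, Windows-only Python/ctypes sketch of that pattern (CPython already does such conversions for you internally; this is purely illustrative):

import ctypes

CP_UTF8 = 65001

def utf8_to_utf16(utf8_bytes):
    kernel32 = ctypes.windll.kernel32
    # First call with a NULL buffer: ask how many UTF-16 code units we need.
    size = kernel32.MultiByteToWideChar(
        CP_UTF8, 0, utf8_bytes, len(utf8_bytes), None, 0)
    buffer = ctypes.create_unicode_buffer(size)
    # Second call: do the actual conversion into the buffer.
    kernel32.MultiByteToWideChar(
        CP_UTF8, 0, utf8_bytes, len(utf8_bytes), buffer, size)
    return buffer[:size]

print(utf8_to_utf16("Dvořák".encode("utf-8")))  # Dvořák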
Via [WayBack] Microsoft Windows Beta UTF-8 support for Ansi API could break things. Wiki Article of the Change… – Tommi Prami – Google+
Related, as handling encodings is hard, especially when the encoding changed or is not your default:
–jeroen
Posted in .NET, C, C++, Delphi, Development, Encoding, GB 18030, Power User, Software Development, UTF-16, UTF-32, UTF-8, UTF16, UTF32, UTF8, Windows, Windows 10 | 2 Comments »
Posted by jpluimers on 2017/11/07
A long read that is well worth it:
We all recognize emoji. They’ve become the global pop stars of digital communication. But what are they, technically speaking? And what might we learn by taking a closer look at these images, characters, pictographs… whatever they are 🤔 (Thinking Face). We will dig deep to learn about how these thingamajigs work. Please note: Depending on your browser, you may not be able to see all emoji featured in this article (especially the Tifinagh characters). Also, different platforms vary in how they display emoji as well. That’s why the article always provides textual alternatives. Don’t let it discourage you from reading though! Now, let’s start with a seemingly simple question. What are emoji?
[WayBack] You, Me And The Emoji: Character Sets, Encoding And Emoji – Smashing Magazine
Via: [WayBack] Everything you ever wanted to know about characters, encodings, glyphs… and, oh yeah, emoji: bit.ly/2fNKeW3 Long, rewarding read. – Ilya Grigorik – Google+
Here is just the ToC:
TABLE OF CONTENTS
- Character Sets And Document Encoding: An Overview
- Characters
- Character Sets
- Coded Character Sets
- Encoding
- Declaring Character Sets And Document Encoding On The Web
- content-type HTTP Header Declaration
- Checking HTTP Headers Using A Browser’s Developer Tools
- Checking HTTP Headers Using Web-based Tools
- Using A Meta Element With charset Attribute
- An Encoding By Any Other Name
- What Were We Talking About Again? Oh Yeah, Emoji!
- So What Are Emoji?
- How Do We Use Emoji?
- Character References
- Glyphs
- How Do We Know If We Have These Symbols?
- The Great Emoji Proliferation Of 2016
- Emoji OS Support
- Emoji Support: Apple Platforms (macOS and iOS)
- Emoji Support: Windows
- Emoji Support: Linux
- Emoji Support: Android
- Emoji On The Web
- Emoji One
- Twemoji
- Conclusion
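As a taste of what the article covers, here is a minimal Python sketch of what 🤔 (U+1F914, Thinking Face) looks like at the code point, UTF-8 and UTF-16 levels, including the surrogate pair that makes emoji interesting for UTF-16 platforms:

emoji = "\U0001F914"  # 🤔 THINKING FACE

print(f"code point: U+{ord(emoji):04X}")       # U+1F914
print(f"UTF-8 bytes: {emoji.encode('utf-8')}")  # b'\xf0\x9f\xa4\x94'

# U+1F914 is outside the Basic Multilingual Plane, so UTF-16
# needs a surrogate pair (two 16-bit code units) to represent it:
utf16 = emoji.encode("utf-16-be")
high = int.from_bytes(utf16[:2], "big")
low = int.from_bytes(utf16[2:], "big")
print(f"UTF-16 surrogate pair: U+{high:04X} U+{low:04X}")  # U+D83E U+DD14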
–jeroen
Posted in ASCII, Development, Encoding, ISO-8859, ISO8859, Shift JIS, Unicode, UTF-16, UTF-8, UTF16, UTF8, Windows-1252 | Leave a Comment »
Posted by jpluimers on 2017/06/21
A while ago, I had to fix some stuff in an application that would write – using a binary mechanism – UTF-8 and UTF-16 strings (part of it XML in various flavours) to the same byte stream without converting between the two encodings.
Some links that helped me investigate what was wrong, choose what encoding to use for storage and fix it:
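One way out of such a mess is to pick a single storage encoding and tag every string record with its byte length. A minimal Python sketch of such a scheme (my own illustrative format, not the actual format of the application involved):

import io
import struct

def write_string(stream, text):
    # Always store UTF-8, prefixed with a 4-byte little-endian byte count.
    data = text.encode("utf-8")
    stream.write(struct.pack("<I", len(data)))
    stream.write(data)

def read_string(stream):
    (length,) = struct.unpack("<I", stream.read(4))
    return stream.read(length).decode("utf-8")

buffer = io.BytesIO()
write_string(buffer, '<xml fragment="één"/>')
buffer.seek(0)
print(read_string(buffer))  # <xml fragment="één"/>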
–jeroen
Posted in Delphi, Delphi 10 Seattle, Delphi 10.1 Berlin (BigBen), Delphi XE8, Development, Encoding, Software Development, UTF-16, UTF-8, UTF16, UTF8, XML, XML/XSD | 3 Comments »
Posted by jpluimers on 2017/05/31
A while ago I bumped into applications that write alternating UTF-16 and UTF-8 to files without checking what type of encoding the files were using.
So here are some notes to at least save some of the contents.
TODO: figure out how to strip the BOM.
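Meanwhile, a minimal Python sketch of one way to do that TODO: read the raw bytes, check for a known BOM, and write the content back without it:

import codecs

# UTF-32 BOMs first: they start with the same bytes as the UTF-16 ones.
BOMS = (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE,
        codecs.BOM_UTF8, codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)

def strip_bom(path):
    with open(path, "rb") as f:
        data = f.read()
    for bom in BOMS:
        if data.startswith(bom):
            with open(path, "wb") as f:
                f.write(data[len(bom):])
            return bom
    return None  # no BOM found; file left untouched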
–jeroen
Posted in Development, Encoding, Software Development, UTF-16, UTF-8, UTF16, UTF8 | Leave a Comment »
Posted by jpluimers on 2016/11/22
A while ago, I needed to get the various date, time and week values from WMIC to environment variables with pre-padded zeros. I thought: easy job, just write a batch file.
Tough luck: I couldn’t get the values to expand properly. In the end this was caused by WMIC emitting UTF-16 while the command interpreter does not expect double-byte character sets, which messed up my original batch file.
| What I wanted        | What I got          |
|----------------------|---------------------|
| wmic_Day=21          | Day=21              |
| wmic_DayOfWeek=04    | wmic_DayOfWeek=4    |
| wmic_Hour=15         | wmic_Hour=15        |
| wmic_Milliseconds=00 | wmic_Milliseconds=  |
| wmic_Minute=02       | wmic_Minute=4       |
| wmic_Month=05        | wmic_Month=5        |
| wmic_Quarter=02      | wmic_Quarter=2      |
| wmic_Second=22       | wmic_Second=22      |
| wmic_WeekInMonth=04  | wmic_WeekInMonth=4  |
| wmic_Year=2015       | wmic_Year=2015      |
WMIC uses this encoding because the Wide versions of Windows API calls use UTF-16 (sometimes still called UCS-2, the encoding UTF-16 evolved from).
As Windows uses little-endian encoding by default, each ASCII character in the UTF-16 output becomes two bytes: the ASCII value first, followed by a high byte that is zero. Those interleaved zero bytes are what mess up the command interpreter.
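You can see those zero bytes with a Python one-liner (purely an illustration of the byte layout, not part of rojo’s solution):

# 'Day=21' as WMIC emits it: UTF-16 little-endian, low byte first.
print("Day=21".encode("utf-16-le"))
# b'D\x00a\x00y\x00=\x002\x001\x00'  <- the \x00 high bytes trip up cmd.exe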
Luckily, rojo was of great help solving this.
His solution is centered around set /A, which:
- handles integer numbers and calls them “numeric” (hinting at floating point, but values are truncated to integer; one of the tricks rojo uses)
- uses these prefixes (be careful with these, as 08 and 09 are not valid octal numbers):
- 0 for octal
- 0x for hexadecimal
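For comparison, the same job is painless in a language that can decode UTF-16 directly. A minimal Python sketch (an assumption-laden illustration: it shells out to wmic, which is deprecated in recent Windows versions, and decodes its UTF-16-LE output):

import subprocess

# WMIC emits UTF-16 little-endian (with a BOM) when its output is piped.
raw = subprocess.check_output(
    ["wmic", "path", "win32_localtime", "get", "/format:list"])
text = raw.decode("utf-16-le").lstrip("\ufeff")

values = {}
for line in text.splitlines():
    if "=" in line:
        name, _, value = line.partition("=")
        # Zero-pad to at least two digits; longer values like the
        # year pass through unchanged, empty ones become "00".
        values["wmic_" + name] = value.zfill(2)

for name in sorted(values):
    print(f"{name}={values[name]}")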
Enjoy and shiver with the online help extract:
Read the rest of this entry »
Posted in Algorithms, Batch-Files, Development, Encoding, Floating point handling, Scripting, Software Development, UCS-2, UTF-16, UTF16 | Leave a Comment »
Posted by jpluimers on 2016/08/17
After yesterday’s post on Testing and static methods don’t go well together, I read around on Source (kunststube [WayBack]) a bit more and found these very nice articles on encoding, Unicode and text:
Related to those, some other nice reads:
–jeroen
Posted in Ansi, ASCII, CP437/OEM 437/PC-8, Development, EBCDIC, Encoding, ISO-8859, ISO8859, Shift JIS, Software Development, Unicode, UTF-16, UTF-8, UTF16, UTF8, Windows-1252 | Leave a Comment »