EthiCS: Harms and Unicode

Our Embedded EthiCS module touches on data representation—specifically the representation of human language in terms computers can understand. This is an ethically rich and complicated area where technical innovations have had huge impact. This pre-reading starts with a discussion of harms, then turns to technical material on coding systems.

Harm

If someone punches me, or steals my credit card information, it’s intuitive to say that I have been harmed. But there are different types of harms. When we name different classes of harm, we can distinguish them, weigh them, and think carefully about how to address them.

A punch is an instance of physical harm. When someone steals money from my bank account, this is economic harm. And there are other types of harms, including sexual harm, political harm, and emotional harm. But two less-familiar types of harm will come up in our discussion.1

Allocative harm occurs when a system withholds resources or opportunities from an individual or a group. If, for example, a mortgage support app routinely denied mortgages to people under the age of 30, then this would be an instance of allocative harm: people under 30 are denied access to a resource. If a health care app gave fewer resources, on average, to African Americans (controlling for other variables besides race), this would be an instance of allocative harm.

Representational harm occurs when a system reinforces the subordination of some group on the basis of identity (e.g., race, class, or gender), or makes it impossible to represent that identity at all. The “confusion and frustration” Asian Americans and Asians can experience on encountering terms like “underrepresented minority” is a reaction to representational harm.

A technical example of representational harm occurred when Google Photos algorithms labeled some Black people as “gorillas”, reinforcing a racist stereotype. This incident was damaging enough that Google “solved” it by removing the terms “gorilla” and “monkey” from Google Photos image labels. It filtered other terms too: “Typing ‘black man,’ ‘black woman,’ or ‘black person,’ caused Google’s system to return black-and-white images of people, correctly sorted by gender, but not filtered by race.” (ref) The solution to one representational harm committed another! More recently, consider generative AI systems. In an attempt to avoid one representational harm—showing mostly men when asked to present images of CEOs, for example—image generation algorithms were tweaked to put more weight on ethnic and gender diversity. The result was historically inaccurate image generation and, arguably, another form of harm: when asked to generate images of a “1943 German soldier,” for example, Google Gemini generated images of Asian and Black people as well as whites. “Prodded further, it seemed to actively resist generating images of white people altogether.” (ref)

No resources were lost in these examples, yet the individuals and the groups involved were still harmed. The notion of representational harm helps us capture and talk about the type of harm that was done.

Harm can be caused with or without intent. If person A says something that person B finds mean or disparaging, B can be emotionally harmed even though A did not intend to harm. Furthermore, in some frameworks, groups as well as individuals can experience harm. The practice of redlining, which prevented people who lived in historically Black areas from obtaining credit, caused economic harm to a group.2

Finally, harms can be more or less severe. Having one’s home burn down is a more severe harm than, say, having $20 stolen from one’s wallet.

Reasonable accommodation and undue burden

Unfortunately, mitigating a harm can cause another form of harm. For example, rectifying an allocative harm for some may require causing an economic harm for others.

How do we decide whether to mitigate some harm, when doing so comes at a cost? The concepts of reasonable accommodation and undue burden can help clarify our thinking about the relevant trade-offs. A reasonable accommodation is an adjustment to a system that allows individuals or groups with certain needs to participate in that system on an equal basis with others, without imposing an undue burden on those providing the accommodation. An undue burden is an accommodation that requires “significant difficulty and expense,” considered in context.

There is no easy answer or simple algorithm that can help us determine whether some trade-off is a reasonable accommodation or an undue burden. It takes careful attention to the details of a case and an ability to think critically. Having this vocabulary at your disposal—the taxonomy of harms described above, plus the concepts of reasonable accommodation and undue burden—can help you as you think through these difficulties.

Additionally, once you are equipped with this new vocabulary, it can help to ask the following questions when considering a design decision:

  1. Who will likely benefit from this decision?
  2. Who could it harm?
  3. In what ways, specifically, could it benefit or harm them?
  4. What will it take to avoid or mitigate such harm(s)?
  5. Does the work to avoid the harm constitute a reasonable accommodation or an undue burden? For instance,
    • Who has the resources to shoulder this burden?
    • Who has the responsibility to shoulder it?

Written coding systems

With these thoughts and terms in mind, let’s talk about coding systems for human language.

Human language originates in speech,3 and languages were exclusively spoken for most of human history (we think).4 Language communication in a form other than speech requires a coding system: a representation of speech (or sign) suitable for another technology.

There are many coding systems, and it’s easy to create one. (Children make up coding systems for fun! Did you ever make up a secret alphabet, or a code for passing notes in school?) Important coding systems are standardized and mutually intelligible among many people.

Any written language is a coding system. You’re using a coding system right now, namely the English language, as represented in the Latin alphabet, as mediated by a computer coding system that represents letters as integers. Written coding systems divide into three broad categories:

  1. In alphabetic coding systems, a mark represents a part of a sound. Latin is an alphabetic script: a Latin letter can represent a vowel (“a”) or consonant (“b”), but sometimes a sequence of letters represents a single sound (“ch”). Other alphabets include Cyrillic, Coptic, Korean (Hangul), Hebrew, Arabic, and Devanagari.5

  2. In syllabic coding systems, a mark represents a full syllable—a sequence of related sounds. The most widely used syllabaries today are the Japanese kana, hiragana and katakana.

  3. In logographic coding systems, a mark represents a larger concept, such as a full word. The Chinese script is the only broadly-used logographic writing system, but it is very broadly used.

Actual written languages can combine features of all these systems. The Japanese language is written using a combination of four distinct scripts: kanji (logographic Chinese characters), hiragana (a syllabary used for native Japanese words), katakana (a syllabary used mostly for emphasis, for foreign words, and for words representing sounds), and alphabetic Latin script (for instance, for numerals6 and some foreign borrowings, like Tシャツ “T-shirt”).

Users of a coding system or script learn to identify many visually-distinct symbols as the same fundamental symbol, which we call a character. For instance, the word “this” rendered in six different typefaces and styles still appears to Latin-script readers as the same character sequence—‘t h i s’—even though the pixel patterns differ. Some character symbols vary even more dramatically in visual expression; for instance, the Arabic characters ا ل ع ر ب ي ة appear as العربية when combined into one word. It is useful to distinguish the underlying character content of a text—the sequence of characters—from the visual presentation of those characters, which might vary depending on context (nearby characters) or preference (color, size, typeface).

Different writing systems have radically different numbers of characters.

QUESTION. What is the smallest x86-64 integer type that could represent every character in the Latin alphabet as a distinct bit pattern?

QUESTION. What is the smallest x86-64 integer type that could represent every character in all the coding systems listed above as a distinct bit pattern?

DISCUSSION QUESTIONS. What are some advantages of representing characters with small types? Conversely, what are some advantages of representing characters using large types?

Early computer coding systems

Coding systems are particularly suitable for translation into other forms, such as computer storage. So how should a computer represent characters? Or—even more concretely, since computers represent data as sequences of bytes, which are numbers between 0 and 255—which byte sequences should correspond to which characters? This question may seem easy, or even irrelevant (who cares? just pick something!), but different choices impact storage costs as well as the ability to represent different languages.

The earliest computer coding systems arose when computer storage was extremely expensive, slow, and scarce. Since computers were mainly used for mathematical computations and data tabulation, their users valued minimizing the space required per character. Supporting a wide range of applications was a secondary concern. Coding system designers aimed for parsimony, or minimal size for encoded data. Each coding system was designed for a specific application, and extremely parsimonious representations are possible when the data being represented comes from a restricted domain.

The resulting systems, though well-suited for the technologies of the time, have limited expressiveness. For example, this is the 40-character Binary-Coded Decimal Interchange Code, or BCDIC, introduced by IBM in 1928 (!) for punchcard systems, which were electro-mechanical computers. The BCDIC code is actually based on a code developed in the late 1880s for the 1890 US Census; Herman Hollerith, the inventor of the census machine, founded a company that eventually became IBM.

[Table: The BCDIC character code (reference)]

The BCDIC code has a few interesting features. Lower-case letters cannot be represented; letters are not represented in order; and the only representable punctuation marks are - and &. Though sufficient for the 1890 Census, this encoding can’t even represent this simple sentence, which features an apostrophe, commas, and an exclamation point, as well as lower-case letters!

Compare BCDIC to this coding system, CCITT (Comité Consultatif International Téléphonique et Télégraphique) International Telegraph Alphabet No. 2, an international standard for telegraph communication introduced in 1924:

[Table: The CCITT International Telegraph Alphabet No. 2 character code (reference)]

Telegraphs were designed to communicate human-language messages rather than census data, so it makes sense that ITA2 supports a greater range of punctuation. Letters are not coded in alphabetical order, but there is an underlying design: common letters in English text are represented using fewer 1-valued bits. E and T (the most common letters in English text) are represented as 10000 and 00001, while X and Q (rare letters) are represented as 10111 and 11101. Some telegraph machinery represented 1-valued bits as holes punched in paper tape, so the fewer 1 bits transmitted, the less mechanical wear and tear!

ITA2 manages to represent 58 characters using a 5-bit code with only 32 bit patterns. This is possible thanks to a complex shift system, where bit patterns have different meaning depending on context. For instance, 01010 might mean either R or 4. A telegraph encoder or decoder is in one of two shift modes, letter shift or figure shift, at any moment. A message always begins in letter shift, so at the beginning of a message 01010 means R. Thereafter, the bit pattern 11011 switches to figure shift, after which 01010 means 4. 11111 switches back to letter shift.

Shift-based coding systems cleverly pack many characters into a small range of bit patterns, but they have two important disadvantages. First, the shift characters themselves take up space. A text that frequently switches between letters and numbers may take more space to represent in ITA2 than it would in a 6-bit code, because every switch requires a shift character. More critically, though, transmission errors in shift-based coding systems can have catastrophic effects. Say a bird poops on a telegraph wire, causing one bit to flip in a long message. In a direct coding system with no shifts, this bit-flip will corrupt at most one character. However, in a shift-based coding system, the bit-flip might change the meanings of all future symbols: if a shift character is corrupted, the decoder ends up in the wrong mode, and every later character is misinterpreted until the next shift arrives. The sketch below shows this in miniature.
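
Here is a minimal sketch of an ITA2-style decoder in C. It is not a faithful implementation: the lookup tables cover only the handful of codes quoted above, the figure-shift values for T and Q (5 and 1) follow the standard ITA2 table rather than anything shown in this reading, and every other pattern decodes as ‘?’. The second message in main differs from the first by a single flipped bit in its shift character, yet decodes into entirely different characters.

    #include <stdio.h>

    #define FIGS 0x1B   /* 11011: switch to figure shift */
    #define LTRS 0x1F   /* 11111: switch to letter shift */

    /* Partial lookup tables: only the codes discussed in the text are filled
       in. Bit patterns are written most significant bit first, as above. */
    static char letter_for(unsigned code) {
        switch (code) {
        case 0x10: return 'E';   /* 10000 */
        case 0x01: return 'T';   /* 00001 */
        case 0x0A: return 'R';   /* 01010 */
        case 0x17: return 'X';   /* 10111 */
        case 0x1D: return 'Q';   /* 11101 */
        default:   return '?';
        }
    }

    static char figure_for(unsigned code) {
        switch (code) {
        case 0x0A: return '4';   /* 01010: same pattern as R */
        case 0x01: return '5';   /* 00001: same pattern as T */
        case 0x1D: return '1';   /* 11101: same pattern as Q */
        default:   return '?';
        }
    }

    /* Decode a sequence of 5-bit codes. The decoder starts in letter shift;
       FIGS and LTRS change the mode and produce no output of their own. */
    static void decode_ita2(const unsigned *codes, int n) {
        int figure_shift = 0;            /* messages begin in letter shift */
        for (int i = 0; i < n; ++i) {
            if (codes[i] == FIGS)
                figure_shift = 1;
            else if (codes[i] == LTRS)
                figure_shift = 0;
            else
                putchar(figure_shift ? figure_for(codes[i]) : letter_for(codes[i]));
        }
        putchar('\n');
    }

    int main(void) {
        /* "45": figure shift, then the patterns shared with R and T */
        unsigned original[]  = { FIGS, 0x0A, 0x01 };
        /* The same message with one bit flipped: FIGS (11011) became
           LTRS (11111), so the decoder never leaves letter shift. */
        unsigned corrupted[] = { LTRS, 0x0A, 0x01 };

        decode_ita2(original, 3);    /* prints "45" */
        decode_ita2(corrupted, 3);   /* prints "RT" */
        return 0;
    }

In a shiftless code, that same flipped bit could have corrupted at most one character.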

More complex shift-based systems tend to have even worse failure modes.

ASCII, ISO, and national standards

As computer technology improved and cheaper storage technologies became available, the disadvantages of proliferating context-specific coding systems began to outweigh the individual systems’ advantages. Should a given bit pattern be interpreted according to ITA2 telegraphy rules or BCDIC electromechanical computing rules? On a proprietary telegraph wire owned by Western Union, there was no ambiguity (it was ITA2), but on general-purpose computers, ambiguity causes real problems. A wave of coding systems aimed to avoid these ambiguities. Unfortunately, though, each of these coding systems was limited by linguistic culture.

The American Standard Code for Information Interchange, which is the core of the text encodings we use now, was developed in 1961–1963. It looks like this:

[Table: The ASCII character code]

ASCII supports lower-case letters and punctuation without any shift system. There’s space for all those characters because ASCII is a 7-bit encoding with 128 bit patterns. It might seem amazing now, but this caused some controversy; the committee designing the standard apparently deadlocked for months before it agreed to go beyond 6 bits (64 characters):

[Image: “MORE THAN 64 CHARACTERS!” (reference)]

ASCII, an American standard, was adopted by ISO, the International Organization for Standardization, as ISO/IEC 646. However, ISO is an explicitly international organization (albeit Eurocentric, especially early on; it’s headquartered in Switzerland). It had to consider the needs of many countries, not just English-speaking America; and ASCII had no space for letters outside the basic Latin alphabet. ISO’s solution was to reserve many of the ASCII punctuation characters for national use. Different variants of ISO/IEC 646 could reassign those code points. Specifically, the code points for #$@[\]^`{|}~ could represent other symbols.7 For example, 0x23 meant # in America and £ in Britain. One Dutch standard used 0x40, 0x5B, and 0x5C for ¾ ij ½, not @[\. The first French variant encoded à ç é ù è in the positions for @\{|}, but did not encode uppercase versions or other accented letters. Swedish encoded ÉÄÖÅÜéäöåü in preference to @[\]^`{|}~.

The resulting coding systems facilitated communication within national borders and linguistic systems, but still caused problems for communication across borders or linguistic systems. A given bit pattern would be displayed in very different ways depending on country. Humans were able to adapt, but painfully. For example, this C code:

{ a[i] = '\n'; }

would show up like this on a Swedish terminal:

ä aÄiÅ = 'Ön'; å

Some Swedish programmers learned to read and write C code in that format! Alternatively, programmers might set their terminal to American mode—but then their native language looked weird: “Hur är läget?” (“What’s up?”) might appear as “Hur {r l{get?” The authors of the C standard tried to introduce an alternate punctuation format that didn’t rely on reserved characters, but everyone hated it; this just sucks:

??< a??(i??) = '??/n'; ??>
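
Under the hood, a national variant is just a different display table for the same 7-bit code points. The sketch below reproduces the Swedish rendering mechanically; it assumes a UTF-8 terminal, with UTF-8 strings standing in for the glyphs the Swedish hardware would have drawn, and it covers only the ten national-use positions listed above.

    #include <stdio.h>

    /* Return the glyph a Swedish ISO/IEC 646 terminal would show for an
       ASCII byte, or NULL if the byte displays the same in both variants. */
    static const char *swedish_glyph(unsigned char byte) {
        switch (byte) {
        case '@':  return "É";
        case '[':  return "Ä";
        case '\\': return "Ö";
        case ']':  return "Å";
        case '^':  return "Ü";
        case '`':  return "é";
        case '{':  return "ä";
        case '|':  return "ö";
        case '}':  return "å";
        case '~':  return "ü";
        default:   return NULL;
        }
    }

    int main(void) {
        /* The C fragment from above, exactly as the American typed it */
        const char *program = "{ a[i] = '\\n'; }";
        for (const char *p = program; *p; ++p) {
            const char *glyph = swedish_glyph((unsigned char) *p);
            if (glyph)
                fputs(glyph, stdout);
            else
                putchar(*p);
        }
        putchar('\n');   /* prints: ä aÄiÅ = 'Ön'; å */
        return 0;
    }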

Meanwhile, other cultures with written language not based on the Latin alphabet developed their own coding systems, totally unrelated to ASCII, in which bit patterns used for Latin letters in ASCII might represent characters from Japanese syllabaries, or even shift commands. Such a file would look like gibberish on an American computer, and American files would look like gibberish on those computers.

ISO/IEC 8859

With increasing international data communication and cheaper computer storage, the ambiguity and misinterpretation caused by national character set standards grew more painful and less justifiable. It was clear what to do: add another bit and eliminate the national character sets. The ISO 8859 standards represent up to 256 characters, not 128, using 8-bit codes. In each ISO 8859 coding system, code points 0x00–0x7F (0–127) are encoded according to ASCII. Here are the meanings of code points 0x80–0xFF in ISO-8859-1:

[Table: ISO 8859-1, code points 0x80–0xFF (reference)]

Finally Swedish programmers could ask what’s up and program C on the same terminal! ISO 8859-1 includes code points for all of ÉéÄäÖöÅåÜü@[\]^`{|}~.

But ISO 8859 was a stopgap. A single 8-bit coding system can support more languages than a 7-bit system, but not many more, and computer data remained ambiguous. ISO 8859-1, the most common version of ISO 8859, supports Western European languages, but not Central or Eastern European languages (for example, it lacks Hungarian’s ŐőŰű), let alone Greek or Cyrillic. Some of the encoding choices in ISO 8859 may seem strange to us, or at least governed by concerns other than the number of readers of a language. ISO 8859-1 includes the six characters ÝýÐðÞþ because they are required to support Icelandic, which has 360,000 speakers. Why were these selected instead of ĞğŞşıİ, the six additional characters required to support Turkish? Turkish has more than 200x as many speakers (88,000,000). Why was geographic proximity (“Western Europe”) more important for ISO 8859 than number of readers? And if geographic proximity was important, why were some Western European languages left out (Welsh, Catalan, parts of Finnish)?

Unicode

In the 1980s, some employees of Xerox and Apple got together to discuss a better way forward: a single encoding that could support all human languages using one code point per character (no external metadata to define the context for a bit pattern; no shifts). This effort eventually became Unicode, the current universal standard for character encoding.

The initial Unicode standard encoded all characters, including Chinese logographs, in a 16-bit coding system with two bytes per character. The attentive reader may note that this seems too small: a current Chinese dictionary lists more than 100,000 characters, while a 16-bit coding system has 65,536 code points available for all languages. The solution was a process called Han unification, in which all characters encoded in East Asian national standards were combined into a single set with no duplicates and certain “uncommon” characters were left out.8

Han unification was quite controversial when Unicode was young. People didn’t like the idea of American computer manufacturers developing a worldwide standard, especially after those manufacturers had downplayed the needs of other writing systems for many years. Conspiracy theories and ad hominem arguments flew. Here’s an anti-Unicode argument:

I did not attend the meetings in which ISO 10646 was slowly turned into a de facto American industrial standard. I have read that the first person to broach the subject of "unifying" Chinese characters was a Canadian with links to the Unicode project. I have also read that the people looking out for Japan's interests are from a software house that produces word processors, Justsystem Corp. Most shockingly, I have read that the unification of Chinese characters is being conducted on the basis of the Chinese characters used in China, and that the organization pushing this project forward is a private company, not representatives of the Chinese government. … However, basic logic dictates that China should not be setting character standards for Japan, nor should Japan be setting character standards for China. Each country and/or region should have the right to set its own standard, and that standard should be drawn up by a non-commercial entity. (reference)

And a pro-Unicode argument:

Have these people no shame?

This is what happens when a computing tradition that has never been able to move off ground-zero in associating 1 character to 1 glyph keeps grinding through the endless lists of variants, mistakes, rare, obsolete, nonce, idiosyncratic, and novel ideographs available through the millenia in East Asia. (reference)

In the West, however, anti-Unicode arguments tended to focus on space and transmission costs. Computer storage is organized in terms of 8-bit bytes, so the shift from ASCII to ISO 8859—from 7- to 8-bit encodings—did not much affect the amount of storage required to represent a text. But the shift from ISO 8859 to Unicode mattered. Any text representable in ISO 8859 doubled in size when translated to Unicode 1, which used a fixed-width 16-bit encoding for characters. When Western users upgraded to a computer system based on Unicode—such as Windows 2000 and successors, Java environments, and macOS—half the memory devoted to their text was given over to the zero bytes required by Unicode’s UCS-2 representation.
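
To see why, here is a small sketch that prints the same short Western string as ISO 8859-1 bytes and as little-endian UCS-2 code units. The sample text and the byte order are arbitrary choices for illustration.

    #include <stdio.h>
    #include <string.h>

    /* Print a short Western text as ISO 8859-1 bytes and as little-endian
       UCS-2 code units. For characters in this range the UCS-2 form is the
       same byte followed by a zero byte, so the text doubles in size. */
    int main(void) {
        const char *text = "Hej!";
        size_t n = strlen(text);

        printf("ISO 8859-1:");
        for (size_t i = 0; i < n; ++i)
            printf(" %02x", (unsigned char) text[i]);
        printf("    (%zu bytes)\n", n);

        printf("UCS-2:     ");
        for (size_t i = 0; i < n; ++i)
            printf(" %02x 00", (unsigned char) text[i]);
        printf("    (%zu bytes)\n", 2 * n);
        return 0;
    }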

Meanwhile, Unicode itself grew. The original 65,536-character code set didn’t have enough room to encode historic scripts, such as Egyptian hieroglyphs. More relevantly for living people, the Han unification process had been based on a misunderstanding about rarely-used Chinese characters: some rarely-used characters were truly obsolete, but others remained in use in proper names, and thus were critically important for particular people. It is not good if a supposedly-universal character encoding standard makes it impossible to write one’s name! This is a targeted, severe representational harm. It became clear that 65,536 characters would not suffice to express all human languages, and in 1996 Unicode 2.0 expanded the number of expressible code points by roughly 17x, to 1,112,064 (17 planes of 65,536 code points each, minus 2,048 code points reserved for the UTF-16 surrogate mechanism). The most recent Unicode standard defines meanings for 144,697 of these 1.1M code points, including a wide range of rare and historic languages, mathematical alphabets, symbols, and emoji.

Summing up

All of this leaves us with some competing interests.

So what is to be done? It’s not obvious: transmission and storage costs are not just economic but environmental. Should Western systems use ISO 8859 by default, resulting in smaller texts and lower costs but problems of representation and ambiguity (e.g., if a user wants to add a single non-ISO-8859 character to a text, the whole text must be converted to Unicode and will quadruple in size9)? Or is there some clever data representation that can reduce the cost of representing Unicode without losing its benefits?

Footnotes


  1. See Kate Crawford’s NIPS 2017 Keynote address: https://www.youtube.com/watch?v=fMym_BKWQzk ↩︎

  2. How to compensate individuals for harms committed to a group is a thorny ethical conundrum. ↩︎

  3. Deaf communities use sign languages that do not originate in speech. ↩︎

  4. “Behaviorally modern” humans arose around 500,000 years ago; the earliest evidence of written language is around 5,000 years old. A centenarian has been alive for around 2% of the time that written language has existed. ↩︎

  5. To be pedantic, Hebrew and Arabic are “abjads”, not alphabets, because marks represent consonants and vowel sounds are implied. Devanagari and other South Asian scripts are “abugidas” because vowel sounds are indicated by accent-like marks on the more-foundational marks for consonants. ↩︎

  6. The Latin script uses what are called “Western Arabic” numerals, 0123456789. These numerals are derived from Arabic, and they are used in some countries that use the Arabic script, but in other countries—Iran, Egypt, Afghanistan—“Eastern Arabic” numerals are used: ٠١٢٣٤٥٦٧٨٩ ↩︎

  7. Less frequently, the code points for !":?_—0x21, 0x22, 0x3A, 0x3F, 0x5F—were also reassigned. ↩︎

  8. Han unification built on work by librarians and others, including Taiwan’s Chinese Character Code for Information Interchange (CCCII) and the Research Libraries Information Network’s East Asian Character Code (EACC). The work continued through a Joint Research Group, convened by Unicode, with expert members from China, Japan, and Korea (the successor group has experts from Vietnam and Taiwan as well; reference). For an example of how unification works, consider this image from the Unicode standard of character U+43B9. This character is unified from five closely-related characters in five distinct standards—from left to right, these are from mainland China, Hong Kong, Taiwan, Japan, and Korea.

    [Image: character U+43B9 and its five source glyphs] ↩︎

  9. Very few systems use this “natural” encoding, though some do. Instead, many systems use an encoding called UTF-16, in which code points above 0xFFFF are expressed as so-called surrogate pairs of code points in the range 0xD800–0xDFFF. This encoding is shiftless, but it is variable-length, requiring up to 4 bytes to represent a character. This can cause problems with algorithms that assume fixed-width encodings. ↩︎