2017/Datarep4

From CS61

Lecture 4

Undefined behavior in context: Shim libraries

Undefined behavior is a requirement failure, not an effect. Undefined behavior is triggered when a C program in execution fails to obey certain requirements of the abstract machine. For example, the C specification says: “If an object is referred to outside of its lifetime, the behavior is undefined.” (§6.2.4, ISO/IEC 9899:1999) The following program causes undefined behavior in main, where the x object is referred to (via a dereferenced pointer) after its automatic lifetime has ended.

#include <stdio.h>

int* return_bad_pointer(void) {
    int x = 40;
    return &x;   // x's lifetime ends when this function returns
}
int main() {
    printf("%d\n", *return_bad_pointer());   // undefined behavior
}

But actually running this program might or might not cause anything concretely bad to happen. Although the program could crash, it might also just print 40. In abstract terms, the undefined behavior occurs every time, because the lifetime requirement is violated every time. We call the concrete behavior “undefined” because anything is allowed: the C compiler and C libraries may assume that undefined behavior never occurs, so if the user causes undefined behavior, the compiler and libraries have no responsibilities.

Undefined behavior always depends on context. Change the requirements in force when a program runs, and you change whether undefined behavior occurs. For example, modern C compilers have a flag -fwrapv that makes signed integer overflow defined behavior, rather than undefined.

The problem set takes advantage of this. Your m61 functions do not call the system malloc and free directly; instead, they call base_malloc and base_free. And these functions have different requirements than malloc and free. Specifically, dynamic-lifetime data may be accessed after it is passed to base_free, as long as that data has not since been reused by a later call to base_malloc. This allows a debugging allocator, such as yours, to check metadata belonging to a freed allocation without causing undefined behavior.

There are thus two shim libraries in between the system allocator and the m61 application programs (testXXX.c). Each library uses the library below, and must abide by that library’s requirements. Each library provides an interface for the library above, and enforces its own requirements. In each case, the interface is basically the same—malloc and free—but the requirements differ.

  1. Highest level: application code. Calls m61_malloc and m61_free (because of macros defined in m61.h).
    • Requirements: Accessing dynamic-lifetime data after m61_free is undefined behavior.
    • Undefined behavior effects: The m61 library is designed to catch certain application problems; to that end, it should process the handout tests without incurring additional undefined behavior.
  2. Next level: m61_malloc and m61_free. Calls base_malloc and base_free.
    • Requirements: Accessing dynamic-lifetime data after base_free is defined behavior (until that data is reused by a future base_malloc call).
  3. Next level: base_malloc and base_free. Calls malloc and free.
    • Requirements: Accessing dynamic-lifetime data after free is undefined behavior.
    • Undefined behavior effects: Truly undefined; may cause no problems, may trigger a sanitizer error.
  4. Lowest level: malloc and free. Calls system calls, such as sbrk and mmap.

The looser requirements of base_malloc and base_free let you solve the problem set, including O(1) free with double-free detection and no memory leaks (e.g., an infinite loop calling free(malloc(1)) will not leak memory), without too much trouble. If you called malloc and free instead of base_malloc and base_free, the job would be much harder. Requirements are a critical and consequential part of a library’s specification.

Computer arithmetic and bitwise operators

Computers offer not only the usual arithmetic operators like + and -, but also a set of bitwise operators. The basic ones are & (and), | (or), ^ (xor/exclusive or), and the unary operator ~ (complement). In truth table form:

& (and)   0   1
   0      0   0
   1      0   1

| (or)    0   1
   0      0   1
   1      1   1

^ (xor)   0   1
   0      0   1
   1      1   0

~ (complement)
   0      1
   1      0

In C, these operators work on integers. But they work bitwise: the result of an operation is determined by applying the operation independently at each bit position. Here’s how to compute 5 ^ 9 in 4-bit unsigned arithmetic:

    5 == 0b 0 1 0 1
^   9 == 0b 1 0 0 1
-------------------
         0b 1 1 0 0 == 12

Two other operators, << (left shift) and >> (right shift), move bits from position to position within a word. The expression x << y “shifts x to the left” by dropping the y most significant bits and adding y zero bits at the least significant end. For instance, in 4-bit arithmetic, 3 << 1 == 6, because 3 == 0b0011, and adding 1 least-significant 0 bit produces 0b0110 == 6. The expression x >> y does the converse: it drops the y least significant bits and adds y zero bits at the most significant end. So 6 >> 2 == 1, because 6 == 0b0110, and adding two most-significant 0 bits produces 0b0001 == 1.

Bit twiddling

A glory of binary arithmetic is that many important or useful arithmetic computations have bitwise counterparts. Since bitwise operators are cheap for computers to implement, optimizing compilers will transform your code to use these operators whenever possible.

Say that x = 2^N is a power of two. The binary representation of x is a 1 bit followed by N 0 bits. The binary representation of x - 1 is N 1 bits.

Knowing this about x, how can we compute n % x with bitwise arithmetic? Answer: n % x == (n & (x - 1)). For example, n % 16 == (n & 15). (Note that the C bitwise operators unfortunately have low precedence, so it’s wise to parenthesize them whenever they appear in a larger expression.) The bitwise version is far cheaper for the computer to calculate, since remainder is the most expensive basic integer arithmetic operation.

(Why does this work? Think by analogy to decimal. You can quickly compute n % 100 by taking the two least significant digits of n. This works in any base, including binary.)

When is (n & (n - 1)) == 0? Answer: when n is 0 or a power of two. Write n as M + x, where M is the value of n’s most significant 1 bit and x < M. If x > 0, then subtracting one changes x but leaves the M bit set, so (n & (n - 1)) still has the M bit on and is nonzero. If x == 0, then n - 1 flips every bit at or below M, so n and n - 1 share no 1 bits. (And if n == 0, the result is trivially 0.)

What is (x | y) - (x & y)? Answer: x ^ y.

What’s a faster way to compute n * x, where x = 2^N is a power of two? Answer: n << N.

Signed integer representation

But how are signed integers represented in memory? What bit patterns should represent negative numbers?

The best representation for signed integers (and the choice made by x86-64) is two’s complement. Two’s complement representation is based on this principle: Addition and subtraction of signed integers shall use the same instructions as addition and subtraction of unsigned integers.

To see what this means, let’s think about what -x should mean when x is an unsigned integer. Wait, negative unsigned?! This isn’t an oxymoron because C unsigned integer arithmetic is modular: the result of an arithmetic operation on unsigned values is always taken modulo 2^N, where N is the number of bits in the unsigned value type. Thus, on x86-64,

unsigned a = 0xFFFFFFFFU; // = 2^32 - 1
unsigned b = 0x00000002U;
assert(a + b == 0x00000001U); // True because 2^32 - 1 + 2 = 1 (mod 2^32)!

-x is simply the number that, when added to x, yields 0. For example, when unsigned x = 0xFFFFFFFFU, then -x == 1U, since x + -x equals zero (mod 2^32).

More generally, -x is equivalent to ~x + 1 in modular arithmetic. To see why, consider the bit representations. What is x + (~x + 1)? Well, bit i of ~x is 1 whenever bit i of x is 0, and vice versa. That means every bit of x + ~x is 1 (there are no carries), so x + ~x is the largest unsigned integer, with value 2^N - 1. Adding 1 to this gives 2^N, which is 0 (mod 2^N): the highest carry bit is dropped, leaving zero.

Two’s complement arithmetic uses half of the unsigned integer representations for negative numbers. A two’s-complement signed integer with N bits has the following values:

  • If the most-significant bit is 1, the represented number is negative. Specifically, the represented number is –(~x + 1), where the outer negative sign is mathematical negation (not computer arithmetic).
  • If every bit is 0, the represented number is 0.
  • If the most-significant bit is 0 but some other bit is 1, the represented number is positive.

The most significant bit is also called the “sign bit.” Another way to think about two’s-complement is that, for N-bit integers, the most-significant bit has place value 2^(N-1) in unsigned arithmetic and -2^(N-1) in signed arithmetic. (All other bits have the same place values in both kinds of arithmetic.)

In two’s-complement representation, the bit pattern for x + y is the same whether x and y are considered as signed or unsigned values. For example, in 4-bit arithmetic, 5 has representation 0b0101, while the representation 0b1100 represents 12 if unsigned and –4 if signed (~0b1100 + 1 == 0b0011 + 1 == 0b0100 == 4). Let’s add those bit patterns and see what we get:

  0b0101
+ 0b1100
--------
 0b10001 == 0b0001 (mod 2^4)

Note that this is the right answer for both signed and unsigned arithmetic: 5 + 12 = 17 = 1 (mod 16), and 5 + -4 = 1.

Subtraction and multiplication also produce the same results for unsigned arithmetic and signed two’s-complement arithmetic. (For instance, 5 * 12 = 60 = 12 (mod 16), and 5 * -4 = -20 = -4 (mod 16).) This is not true of division. (Consider dividing the 4-bit representation 0b1110 by 2. In signed arithmetic, 0b1110 represents -2, so 0b1110/2 == 0b1111 (-1); but in unsigned arithmetic, 0b1110 is 14, so 0b1110/2 == 0b0111 (7).) And, of course, it is not true of comparison. In signed 4-bit arithmetic, 0b1110 < 0, but in unsigned 4-bit arithmetic, 0b1110 > 0. This means that a C compiler for a two’s-complement machine can use a single add instruction for either signed or unsigned numbers, but it must generate different instruction patterns for signed and unsigned division (or less-than, or greater-than).

There are a couple of quirks with C signed arithmetic. First, in two’s complement, there are more negative numbers than positive numbers: the representation whose sign bit is 1 and whose other bits are all 0 has no positive counterpart at the same bit width. For this number, -x == x. (In 4-bit arithmetic, -0b1000 == ~0b1000 + 1 == 0b0111 + 1 == 0b1000.) Second, and far worse: arithmetic overflow on signed integers is undefined behavior.