
Final Sample Answers

DATAREP-24. Integer representation

Write the value of the variable or expression in each problem, using signed decimal representation.

For example, if we gave you:

  1. int i = 0xA;
  2. int j = 0xFFFFFFFF;

you would write A) 10 B) -1

QUESTION DATAREP-24A. int i = 0xFFFF; (You may write this either in decimal or as an expression using a power of 2)

2^16 − 1 or 65535

QUESTION DATAREP-24B. short s = 0xFFFF; (You may write this either in decimal or as an expression using a power of 2)

−1

QUESTION DATAREP-24C. unsigned u = 1 << 10;

1024 (or 2^10).

QUESTION DATAREP-24D. From WeensyOS: unsigned long l = PTE_P | PTE_U;

5

QUESTION DATAREP-24E. int j = ~0;

−1

QUESTION DATAREP-24F. From WeensyOS: sizeof(x86_64_pagetable);

4096 or 2^12

QUESTION DATAREP-24G. Given this structure:

struct s {
    char c;
    short s;
    long l;
};
struct s* ps;

This expression: sizeof(ps);

TRICK QUESTION! 8

QUESTION DATAREP-24H. Using the structure above: sizeof(*ps);

16

QUESTION DATAREP-24I. unsigned char u = 0xABC;

0xBC == 11*16 + 12 == 160 + 16 + 12 == 188

QUESTION DATAREP-24J. signed char c = 0xABC;

0xBC has bit 7 (the sign bit) on, so the value is less than zero. We seek the value x so that 0xBC + x == 0x100; the answer will equal −x. That x is 0x44: 0xBC + 4 == 0xC0, and 0xC0 + 0x40 == 0x100. So the answer is −0x44 == −4*16 − 4 == −68.
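As a quick check, this sketch (ours, not part of the original answer) prints both truncated values. Storing 0xABC into a char is implementation-defined in the signed case, but x86-64 compilers truncate as described:

#include <stdio.h>

int main(void) {
    unsigned char u = 0xABC;   // keeps the low byte: 0xBC == 188
    signed char c = 0xABC;     // same low byte, interpreted as signed: -68
    printf("%d %d\n", u, c);   // prints "188 -68"
    return 0;
}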

DATAREP-25. Data representation

In gdb, you observe the following values for a set of memory locations.

0x100001020:   0xa0   0xb1   0xc2   0xd3    0xe4   0xf5   0x06   0x17
0x100001028:   0x28   0x39   0x4a   0x5b    0x6c   0x7d   0x8e   0x9f
0x100001030:   0x89   0x7a   0x6b   0x5c    0x4d   0x3e   0x2f   0x10
0x100001038:   0x01   0xf2   0xe3   0xd4    0xc5   0xb6   0xa7   0x96

For each C expression below, write its value in hexadecimal. For example, if we gave you:

char *cp = (char*) 0x100001020; cp[0] = 

the answer would be 0xa0.

Assume the following structure and union declarations and variable definitions.

struct _s1 {
       int i;
       long l;
       short s;
};
struct _s2 {
       char c[4];
       int i;
       struct _s1 s;
};
union _u {
       char c[8];
       int i;
       long l;
       short s;
};
char* cp = (char*) 0x100001020;
struct _s1* s1 = (struct _s1*) 0x100001020;
struct _s2* s2 = (struct _s2*) 0x100001020;
union _u* u = (union _u*) 0x100001020;

QUESTION DATAREP-25A. cp[4] =

0xe4 (-28)

QUESTION DATAREP-25B. cp + 7 =

0x100001027

QUESTION DATAREP-25C. s1 + 1 =

0x100001038

QUESTION DATAREP-25D. s1->i =

0xd3c2b1a0 (-742215264)

QUESTION DATAREP-25E. sizeof(s1) =

8

QUESTION DATAREP-25F. &s2->s =

0x100001028

QUESTION DATAREP-25G. &u->s =

&u->s = 0x100001020

QUESTION DATAREP-25H. s1->l =

0x9f8e7d6c5b4a3928 (-6949479270644565720)

QUESTION DATAREP-25I. s2->s.s =

s2->s.s = 0xf201 (-3583)

QUESTION DATAREP-25J. u->l =

u->l = 0x1706f5e4d3c2b1a0 (1659283875886707104)
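All of the structure and union answers above follow from x86-64 size and alignment rules. This sketch (ours) prints the offsets that matter:

#include <stdio.h>
#include <stddef.h>

struct _s1 {
    int i;        // offset 0
    long l;       // offset 8 (4 bytes of padding after i)
    short s;      // offset 16; sizeof(struct _s1) == 24 (with trailing padding)
};
struct _s2 {
    char c[4];    // offset 0
    int i;        // offset 4
    struct _s1 s; // offset 8, so s2->s.s lives at byte offset 8 + 16 == 24
};

int main(void) {
    printf("%zu %zu %zu\n", offsetof(struct _s1, l),
           offsetof(struct _s1, s), sizeof(struct _s1));              // 8 16 24
    printf("%zu %zu\n", offsetof(struct _s2, s), sizeof(struct _s2)); // 8 32
    return 0;
}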

ASM-13. Assembly language

Consider the following four assembly functions.

# Code Sample 1
       movq    %rdi, %rax
       testq   %rdx, %rdx
       je      .L2
       addq    %rdi, %rdx
       movq    %rdi, %rcx
.L3:
       addq    $1, %rcx
       movb    %sil, -1(%rcx)
       cmpq    %rdx, %rcx
       jne     .L3
.L2:
       rep ret
# Code Sample 2
       movq    %rdi, %rax
       testq   %rdx, %rdx
       je      .L2
       addq    %rdi, %rdx
       movq    %rdi, %rcx
.L3:
       addq    $1, %rcx
       addq    $1, %rsi
       movzbl  -1(%rsi), %r8d
       movb    %r8b, -1(%rcx)
       cmpq    %rdx, %rcx
       jne     .L3
.L2:
       rep ret
# Code Sample 3
      movb    (%rsi), %al
      testb   %al, %al
      je      .L3
      incq    %rsi
.L2:
      movb    %al, (%rdi)
      incq    %rdi
      movb    (%rsi), %al
      incq    %rsi
      testb   %al, %al
      jne     .L2
.L3:
      movq    %rdi, %rax
      ret
# Code Sample 4
       testq   %rdx, %rdx
       je      .L3
       movq    %rdi, %rax
.L2:
       movb    %sil, (%rax)
       incq    %rax
       decq    %rdx
       jne     .L2
.L3:
       movq    %rdi, %rax
       ret

(Note: The %sil register is the lowest-order byte of register %rsi, just as %al is the lowest-order byte of %rax and %r8b is the lowest-order byte of %r8.)

QUESTION ASM-13A. Which two of the assembly functions perform the exact same task?

1 and 4

QUESTION ASM-13B. What is that task? You can describe it briefly, or give the name of the corresponding C library function.

memset

QUESTION ASM-13C. Explain how the other two functions differ from each other.

One is memcpy (#2) and the other is strcpy (#3), so the only real difference is that #2 terminates after copying the number of bytes given by its size parameter, while #3 terminates when it encounters a NUL byte in the source string.
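For reference, here are plausible C sources for the four samples, reconstructed from the assembly (the function names are ours):

#include <stddef.h>

// Samples 1 and 4: fill n bytes at dst with byte c and return dst, like memset.
void* fill(void* dst, int c, size_t n) {
    char* d = (char*) dst;
    for (size_t i = 0; i != n; ++i)
        d[i] = (char) c;
    return dst;
}

// Sample 2: copy exactly n bytes from src to dst and return dst, like memcpy.
void* copy(void* dst, const void* src, size_t n) {
    char* d = (char*) dst;
    const char* s = (const char*) src;
    for (size_t i = 0; i != n; ++i)
        d[i] = s[i];
    return dst;
}

// Sample 3: copy bytes from src up to, but not including, the NUL,
// returning a pointer just past the last byte written (strcpy-like).
char* copy_string(char* dst, const char* src) {
    while (*src != '\0')
        *dst++ = *src++;
    return dst;
}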

IO-11. Caching

QUESTION IO-11A. If it takes 200ns to access main memory, which of the following two caches will produce a lower average access time?

Let's compute average access time for each case:

.9 * 10 + .1 * 200 = 9 + 20 = 29
.98 * 20 + .02 * 200 = 19.6 + 4 = 23.6

The 20ns cache produces a lower average access time.

QUESTION IO-11B. Let’s say that you have a direct-mapped cache with four slots. A page with page number N must reside in the slot numbered N % 4. What is the best hit rate this could achieve given the following sequence of page accesses?

3 6 7 5 3 2 1 1 1 8

Since it's direct mapped, each item can go in only one slot, so if we list the slots for each access, we get:

3 2 3 1 3 2 1 1 1 0

The only hits are the 2 1's, so your hit rate is 2/10 or 20% or .2.

QUESTION IO-11C. What is the best hit rate a fully-associative four-slot cache could achieve for that sequence of page accesses? (A fully-associative cache may put any page in any slot. You may assume you know the full reference stream in advance.)

Now we can get hits for 3, 1, and 1, so our hit rate goes to 3/10 or 30%.

QUESTION IO-11D. What hit rate would the fully-associative four-slot cache achieve if it used the LRU eviction policy?

Still 30% (3/10, .3). Tracing LRU on the sequence: 3, 6, 7, and 5 miss and fill the four slots; 3 hits; 2 misses and evicts 6 (the least recently used); 1 misses and evicts 7; the next two 1’s hit; 8 misses. That’s 3 hits out of 10 accesses.

KERN-1. Virtual memory

QUESTION KERN-1A. What is the x86-64 page size? Circle all that apply.

  1. 4096 bytes
  2. 64 cache lines
  3. 256 words
  4. 0x1000 bytes
  5. 2^16 bits
  6. None of the above

#1, #2, #4. The most common x86-64 cache line size is 64 = 2^6 bytes, and 2^6 × 2^6 = 2^12, but there may have been some x86-64 processors with 128-byte cache lines. The word size is 8; 256 × 8 = 2048, not 4096. There are 8 bits per byte; 2^16/8 = 2^13, not 2^12.

The following questions concern the sizes of page tables. Answer the questions in units of pages. For instance, the page tables in WeensyOS each contained one level-1 page table page, one level-2 page table page, one level-3 page table page, and two level-4 page table pages, for a total size of 5 pages per page table.

QUESTION KERN-1B. What is the maximum size (in pages) of an x86-64 page table (page tables only, not destination pages)? You may write an expression rather than a number.

                      1    level-1 page table page
+                   512    level-2 page table pages
+             512 * 512    level-3 page table pages
+       512 * 512 * 512    level-4 page table pages
-----------------------
  2^27 + 2^18 + 2^9 + 1
= 0x8040201 = 134480385 page table pages

QUESTION KERN-1C. What is the minimum size (in pages) of an x86-64 page table that would allow a process to access 2^21 distinct physical addresses?

One.

Whaaat?! Consider a level-1 page table whose first entry refers to the level-1 page table page itself, and whose other entries refer to different pages. Like this (the page table page lives at physical page 0x1000):

Index   (Physical Address)   Contents
0       (0x1000)             0x1007
1       (0x1008)             0x2007
2       (0x1010)             0x3007
3       (0x1018)             0x4007
...     ...                  ...
511     (0x1FF8)             0x200007

With this page table in force, the 2^21 virtual addresses 0x0 through 0x1FFFFF access the 2^21 distinct physical addresses 0x1000 through 0x200FFF.

The 64-bit x86-64 architecture is an extension of the 32-bit x86 architecture, which used 32-bit virtual addresses and 32-bit physical addresses. Extensions to this architecture increased both these limits.

QUESTION KERN-1D. Which of these two machines would support a higher number of concurrent processes?

  1. x86-32 with PAE with 100 GB of physical memory.
  2. x86-64 with 20 GB of physical memory.

#1 x86-32 with PAE. Each concurrent process occupies some space in physical memory, and #1 has more physical memory.

(Real operating systems swap, so either machine could support more processes than fit in virtual memory, but this would cause thrashing. #1 supports more processes before it starts thrashing.)

QUESTION KERN-1E. Which of these two machines would support a higher maximum number of threads per process?

  1. x86-32 with PAE with 100 GB of physical memory.
  2. x86-64 with 20 GB of physical memory.

#2 x86-64. Each thread in a process needs some address space for its stack, and an x86-64 process address space is much bigger than an x86-32’s.

KERN-2. Virtual memory and kernel programming

These problems consider implementations of virtual memory features in a WeensyOS-like operating system. Recall the signatures and specifications of the virtual_memory_lookup and virtual_memory_map functions:

// virtual_memory_map(pagetable, va, pa, sz, perm, allocator)
//    Map virtual address range [va, va+sz) in pagetable.
//    When X >= 0 && X < sz, the new pagetable will map virtual address
//    va+X to physical address pa+X with permissions perm.
//
//    Precondition: va, pa, and sz must be multiples of PAGESIZE
//    (4096).
//
//    Typically perm is a combination of PTE_P (the memory is Present),
//    PTE_W (the memory is Writable), and PTE_U (the memory may be
//    accessed by User applications). If !(perm & PTE_P), pa is ignored.
//
//    Sometimes mapping memory will require allocating new page tables. The
//    allocator function should return a newly allocated page, or NULL
//    on allocation failure.
//
//    Returns 0 if the map succeeds, -1 if it fails because a required
//    page table could not be allocated.
int virtual_memory_map(x86_64_pagetable* pagetable, uintptr_t va,
                       uintptr_t pa, size_t sz, int perm,
                       x86_64_pagetable* (*allocator)(void));
// virtual_memory_lookup(pagetable, va)
//    Returns information about the mapping of the virtual address va in
//    pagetable. The information is returned as a vamapping object,
//    which has the following components:
typedef struct vamapping {
    int pn;           // physical page number; -1 if unmapped
    uintptr_t pa;     // physical address; (uintptr_t) -1 if unmapped
    int perm;         // permissions; 0 if unmapped
} vamapping;
vamapping virtual_memory_lookup(x86_64_pagetable* pagetable, uintptr_t va);

Also recall that WeensyOS tracks physical memory using an array of pageinfo structures:

typedef struct physical_pageinfo {
    int8_t owner;
    int8_t refcount; // 0 means the page is free
} physical_pageinfo;
static physical_pageinfo pageinfo[PAGENUMBER(MEMSIZE_PHYSICAL)];

The WeensyOS kernel occupies virtual addresses 0 through 0xFFFFF; the process address space starts at PROC_START_ADDR == 0x100000 and goes up to (but not including) MEMSIZE_VIRTUAL == 0x300000.

QUESTION KERN-2A. True or false: On x86-64 Linux, like on WeensyOS, the kernel occupies low virtual addresses.

False

QUESTION KERN-2B. On WeensyOS, which region of a process’s address space is closest to the kernel’s address space? Choose from code, data, stack, and heap.

Code

QUESTION KERN-2C. On Linux on an x86-64 machine, which region of a process’s address space is closest to the kernel’s address space? Choose from code, data, stack, and heap.

Stack

Recall that the WeensyOS sys_page_alloc(addr) system call allocates a new physical page at the given virtual address. Here’s an example kernel implementation of sys_page_alloc, taken from the WeensyOS interrupt function:

case INT_SYS_PAGE_ALLOC: {
    uintptr_t addr = current->p_registers.reg_rdi;

    // [A]

    int free_pn = find_free_physical_page();
    if (free_pn < 0) { // no free physical pages
        console_printf(CPOS(24, 0), 0x0C00, "Out of physical memory!\n");
        current->p_registers.reg_rax = -1; // return result in %rax
        break; // will call run(current)
    }

    // [B]

    // otherwise, allocate the page
    assert(pageinfo[free_pn].refcount == 0);
    pageinfo[free_pn].refcount += 1;
    pageinfo[free_pn].owner = current->p_pid;

    // [C]

    // and map it into the user’s address space
    virtual_memory_map(current->p_pagetable, addr, PAGEADDRESS(free_pn), PAGESIZE, PTE_P | PTE_U | PTE_W, NULL);
    current->p_registers.reg_rax = 0;

    // [D]

    break;
}

QUESTION KERN-2D. Thanks to insufficient checking, this implementation allows a WeensyOS process to crash the operating system or even take it over. This kernel is not isolated. What the kernel should do is return −1 when the calling process supplies bad arguments. Write code that, if executed at slot [A], would preserve kernel isolation and handle bad arguments correctly.

if (addr % PAGESIZE != 0 || addr < PROC_START_ADDR || addr >= MEMSIZE_VIRTUAL) {
    current->p_registers.reg_rax = -1;
    break;
}

QUESTION KERN-2E. This implementation has another problem, which the following process would trigger:

void process_main(void) {
    heap_top = ROUNDUP((uint8_t*) end, PAGESIZE); // first address in heap region
    while (1) {
        sys_page_alloc(heap_top);
        sys_yield();
    }
}

This process code repeatedly allocates a page at the same address. What should happen is that the kernel should repeatedly deallocate the old page and replace it with a newly-allocated page. But that’s not what will happen given the example implementation.

What will happen instead? And what is the name of this kind of problem?

Eventually the OS will run out of physical memory. At least it will print “Out of physical memory!” (that was in the code we provided). This is a memory leak.

QUESTION KERN-2F. Write code that would fix the problem, and name the slot in the INT_SYS_PAGE_ALLOC implementation where your code should go.

vamapping vm = virtual_memory_lookup(current->p_pagetable, addr);
if (vm.perm) {
    pageinfo[vm.pn].refcount -= 1;
}

This goes in slot B or slot C. Slot D is too late; it would free the newly mapped page. Slot A is too early, for a subtle reason. Imagine that the page at addr was shared with another process, so pageinfo[vm.pn].refcount > 1. Then, if there was no free memory, it would be possible for the implementation to decrement the old page’s reference count, but then fail to allocate a new page! This would break the kernel’s invariants, since the pageinfo reference count would be one off from the actual number of references in page tables.

KERN-3. Kernel programming

WeensyOS processes are quite isolated: the only way they can communicate is by using the console. Let’s design some system calls that will allow processes to explicitly share pages of memory. Then the processes can communicate by writing and reading the shared memory region. Here are two new WeensyOS system calls that allow minimal page sharing; they return 0 on success and –1 on error.

int share(pid_t p, void* addr) Allow process p to access the page at address addr.

int attach(pid_t p, void* remote_addr, void* local_addr) Access the page in process p’s address space at address remote_addr. That physical page is added to the calling process’s address space at address local_addr, replacing any page that was previously mapped there. It is an error if p has not shared the page at remote_addr with the calling process.

Here’s an initial implementation of these system calls, written as clauses in the WeensyOS kernel’s exception function.

case INT_SYS_SHARE: {
    pid_t p = current->p_registers.reg_rdi;
    uintptr_t addr = current->p_registers.reg_rsi;

    // [A]

    int shindex = current->p_nshared;
    if (shindex >= MAX_NSHARED) {
        goto return_error;
    }

    // [B]

    ++current->p_nshared; 
    current->p_shared[shindex].sh_addr = addr;
    current->p_shared[shindex].sh_partner = p;
    current->p_registers.reg_rax = 0;
    break;
}
case INT_SYS_ATTACH: {
    pid_t p = current->p_registers.reg_rdi;
    uintptr_t remote_addr = current->p_registers.reg_rsi;
    uintptr_t local_addr = current->p_registers.reg_rdx;

    // [C]

    int shindex = -1;
    for (int i = 0; i < processes[p].p_nshared; ++i) {
        if (processes[p].p_shared[i].sh_addr == remote_addr
            && processes[p].p_shared[i].sh_partner == current->p_pid) {
            shindex = i;
        }
    }
    if (shindex == -1) {
        goto return_error;
    }

    // [D]

    vamapping vam = virtual_memory_lookup(processes[p].p_pagetable, remote_addr);

    // [E]

    virtual_memory_map(current->p_pagetable, local_addr,
                       vam.pa, PAGESIZE, PTE_P|PTE_W|PTE_U, NULL);

    // [F]

    current->p_registers.reg_rax = 0;
    break;
}
return_error:
    current->p_registers.reg_rax = -1;
    break;

QUESTION KERN-3A. True or false: Given this implementation, a single WeensyOS process can cause the kernel to crash simply by calling share one or more times (with no process ever calling attach). If true, give an example of a call or calls that would likely crash the kernel.

False

QUESTION KERN-3B. True or false: Given this implementation, a single WeensyOS process can cause the kernel to crash simply by calling attach one or more times (with no process ever calling share). If true, give an example of a call or calls that would likely crash the kernel.

True. If the user supplies an out-of-range process ID argument, the kernel will try to read out of bounds of the processes array. Example call: attach(0x1000000, 0, 0).

QUESTION KERN-3C. True or false: Given this implementation, WeensyOS processes 2 and 3 could work together to obtain write access to the kernel code located at address KERNEL_START_ADDR. If true, give an example of calls that would obtain this access.

True, since the attach and share code don’t check whether the user process is allowed to access its memory. An example:

#2: share(3, KERNEL_START_ADDR)
#3: attach(2, KERNEL_START_ADDR, 0x110000)

QUESTION KERN-3D. True or false: Given this implementation, WeensyOS processes 2 and 3 could work together to obtain write access to any memory, without crashing or modifying kernel code or data. If true, give an example of calls that would obtain access to a page mapped at address 0x110000 in process 5.

The best answer here is false. Processes are able to gain access to any page mapped in one of their own page tables, but it’s not clear whether 5’s 0x110000 is mapped in either of the current processes’ page tables. Now, 2 and 3 could first read the processes array (via share/attach) to find the physical address of 5’s page table; then, if 2 and 3 are in luck and the page table itself is mapped in their page tables, they could read that page table to find the physical address of 0x110000; and then, if 2 and 3 are in luck again, map that page at a virtual address accessible in one of their page tables (which would differ from 0x110000). But that might not work.

QUESTION KERN-3E. True or false: Given this implementation, WeensyOS child processes 2 and 3 could work together to modify the code run by their shared parent, process 1, without crashing or modifying kernel code or data. If true, give an example of calls that would obtain write access to process 1’s code, which is mapped at address PROC_START_ADDR.

True; since process code is shared after step 6, the children can map their own code read/write, and this is the same code as the parent’s.

#2: share(3, PROC_START_ADDR)
#3: attach(2, PROC_START_ADDR, PROC_START_ADDR)

QUESTION KERN-3F. Every “true” answer to the preceding questions is a bug in WeensyOS’s process isolation. Fix these bugs. Write code snippets that address these problems, and say where they go in the WeensyOS code (for instance, you could refer to bracketed letters to place your snippets); or for partial credit describe what your code should do.

Here’s one possibility.

Prevent share from sharing an invalid address:

[A] if ((addr & 0xFFF) || addr < PROC_START_ADDR)
        return -1;

NB don’t need to check addr < MEMSIZE_VIRTUAL as long as we check the return value from virtual_memory_lookup below (but that doesn’t hurt).

Prevent attach from accessing an invalid process or mapping at an invalid address:

[C] if (p >= NPROC || (local_addr & 0xFFF) || local_addr < PROC_START_ADDR || local_addr >= MEMSIZE_VIRTUAL)
        return -1;

We do need to check MEMSIZE_VIRTUAL here.

Check the mapping at remote_addr before installing it:

[E] if (!(vam.perm & PTE_U))
        return -1;

In virtual_memory_map, use vam.perm instead of PTE_U|PTE_P|PTE_W.

For greatest justice we would also fix a potential memory leak caused by attaching over an address that already had a page, but this isn’t necessary.

KERN-4. Teensy OS VM System

The folks at Teensy Computers, Inc, need your help with their VM system. The hardware team that developed the VM system abruptly left and the folks remaining aren't quite sure how VM works. I volunteered you to help them.

The Teensy machine has a 16-bit virtual address space with 4 KB pages. The Teensy hardware specifies a single-level page table. Each entry in the page table is 16-bits. Eight of those bits are reserved for the physical page number and 8 of the bits are reserved for flag values. Sadly, the hardware designers did not document what the bits do!

QUESTION KERN-4A. How many pages are in the Teensy virtual address space?

16 (2^4)

QUESTION KERN-4B. How many bits comprise a physical address?

20 (8 bits of physical page number + 12 bits of page offset)

QUESTION KERN-4C. Is the physical address space larger or smaller than the virtual address space?

Larger!

QUESTION KERN-4D. Write, in hex, a PAGE_OFFSET_MASK (the value that when anded with an address returns the offset of the address on a page).

0xFFF

QUESTION KERN-4E. Write a C expression that takes a virtual address, in the variable vaddr, and returns the virtual page number.

(vaddr >> 12) or ((vaddr & 0xF000) >> 12)

You are now going to work with the Teensy engineers to figure out what those other bits in the page table entries mean! Fortunately, they have some engineering notes from the hardware team—they need your help in making sense of them. Each letter below has the contents of a note, state what you can conclude from that note about the lower 8 bits of the page table entries.

QUESTION KERN-4F. “Robin, I ran 8 tests using a kernel that did nothing other than loop infinitely -- for each test I set a different bit in all the PTEs of the page table. All of them ended up in the exception handler except for the one where I set bit 4. Any idea what this means?”

Bit 4 is the “present/valid bit”, the equivalent of x86 PTE_P.

QUESTION KERN-4G. “Lynn, I'm writing a memory test that iterates over all of memory making sure that I can read back the same pattern I write into memory. If I don't set bit 7 of the page table entries to 1, I get permission faults. Do you know what might be happening?”

Bit 7 is the “writable bit”, the equivalent of x86 PTE_W.

QUESTION KERN-4H. “Pat, I almost have user level processes running! It seems that the user processes take permission faults unless I have both bit 4 and bit 3 set. Do you know why?”

Bit 3 is the “user/unprivileged bit”, the equivalent of x86 PTE_U.
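Putting the three notes together, the deduced Teensy PTE flags could be written as C constants (a sketch; the names are ours, the bit positions come from the notes):

#define TEENSY_PTE_P 0x10    // bit 4: present/valid (question 4F)
#define TEENSY_PTE_W 0x80    // bit 7: writable (question 4G)
#define TEENSY_PTE_U 0x08    // bit 3: user-accessible (question 4H)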

KERN-5. Teensy OS Page Tables

The Teensy engineers are well on their way now, but they do have a few bugs and they need your help debugging the VM system. They hand you the following page table, using the notation we used for Assignment 6 for permissions, and need your help specifying correct behavior for the operations that follow.

Index   Physical Page Number   Permissions
0       0x00                   PTE_U
1       0x01                   PTE_P
2       0x02                   PTE_P PTE_W
3       0x03                   PTE_P PTE_U PTE_W
4       0xFF                   PTE_U PTE_W
5       0xFE                   PTE_U
6       0x80                   PTE_W
7       0x92                   PTE_P PTE_W PTE_U
8       0xAB                   PTE_P PTE_W PTE_U
9       0x09                   PTE_P PTE_U
10      0xFE                   PTE_P PTE_U
11      0x00                   PTE_W
12      0x11                   PTE_U

Rest of PTEs follow and are all invalid

For each problem below, write either the physical address of the given virtual address or identify what fault would be produced. The fault types should be one of:

  1. Invalid page access (there is no mapping for the requested page)
  2. Privilege violation (user level process trying to access a supervisor page)
  3. Permission violation (attempt to write a read-only page)

QUESTION KERN-5A. The kernel dereferences a NULL pointer

Invalid page access

QUESTION KERN-5B. A user process dereferences a NULL pointer

Invalid page access

QUESTION KERN-5C. The kernel writes to the address 0x8432

0xAB432

QUESTION KERN-5D. A user process writes to the address 0xB123

Invalid page access (when both PTE_P and PTE_U are missing, it's PTE_P that counts)

QUESTION KERN-5E. The kernel reads from the address 0x9876

0x09876

QUESTION KERN-5F. A user process reads from the address 0x7654

0x92654

QUESTION KERN-5G. A user process writes to the address 0xABCD

Permission violation

QUESTION KERN-5H. A user process writes to the address 0x2321

Privilege violation

KERN-6. Virtual Memory

You may recall that Professor Seltzer loves inventing strange and wonderful virtual memory systems—she’s at it again! The Tom and Ginny (TAG) processor has 16-bit virtual addresses and 256-byte pages. Virtual memory translation is provided via two-level page tables as shown in the figure below.

[Figure kern6-va.png: TAG virtual address layout (4-bit level-1 index, 4-bit level-2 index, 8-bit page offset).]

QUESTION KERN-6A. How many entries are in an L1 page table?

16

QUESTION KERN-6B. How many entries are in an L2 page table?

16

QUESTION KERN-6C. If each page table entry occupies 2 bytes of memory, how large (in bytes) is a single page table?

32 (= 2 * 16). Since the handout question said each PTE took 4B, 64 (= 4 * 16) is full credit. 256 also makes some sense, if you assume each page table must be coincident with a page—every page is 256B.

QUESTION KERN-6D. What is the maximum number of L1 page tables that a process can have?

1

QUESTION KERN-6E. What is the maximum number of L2 page tables that a process can have?

16

The Figure below shows how the PTEs are organized.

[Figure kern6-pte.png: TAG page table entry layout (12-bit physical page number plus 4 low-order flag bits).]

QUESTION KERN-6F. Given the number of bits allocated to the physical page number in the PTE, how much physical memory can the TAG processor support?

12 bits of page number plus 8 bits of page offset is 20 bits, which is 1 MB.

Finally, you’ll actually perform virtual address translation in software. We will define a TAG page table entry as follows:

typedef unsigned short tag_pageentry;

QUESTION KERN-6G. Write a function

unsigned virtual_to_physical(tag_pageentry* pagetable, unsigned vaddr)

that takes as arguments a page table pointer, pagetable, and a virtual address, vaddr, and returns a physical address if a valid mapping exists and an invalid physical address if no valid mapping exists. Comment your code to explain each step that you want the function to take.

// Assumed sentinel meaning "no valid mapping"; the handout presumably
// defined it, so the exact value here is our choice.
#define INVALID_PHYSADDR ((unsigned) -1)
#define PTE_SHIFT       4
#define PT_NDX_MASK     0xF
#define PAGE_SHIFT      8
#define PAGE_MASK       0xFF
#define L1_SHIFT        12
#define PTE_U   2
#define PTE_W   4
#define PTE_P   8
typedef unsigned short *tag_pagetable;
typedef unsigned short pte_t;
unsigned virtual_to_physical(tag_pagetable pagetable, unsigned vaddr) {
    // grab the PTE for the L2 page table
    pte_t pte = pagetable[(vaddr >> L1_SHIFT) & PT_NDX_MASK];
    if ((pte & PTE_P) == 0) {
        return INVALID_PHYSADDR;
    }
    // Calculate L2 page table
    unsigned l2_pt = (unsigned) (pte >> PTE_SHIFT);  // lose permission bits
    l2_pt <<= PAGE_SHIFT;                   // Put page offset in
    tag_pagetable l2_pagetable = (tag_pagetable) l2_pt;
    // Now grab pte from L2 page table
    pte = l2_pagetable[(vaddr >> PAGE_SHIFT) & PT_NDX_MASK];
    if ((pte & PTE_P) == 0) {
        return INVALID_PHYSADDR;
    }
    unsigned physaddr = (unsigned) (pte >> PTE_SHIFT); // lose permissions
    physaddr <<= PAGE_SHIFT;
    physaddr |= vaddr & PAGE_MASK;
    return physaddr;
}

KERN-7. Cost expressions

In the following questions, you will reason about the abstract costs of various operations, using the following tables of constants.

Table of Basic Costs

S System call overhead (i.e., entering and exiting the kernel)
F Page fault cost (i.e., entering and exiting the kernel)
P Cost of allocating a new physical page
M Cost of installing a new page mapping
B Cost of copying a byte

Table of Sizes

nk Number of memory pages allocated to the kernel
Per-process sizes (defined for each process p)
np Number of memory pages allocated to p
rp Number of read-only memory pages allocated to p
wp = np − rp Number of writable memory pages allocated to p
mp Number of memory pages actually modified by p after the previous fork()

One of our tiny operating systems from class (OS02) included a program that called a recursive function. When the recursive function’s argument grew large enough, the stack pointer moved beyond the memory actually allocated for the stack, and the program crashed.

QUESTION KERN-7A. In our first solution for this problem, the process called the sys_page_alloc(void *addr) system call, which allocated and mapped a single new page at address addr (the new stack page). Write an expression for the cost of this sys_page_alloc() system call in terms of the constants above.

S + P + M

QUESTION KERN-7B. Our second solution for this problem changed the operating system’s page fault handler. When a fault occurred in a process’s stack region, the operating system allocated a new page to cover the corresponding address and restarted the process. Write an expression for the cost of such a fault in terms of the constants above.

F + P + M

QUESTION KERN-7C. Design a revised version of sys_page_alloc that supports batching. Give its signature and describe its behavior.

Example: sys_page_alloc(void *addr, int npages)

QUESTION KERN-7D. Write an expression for the cost of a call to your batching allocation API.

Can vary; for this example, S + npages*(P + M)

In the remaining questions, a process p calls fork(), which creates a child process, c.

Assume that the base cost of performing a fork() system call is Φ. This cost includes the fork() system call overhead (S), the overhead of allocating a new process, the overhead of allocating a new page directory with kernel mappings, and the overhead of copying registers. But it does not include overhead from allocating, copying, or mapping other memory.

QUESTION KERN-7E. Consider the following implementations of fork():

A. Naive fork: Copy all process memory (WeensyOS, Step 5).
B. Eager fork: Copy all writable process memory; share read-only process memory, such as code (WeensyOS, Step 6).
C. Copy-on-write fork: initially share all memory as read-only. Create writable copies later, on demand, in response to write faults (WeensyOS extra credit).

Which expression best represents the total cost of the fork() system call in process p, for each of these fork implementations? Only consider the system call itself, not later copy-on-write faults.

(Note: Per-process variables, such as n, are defined for each process. So, for example, np is the number of pages allocated to the parent process p, and nc is the number of pages allocated to the child process c.)

  1. Φ
  2. Φ + np × M
  3. Φ + (np + wp) × M
  4. Φ + np × 2^12 × (B + F)
  5. Φ + np × (2^12 B + P + M)
  6. Φ + np × (P + M)
  7. Φ + wp × (2^12 B + P + M)
  8. Φ + np × (2^12 B + P + M) − rp × (2^12 B + P)
  9. Φ + np × M + mc × (P + F)
  10. Φ + np × M + mc × (2^12 B + F + P)
  11. Φ + np × M + (mp+mc) × (P + F)
  12. Φ + np × M + (mp+mc) × (2^12 B + F + P)

A: #5, B: #8 (good partial credit for #7), C: #2

QUESTION KERN-7F. When would copy-on-write fork be more efficient than eager fork (meaning that the sum of all fork-related overheads, including faults for pages that were copied on write, would be less for copy-on-write fork than eager fork)? Circle the best answer.

  1. When np < nk.
  2. When wp × F < wp × (M + P).
  3. When mc × (F + M + P) < wp × (M + P).
  4. When (mp+mc) × (F + M + P + 2^12 B) < wp × (P + 2^12 B).
  5. When (mp+mc) × (F + P + 2^12 B) < wp × (P + M + 2^12 B).
  6. When mp < mc.
  7. None of the above.

#4

SH-1. Processes

This question builds versions of the existing system calls based on new abstractions. Here are three system calls that define a new abstraction called a rendezvous.

int newrendezvous(void) Returns a rendezvous ID that hasn’t been used yet.

int rendezvous(int rid, int data) Blocks the calling process P1 until some other process P2 calls rendezvous() with the same rid (rendezvous ID). Then, both of the system calls return, but P1’s system call returns P2’s data and vice versa. Thus, the two processes swap their data. Rendezvous acts pairwise; if three processes call rendezvous, then two of them will swap values and the third will block, waiting for a fourth.

void freezerendezvous(int rid, int freezedata) Freezes the rendezvous rid. All future calls to rendezvous(rid, data) will immediately return freezedata.

Here's an example. The two columns represent two processes. Assume they are the only processes using rendezvous ID 0.

Process A                                  Process B
int result = rendezvous(0, 5);             printf("About to rendezvous\n");
                                           int result = rendezvous(0, 600);
          /* The processes swap data; both become runnable */
printf("Process A got %d\n", result);      printf("Process B got %d\n", result);

This code will print

About to rendezvous
Process B got 5
Process A got 600

(the last 2 lines might appear in either order).

QUESTION SH-1A. How might you implement pipes in terms of rendezvous? Try to figure out analogues for the pipe(), close(), read(), and write() system calls (perhaps with different signatures), but only worry about reading and writing 1 character at a time.

Here’s one mapping.

  • pipe(): newrendezvous(). We use a rendezvous ID as the equivalent of a pipe file descriptor.
  • write(p, &ch, 1): To write a single character ch to the “pipe” p (that is, the rendezvous with ID p), call rendezvous(p, ch).
  • read(p, &ch, 1): To read a single character ch from the “pipe” p, call ch = rendezvous(p, -1).
  • close(p): To close the “pipe” p, call freezerendezvous(p, -1). Now all future read and write calls will return -1.

Most mappings will have these features.
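Here is that mapping as a C sketch, assuming the three rendezvous system calls declared above (the wrapper names are ours):

// One-character "pipe" operations built on rendezvous.
int rpipe(void) {
    return newrendezvous();
}
int rwrite(int p, char ch) {
    return rendezvous(p, ch);   // blocks until a reader swaps with us
}
int rread(int p) {
    return rendezvous(p, -1);   // returns the writer's character, or -1 once frozen
}
void rclose(int p) {
    freezerendezvous(p, -1);    // all later rread/rwrite calls return -1
}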

QUESTION SH-1B. Can a rendezvous-pipe support all pipe features?

No. For example, a rendezvous-pipe doesn’t deliver a signal when a process tries to write to a closed pipe. Since the rendezvous-pipe doesn’t distinguish between read and write ends, and since rendezvous aren’t reference-counted like file descriptors, if a “writer” process exits without closing the rendezvous-pipe, a reader won’t get EOF when they read—it will instead block indefinitely. Unlike pipes, which like all file descriptors are protected from access by unrelated processes, rendezvous aren’t protected; anyone who can guess the rendezvous ID can use the rendezvous. Etc.

SH-2. Process management

Here’s the skeleton of a shell function implementing a simple two-command pipeline, such as “cmd1 | cmd2”.

void simple_pipe(const char* cmd1, char* const* argv1, const char* cmd2, char* const* argv2) {
    int pipefd[2], r, status;

    [A]

    pid_t child1 = fork();
    if (child1 == 0) {

        [B]

        execvp(cmd1, argv1);
    }
    assert(child1 > 0);

    [C]

    pid_t child2 = fork();
    if (child2 == 0) {

        [D]

        execvp(cmd2, argv2);
    }
    assert(child2 > 0);

    [E]

}

And here is a grab bag of system calls.

[1] close(pipefd[0]);
[2] close(pipefd[1]);
[3] dup2(pipefd[0], STDIN_FILENO);
[4] dup2(pipefd[0], STDOUT_FILENO);
[5] dup2(pipefd[1], STDIN_FILENO);
[6] dup2(pipefd[1], STDOUT_FILENO);
[7] pipe(pipefd);
[8] r = waitpid(child1, &status, 0);
[9] r = waitpid(child2, &status, 0);

Your task is to assign system call IDs, such as “1”, to slots, such as “A”, to achieve several behaviors, including a correct pipeline and several incorrect pipelines. For each question:

QUESTION SH-2A. Implement a correct foreground pipeline.

A

B (child1)

C

D (child2)

E

7

6, 1, 2
or 6, 2, 1
or 1, 6, 2

3, 1, 2
or 3, 2, 1
or 2, 3, 1

1, 2, 8, 9
or 2, 1, 9, 8, etc.

OR

7

6, 1, 2
or 6, 2, 1
or 1, 6, 2

2

3, 1

1, 8, 9
or 1, 9, 8
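Spelled out, the first assignment corresponds to this filled-in skeleton (a sketch keeping the original’s minimal error handling):

#include <unistd.h>
#include <assert.h>
#include <sys/wait.h>

void simple_pipe(const char* cmd1, char* const* argv1,
                 const char* cmd2, char* const* argv2) {
    int pipefd[2], r, status;
    r = pipe(pipefd);                    // [A]: 7
    assert(r == 0);

    pid_t child1 = fork();
    if (child1 == 0) {
        dup2(pipefd[1], STDOUT_FILENO);  // [B]: 6, 1, 2
        close(pipefd[0]);
        close(pipefd[1]);
        execvp(cmd1, argv1);
    }
    assert(child1 > 0);

    // [C]: nothing in this variant

    pid_t child2 = fork();
    if (child2 == 0) {
        dup2(pipefd[0], STDIN_FILENO);   // [D]: 3, 1, 2
        close(pipefd[0]);
        close(pipefd[1]);
        execvp(cmd2, argv2);
    }
    assert(child2 > 0);

    close(pipefd[0]);                    // [E]: 1, 2, 8, 9
    close(pipefd[1]);
    r = waitpid(child1, &status, 0);
    r = waitpid(child2, &status, 0);
}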

QUESTION SH-2B. Implement a pipeline so that, given arguments corresponding to “echo foo | wc -c”, the wc process reads “foo” from its standard input but does not exit thereafter. For partial credit describe in words how this might happen.

Anything that doesn’t close the pipe’s write end will do it. Below we leave both ends of the pipe open in the shell. We could also enter just “3” in slot D.

A:           7
B (child1):  6, 1, 2
C:           nothing
D (child2):  3, 1, 2
E:           8, 9

QUESTION SH-2C. Implement a pipeline so that, given arguments corresponding to “echo foo | wc -c”, “foo” is printed to the shell’s standard output and the wc process prints “0”. (In a correctly implemented pipeline, “wc” would print 4, which is the number of characters in “foo\n”.) For partial credit describe in words how this might happen.

Anything that doesn’t redirect the left-hand side’s standard output will do it. It is important that the read end of the pipe be properly redirected, or wc would block reading from the shell’s standard input.

A:           7
B (child1):  1, 2 (anything without 6)
C:           nothing
D (child2):  3, 1, 2
E:           1, 2, 8, 9

QUESTION SH-2D. Implement a pipeline that appears to work correctly on “echo foo | wc -c”, but always blocks forever if the left-hand command outputs more than 65536 characters. For partial credit describe in words how this might happen.

This happens when we execute the two sides of the pipe in series: first the left-hand side, then the right-hand side. Since the pipe contains 64KiB of buffering, this will often appear to work.

A:           7
B (child1):  6, 1, 2
C:           8
D (child2):  3, 1, 2
E:           1, 2, 9

QUESTION SH-2E. Implement a pipeline so that, given arguments corresponding to “echo foo | wc -c”, both echo and wc report a “Bad file descriptor” error. (This error, which corresponds to EBADF, is returned when a file descriptor is not valid or does not support the requested operation.) For partial credit describe in words how this might happen.

Given these system calls, the only way to make this happen is to redirect the wrong ends of the pipe into stdin/stdout.

A:           7
B (child1):  4, 1, 2
C:           nothing
D (child2):  5, 1, 2
E:           1, 2, 8, 9

SH-3. Processes

Consider the two programs shown below.

// Program 1
#include <stdio.h>
#include <unistd.h>
int
main(void)
{
    printf("PID %d running prog1\n", getpid());
}
// Program 2
#include <stdio.h>
#include <unistd.h>
int
main(void)
{
    char *argv[2];
    argv[0] = "prog1";
    argv[1] = NULL;
    printf("PID %d running prog2\n", getpid());
    int r = execv("./prog1", argv);
    printf("PID %d exiting from prog2\n", getpid());
}

QUESTION SH-3A. How many different PIDs will print out if you run Program 2?

1. exec doesn’t change the process’s PID.

QUESTION SH-3B. How many lines of output will you see?

2: “PID xxx running prog2” and “PID xxx running prog1”

Now, let's assume that we change Program 2 to the following:

// Program 2B
#include <stdio.h>
#include <unistd.h>
int
main(void)
{
    char* argv[2];
    argv[0] = "prog1";
    argv[1] = NULL;
    printf("PID %d running prog2\n", getpid());
    pid_t p = fork();
    if (p == 0) {
        int r = execv("./prog1", argv);
    } else {
        printf("PID %d exiting from prog2\n", getpid());
    }
}

QUESTION SH-3C. How many different PIDs will print out if you run Program 2B?

2, one for the parent and one for the child.

QUESTION SH-3D. How many lines of output will you see?

3: “PID xxx running prog2”, “PID yyy running prog1”, and “PID xxx exiting from prog2”.

Finally, consider this version of Program 2.

// Program 2C
#include <stdio.h>
#include <unistd.h>
int
main(void)
{
    char *argv[2];
    argv[0] = "prog1";
    argv[1] = NULL;
    printf("PID %d running prog2\n", getpid());
    pid_t p = fork();
    pid_t q = fork();
    if (p == 0 || q == 0) {
        int r = execv("./prog1", argv);
    } else {
        printf("PID %d exiting from prog2\n", getpid());
    }
}

QUESTION SH-3E. How many different PIDs will print out if you run Program 2C?

4:

  1. The initial ./prog2 prints its PID.
  2. The ./prog2 will fork twice, creating a p-child and a q-child.
  3. The p-child forks once more, creating a p/q-child.
  4. All three children exec ./prog1, which prints their PIDs.

QUESTION SH-3F. How many lines of output will you see?

5

SH-4. Be a CS61 TF!

You are a CS61 teaching fellow. A student working on A4 is having difficulty getting pipes working. S/he comes to you for assistance. The function below is intended to traverse a linked list of commands, fork/exec the indicated processes, and hook up the pipes between commands correctly. The student has commented it reasonably, but is quite confused about how to finish writing the code. Can you help? Figure out what code to add at points A, B, and C.

#include "sh61.h"
typedef struct command command;
struct command {
    command *next; // Next in sequence of commands
    int argc;      // number of arguments
    int ispipe;    // pipe symbol follows this command
    char** argv;   // arguments, terminated by NULL
    pid_t pid;     // pid running this command
};
void
do_pipes(command *c)
{
    pid_t newpid;
    int havepipe = 0;   // We had a pipe on the previous command
    int lastpipe[2] = {-1, -1};
    int curpipe[2];
    do {
        if (c->ispipe) {
            int r = pipe(curpipe);
            assert(r == 0);
        }
        newpid = fork();
        assert(newpid >= 0);
        if (newpid == 0) {
            if (havepipe) {
                // There was a pipe on the last command; It's stored
                // in lastpipe; I need to hook it up to this process???
                // **** PART A ****
            }
            if (c->ispipe) {
                // The current command is a pipe -- how do I hook it up???
                // **** PART B ****
            }
            execvp(c->argv[0], c->argv);
            fprintf(stderr, "Exec failed\n");
            _exit(1);
        }
        // I bet there is some cleanup I have to do here!?
        // **** PART C ****
        // Set up for the next command
        havepipe = c->ispipe;
        if (c->ispipe) {
            lastpipe[0] = curpipe[0];
            lastpipe[1] = curpipe[1];
        }
        c->pid = newpid;
        c = c->next;
    } while (newpid != -1 && havepipe);
}

QUESTION SH-4A. What should go in the Part A space above, if anything?

close(lastpipe[1]);
dup2(lastpipe[0], STDIN_FILENO);
close(lastpipe[0]);

QUESTION SH-4B. What should go in the Part B space above, if anything?

close(curpipe[0]);
dup2(curpipe[1], STDOUT_FILENO);
close(curpipe[1]);

QUESTION SH-4C. What should go in the Part C space above, if anything?

if (havepipe) {
    close(lastpipe[0]);
    close(lastpipe[1]);
}

SH-5. Spork

Patty Posix has an idea for a new system call, spork. Her system call combines fork, file descriptor manipulations, and execvp. It’s pretty cool:

typedef struct {
    int type; // equals SPORK_OPEN, SPORK_CLOSE, or SPORK_DUP2
    int fd;
    int old_fd;            // SPORK_DUP2 only
    const char* filename;  // SPORK_OPEN only
    int flags;             // SPORK_OPEN only
    mode_t mode;           // SPORK_OPEN only
} spork_file_action_t;
pid_t spork(const char* file, const spork_file_action_t* file_actions, int n_file_actions, char* argv[]);

Here’s how spork works.

  1. First, spork forks a child process.
  2. The child process loops over the file_actions array (there are n_file_actions elements) and performs each file action in turn. A file action fa means different things depending on its type. Specifically:
    • fa->type == SPORK_OPEN: The child process opens the file named fa->filename with flags fa->flags and optional mode fa->mode, as if by open(fa->filename, fa->flags, fa->mode). The opened file descriptor is given number fa->fd. (Note that this requires multiple steps, since the file must be first opened and then moved to fa->fd.)
    • fa->type == SPORK_CLOSE: The child process closes file descriptor fa->fd.
    • fa->type == SPORK_DUP2: The child process makes fa->fd a duplicate of fa->old_fd.
  3. Finally, the child process executes the given file with argument list argv.
  4. If all these steps succeed, then spork returns the child process ID. If any of the steps fails, then either spork returns –1 and creates no child, or the child process exits with status 127. In particular, if a file action fails, then the child process exits with status 127 before calling execvp.

This function uses spork to print the number of words in a file to standard output.

void print_word_count(const char* file) {
    spork_file_action_t file_actions[1];
    file_actions[0].type = SPORK_OPEN;
    file_actions[0].fd = STDIN_FILENO;
    file_actions[0].filename = file;
    file_actions[0].flags = O_RDONLY;
    char* argv[2] = {"wc", NULL};
    pid_t p = spork("wc", file_actions, 1, argv);
    assert(p >= 0);
    waitpid(p, NULL, 0);
}

QUESTION SH-5A. Use spork to implement the following function.

// Create a pipeline like "argv1 | argv2".
// The pipeline consists of two child processes, one running the command with argument
// list argv1 and one running the command with argument list argv2. The standard
// output of argv1 is piped to the standard input of argv2.
// Return the PID of the argv2 process or -1 on failure.
pid_t make_pipeline(char* argv1[], char* argv2[]);
pid_t make_pipeline(char* argv1[], char* argv2[]) {
    int pipefd[2];
    if (pipe(pipefd) < 0)
        return -1;
    spork_file_action_t fact[3];
    fact[0].type = SPORK_DUP2;
    fact[0].fd = STDOUT_FILENO;
    fact[0].old_fd = pipefd[1];
    fact[1].type = SPORK_CLOSE;
    fact[1].fd = pipefd[0];
    fact[2].type = SPORK_CLOSE;
    fact[2].fd = pipefd[1];
    if (spork(argv1[0], fact, 3, argv1) < 0) {
        // this is optional:
        close(pipefd[0]);
        close(pipefd[1]);
        return -1;
    }
    close(pipefd[1]);
    fact[0].fd = STDIN_FILENO;
    fact[0].old_fd = pipefd[0];
    // fact[1] is already set up
    pid_t p = spork(argv2[0], fact, 2, argv2);
    close(pipefd[0]);
    return p;
}
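For example, assuming a working spork, the pipeline “echo foo | wc -c” could then be run as:

char* argv1[] = {"echo", "foo", NULL};
char* argv2[] = {"wc", "-c", NULL};
pid_t p = make_pipeline(argv1, argv2);
if (p >= 0)
    waitpid(p, NULL, 0);   // wc prints 4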

QUESTION SH-5B. Now, implement spork in terms of system calls you already know. For full credit, make sure you catch all errors. Be careful of SPORK_OPEN.

pid_t spork(const char* file, const spork_file_action_t* fact, int nfact, char* argv[]) {
    pid_t p = fork();
    if (p == 0) {
        for (int i = 0; i < nfact; ++i) {
            switch (fact[i].type) {
            case SPORK_OPEN: {
                int fd = open(fact[i].filename, fact[i].flags, fact[i].mode);
                if (fd < 0) {
                    _exit(127);
                }
                if (fd != fact[i].fd) {
                    if (dup2(fd, fact[i].fd) < 0
                        || close(fd) < 0) {
                        _exit(127);
                    }
                }
                break;
            }
            case SPORK_DUP2:
                if (dup2(fact[i].old_fd, fact[i].fd) < 0) {
                    _exit(127);
                }
                break;
            case SPORK_CLOSE:
                if (close(fact[i].fd) < 0) {
                    _exit(127);
                }
                break;
            default:
                _exit(127);
            }
        }
        execvp(file, argv);
        _exit(127);
    }
    return p;
}

Errors that don't need to be handled: close failures (including the close in the SPORK_OPEN case), and an unknown type (we didn't say what to do in that case).

QUESTION SH-5C. Can fork be implemented in terms of spork? Why or why not?

No, it can’t, because fork makes a copy of the process’s current memory state and such, while spork always calls execvp (which creates a fresh process) or exits.

QUESTION SH-5D. At least one of the file action types is redundant, meaning a spork caller could simulate its behavior using the other action types and possibly some additional system calls. Say which action types are redundant, and briefly describe how they could be simulated.

SPORK_OPEN is redundant: it can be implemented by running open in the parent (before calling spork), creating a new actions list with a SPORK_DUP2/SPORK_CLOSE pair (to dup2 the opened fd into place and then close the opened fd), calling spork with that new actions list, and then closing the opened fds in the parent.

SPORK_CLOSE is also redundant, because you could set the close-on-exec bit in the parent. However, you cannot fully simulate an arbitrary file-actions list using just close-on-exec, because a SPORK_CLOSE can cause a later file action to deterministically fail before the exec.
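For reference, a parent could set the close-on-exec flag with the standard fcntl interface (a sketch):

#include <fcntl.h>

// Mark fd so it is closed automatically when the child calls execvp.
int set_cloexec(int fd) {
    int flags = fcntl(fd, F_GETFD);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}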

SH-6. File descriptor facts

Here are twelve file descriptor-oriented system calls.

accept bind close connect dup2 listen
open pipe read select socket write

QUESTION SH-6A. Which of these system calls may cause the number of open file descriptors to increase? List all that apply.

accept, dup2, open, pipe, socket

QUESTION SH-6B. Which of these system calls may close a file descriptor? List all that apply. Note that some system calls might close a file descriptor even though the total number of open file descriptors remains the same.

close, dup2

QUESTION SH-6C. Which of these system calls can block? List all that apply.

accept, connect, read, select, write.

The following system calls can also block but only in rare situations, such as closing a file descriptor on an NFS file system, or for short times, such as opening a file on disk or binding a Unix-domain socket on a disk file system: bind, open, close.

I don’t believe the others—dup2, listen, pipe, socket—ever meaningfully block.

QUESTION SH-6D. Which system calls can open at least one file descriptor where that file descriptor is suitable for both reading and writing? List all that apply.

open (O_RDWR), accept, socket.

connect is also OK to mention, even though it doesn’t create a new file descriptor.

dup2 is also OK to mention, even though it doesn’t really “open” a file descriptor (though it does unambiguously cause the number of open file descriptors to increase).

QUESTION SH-6E. Which system calls must a network server make in order to receive a connection on a well-known port? List all that apply in order, first to last. Avoid unnecessary calls.

socket, bind, listen, accept
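In outline, for a TCP/IPv4 server (a sketch; port 6169 is a made-up example and error checks are elided):

#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>

int wait_for_client(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);         // make a socket
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(6169);                      // made-up well-known port
    bind(fd, (struct sockaddr*) &addr, sizeof(addr)); // claim the port
    listen(fd, SOMAXCONN);                            // mark it as listening
    return accept(fd, NULL, NULL);                    // block for a connection
}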

QUESTION SH-6F. Which system calls must a network client make in order to (1) connect to a server, (2) send a message, (3) receive a reply, and (4) close the connection? List all that apply in order, first to last. Avoid unnecessary calls.

socket, connect, write, read, close
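And the matching client side (same made-up server address; error checks elided):

#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>

void talk_to_server(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);       // 127.0.0.1
    addr.sin_port = htons(6169);
    connect(fd, (struct sockaddr*) &addr, sizeof(addr)); // (1) connect to server
    write(fd, "hello", 5);                               // (2) send a message
    char buf[128];
    read(fd, buf, sizeof(buf));                          // (3) receive the reply
    close(fd);                                           // (4) close the connection
}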

NET-1. Networking

QUESTION NET-1A. Which of the following system calls should a programmer expect to sometimes block (i.e., to return after significant delay)? Circle all that apply.

  1. socket
  2. read
  3. accept
  4. listen
  5. connect
  6. write
  7. usleep
  8. None of these

#2 read, #3 accept, #5 connect, (#6 write), #7 usleep. (write can definitely block if the reading end hasn’t caught up, but we didn’t emphasize this in class, so no points off.)

QUESTION NET-1B. Below are seven message sequence diagrams demonstrating the operation of a client–server RPC protocol. A request such as “get(X)” means “fetch the value of the object named X”; the response contains that value. Match each network property or programming strategy below with the diagram with which it best corresponds. You will use every diagram once.

  1. Loss
  2. Delay
  3. Reordering
  4. Duplication
  5. Batching
  6. Prefetching
  7. Exponential backoff

#1—B, #2—C, #3—F, #4—D, #5—G, #6—A, #7—E (A—#6, B—#1, C—#2, D—#4, E—#7, F—#3, G—#5)

While G could also represent prefetching, A definitely does not represent batching at the RPC level—each RPC contains one request—so under the rule that each diagram is used once, we must say G is batching and A is prefetching.

[Seven message sequence diagrams, labeled A through G (Finalfig2012_1.gif through Finalfig2012_7.gif), appeared here; they are not reproduced.]

NET-2. Making Network Servers Robust

QUESTION NET-2A. You've built a network server, list the resources that you might run out of if someone launched a DoS attack on you.

At least: file descriptors, memory (stack), processes. There’re a lot of correct answers, though! You can run out of virtual memory or even physical memory.

QUESTION NET-2B. Sam suggests that you just create a separate thread to handle each incoming connection. Why isn't this necessarily going to work?

Each thread requires a stack and it's easy to run out of space for those stacks. Threads also occupy other resources—in Linux, each thread even has a PID!

QUESTION NET-2C. A server sets up a socket to listen on a connection. When a client wants to establish a connection, how does the server manage the multiple clients? In your answer indicate what system call or calls are used and what they do.

You call accept, which creates a new fd that is particular to the connection with a particular client. That is, the server uses a different fd for each client.

QUESTION NET-2D. Which of the following system calls might block?

accept, connect, select

SYNCH-1. Threads

The following code performs a matrix multiplication, c = ab, where a, b, and c are all square matrices of dimension sz. It uses the cache-friendly ikj index ordering.

#define MELT(matrix, sz, row, col) (matrix)[(row)*(sz) + (col)]
void matrix_multiply(double* c, const double* a, const double* b, size_t sz) {
    for (size_t i = 0; i < sz; ++i)
        for (size_t j = 0; j < sz; ++j)
            MELT(c, sz, i, j) = 0;
    for (size_t i = 0; i < sz; ++i)
        for (size_t k = 0; k < sz; ++k)
            for (size_t j = 0; j < sz; ++j)
                MELT(c, sz, i, j) += MELT(a, sz, i, k) * MELT(b, sz, k, j);
}

But matrix multiplication is a naturally parallelizable problem. Here’s some code that uses threads to multiply even faster on a multicore machine. We use sz parallel threads, one per row of c.

     typedef struct matrix_args {
         double* c;
         const double* a;
         const double* b;
         size_t sz;
         size_t i;
     } matrix_args;
 
     void* matrix_multiply_ikj_thread(void* arg) {
 (α)     matrix_args* m = (matrix_args*) arg;
 (β)     for (size_t j = 0; j < m->sz; ++j)
 (γ)         MELT(m->c, m->sz, m->i, j) = 0;
 (δ)     for (size_t k = 0; k < m->sz; ++k)
 (ε)         for (size_t j = 0; j < m->sz; ++j)
 (ζ)             MELT(m->c, m->sz, m->i, j) += MELT(m->a, m->sz, m->i, k) * MELT(m->b, m->sz, k, j);
 (η)     return NULL;
      }
 
     void matrix_multiply_ikj(double* c, const double* a, const double* b, size_t sz) {
 (1)     pthread_t* threads = (pthread_t*) malloc(sizeof(pthread_t) * sz);
 (2)     for (size_t i = 0; i < sz; ++i) {
 (3)         matrix_args m = { c, a, b, sz, i };
 (4)         int r = pthread_create(&threads[i], NULL, &matrix_multiply_ikj_thread, &m);
 (5)         assert(r == 0);
 (6)     }
 (7)     for (size_t i = 0; i < sz; ++i)
 (8)         pthread_join(threads[i], NULL);
 (9)     free(threads);
     }

But when run, this code gives wildly incorrect results.

QUESTION SYNCH-1A. What is wrong? Describe why the problem is a synchronization issue.

The matrix_multiply_ikj function starts many threads, each with its own logically different set of matrix_args. But matrix_multiply_ikj allocates these matrix_args structures on the stack! The m variable is initialized on each loop iteration, but then the variable’s stack space is immediately reused for the next loop iteration. It is very likely that during a thread’s execution, its matrix_args have been replaced by other arguments. This is a synchronization issue because the code would work correctly if matrix_multiply_ikj_thread was called serially, rather than concurrently. The matrix_multiply_ikj_thread function must synchronize with matrix_multiply_ikj’s reuse of m.

QUESTION SYNCH-1B. Write C code showing how the problem could be fixed with changes only to matrix_multiply_ikj. Refer to the numbered lines to indicate replacements and/or insertions. Use one or more additional heap allocations and no additional calls to pthread functions. Free any memory you allocate once it is safe to do so.

There are many solutions, but here’s one: we place each thread’s arguments in different, heap-allocated memory. This solves the synchronization issue by eliminating the memory reuse. New and changed lines are marked with ***.

     void matrix_multiply_ikj(double *c, const double *a, const double *b, size_t sz) {
 (1)     pthread_t *threads = (pthread_t *) malloc(sizeof(pthread_t) * sz);
   *     matrix_args *marr = (matrix_args *) malloc(sizeof(matrix_args) * sz);         
 (2)     for (size_t i = 0; i < sz; ++i) {
 (3)         matrix_args m = { c, a, b, sz, i };
   *         marr[i] = m;
   *         int r = pthread_create(&threads[i], NULL, &matrix_multiply_ikj_thread,
   *                                &marr[i]);
 (5)         assert(r == 0);
 (6)     }
 (7)     for (size_t i = 0; i < sz; ++i)
 (8)         pthread_join(threads[i], NULL);
 (9)     free(threads);
   *     free(marr);
     }

On single-core machines, the kij order performs almost as fast as the ikj order. Here’s a version of the parallel matrix multiplication code that uses kij.

     typedef struct matrix_args_kij {
         double* c;
         const double* a;
         const double* b;
         size_t sz;
         size_t k;
     } matrix_args_kij;
 
     void* matrix_multiply_kij_thread(void* arg) {
 (α)     matrix_args_kij* m = (matrix_args_kij*) arg;
 (β)     for (size_t i = 0; i < m->sz; ++i)
 (γ)         for (size_t j = 0; j < m->sz; ++j)
 (δ)             MELT(m->c, m->sz, i, j) += MELT(m->a, m->sz, i, m->k) * MELT(m->b, m->sz, m->k, j);
 (ε)     return NULL;
      }
 
     void matrix_multiply_kij(double* c, const double* a, const double* b, size_t sz) {
 (1)     pthread_t* threads = (pthread_t*) malloc(sizeof(pthread_t) * sz);
 (2)     for (size_t i = 0; i < sz; ++i)
 (3)         for (size_t j = 0; j < sz; ++j)
 (4)             MELT(c, sz, i, j) = 0;
 (5)     for (size_t k = 0; k < sz; ++k) {
 (6)         matrix_args_kij m = { c, a, b, sz, k };
 (7)         int r = pthread_create(&threads[k], NULL, &matrix_multiply_kij_thread, &m);
 (8)         assert(r == 0);
 (9)     }
(10)     for (size_t k = 0; k < sz; ++k)
(11)         pthread_join(threads[k], NULL);
(12)     free(threads);
     }

This version has the same problem as the previous one, plus another problem. Even after your fix from parts A and B is applied, this version produces incorrect results.

QUESTION SYNCH-1C. What is the new problem? Describe why it is a synchronization issue.

The new problem is that now, different matrix_multiply_kij_thread threads might try to modify the same matrix element at the same time. This is a synchronization issue because concurrent access to a matrix element might cause some updates to get lost: += is a read-modify-write operation, so two threads can both read the old element value and one of their increments vanishes. In the previous version, each matrix_multiply_ikj_thread thread worked on a different matrix row, so no synchronization was required.

QUESTION SYNCH-1D. Write pseudocode or C code that fixes this problem. You should refer to pthread functions. For full credit your solution should have low contention.

The simplest solutions introduce mutual-exclusion locking, so that different threads can’t modify the same matrix element simultaneously. To reduce contention, the locking should be fine-grained—for instance, there could be one lock per matrix element. But other tradeoffs are possible: one lock per matrix element is a lot of locks; you could instead have one lock per matrix row or column, as sketched below.
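
For example, here is what the thread body’s loop might look like with one lock per row (a sketch under the assumption that a pthread_mutex_t array row_locks, one lock per row, has been added to the args structure; row_locks is our name, not part of the original answer):

// Hypothetical per-row locking: sz locks total instead of sz*sz.
for (size_t i = 0; i < m->sz; ++i) {
    pthread_mutex_lock(&m->row_locks[i]);      // serialize all updates to row i
    for (size_t j = 0; j < m->sz; ++j)
        MELT(m->c, m->sz, i, j) += MELT(m->a, m->sz, i, m->k) * MELT(m->b, m->sz, m->k, j);
    pthread_mutex_unlock(&m->row_locks[i]);
}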

Here’s working code, including the fix for the matrix_args reuse problem. (We didn’t require working code.)

     typedef struct matrix_args_kij {
         double *c;
         const double *a;
         const double *b;
  *      pthread_mutex_t *locks;
         size_t sz;
         size_t k;
     } matrix_args_kij;
 
     void *matrix_multiply_kij_thread(void *arg) {
         matrix_args_kij *m = (matrix_args_kij *) arg;
         for (size_t i = 0; i < m->sz; ++i)
             for (size_t j = 0; j < m->sz; ++j) {
  *              pthread_mutex_lock(&m->locks[i * m->sz + j]);
                 MELT(m->c, m->sz, i, j) += MELT(m->a, m->sz, i, m->k) * MELT(m->b, m->sz, m->k, j);
  *              pthread_mutex_unlock(&m->locks[i * m->sz + j]);
             }
         return NULL;
      }
 
     void matrix_multiply_kij(double *c, const double *a, const double *b, size_t sz) {
         pthread_t *threads = (pthread_t *) malloc(sizeof(pthread_t) * sz);
  *      matrix_args_kij *marr = (matrix_args_kij *) malloc(sizeof(matrix_args_kij) * sz);
  *      pthread_mutex_t *locks = (pthread_mutex_t *) malloc(sizeof(pthread_mutex_t) * sz * sz);
         for (size_t i = 0; i < sz; ++i)
             for (size_t j = 0; j < sz; ++j) {
                 MELT(c, sz, i, j) = 0;
  *              pthread_mutex_init(&locks[i * sz + j], NULL);
             }
         for (size_t k = 0; k < sz; ++k) {
  *          matrix_args_kij m = { c, a, b, locks, sz, k };
  *          marr[k] = m;
  *          int r = pthread_create(&threads[k], NULL, &matrix_multiply_kij_thread,
                                    &marr[k]);
             assert(r == 0);
         }
         for (size_t k = 0; k < sz; ++k)
             pthread_join(threads[k], NULL);
         free(threads);
  *      free(marr);
  *      free(locks);
     }

SYNCH-2. Synchronization and concurrency

Most synchronization objects have at least two operations. Mutual-exclusion locks support lock and unlock; condition variables support wait and signal; and from section notes you may remember the semaphore synchronization object, one of the earliest synchronization objects ever invented, which supports P and V.

In this problem, you’ll work with a synchronization object with only one operation, which we call a hemiphore. Hemiphores behave like the following; it is very important that you understand this pseudocode.

typedef struct hemiphore {
    int value;
} hemiphore;
// Initialize the hemiphore to value 0.
void hemiphore_init(hemiphore* h) {
    h->value = 0;
}
// Block until the hemiphore has value >= bound, then atomically increment its value by delta.
void H(hemiphore* h, int bound, int delta) {
    // This is pseudocode; a real hemiphore implementation would block, not spin, and would
    // ensure that the test and the increment happen in one atomic step.
    while (h->value < bound) {
        sched_yield();
    }
    h->value += delta;
}

Once a hemiphore is initialized with hemiphore_init, application code should access the hemiphore only through the H operation.

QUESTION SYNCH-2A. Use hemiphores to implement mutual-exclusion locks. Fill out the code below. (You may not need to fill in every empty slot. You may use standard C constants; for example, INT_MIN is the smallest possible value for a variable of type int, which on a 32-bit machine is −2147483648.)

typedef struct mutex {
    hemiphore h;
} mutex;
void mutex_init(mutex* m) {
    hemiphore_init(&m->h);
}
void mutex_lock(mutex* m) {

    H(&m->h, 0, -1);

}
void mutex_unlock(mutex* m) {

    H(&m->h, -1, 1);

}
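
A quick gloss on why this works (our annotation, not part of the original answer):

// Value 0 means unlocked; value -1 means locked.
// mutex_lock:   H(&m->h, 0, -1)  waits until value >= 0 (i.e., unlocked),
//               then atomically sets it to -1, taking the lock.
// mutex_unlock: H(&m->h, -1, 1)  never blocks while the lock is held
//               (-1 >= -1) and restores the value to 0, releasing it.
// A second locker blocks inside H(&m->h, 0, -1), since -1 < 0.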

QUESTION SYNCH-2B. Use hemiphores to implement condition variables. Fill out the code below. You may assume that the implementation of mutex is your hemiphore-based implementation from above (so, for instance, cond_wait may access the hemiphore m->h). See the Hints at the end of the question.

typedef struct condvar {
    mutex m;
    hemiphore h;

    int nwaiting;

} condvar;
void cond_init(condvar* c) {
    mutex_init(&c->m);
    hemiphore_init(&c->h);

    c->nwaiting = 0;

}
void cond_signal(condvar* c) {

    mutex_lock(&c->m);
    if (c->nwaiting > 0) {
        H(&c->h, INT_MIN, 1);
        --c->nwaiting;
    }
    mutex_unlock(&c->m);

}
void cond_wait(condvar* c, mutex* m) {

    mutex_lock(&c->m);
    ++c->nwaiting;
    mutex_unlock(&c->m);
    mutex_unlock(m);
    H(&c->h, 1, -1);
    mutex_lock(m);

}

The nwaiting variable ensures that cond_signal(c) does nothing if no one is waiting. The mutex_unlock(m) must happen before H (which can block); it must happen after mutex_lock(&c->m) (to avoid sleep-wakeup races).

The following code is broken if no one is waiting when signal is called, because it “stores” the signal for later. It works otherwise, though—it even avoids sleep-wakeup races.

typedef struct condvar {
    hemiphore h;
} condvar;
void cond_init(condvar* c) {
    hemiphore_init(&c->h);
}
void cond_signal(condvar* c) {

    H(&c->h, INT_MIN, 1);

}
void cond_wait(condvar* c, mutex* m) {

    mutex_unlock(m);
    H(&c->h, 0, -1);
    mutex_lock(m);

}

Hints. For full credit:

QUESTION SYNCH-2C. Use pthread mutexes and condition variables to implement hemiphores. Fill out the code below. See the hints after the question.

typedef struct hemiphore {
    pthread_mutex_t m;
    int value;
    pthread_cond_t c;
} hemiphore;
void hemiphore_init(hemiphore* h) {
    pthread_mutex_init(&h->m, NULL);
    h->value = 0;
    pthread_cond_init(&h->c, NULL);
}
void H(hemiphore* h, int bound, int delta) {

    pthread_mutex_lock(&h->m);
    while (h->value < bound) {
        pthread_cond_wait(&h->c, &h->m);
    }
    h->value += delta;
    pthread_cond_broadcast(&h->c);
    pthread_mutex_unlock(&h->m);

}

The pthread_mutex_lock calls protect h->value from access conflicts. It is not correct to simply pthread_cond_signal(&h->c), since different waiters might be waiting for different bounds (−1 point). You don’t need to broadcast/signal each time; if delta <= 0 there’s no point.

Hints. The pthread mutex and condition variable operations have the following signatures. You should pass NULL for any attributes arguments. Don’t worry about the pthread_mutex_destroy and pthread_cond_destroy operations, and feel free to abbreviate (e.g. “lock” instead of “pthread_mutex_lock”).

QUESTION SYNCH-2D. Consider the following two threads, which use a shared hemiphore h with initial value 0.

Thread 1                      Thread 2

H(&h, 1000, 1);               while (1) {
printf("Thread 1 done\n");        H(&h, 0, 1);
                                  H(&h, 0, -1);
                              }

Thread 2 will never block, and the hemiphore’s value will alternate between 1 and 0. Thread 1 will never reach the printf, because the hemiphore’s value never reaches 1000. However, in most people’s first implementation of hemiphores using pthread mutexes and condition variables, Thread 1 will not block. Every call to H in Thread 2 will effectively wake up Thread 1. Though Thread 1 will then check the hemiphore’s value and immediately go back to sleep, doing so wastes CPU time.

Design an implementation of hemiphores using pthread mutexes and condition variables that solves this problem. In your revised implementation, Thread 1 above should block forever. For full credit, write C code. For partial credit, write pseudocode or English describing your design.

Hint. One working implementation constructs a linked list of “waiter” objects, where each waiter object is on a different thread’s stack, as initially sketched below. You can use such objects or not as you please.

This is a tough one.

typedef struct hemiphore_waiter {
    struct hemiphore_waiter* next;

    int bound;
    pthread_cond_t c;

} hemiphore_waiter;
typedef struct hemiphore {
    pthread_mutex_t m;
    int value;
    hemiphore_waiter* waiters;
} hemiphore;
void hemiphore_init(hemiphore* h) {
    pthread_mutex_init(&h->m, NULL);
    h->value = 0;
    h->waiters = NULL;
}
void H(hemiphore* h, int bound, int delta) {
    hemiphore_waiter w;

    w.bound = bound;

    pthread_cond_init(&w.c, NULL);             // no points off if missing

    pthread_mutex_lock(&h->m);
    while (h->value < bound) {
        w.next = h->waiters;
        h->waiters = &w;
        pthread_cond_wait(&w.c, &h->m);
    }
    h->value += delta;

    // no points off for linked list messups

    for (hemiphore_waiter** ww = &h->waiters; *ww; ) {
        if (h->value >= (*ww)->bound) {
            pthread_cond_signal(&(*ww)->c);
            *ww = (*ww)->next;
        } else {
            ww = &(*ww)->next;
        }
    }
    pthread_mutex_unlock(&h->m);

}

Here’s a substantial-partial-credit solution that tracks the largest bound anyone is waiting for. (It’s partial credit because a thread waiting for a smaller bound can miss its wakeup: no broadcast happens until the largest recorded bound is satisfied.)

typedef struct hemiphore {
    pthread_mutex_t m;
    int value;

    int max_bound;
    pthread_cond_t c;

} hemiphore;
void hemiphore_init(hemiphore* h) {
    pthread_mutex_init(&h->m, NULL);
    h->value = 0;

    h->max_bound = INT_MIN;
    pthread_cond_init(&h->c, NULL);

}
void H(hemiphore* h, int bound, int delta) {

    pthread_mutex_lock(&h->m);
    while (h->value < bound) {
        if (bound > h->max_bound) {
            h->max_bound = bound;
        }
        pthread_cond_wait(&h->c, &h->m);
    }
    h->value += delta;
    if (h->value >= h->max_bound) {
        h->max_bound = INT_MIN;
        pthread_cond_broadcast(&h->c);
    }
    pthread_mutex_unlock(&h->m);

}

SYNCH-3. Pipes and synchronization

In the following questions, you will implement a mutex using a pipe, and a limited type of pipe using a mutex.

The definitions of the pthread mutex and condition variable operations are as follows.

int pthread_mutex_init(pthread_mutex_t* m, const pthread_mutexattr_t* attr) Create a new mutex with attributes defined by attr. (For this question, attr is ignored.)

int pthread_mutex_lock(pthread_mutex_t* m) Locks m. If the mutex is already locked, the calling thread will block until the mutex becomes available.

int pthread_mutex_unlock(pthread_mutex_t* m) Unlocks m. Calling pthread_mutex_unlock with a mutex that the calling thread does not hold will result in undefined behavior.

int pthread_cond_init(pthread_cond_t* c, const pthread_condattr_t* attr) Create a new condition variable with attributes defined by attr. (For this question, attr is ignored.)

int pthread_cond_signal(pthread_cond_t* c) Unblocks one thread waiting for c.

int pthread_cond_wait(pthread_cond_t* c, pthread_mutex_t* m) Atomically unlocks m and blocks the calling thread on the condition c. When the condition is signaled, the thread locks m and returns. Calling pthread_cond_wait with an unlocked mutex will result in undefined behavior.

The operations return 0 on success. Although errors are possible (for instance, ENOMEM if there’s not enough memory to allocate a new mutex) you may assume that they don’t occur.

QUESTION SYNCH-3A. In this question, you are to implement mutex functionality using a pipe. Fill in the definitions of pipe_mutex_init, pipe_mutex_lock, and pipe_mutex_unlock. You should be able to implement the same functionality as the pthread versions (assuming no other code accesses the pipe).

typedef struct pipe_mutex {
    int fd[2];
} pipe_mutex;
int pipe_mutex_init(pipe_mutex* m) {
    if (pipe(m->fd) < 0) {
        return -1;
    }
    write(m->fd[1], "X", 1);
    return 0;
}
int pipe_mutex_lock(pipe_mutex* m) {
    char buf;
    read(m->fd[0], &buf, 1);
    return 0;
}
int pipe_mutex_unlock(pipe_mutex* m) {
    write(m->fd[1], "X", 1);
    return 0;
}

A while loop around the read would be in some ways even better, but you were allowed to assume no random error returns.
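
To see the pipe mutex in action, here is a hypothetical usage sketch (worker and counter are ours): the single byte in the pipe acts as the lock token, so only one thread at a time can hold it.

#include <pthread.h>

pipe_mutex m;       // initialized once with pipe_mutex_init(&m)
long counter = 0;

void* worker(void* arg) {
    (void) arg;
    for (int i = 0; i < 1000000; ++i) {
        pipe_mutex_lock(&m);     // consume the token byte (blocks if absent)
        ++counter;               // read-modify-write now protected
        pipe_mutex_unlock(&m);   // put the token byte back
    }
    return NULL;
}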

In the next questions, you will help implement pipe functionality using an in-memory buffer and a mutex. This “mutex pipe” will only work between threads of the same process (in contrast to a regular pipe, which also works between processes). An initial implementation of mutex pipes is as follows; you will note that it contains no mutexes.

    typedef struct mutex_pipe {
1.      char buf[BUFSIZ];
2.      size_t head;
3.      size_t sz;
    } mutex_pipe;
    int mutex_pipe_init(mutex_pipe* p) {
6.      p->head = p->sz = 0;
7.      memset(&p->buf[0], 0, sizeof(p->buf));
8.      return 0;
    }
    // Read up to sz bytes from the mutex_pipe into buf and return the number of bytes
    // read. If no bytes are available, wait until at least one byte can be read.
    ssize_t mutex_pipe_read(mutex_pipe* p, char* buf, size_t sz) {
10.     size_t n = 0;
11.     while (n < sz && (p->sz != 0 || n == 0)) {
12.         size_t ncopy = p->sz;
13.         if (ncopy > sizeof(p->buf) - p->head)
14.             ncopy = sizeof(p->buf) - p->head;
15.         if (ncopy > sz - n)
16.             ncopy = sz - n;
17.         memcpy(&buf[n], &p->buf[p->head], ncopy);
18.         n += ncopy;
19.         p->head += ncopy;
20.         p->head = p->head % sizeof(p->buf);
21.         p->sz -= ncopy;
22.     }
23.     return n;
    }
    // Write up to sz bytes from buf into the mutex_pipe and return the number of bytes
    // written. If no space is available, wait until at least one byte can be written.
    ssize_t mutex_pipe_write(mutex_pipe* p, const char* buf, size_t sz) {
30.     size_t n = 0;
31.     while (n < sz && (p->sz != sizeof(p->buf) || n == 0)) {
32.         size_t tail = p->head + p->sz;
33.         tail = tail % sizeof(p->buf);
34.         size_t ncopy = sizeof(p->buf) - p->sz;
35.         if (ncopy > sizeof(p->buf) - tail)
36.             ncopy = sizeof(p->buf) - tail;
37.         if (ncopy > sz - n)
38.             ncopy = sz - n;
39.         memcpy(&p->buf[tail], &buf[n], ncopy);
40.         n += ncopy;
41.         p->sz += ncopy;
42.     }
43.     return n;
    }

The last page of this exam has a copy of that code that you can remove and keep.

NOT A QUESTION. It would be wise to work through an example. For example, assume BUFSIZ == 4, and figure out how the following calls would behave.

mutex_pipe_write(p, "Hi", 2);
mutex_pipe_read(p, buf, 4);
mutex_pipe_write(p, "Test", 4);
mutex_pipe_read(p, buf, 3);

First let’s reason about this code in the absence of threads.

QUESTION SYNCH-3B. Which of the following changes could, if made in isolation, result in undefined behavior when a mutex pipe was used? Circle all that apply.

  1. Eliminating line 6
  2. Eliminating line 7
  3. Eliminating lines 13–14
  4. Eliminating lines 15–16
  5. Eliminating line 18
  6. Eliminating line 19

6, 13–14, and 15–16.

6: Accesses to uninitialized variables cause undefined behavior. 13–14: Could cause accesses off the end of p->buf. 15–16: Could cause accesses off the end of buf.

7: If this is the only change, no problem; the existing code never accesses bytes that were not written by mutex_pipe_write. 18: This causes mutex_pipe_read to spin forever (since n is not increased), but that’s not undefined. 19: This causes mutex_pipe_read to read the same data over and over again (since p->head never advances), but that’s not undefined.

QUESTION SYNCH-3C. Which of the following changes could, if made in isolation, cause a mutex_pipe_read to return incorrect data (that is, the byte sequence produced by read will not equal the byte sequence passed to write)? Circle all that apply.

  1. Eliminating line 33
  2. Eliminating lines 35–36
  3. Eliminating lines 37–38
  4. Eliminating line 39
  5. Eliminating line 40
  6. Eliminating line 41

35–36, 37–38, and 39.

35–36: Copies some of the written data past the end of the buffer. Not only does this cause undefined behavior, the data’s lost to the reader.

37–38: Copies some unwritten data (data past the end of the write buffer) into the pipe.

39: Take this away and nothing gets written to the buffer! But the size still grows, so the reader thinks there’s data there.

33: This is a tricky one. Here’s what happens: no mutex_pipe_write can write data that “wraps around” the buffer. Assume BUFSIZ == 4, p->head == 3, p->sz == 1, and mutex_pipe_write(p, "X", 1) is called. Then tail is set to 4, and lines 35–36 will set ncopy = 0! So mutex_pipe_write will spin forever; but in the meantime mutex_pipe_read will not read anything incorrect.

40: n never advances, so the same portion of the data (the first portion) is written over and over again into the pipe until it fills up. This leaves the pipe in a bad state—containing data that was never written in that order—but then mutex_pipe_write spins forever, so the reader can never read it.

41: Doesn’t advance sz: the buffer always appears empty, so the reader never observes incorrect data.

It should also be considered OK to select line 40.

QUESTION SYNCH-3D. Which of the following changes could, if made in isolation, cause a call to mutex_pipe_write to never return (when a correct implementation would return)? Circle all that apply.

  1. Eliminating line 33
  2. Eliminating lines 35–36
  3. Eliminating lines 37–38
  4. Eliminating line 39
  5. Eliminating line 40
  6. Eliminating line 41

33, 35–36, 37–38, and 40.

33 and 40: covered above. 35–36 and 37–38: undefined behavior can cause anything to happen! Maybe no points off for this, though.

QUESTION SYNCH-3E. Write an invariant for p->sz. An invariant is a statement about the value of p->sz that is always true. Write your invariant in the form of an assertion; for full credit give the most specific true invariant you can. (“p->sz is an integer” is unspecific, but true; “p->sz == 4” is specific, but false.)

assert(  p->sz >= 0 && p->sz <= BUFSIZ  );

But in fact p->sz is a size_t, so p->sz >= 0 is a tautology and assert(p->sz <= BUFSIZ) works too.

QUESTION SYNCH-3F. Write an invariant for p->head. For full credit give the most specific true invariant you can.

assert(  p->head >= 0 && p->head < BUFSIZ  );

Again, assert(p->head < BUFSIZ) is equivalent.

In the remaining questions, you will add synchronization objects and operations to make your mutex pipe work in a multithreaded program. Here is your starting point:

    typedef struct mutex_pipe {
1.      char buf[BUFSIZ];
2.      size_t head;
3.      size_t sz;
4.      pthread_mutex_t m;
    } mutex_pipe;
    int mutex_pipe_init(mutex_pipe* p) {
5.      pthread_mutex_init(&p->m, NULL);
6.      p->head = p->sz = 0;
7.      memset(&p->buf[0], 0, sizeof(p->buf));
8.      return 0;
    }
    (the rest of the code as in the prior questions)

QUESTION SYNCH-3G. Add calls to “lock” (pthread_mutex_lock) and “unlock” (pthread_mutex_unlock) that protect the mutex pipe from race condition bugs. Write one or more snippets of C code and give line numbers after which the snippets should appear. For full credit, your solution must not deadlock—if one thread is reading from a pipe and another thread is writing to the pipe, then both threads must eventually make progress.

After #17 & #39 (or anywhere between #17-#21 and #39-#41):

if (ncopy == 0) {
    pthread_mutex_unlock(&p->m);   // or just "unlock"
    sched_yield();   // optional
    pthread_mutex_lock(&p->m);     // or just "lock"
}

Some people might put a sched_yield() in: nice!

Alternately, you could add the following code after #21 & #41:

if (n == 0) {
    pthread_mutex_unlock(&p->m);   // or just "unlock"
    sched_yield();   // optional
    pthread_mutex_lock(&p->m);     // or just "lock"
}

But this n == 0 test doesn’t work if placed immediately after #17 or #39. It is important that the pipe be in a consistent state before the mutex is released, meaning that all data copied to/from the pipe must be reflected in the pipe data structure. For instance, if we unlocked in read before updating p->head or p->sz, there’s a risk that some pipe data would be read twice. It’s safe to unlock immediately when ncopy == 0 because in that case the pipe data structure remains unchanged.

The full code:

    // Read up to sz bytes from the mutex_pipe into buf and return the number of bytes
    // read. If no bytes are available, wait until at least one byte can be read.
    ssize_t mutex_pipe_read(mutex_pipe* p, char* buf, size_t sz) {
10.     size_t n = 0;
        pthread_mutex_lock(&p->m);
11.     while (n < sz && (p->sz != 0 || n == 0)) {
12.         size_t ncopy = p->sz;
13.         if (ncopy > sizeof(p->buf) - p->head)
14.             ncopy = sizeof(p->buf) - p->head;
15.         if (ncopy > sz - n)
16.             ncopy = sz - n;
17.         memcpy(&buf[n], &p->buf[p->head], ncopy);
            if (ncopy == 0) {
                pthread_mutex_unlock(&p->m);
                sched_yield();
                pthread_mutex_lock(&p->m);
            }
18.         n += ncopy;
19.         p->head += ncopy;
20.         p->head = p->head % sizeof(p->buf);
21.         p->sz -= ncopy;
22.     }
        pthread_mutex_unlock(&p->m);
23.     return n;
    }
    // Write up to sz bytes from buf into the mutex_pipe and return the number of bytes
    // written. If no space is available, wait until at least one byte can be written.
    ssize_t mutex_pipe_write(mutex_pipe* p, const char* buf, size_t sz) {
30.     size_t n = 0;
        pthread_mutex_lock(&p->m);
31.     while (n < sz && (p->sz != sizeof(p->buf) || n == 0)) {
32.         size_t tail = p->head + p->sz;
33.         tail = tail % sizeof(p->buf);
34.         size_t ncopy = sizeof(p->buf) - p->sz;
35.         if (ncopy > sizeof(p->buf) - tail)
36.             ncopy = sizeof(p->buf) - tail;
37.         if (ncopy > sz - n)
38.             ncopy = sz - n;
39.         memcpy(&p->buf[tail], &buf[n], ncopy);
            if (ncopy == 0) {
                pthread_mutex_unlock(&p->m);
                sched_yield();
                pthread_mutex_lock(&p->m);
            }
40.         n += ncopy;
41.         p->sz += ncopy;
42.     }
        pthread_mutex_unlock(&p->m);
43.     return n;
    }

QUESTION SYNCH-3H. Your solution to the last question has poor utilization. For instance, a thread that calls mutex_pipe_read on an empty mutex pipe will spin forever, rather than block. Introduce one or more condition variables so that mutex_pipe_read will block until data is available. Write one or more snippets of C code and give line numbers after which the snippets should appear.

After #17, instead of the code above:

if (ncopy == 0)
    pthread_cond_wait(&p->c, &p->m);

After #40:

if (ncopy != 0)
    pthread_cond_broadcast(&p->c);

(This can go anywhere after ncopy is calculated. Note that this solution also adds a pthread_cond_t c member to struct mutex_pipe, initialized with pthread_cond_init(&p->c, NULL) in mutex_pipe_init.)

The full code:

    // Read up to sz bytes from the mutex_pipe into buf and return the number of bytes
    // read. If no bytes are available, wait until at least one byte can be read.
    ssize_t mutex_pipe_read(mutex_pipe* p, char* buf, size_t sz) {
10.     size_t n = 0;
        pthread_mutex_lock(&p->m);
11.     while (n < sz && (p->sz != 0 || n == 0)) {
12.         size_t ncopy = p->sz;
13.         if (ncopy > sizeof(p->buf) - p->head)
14.             ncopy = sizeof(p->buf) - p->head;
15.         if (ncopy > sz - n)
16.             ncopy = sz - n;
17.         memcpy(&buf[n], &p->buf[p->head], ncopy);
            if (ncopy == 0) {
                pthread_cond_wait(&p->c, &p->m);
            }
18.         n += ncopy;
19.         p->head += ncopy;
20.         p->head = p->head % sizeof(p->buf);
21.         p->sz -= ncopy;
22.     }
        pthread_mutex_unlock(&p->m);
23.     return n;
    }
    // Write up to sz bytes from buf into the mutex_pipe and return the number of bytes
    // written. If no space is available, wait until at least one byte can be written.
    ssize_t mutex_pipe_write(mutex_pipe* p, const char* buf, size_t sz) {
30.     size_t n = 0;
        pthread_mutex_lock(&p->m);
31.     while (n < sz && (p->sz != sizeof(p->buf) || n == 0)) {
32.         size_t tail = p->head + p->sz;
33.         tail = tail % sizeof(p->buf);
34.         size_t ncopy = sizeof(p->buf) - p->sz;
35.         if (ncopy > sizeof(p->buf) - tail)
36.             ncopy = sizeof(p->buf) - tail;
37.         if (ncopy > sz - n)
38.             ncopy = sz - n;
39.         memcpy(&p->buf[tail], &buf[n], ncopy);
            if (ncopy == 0) {
                pthread_mutex_unlock(&p->m);
                sched_yield();
                pthread_mutex_lock(&p->m);
            }
            if (ncopy != 0) {
                pthread_cond_broadcast(&p->c);
            }
40.         n += ncopy;
41.         p->sz += ncopy;
42.     }
        pthread_mutex_unlock(&p->m);
43.     return n;
    }

SYNCH-4. Race conditions

Most operating systems support process priority levels, where the kernel runs higher-priority processes more frequently than lower-priority processes. A hypothetical Unix-like operating system called “Boonix” has two priority levels, normal and batch. A Boonix parent process changes the priority level of one of its children with this system call:

int setbatch(pid_t p) Sets process p to have batch priority. All future children of p will also have batch priority. Returns 0 on success, –1 on error. Errors include ESRCH, if p is not a child of the calling process.

Note that a process cannot change its own batch status.

You’re writing a Boonix shell that can run commands with batch priority. If c->isbatch is nonzero, then c should run with batch priority, as should its children. Your start_command function looks like this:

    pid_t start_command(command* c) {
 1.    c->pid = fork();
 2.    if (c->pid == 0) {
 3.        handle_pipes(c);
 4.        handle_redirections(c);
 5.        (void) execvp(c->argv[0], c->argv);
 6.        // if we get here, execvp failed
 7.        perror("execvp");
 8.        exit(1);
 9.    }
10.    assert(c->pid > 0);
11.    if (c->isbatch)
12.        setbatch(c->pid);
13.    return c->pid;
    }

This shell has two race conditions, one more serious.

QUESTION SYNCH-4A. In some cases, c will change to batch priority after it starts running. Draw a dependency diagram demonstrating this race condition, or briefly describe it.

This happens if the child manages to execvp before the parent calls setbatch.

QUESTION SYNCH-4B. In some cases, c or one of its children could run forever with normal priority. Draw a dependency diagram demonstrating this race condition, or briefly describe it.

This happens if the child execvps, and then forks another child, before the parent calls setbatch. The grandchild will stick at normal priority.

In the remaining questions, you will fix these race conditions in three different ways. The first uses a new system call:

int isbatch() Returns 1 if the calling process has batch priority, 0 if it has normal priority.

QUESTION SYNCH-4C. Use isbatch to prevent both race conditions. Write a snippet of C code and give the line number after which it should appear. You should need one code snippet.

After #2:

while (c->isbatch && !isbatch())
    /* spin */;

QUESTION SYNCH-4D. Use the pipe system call and friends to prevent both race conditions. Write snippets of C code and give the line numbers after which they should appear. You should need several snippets. Make sure you clean up any extraneous file descriptors before running the command or returning from start_command.

A lot of different ways to do this. Here we create the pipe always but use it only if c->isbatch.

Before #1:

int pipefd[2];
pipe(pipefd);

After #3:

if (c->isbatch) {
    char buf;
    read(pipefd[0], &buf, 1);
}
close(pipefd[0]);
close(pipefd[1]);

After #12:

write(pipefd[1], "X", 1);
close(pipefd[0]);
close(pipefd[1]);

This alternate solution relies on end-of-file.

Before #1:

int pipefd[2];
pipe(pipefd);

After #3:

close(pipefd[1]);
if (c->isbatch)
    read(pipefd[0], &pipefd, 1);  // won’t ever read any chars, so this is safe
close(pipefd[0]);

After #12:

close(pipefd[0]);
close(pipefd[1]);

QUESTION SYNCH-4E. Why should the pipe solution be preferred to the isbatch solution? A sentence, or the right single word, will suffice.

Utilization. The pipe solution blocks; the isbatch solution polls (spins).

QUESTION SYNCH-4F. Suggest a change to the setbatch system call’s behavior that could fix both race conditions, and say how to use this new setbatch in start_command. Write one or more snippets of C code and give the line numbers after which they should appear.

Simple: Allow a process to set its own batchness. Then get rid of the call in the parent. In the child, after #2:

if (c->isbatch)
    setbatch(getpid());

SYNCH-5. Minimal minimal minimal synchronization synchronization synchronization

Minimalist composer Philip Glass, who prefers everything minimal, proposes the following implementation of condition variables based on mutexes. He’s only implementing wait and signal at first.

typedef struct {
    pthread_mutex_t cv_mutex;
} philip_cond_t;
int philip_cond_init(philip_cond_t* cond) {
    pthread_mutex_init(&cond->cv_mutex, NULL);
    pthread_mutex_lock(&cond->cv_mutex);   // start in LOCKED state
}
int philip_cond_wait(philip_cond_t* cond, pthread_mutex_t* mutex) {
    pthread_mutex_unlock(mutex);
    pthread_mutex_lock(&cond->cv_mutex);   // block until another thread calls philip_cond_signal
    pthread_mutex_lock(mutex);
}
int philip_cond_signal(philip_cond_t* cond) {
    pthread_mutex_unlock(&cond->cv_mutex);
}

Philip wants to use his condition variables to build a bank. Banco Glasso accounts support these operations:

void deposit(gacct* a, unsigned amt) Adds amt to a->balance.

void withdraw(gacct* a, unsigned amt) Blocks until a->balance >= amt; then deducts amt from a->balance and returns.

Here’s Philip’s code.

typedef struct {
    unsigned long balance;
    pthread_mutex_t mutex;
    philip_cond_t cv;
} gacct;
  void deposit(gacct* a, unsigned amt) {
D1    pthread_mutex_lock(&a->mutex);
D2    a->balance += amt;
D3    philip_cond_signal(&a->cv);
D4    pthread_mutex_unlock(&a->mutex);
  }

  void withdraw(gacct* a, unsigned amt) {
W1    pthread_mutex_lock(&a->mutex);
W2    while (a->balance < amt)
W3        philip_cond_wait(&a->cv, &a->mutex);
W4    a->balance -= amt;
W5    pthread_mutex_unlock(&a->mutex);
  }

Philip’s friend Pauline Oliveros just shakes her head. “You got serious problems,” she says, pointing at this section of the Mac man page for pthread_mutex_lock:

Calling pthread_mutex_unlock() with a mutex that the calling thread does not hold will result in undefined behavior.

QUESTION SYNCH-5A. Briefly explain how Philip’s code can trigger that undefined behavior.

The thread that initializes the cv_mutex (which locks the mutex) could easily differ from the thread that unlocks it in philip_cond_signal. That unlocks a mutex on a thread that does not hold it. Also, if two deposits are called in a row, the first one unlocks the cv_mutex and the second unlocks it again—another instance of undefined behavior.

Philip switches to Linux’s “fast” mutexes, which do not have this undefined behavior. An unlocked “fast” mutex can be unlocked again without error, and it is OK to unlock a “fast” mutex on a different thread than the thread that locked it. A “fast” mutex can have two values, 0 (unlocked) and 1 (locked).

Below, we've begun to write out an execution where Philip’s code is called by two threads. We write the line numbers each thread executes and the values in a after each line. We’ve left lines blank for you to fill in; you do not need to turn in the full table.

  T1: deposit(a, 10)    T2: withdraw(a, 12)       a->balance   a->mutex   a->cv.cv_mutex

  Initial values:                                      5           0            1
                        W1                             5           1            1
  D1 (blocks)                                          5           1            1
  ________              W2                             5           1            1
  ________              W3                             5           0            1
  (unblocks) D2         ________                      15           1            1
  D3                    ________                      15           1            0
  ________              W3 (gets cv_mutex,            15           1            1
                            blocks on mutex)
  D4                    ________                      15           0            1
  ________              W3 (gets mutex)               15           1            1
  ________              W2                            15           1            1
  ________              W4                             3           1            1
  ________              W5                             3           0            1

Complete that execution and then answer the following questions.

QUESTION SYNCH-5B. In the above execution, what are the final values for a->balance, a->mutex, and a->cv.cv_mutex?

3, 0, 1

QUESTION SYNCH-5C. In the above execution, which line of code (W1–5) unblocks Thread T1?

W3

QUESTION SYNCH-5D. In the above execution, which, if any, line(s) of code (D1–4 and/or W1–5) set a->cv.cv_mutex to zero?

D3

For the remaining two questions, consider all possible concurrent executions of threads running withdraw and/or deposit.

QUESTION SYNCH-5E. Philip’s code always gives a correct balance. Why? List all that apply.

  1. Access to a->balance is protected by a condition variable.
  2. Access to a->balance is protected by a mutex.
  3. Arithmetic instructions like a->balance += amt; have atomic effect.

Only #2.

QUESTION SYNCH-5F. Philip’s code can sometimes block incorrectly: a thread running withdraw(5) might block indefinitely even though the current balance is 10. Describe briefly how this can happen.

This can happen if more than one thread is waiting and the wrong one is woken up by philip_cond_signal. Example:

  1. Initial balance 0
  2. withdraw(5) blocks
  3. withdraw(15) blocks
  4. deposit(10) calls philip_cond_signal, releasing the cv_mutex
  5. withdraw(5) and withdraw(15) threads race to get the cv_mutex; withdraw(15) wins, locking it
  6. withdraw(15) can’t withdraw, so it tries to lock the cv_mutex again
  7. withdraw(5) never runs again

SYNCH-6. Weensy threads

Betsy Ross is changing her WeensyOS to support threads. There are many ways to implement threads, but Betsy wants to implement threads using the processes array. “After all,” she says, “a thread is just like a process, except it shares memory with some other process!”

Betsy has defined a new system call, sys_create_thread, that starts a new thread running a given thread function, with a given argument, and a given stack pointer:

typedef void* (*thread_function)(void*);
pid_t sys_create_thread(thread_function f, void* arg, void* stack_top);

The system call’s return value is the ID of the new thread.

Betsy’s kernel contains the following code for her sys_fork implementation.

// in exception()
case INT_SYS_FORK:
    current->p_registers.reg_rax = handle_fork(current);
    break;
uint64_t handle_fork(proc* p) {
    proc* new_p = find_unused_process();
    if (!new_p)
        return -1;
    new_p->p_pagetable = copy_pagetable(p->p_pagetable);
    if (!new_p->p_pagetable)
        return -1;
    new_p->p_registers = p->p_registers;
    new_p->p_registers.reg_rax = 0;
    new_p->p_state = P_RUNNABLE;
    return 0;
}

And here’s the start of her sys_create_thread implementation.

// in exception()
case INT_SYS_CREATE_THREAD:
    current->p_registers.reg_rax = handle_create_thread(current);
    break;
uint64_t handle_create_thread(proc* p) {
    // Whoops! Got a revolution to run, back later
    return -1;
}

QUESTION SYNCH-6A. Complete her handle_create_thread implementation. Assume for now that the thread function never exits. You may use these helper functions if you need them (you may not need to):

proc* find_unused_process(void)

Return a proc* that has state P_FREE. Returns NULL if no unused process exists.

x86_64_pagetable* copy_pagetable(x86_64_pagetable* pgtbl)

Return a copy of pagetable pgtbl, with all unprivileged writable pages copied. Returns NULL if any allocation fails.

x86_64_pagetable* allocate_page(void)

Allocates a new physical page, zeros it, and returns its physical address. (Recall that the WeensyOS kernel always installs kernel_pagetable, so the page data may be accessed using the same virtual address.) May be passed as the allocator to virtual_memory_map.

…or any function from the WeensyOS handout code. See the reference material at the end of the quiz.

Recall that system call arguments are passed according to the x86-64 calling convention: first argument in %rdi, second in %rsi, third in %rdx, etc.

uint64_t handle_create_thread(proc* p) {
    proc* np = find_unused_process();
    if (!np)
        return (uint64_t) -1;
    np->p_registers = p->p_registers;
    np->p_registers.reg_rip = p->p_registers.reg_rdi;
    np->p_registers.reg_rdi = p->p_registers.reg_rsi;
    np->p_registers.reg_rsp = p->p_registers.reg_rdx;
    np->p_state = P_RUNNABLE;
    np->p_pagetable = p->p_pagetable;
    return np->p_pid;
}

QUESTION SYNCH-6B. Betsy’s friend Prince Dimitri Galitzin thinks Betsy should give processes even more flexibility. He suggests that sys_create_thread take a full set of registers, rather than just a new instruction pointer and a new stack pointer. That way, the creating thread can supply all registers to the new thread, rather than just a single argument.

pid_t sys_create_thread(x86_64_registers* new_registers);

The kernel will simply copy *new_registers into the proc structure for the new thread. Easy!

Which of the following properties of x86_64_registers would allow Dimitri’s plan to violate kernel isolation? List all that apply.

  1. reg_rax contains the thread’s %rax register.
  2. reg_rip contains the thread’s instruction pointer.
  3. reg_cs contains the thread’s privilege level, which is 3 for unprivileged.
  4. reg_intno contains the number of the last interrupt the thread caused.
  5. reg_rflags contains the EFLAGS_IF flag, which indicates that the thread runs with interrupts enabled.
  6. reg_rsp contains the thread’s stack pointer.

#3 and #5 only (a forged reg_cs could run the thread with kernel privilege; a cleared interrupt flag in reg_rflags would let the thread run with interrupts disabled, so the kernel might never regain the CPU), though it is OK to list #4.

Now Betsy wants to handle thread exit. She introduces two new system calls, sys_exit_thread and sys_join_thread:

void sys_exit_thread(void* exit_value);
void* sys_join_thread(pid_t thread);

sys_exit_thread causes the thread to exit with the given exit value; it does not return. sys_join_thread behaves like pthread_join or waitpid. If thread is a thread of the same process and has exited, sys_join_thread cleans up the thread and returns its exit value; otherwise, sys_join_thread returns (void*) -1.

QUESTION SYNCH-6C. Is the sys_join_thread specification blocking or polling?

Polling: if the thread hasn’t exited yet, sys_join_thread returns (void*) -1 immediately rather than blocking until it does.

Betsy makes the following changes to WeensyOS internal structures to support thread exit.

  1. She adds a void* p_exit_value member to struct proc.
  2. She adds a new process state, P_EXITED, that corresponds to exited threads.

QUESTION SYNCH-6D. Complete the case for INT_SYS_EXIT_THREAD in exception(). Don’t worry about the case where the last thread in a process calls sys_exit_thread instead of sys_exit.

case INT_SYS_EXIT_THREAD:
    current->p_state = P_EXITED;
    current->p_exit_value = (void*) current->p_registers.reg_rdi;
    break;

Note that we don’t even need p_exit_value, since we could use p_registers.reg_rdi to look up the exit value elsewhere!

If you wanted to clean up the last thread in a process, you might do something like this:

    int nlive = 0;
    for (int i = 1; i < NPROC && !nlive; ++i) {
        if (is_thread_in(i, current) && processes[i].p_state != P_EXITED) {
            ++nlive;
        }
    }
    if (!nlive) {
        for (int i = 1; i < NPROC; ++i) {
            if (is_thread_in(i, current)) {
                do_sys_exit(&processes[i]);
            }
        }
    }

QUESTION SYNCH-6E. Complete the following helper function.

// Test whether test_pid is the PID of a thread in the same process as p.
// Return 1 if it is; return 0 if test_pid is an illegal PID, it corresponds to
// a freed process, or it corresponds to a thread in a different process.
int is_thread_in(pid_t test_pid, proc* p) {
    return test_pid >= 0 && test_pid < NPROC && processes[test_pid].p_state != P_FREE
        && processes[test_pid].p_pagetable == p->p_pagetable;
}
QUESTION SYNCH-6F. Complete the case for INT_SYS_JOIN_THREAD in exception(). Remember that a thread may be successfully joined at most once: after it is joined, its PID is made available for reallocation.

case INT_SYS_JOIN_THREAD: {
    pid_t pid = current->p_registers.reg_rdi;
    if (pid != current->p_registers.reg_rdi     // truncated, so not a valid PID
        || !is_thread_in(pid, current)
        || processes[pid].p_state != P_EXITED) {
        current->p_registers.reg_rax = -1;
    } else {
        current->p_registers.reg_rax = (uint64_t) processes[pid].p_exit_value;
        processes[pid].p_state = P_FREE;
    }
    break;
}

Note that we can’t distinguish a –1 return value from error from a –1 return value from sys_exit_thread(-1). That’s by design.

QUESTION SYNCH-6G. In pthreads, a thread can exit by returning from its thread function; the return value is used as an exit value. So far, that’s not true in Weensy threads: a thread returning from its thread function will execute random code, depending on what random garbage was stored in its initial stack in the return address position. But Betsy thinks she can implement pthread-style behavior entirely at user level, with two changes:

  1. She’ll write a two-instruction function called thread_exit_vector.
  2. Her create_thread library function will write a single 8-byte value to the thread’s new stack before calling sys_create_thread.

Explain how this will work. What instructions will thread_exit_vector contain? What 8-byte value will create_thread write to the thread’s new stack? And where will that value be written relative to sys_create_thread’s stack_top argument?

thread_exit_vector:
    movq %rax, %rdi
    jmp sys_exit_thread  -OR-  int $INT_SYS_EXIT_THREAD

The 8-byte value will equal the address of thread_exit_vector, and it will be placed in the return address slot of the thread’s new stack. So it will be written starting at address stack_top.
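
For concreteness, a user-level create_thread wrapper consistent with this answer might look like the following sketch (create_thread itself is our assumption; sys_create_thread, thread_function, and thread_exit_vector are from the question):

extern void thread_exit_vector(void);

pid_t create_thread(thread_function f, void* arg, void* stack_top) {
    // Store thread_exit_vector's address in the return-address slot, which
    // per the answer above sits at stack_top itself. When f returns, it
    // "returns" into thread_exit_vector, which calls sys_exit_thread with
    // f's return value (already in %rax).
    *(void**) stack_top = (void*) thread_exit_vector;
    return sys_create_thread(f, arg, stack_top);
}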

MISC-4. Debugging

In the following short-answer questions, you have access to five debugging tools: top, strace, gdb, valgrind, and man. You can’t change program source code or use other tools. Answer the questions briefly (a couple sentences at most).

QUESTION MISC-4A. You are given a program that appears to “get stuck” when run. How would you distinguish whether the program blocked forever (e.g., made a system call that never returned) or entered an infinite loop?

You can use top: does it report the process is using 100% of the CPU?

You can use strace: is the last thing in the strace an incomplete system call?

QUESTION MISC-4B. You are given a program that uses a lot of memory. How would you tell whether the program leaks memory?

Use valgrind and check if it reports any memory leaks.

QUESTION MISC-4C. You are given a program that produces weird answers. How would you check if it invoked undefined behavior?

Use valgrind and check if it reports undefined behavior. GDB is also acceptable here.

QUESTION MISC-4D. You are given a program that blocks forever. How would you tell where the program blocked (which function called the blocking system call)?

Run it under gdb. When it blocks, hit Ctrl-C and then enter backtrace/bt to get a backtrace.

QUESTION MISC-4E. You are given a program that takes a long time to produce a result. How would you tell whether the program was using system calls unintelligently?

Run it under strace and look for stupidity, such as many system calls that report errors, many system calls that are redundant, lots of reads that return short counts, etc.

QUESTION MISC-4F. You are given a program that exits with a system call error, but doesn’t explain what happened in detail. How would you find what error condition occurred and understand the conditions that could cause that error?

Run it under strace to find the error condition: look for a system call that returned the error. Then use man on that system call and read about the error (the errno description).

MISC-5. Miscellany

QUESTION MISC-5A. True or false in conventional Unix systems?

  1. File descriptors are often used to communicate among processes on the same machine.

    True

  2. File descriptors are often used to communicate among processes on different machines.

    True

  3. File descriptors are often used to communicate with persistent storage.

    True

  4. File descriptors are often used to access primary memory.

    False

  5. File descriptors are often used to create child processes.

    False

QUESTION MISC-5B. Match the process isolation feature on the left with the hardware feature that helps enforce it on the right. Use each hardware feature once (make the best match you can).

  1. Protected control transfer (processes can transfer control to the kernel only at defined entry points)
  2. Memory protection (one process cannot modify another process’s memory)
  3. Interrupt protection (only the kernel can disable interrupts)
  4. CPU protection (the kernel always regains control of the CPU eventually)
  1. Traps
  2. Privileged mode (dangerous instructions fault unless the CPU is in privileged mode)
  3. Timer interrupts
  4. Page tables

1—a, 2—d, 3—b, 4—c

The remaining questions refer to the following lines of code.

1.   close(fd);
2.   connect(fd, sockaddr, socklen);
3.   listen(fd);
4.   mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
5.   read(fd, buf, 4096);
6.   write(fd, buf, 4096);

QUESTION MISC-5C. If a program executes the following line without error, which lines could be executed next without error? List all numbers that apply.

fd = open("/home/cs61user/cs61-psets/pset6/pong61.c", O_RDWR);

1, 4, 5, 6 (a regular-file fd supports close, mmap, read, and write; connect and listen require a socket)

QUESTION MISC-5D. If a program executes the following line without error, which lines could be executed next without error? List all numbers that apply.

fd = socket(AF_INET, SOCK_STREAM, 0);

1, 2, 3 (a fresh socket fd supports close, connect, and listen; it isn’t connected yet, so read and write would fail, and sockets can’t be mmaped)

QUESTION MISC-5E. If a program executes the following lines without error, which lines could be executed next without error? List all numbers that apply.

pipe(pipefd);
fd = pipefd[0];

1, 5 (the read end of a pipe supports close and read, but not write, mmap, or the socket calls)

MISC-6. More Miscellany

QUESTION MISC-6A. True or false: Any C arithmetic operation has a well-defined result.

False

QUESTION MISC-6B. True or false: Any x86 processor instruction has a well-defined result.

True

QUESTION MISC-6C. True or false: By executing a trap instruction, a process can force an operating system kernel to execute arbitrary code.

False

QUESTION MISC-6D. True or false: By manipulating process memory and registers, an operating system kernel can force a process to execute arbitrary instructions.

True

QUESTION MISC-6E. True or false: All signals are sent explicitly via the kill() system call.

False

QUESTION MISC-6F. True or false: An operating system’s buffer cache is generally fully associative.

True

QUESTION MISC-6G. True or false: The least-recently-used eviction policy is more useful for very large files that are read sequentially than it is for stacks.

False

QUESTION MISC-6H. True or false: Making a cache bigger can lower its hit rate for a given workload.

True

QUESTION MISC-6I. True or false: x86 processor caches are coherent (i.e., always appear to contain the most up-to-date values).

True

QUESTION MISC-6J. True or false: A socket file descriptor supports either reading or writing, but not both.

False; it supports both

MISC-7. Pot Pourri

Parts A-D pertain to the data structures and hexdump output shown here.

struct x {
    unsigned long ul;
    unsigned short us;
    unsigned char uc;
} *sp;
// Hexdump output of some program running on the appliance
08c1b008  e9 11 cf d0 0d d0 3f f3  63 61 74 00 0d f0 fe ca  |......?.cat.....|
08c1b018  5e ea 15 0d de c0 ad de                           |^.......|

You are told that sp = 0x08c1b008.

QUESTION MISC-7A. What is the value (in hex) of sp->ul?

0xd0cf11e9

QUESTION MISC-7B. What is the value (in hex) of sp->uc?

0x3f

QUESTION MISC-7C. At what address will you find the string "cat"?

0x08c1b010

QUESTION MISC-7D. You think that the bytes after the string "cat" comprise an array of 3 integers; what is the value (in hex) of the middle one of those?

0x0d15ea5e

QUESTION MISC-7E. What is the following binary value expressed in hexadecimal: 01011010?

0x5a

QUESTION MISC-7F. What is the value of the hex number 0x7FF in decimal?

255 + 7*256 == 8*256 − 1 == 2*4*256 − 1 == 2*1024 − 1 == 2047

QUESTION MISC-7G. Is 0x98765432 a valid return from malloc?

No, because it isn’t aligned properly—malloc always returns a pointer whose alignment works for any basic type, which on x86-64 means the last hex digit must be 0 or 8 (and most x86-64 mallocs actually align their data to 16-byte boundaries!)

QUESTION MISC-7H. What is the minimum number of x86 instruction bytes you need to write an infinite loop?

Two bytes: 0xeb 0xfe (a jmp with 8-bit relative offset −2, i.e., a jump to itself)

QUESTION MISC-7I. True or False: Every declaration in C code allocates space for an object.

False. Extern declarations, such as function declarations or extern int x;, don’t allocate space.

QUESTION MISC-7J. True or False: Processes cannot share memory.

False; immediately after fork the parent and child processes share all physical memory!

For parts K–O, assume we are running on the appliance and we initialize ival, p, and q as shown below. Write the value of the expression -- you may express the values in hex if that's simpler, just be sure to prefix them with 0x to make it clear that you are doing so. For True/False questions, there is no need to correct or provide a counterexample for any statements that are false.

int ival[4] = {0x12345678, 0x9ABCDEF0, 0x13579BDF,0x2468ACE0};
int* p = &ival[0];
int* q = &ival[3];
int* x = p + 1;
char* cp = (char*) (q - 2);

QUESTION MISC-7K. q - p

3

QUESTION MISC-7L. (char*) q - (char*) p

12

QUESTION MISC-7M. x - p

1

QUESTION MISC-7N. *((short*) ((char*) x + 2))

0x9ABC

QUESTION MISC-7O. *cp

0xF0 (cp points at the first byte of ival[1] == 0x9ABCDEF0, which is stored little-endian)

QUESTION MISC-7P. What system call allows you to block on a collection of file descriptors?

select (also poll, pselect, epoll, …)

QUESTION MISC-7Q. What system call creates a communication channel that can only be used among related processes?

pipe

QUESTION MISC-7R. What system call can change the attributes of a file descriptor so you can poll on it rather than block?

fcntl

QUESTION MISC-7S. What system call produces a file descriptor on which a server can exchange messages with a client?

socket

QUESTION MISC-7T. True or False: A program and a process are the same thing.

False

MISC-8. CS61 in Real Life

QUESTION MISC-8A. The CS61 Staff have built a jet (the NightmareLiner) modeled on the Boeing Dreamliner. Unfortunately, they modeled it just a bit too closely on the Dreamliner, which needs to be rebooted periodically to avoid failure. In the case of the NightmareLiner, it needs to be rebooted approximately every 16 days. Your job is to use what you've learned in CS61 about data representation to hypothesize why.

Hint: There are 86,400,000 ms in a day. 86,400,000 is between 2^26 and 2^27.

OK, 16 is 2^4, and 27 + 4 is 31—it looks like they have a signed 32-bit millisecond counter somewhere, and if they don’t reboot, it overflows.

Google recently discovered (and reported) a bug in the GNU libc implementation of getaddrinfo. This function can perform RPC calls, which involve sending and receiving messages. In some cases, getaddrinfo failed to check that a received message could fit inside a buffer variable located on the stack (2048 bytes).

QUESTION MISC-8B. True or false: This flaw means getaddrinfo will always execute undefined behavior.

False: it only executes undefined behavior if a received message exceeds the 2048-byte buffer.

QUESTION MISC-8C. Give an example of a message that will cause getaddrinfo to exhibit undefined behavior.

A message larger than the buffer (2048 bytes).

QUESTION MISC-8D. Briefly describe the contents of a message that would cause the getaddrinfo function to return to address 0x400012988 rather than to its caller.

In the message, the value 0x400012988 should appear beyond the 2048-byte limit of the buffer so that it ends up overwriting the return address on the stack (for example, a message that is 4096 bytes long whose second half contains repeated instances of 0x400012988).

This code used to appear in the Linux kernel:

1. struct tun_struct *tun = ...;   // This is a valid assignment;
2. struct sock *sk = tun->sk;
3. if (!tun)
4.    return POLLERR;              // This is an error return

QUESTION MISC-8E. The compiler removed lines 3 and 4. Why was that a valid thing for the compiler to do?

Dereferencing a NULL pointer is undefined behavior. Since tun is dereferenced in line 2, the compiler assumes that it cannot be NULL; therefore it is free to remove the check in line 3 and the accompanying code in line 4.

MISC-9. Miscellany

QUESTION MISC-9A. Name the property that implies a process cannot cause the kernel to execute code at an arbitrary address.

Kernel isolation. Anything reasonable should be accepted; “process isolation” is certainly acceptable too.

QUESTION MISC-9B. True or false: It’s safe to call any C library function from a signal handler.

False

QUESTION MISC-9C. Assume that 10 processes in a synchronous distributed system are attempting to agree on whether red or blue is the majority of the processes’ favorite color. What is the maximum number of processes that can fail (i.e., go insane and start performing arbitrary actions) for agreement to be possible?

3 processes can fail. These are Byzantine failures, and Byzantine agreement among n processes tolerates f failures only when n > 3f; with n = 10, the largest such f is 3.