Synchronization Section 2: Synchronization Exercises

College students: Attendance at section is required and recorded for all college students. All college students should fill out this attendance form when attending section. For more on the attendance policy and what to do if you miss section, refer to the syllabus.

Extension School students: Extension School students are required to self-report weekly section participation using this participation form. There are two ways to participate in section: (1) attend section live (for most students, this is one of the zoom sections, but local students are welcome to attend in-person sections) OR (2) watch a section recording (Canvas > Zoom > Published Recordings). Both participation methods are equally valid towards your participation grade.

In today’s section, we’ll walk through a selection of our synchronization exercises, which are problems taken from prior exams. Also take the opportunity to ask questions about the problem set and class material.

All exercises assume a 64-bit x86-64 architecture unless otherwise stated.

The exercises are:

SYNCH-4. Race conditions

Most operating systems support process priority levels, where the kernel runs higher-priority processes more frequently than lower-priority processes. A hypothetical Unix-like operating system called “Boonix” has two priority levels, normal and batch. A Boonix parent process changes the priority level of one of its children with this system call:
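
A plausible declaration, inferred from the call setbatch(this->pid) in the shell code below, is sketched here; the return type and error conventions are assumptions, not part of the original specification.

    // Hypothetical declaration; the name and argument come from the shell code
    // below, and the return convention (0 on success, -1 on error) is assumed.
    int setbatch(pid_t p);    // set process p's priority level to batch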

Note that a process cannot change its own batch status.

You’re writing a Boonix shell that can run commands with batch priority. If c->isbatch is nonzero, then c should run with batch priority, as should its children. Your command::make_child function looks like this:

        void command::make_child() {
/* 1*/      ... // maybe create a pipe
/* 2*/      this->pid = fork();
/* 3*/      if (this->pid == 0) {
/* 4*/          ... // handle pipes and redirections
/* 5*/          (void) execvp(...);
/* 6*/          perror("execvp");
/* 7*/          _exit(EXIT_FAILURE);
/* 8*/      }
/* 9*/      assert(this->pid > 0);
/*10*/      if (this->isbatch) {
/*11*/          setbatch(this->pid);
/*12*/      }
/*13*/      ... // clean up pipes and such
        }

This shell has two race conditions, one more serious than the other.

QUESTION SYNCH-4A. In some cases, a child command will change to batch priority after it starts running. Briefly describe how this can occur.

QUESTION SYNCH-4B. In some cases, a child command, or one of its own forked children, could run forever with normal priority. Briefly describe how this can occur.


In the remaining questions, you will fix these race conditions in three different ways. The first uses a new system call:
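
One plausible declaration for this call, assuming it simply reports the calling process's own priority level, is sketched here:

    // Hypothetical declaration; the name comes from QUESTION SYNCH-4C below,
    // and the exact semantics are an assumption.
    int isbatch(void);    // return 1 if the calling process has batch priority, 0 if normal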

QUESTION SYNCH-4C. Use isbatch to prevent both race conditions. Write a snippet of C code and give the line number after which it should appear. You should need one code snippet.

QUESTION SYNCH-4D. Use the pipe system call and friends to prevent both race conditions. Write snippets of C code and give the line numbers after which they should appear. You should need several snippets. Make sure you clean up any extraneous file descriptors before running the command or returning from command::make_child.

QUESTION SYNCH-4E. Why should the pipe solution be preferred to the isbatch solution? A sentence, or the right single word, will suffice.

QUESTION SYNCH-4F. Suggest a change to the setbatch system call’s behavior that could fix both race conditions, and say how to use this new setbatch in command::make_child. Write one or more snippets of C code and give the line numbers after which they should appear.

SYNCH-5. Minimal minimal minimal synchronization synchronization synchronization

Minimalist composer Philip Glass, who prefers everything minimal, proposes the following implementation of condition variables based on mutexes. He’s only implementing wait and notify_one at first.

struct pg_condition_variable {
    std::mutex cvm;

    pg_condition_variable() {
        // start the mutex in LOCKED state!
        this->cvm.lock();
    }

    void wait(std::mutex& m) {
        m.unlock();
        this->cvm.lock();   // will block until a thread calls `notify_one`
        m.lock();
    }

    void notify_one() {
        this->cvm.unlock();
    }
};

Philip wants to use his condition variables to build a bank, where accounts support two operations: deposit(amt), which adds amt to the account's balance, and withdraw(amt), which blocks until the balance is at least amt and then deducts amt from the balance.

Here’s Philip’s code.

      struct pg_acct {
          unsigned long balance;
          std::mutex m;
          pg_condition_variable cv;

          void deposit(unsigned amt) {
/*D1*/        this->m.lock();
/*D2*/        this->balance += amt;
/*D3*/        this->cv.notify_one();
/*D4*/        this->m.unlock();
          }

          void withdraw(unsigned amt) {
/*W1*/        this->m.lock();
/*W2*/        while (this->balance < amt) {
/*W3*/            this->cv.wait(this->m);
/*W4*/        }
/*W5*/        this->balance -= amt;
/*W6*/        this->m.unlock();
          }
      };

Philip’s friend Pauline Oliveros just shakes her head. “You got serious problems,” she says, pointing at this section of the C++ standard:

The expression m.unlock() shall…have the following semantics:

Requires: The calling thread shall own the mutex.

This means that when m.unlock() is called, the calling thread must have previously locked the mutex, with no intervening unlocks. The penalty for deviance is undefined behavior.
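
For example, under this rule the following standalone snippet (unrelated to Philip's code) already has undefined behavior, because the calling thread never locked the mutex it unlocks:

    #include <mutex>

    std::mutex m;

    int main() {
        m.unlock();    // undefined behavior: this thread does not own m
    }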

QUESTION SYNCH-5A. Briefly explain how Philip’s code can trigger undefined behavior.


To fix this problem, Philip changes his condition variable and account to use a new type, fast_mutex, instead of std::mutex. This type is inspired by Linux’s “fast” mutexes. It’s OK to unlock a fast_mutex more than once, and it’s OK to unlock a fast_mutex on a different thread than the thread that locked it.

A fast_mutex has one important member, value, which can be 0 (unlocked) or 1 (locked).
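
As a mental model, a minimal fast_mutex consistent with this description might look like the sketch below. It polls rather than blocks, and everything beyond the value member is an assumption for illustration.

    #include <atomic>

    struct fast_mutex {
        std::atomic<int> value{0};    // 0 = unlocked, 1 = locked

        void lock() {
            int expected = 0;
            // spin until `value` changes atomically from 0 to 1
            while (!this->value.compare_exchange_weak(expected, 1)) {
                expected = 0;    // compare_exchange_weak overwrote `expected`; reset it
            }
        }

        void unlock() {
            // any thread may unlock, and unlocking more than once is harmless:
            // `value` simply becomes (or stays) 0
            this->value.store(0);
        }
    };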

Below, we've begun to write out an execution where Philip's code is called by two threads on a shared account a. We write the line numbers each thread executes and the values of a's members after each line. We've left rows blank for you to fill in if you need to.

T1                   T2               a.balance   a.m.value   a.cv.cvm.value
Initial state:                        5           0           1
a.deposit(10)        a.withdraw(12)
                     after W1         5           1           1
after D1                              5           1           1
(T1 blocks on a.m)





QUESTION SYNCH-5B. Assuming T2’s call to withdraw eventually completes, what are the final values for a.balance, a.m.value, and a.cv.cvm.value?

QUESTION SYNCH-5C. In such an execution, which line of code (W1–5) unblocks Thread T1?

QUESTION SYNCH-5D. In such an execution, which, if any, line(s) of code (D1–4 and/or W1–5) set a.cv.cvm.value to zero?


QUESTION SYNCH-5E. For any collection of deposit and withdraw calls, Philip’s code will always ensure that the balance is valid. (There are other problems—a withdraw call might block forever when it shouldn’t—but the balance will be OK.) Why? List all that apply.

  1. Access to balance is protected by a condition variable.
  2. Access to balance is protected by a mutex.
  3. Arithmetic expressions like this->balance += amt; have atomic effect.

SYNCH-7. Fair synchronization

C++ standard mutexes are unfair: some threads might succeed in locking the mutex more often than others. For example, a simple experiment on Linux shows that if threads repeatedly try to lock the same mutex, some threads lock the mutex 1.13x more often than others. (Other kinds of lock are even less fair: some threads can lock a spinlock 3.91x more often than others!)

QUESTION SYNCH-7A. What is the name of the problem that would occur if one particular thread never locked the mutex, even though other threads locked and unlocked the mutex infinitely often?

To avoid unfairness, threads must take turns. One fair mutex implementation is called a ticket lock; this (incorrect, unsynchronized) code shows the basic idea.

struct ticket_mutex {
    unsigned now = 0;   // “now serving”
    unsigned next = 0;  // “next ticket”

    void lock() {
        unsigned t = this->next;    // mark this thread’s place in line
        ++this->next;               // next thread gets new place
        while (this->now != t) {    // wait for my turn
        }
    }

    void unlock() {
        ++this->now;
    }
};

QUESTION SYNCH-7B. Describe an instance of undefined behavior that could occur if multiple threads called ticket_mutex::lock on the same ticket mutex at the same time.

QUESTION SYNCH-7C. Fix lock and unlock using C++ atomics. Alternatively, for partial credit, say which regions of code must be executed atomically.


The ticket lock implementation above uses polling. That will perform well if critical sections are short, but blocking is preferable if critical sections are long. Here’s a different ticket lock implementation:

      struct ticket_mutex {
/*T1*/    unsigned now;
/*T2*/    unsigned next;
/*T3*/    std::mutex m;

          void lock() {
/*L1*/        this->m.lock();
/*L2*/        unsigned t = this->next++;
/*L3*/        while (this->now != t) {
/*L4*/            this->m.unlock();
/*L5*/            sched_yield();
/*L6*/            this->m.lock();
/*L7*/        }
/*L8*/        this->m.unlock();
          }

          void unlock() {
/*U1*/        this->m.lock();
/*U2*/        ++this->now;
/*U3*/        this->m.unlock();
          }
      };

This ticket lock implementation uses std::mutex, which blocks, but the implementation itself uses polling.

QUESTION SYNCH-7D. Which line or lines of code mark this implementation as using polling?

QUESTION SYNCH-7E. Change the implementation to truly block. Include line numbers indicating where your code will go.

QUESTION SYNCH-7F. Most solutions to part E wake up blocked threads more than is strictly necessary. The ideal number of blocking calls is one: each thread should block at most once and wake up only when its turn comes. But the simplest correct solution will wake up each blocked thread a number of times proportional to the number of blocked threads.

Change your solution so that when there are 4 or fewer threads, every thread wakes up only when its turn comes. (Your solution must, of course, work correctly for any number of threads.) If your solution already works this way, you need not do anything here.

SYNCH-11. Pool synchronization

QUESTION SYNCH-11A. List the true statements.

  1. When a process exits, all of its child processes also exit.
  2. When a thread completes, all other threads in the process also complete.
  3. When a process exits, all of its threads exit whether or not they are complete.
  4. When a process exits via exit, all standard I/O buffers are automatically flushed.
  5. When a process exits via _exit, all standard I/O buffers are automatically flushed.
  6. None of the above.

QUESTION SYNCH-11B. Which of the following functions and system calls can block? List all that apply.

  1. fork
  2. getpid
  3. read
  4. waitpid with the WNOHANG flag
  5. std::thread constructor
  6. std::thread::join
  7. std::thread::detach
  8. std::mutex::lock
  9. std::mutex::unlock
  10. std::condition_variable::wait
  11. std::condition_variable::notify_all
  12. std::atomic increment
  13. None of the above

QUESTION SYNCH-11C. Which system call has behavior closest to that of std::thread::join? Select one.

  1. fork
  2. read
  3. getpid
  4. kill
  5. waitpid with the WNOHANG flag
  6. waitpid without the WNOHANG flag

The next three questions concern this program.

std::atomic<int> x = 0; // global variable

void thread1() {
    x = 1;
}

void thread2() {
    x = 2;
}

int main() {
    std::thread t1(thread1);   /* 1 */
    std::thread t2(thread2);   /* 2 */
    t1.join();                 /* 3 */
    t2.join();                 /* 4 */
    printf("%d\n", x.load());
}

QUESTION SYNCH-11D. Does this program invoke undefined behavior, yes or no?

QUESTION SYNCH-11E. What are all the possible values the program can print (on those runs where it does not invoke undefined behavior)?

QUESTION SYNCH-11F. Order the numbered lines so that the program would only ever print 2. (Your answer should be the numbers 1–4 in some order.)

QUESTION SYNCH-11G. This problem models water polo players as threads. Each thread runs the player function and represents one of the players on a team. Each player thread:

  1. Waits until there are fewer than 5 players in the pool, then gets in the pool.
  2. Plays the game until a substitution is called.
  3. Gets out of the pool and goes back to step 1.

Your task is to complete this code. There must never be more than 5 players in the pool, but up to 5 players should be able to play simultaneously. There must be no undefined behavior or deadlock, and for full credit, players on the sidelines should block rather than poll. (No polling in the pool.)

struct pool {
    // (1) Your code here
};

void player(pool* p) {
    while (true) {
        // Block until there are fewer than 5 players in the pool, then get in.
        // (2) Your code here

        // Play.
        while (!take_substitution()) {
            play();
        }

        // Get out of the pool.
        // (3) Your code here
    }
}