Ancient CS 61 Content Warning!!!!!1!!!
This is not the current version of the class.
This site was automatically translated from a wiki. The translation may have introduced mistakes (and the content might have been wrong to begin with).

Problem Set 6: Adversarial Network Pong

This problem set will teach you some useful and common strategies for handling networking problems, including loss, delay, and low utilization. It also teaches you programming using threads and requires synchronization. The setting is a game called network pong.

Phase 0: Easy

Merge our problem set 6 code into your repository with git pull handout master. This will merge our code with your previous work. If you have any “conflicts” from problem set 5, resolve them before continuing further. Run git push to save your work back to code.seas.

Please use an explicit merge to create your repository. If you copy code by hand, our automated scripts will have trouble analyzing your code, and it’ll be harder for you to incorporate our updates.

You may also create a new cs61-psets repository for this assignment. Tell us if you do.

Once you have merged, edit the pset6/serverinfo.h file so that PONG_USER is defined to a unique string that only your team knows.

Note. We currently do not plan to implement authentication on the pong server. This means that, if you learn another team’s string, you can mess with their heads. Please do not abuse this system. Aggressive abuse will be reported to The Authorities.1

Type make, and then run the ./pong61 program. It will print out a URL. Visit that URL in your browser. You should see a bouncing ball in a rectangular field:

File:pong01.png

How does this work? pong61 sends HTTP messages to a Web server we run. HTTP is the application protocol on which the Web is built. Read more about HTTP before you continue.

When it starts up, pong61 makes a single request to a URL like this:

 http://cs61.seas.harvard.edu:6168/STRING/reset

This tells the server to reset the pong board’s state. The server returns a simple response like this:

 18 23

These two numbers are the board’s width and height, respectively. The reset request also tells your browser to clear the board.
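Putting the pieces together, the reset exchange on the wire looks roughly like this (the specific headers shown are illustrative assumptions, not a transcript of the real server):

 GET /STRING/reset HTTP/1.1
 Host: cs61.seas.harvard.edu:6168

 HTTP/1.1 200 OK
 Content-Length: 5

 18 23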

After this, pong61 makes many requests to URLs like this:

 http://cs61.seas.harvard.edu:6168/STRING/move?x=XPOS&y=YPOS&style=on

This request causes a new ball to appear at position (XPOS,YPOS). The server responds with a numeric code and explanation. If everything goes well, it says:

 0 OK

If there’s a problem, the numeric code is negative, as in:

 -1 x and y parameters must be numbers

After each request, pong61 waits 0.1 seconds before making the next request.

Our handout code runs each HTTP request in its own thread, using the pthreads library. The main thread spins to wait for each thread to complete before going on to the next. This works just fine on Phase 0. To do the problem set, you must change the code so it works on the other phases too, and introduce synchronization along the way. Use the web page's phase buttons to change phases.

Phase 1: Loss

In Phase 1, the server starts to lose messages. It will occasionally go offline for a short period. During that time, every move request is rejected by closing down the connection. The http_receive_response_headers function sets conn->cstate to cstate_broken and conn->status_code to -1 when this happens, but the pong thread ignores this problem and continues as if everything were fine. That position in the pong trail never gets filled in. Our server shows this mistake by drawing black marks in the spaces.

Your job in this phase is to detect lost messages and retry. When the server drops a connection, your code should close that connection (to free its resources) and make a new connection attempt at the same position. It shouldn’t move to the next position until the server responds for the current position.

However, you must be careful not to bombard the server while it is offline. The server will notice this and explode. Instead, you must implement a form of exponential backoff. This is a simple, powerful concept.

Exponential backoff is awesome because it responds to short outages quickly, but imposes only logarithmic overhead (i.e., the number of messages sent during the outage is logarithmic in the length of the outage). It’s ubiquitous: Ethernet is built on it, and the next time your Gmail goes offline, check out the numbers that appear after “Not connected. Trying again in...”.

HINT: This problem set rewards thinking before you code. Implement one phase at a time, always thinking how you could accomplish the task in the simplest correct way. Avoid overengineering! Our solution set implements all phases, without race conditions (we believe), in less than 50 lines of code.

Phase 2: Delay

In Phase 2, the server delays its responses. It will send you the full header for its response, but delay the body. Since the handout loop waits for the body before sending the next request, the pong ball will move extremely slowly in Phase 2. Too slowly, in fact: the server enforces a minimum ball speed, and when your code is slower than that speed, you’ll see some black marks on the display.

You might think solving this problem would be easy: just close the connection before the response arrives. But the server is too clever for this. If you close a connection prematurely, it explodes.

To support delay, your pong61 must handle multiple concurrent connections to the server. Now, the main thread may need to spawn a new thread before the response arrives! But watch out. If you leak connections, the server will explode.

Your Phase 2 code must also work in Phase 1. We suggest you make Phase 2 work first on its own, then go back and make Phase 1 work again.

Phase 3: Utilization and Synchronization

So far, your pong61 client opens a new network connection for every ball. This is wasteful and slow, and in Phase 3 the server will not allow it. You should instead reuse valid HTTP connections for new ball positions.

An http_connection object is available for reuse if and only if conn->cstate == cstate_idle. This means that the server sent a complete response and is waiting for another request.

Reusing connections would be really easy—except that in Phase 3 the server also drops some connections (as in Phase 1) and delays some connections (as in Phase 2). Your pong61 client must handle it all, and you must use synchronization primitives correctly.

The key data structure you’ll need to add is a table of available connections. This can be a linked list, an array, or whatever you’d like. When a connection reaches state cstate_idle, add it to the table. When you need to make a new RPC, check the table first, and reuse a connection if one is available.

Make sure that you protect your connection table from concurrent access! There should be no race condition bugs. Use synchronization objects from the pthread library to handle this. You should simultaneously remove the global move_done variable and replace it with pthread synchronization objects. Investigate pthread_mutex_t and pthread_cond_t especially.
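A minimal connection table can be a singly-linked free list whose add and take operations each hold a mutex. In this sketch, `http_connection` is a stub standing in for the handout's type (only the `next` link matters to the table itself), and the function names are assumptions:

```c
#include <pthread.h>
#include <stddef.h>

typedef struct http_connection {
    struct http_connection *next;
    // ... real connection state would live here ...
} http_connection;

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static http_connection *idle_list = NULL;

// Called when a connection reaches cstate_idle: push it on the list.
void table_add(http_connection *conn) {
    pthread_mutex_lock(&table_lock);
    conn->next = idle_list;
    idle_list = conn;
    pthread_mutex_unlock(&table_lock);
}

// Called before opening a new connection: pop an idle one if any.
// Returns NULL if the table is empty (caller should connect afresh).
http_connection *table_take(void) {
    pthread_mutex_lock(&table_lock);
    http_connection *conn = idle_list;
    if (conn)
        idle_list = conn->next;
    pthread_mutex_unlock(&table_lock);
    return conn;
}
```

Holding the lock across the whole push or pop is what prevents two threads from taking the same connection.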

Phase 4: Congestion

In Phase 4, the server sometimes behaves as if it is congested. A congested server responds to a request not with 0 OK, but with a positive number, such as this:

 +1948 STOP

This means that the server is overloaded. pong61 is not allowed to send any more requests for (in this case) 1948 milliseconds—not even from concurrent threads. This should give the server enough time to cool down. The display will show a stop sign during the cool-down period, and if pong61 ignores the stop request and sends a message anyway, the server will explode. But after the cool-down period, the client should go right back to sending requests. A congestion response can sometimes be delayed, as in Phase 2; your client threads should proceed during the delay, entering the cool-down period only once the whole response is available.

Phases 1 through 3 are still active in Phase 4. Phase 4 may catch some race conditions in your code from Phase 3.

The best solutions will avoid all race conditions for non-delayed responses, meaning that the main thread will remain blocked during an immediate congestion signal (i.e., a complete congestion signal included in the server’s initial response). This may require changes to your Phase 2 code.

Phase 5: Evil

Phase 5 is a mystery, but not a very tough one. Run your code on Phase 5 and you’ll figure out the problem soon enough.

Race conditions

For full credit, your code must not suffer from race condition bugs. You’ll need to think this through carefully, as race conditions may not show up during normal testing. In your README.md, write a short paragraph explaining your strategy for avoiding race conditions.

We are only concerned with race conditions inside your client (i.e., between different client threads). We are not concerned with rare issues with scheduling between the server and the client, such as network reordering. It is impossible to avoid all client–server race conditions in this pset. But as usual, your code should never execute undefined behavior.

The ./pong61 -f flag, which runs pong61 faster than normal, might be useful as you look for race conditions.

Extra credit

If you have extra time, implement something fun. For example, two teams could get together and implement Space Invaders (one team programming the monsters, and one team programming the spaceship)! Here are some RPCs the server implements that might be useful.

Turnin

You will turn in your code by pushing your git repository and informing the grading server. Inform us ASAP if you have changed partner or repository from pset 5.

Remember to fill out README.md, including the “Race conditions” section, and AUTHORS.md.

Notes


This pset was created for CS61.


  1. On the other hand, if two teams actually wanted to mess with one another’s heads.... ↩︎