Introduction

In this series of labs, you will implement a fully functional distributed file server, following the architecture described in the overview. To work correctly, the yfs servers need a locking service to coordinate updates to the file system structures. In this lab, you'll implement that lock service.

The core logic of the lock service is quite simple. It consists of two modules, the lock client and the lock server, which communicate via RPCs. A client requests a specific lock from the lock server by sending an acquire request. The lock server grants each lock to at most one client at a time. When a client is done with a granted lock, it sends a release request to the server so that the server can grant the lock to another client waiting to acquire it.

In addition to implementing the lock service, you'll augment the provided RPC library to guarantee at-most-once execution by eliminating duplicate RPC requests. Duplicates arise because the RPC system must re-transmit requests over lossy network connections, and such re-transmissions lead to duplicate delivery when the original request turns out not to have been lost after all, or when the server reboots.

Duplicate RPC delivery, when not handled properly, can violate application semantics. Here's an example of duplicate RPCs causing incorrect lock server behavior. A client sends an acquire request for lock x; the server grants the lock; the client releases it with a release request; a duplicate of the original acquire request then arrives at the server; the server grants the lock again; but the client will never release it a second time, since from its perspective that acquire was just a duplicate. Such behavior is clearly incorrect.

Software

The files you will need for this and subsequent lab assignments in this course are distributed using the Git version control system. To learn more about Git, take a look at the Git user's manual, or, if you are already familiar with other version control systems, you may find this CS-oriented overview of Git useful.

The URL for the course Git repository is http://news.cs.nyu.edu/~jinyang/fa10/labs/yfs-2010.git. To install the files in your class account (or on your own machine), clone the course repository by running the commands below.

% mkdir ~/ds-class
% cd ~/ds-class
% git clone  http://news.cs.nyu.edu/~jinyang/fa10/labs/yfs-2010.git lab
Initialized empty Git repository in ~/ds-class/lab/.git/
% cd lab
% git checkout -b lab1 origin/lab1 

Git allows you to keep track of the changes you make to the code. For example, when you finish one of the exercises, you can checkpoint your progress: use git status to see which files you have modified, add them to the to-be-committed list, and then commit your changes:

% git add <the list of modified files....>
% git commit -am 'my solution for lab1 exercise9'
Created commit 60d2135: my solution for lab1 exercise9
 1 files changed, 1 insertions(+), 0 deletions(-)
% 

You can keep track of your changes by using the git diff command. Running git diff will display the changes to your code since your last commit, and git diff origin/lab1 will display the changes relative to the initial code supplied for this lab. Here, origin/lab1 is the name of the git branch with the initial code you downloaded from our server for this assignment.

Getting started

In this lab, we provide you with a skeleton RPC-based lock server, a lock client interface, a sample application that uses the lock client interface, and a tester. Now compile and start up the lock server, giving it a port number on which to listen for RPC requests. You'll need to choose a port number that other programs aren't using. For example:

% cd lab
% make
% ./lock_server 3772

Now open a second terminal on the same machine and run lock_demo, giving it the port number on which the server is listening:

% cd lab
% ./lock_demo 3772
stat request from clt 1450783179
stat returned 0
% 

lock_demo asks the server for the number of times a given lock has been acquired, using the stat RPC that we have provided. In the skeleton code, this will always return 0. You can use it as an example of how to add RPCs. You don't need to fix stat to report the actual number of acquisitions of the given lock in this lab, but you may if you wish.
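
To make that concrete, here is a hedged sketch of how a new RPC might be wired up, modeled on the provided stat RPC. The names lock_protocol::acquire, rlsrpc, and ls are assumptions based on the skeleton's conventions; check them against the actual source files:

   // Sketch only: wiring up a hypothetical acquire RPC by analogy to stat.
   //
   // 1. In lock_protocol.h, give the procedure a number, for example:
   //      enum rpc_numbers { acquire = 0x7001, release, stat };
   //
   // 2. In lock_client.cc, add a client-side stub. cl is the rpcc object
   //    the skeleton constructs; call() marshals the arguments and blocks
   //    until a reply (or permanent failure) arrives.
   lock_protocol::status
   lock_client::acquire(lock_protocol::lockid_t lid)
   {
     int r;
     return cl->call(lock_protocol::acquire, cl->id(), lid, r);
   }
   //
   // 3. In lock_smain.cc, register the server-side handler so that
   //    dispatch can find it by procedure number:
   //      rlsrpc.reg(lock_protocol::acquire, &ls, &lock_server::acquire);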

The lock client skeleton does not yet do anything for the acquire and release operations; similarly, the lock server does not implement any form of lock granting or releasing. Your job in this lab is to fill in the client and server functions and the RPC protocol between the two processes.

Your Job

Your first job is to implement a correct lock server assuming a perfect underlying network. In the context of a lock service, correctness means obeying this invariant: at any point in time, at most one client holds a lock of a given name.

We will use the program lock_tester to check the correctness invariant, i.e. whether the server grants each lock just once at any given time, under a variety of conditions. You run lock_tester with the same arguments as lock_demo. A successful run of lock_tester (with a correct lock server) will look like this:

% ./lock_tester 3772
simple lock client
acquire a release a acquire a release a
acquire a acquire b release b release a
test2: client 0 acquire a release a
test2: client 2 acquire a release a
. . .
./lock_tester: passed all tests successfully

If your lock server isn't correct, lock_tester will print an error message. For example, if lock_tester complains "error: server granted a twice!", the problem is probably that lock_tester sent two simultaneous requests for the same lock, and the server granted the lock twice (once for each request). A correct server would have sent one grant, waited for a release, and only then sent a second grant.

Your second job is to augment the RPC library to guarantee at-most-once execution. We simulate lossy networks on a local machine by setting the environment variable RPC_LOSSY. If you can pass both the RPC system tester (rpctest) and lock_tester, you are done. Here's a successful run of both testers:

% export RPC_LOSSY=0
% ./rpctest
simple test
. . .
rpctest OK

% killall lock_server
% export RPC_LOSSY=5
% ./lock_server 3772 &
% ./lock_tester 3772
simple lock client
acquire a release a acquire a release a
. . .
./lock_tester: passed all tests successfully

For this lab, your lock server and RPC augmentation must pass both rpctest and lock_tester; run them several times in a row to make sure there are no rare bugs. You should only modify the files rpc.{cc,h}, lock_client.{cc,h}, lock_server.{cc,h}, and lock_smain.cc. We will test your code with our own copy of the rest of the source files and testers. You are free to add new files to the directory as long as the Makefile compiles them appropriately, but you should not need to.

For this lab, you will not have to worry about server failures or client failures. You also need not be concerned about security issues, such as malicious clients releasing locks that they don't hold.

Detailed Guidance

In principle, you can implement whatever design you like, as long as it satisfies the requirements in the "Your Job" section and passes the testers. To make your life easier, we provide detailed guidance and tips on a recommended implementation plan. You do not have to follow our recommendations, although doing so allows maximal design and code re-use in later labs. Since this is your first lab, you should also read the general programming tips on the lab overview page.

Step One: implement the lock_server assuming a perfect network

First, you should get the lock_server running correctly without worrying about duplicate RPCs under lossy networks.
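
You will need to protect the server's lock table from concurrent handler threads, and acquire handlers must block until the requested lock is free. Below is a minimal sketch of one way to do that with a pthread mutex and condition variable; the handler signatures are modeled on the provided stat handler, while the member names (locked_, m_, c_) are our own, not the skeleton's:

   // A minimal sketch, not the required design.
   #include <map>
   #include <pthread.h>
   #include "lock_protocol.h"   // lockid_t, status codes (from the skeleton)

   class lock_server {
     std::map<lock_protocol::lockid_t, bool> locked_;  // lid -> currently held?
     pthread_mutex_t m_;   // protects locked_
     pthread_cond_t c_;    // signaled on every release

    public:
     lock_server() {
       pthread_mutex_init(&m_, NULL);
       pthread_cond_init(&c_, NULL);
     }

     lock_protocol::status acquire(int clt, lock_protocol::lockid_t lid, int &r) {
       pthread_mutex_lock(&m_);
       while (locked_[lid])             // someone else holds lid:
         pthread_cond_wait(&c_, &m_);   // block this handler thread
       locked_[lid] = true;             // grant lid to the caller
       pthread_mutex_unlock(&m_);
       r = 0;
       return lock_protocol::OK;
     }

     lock_protocol::status release(int clt, lock_protocol::lockid_t lid, int &r) {
       pthread_mutex_lock(&m_);
       locked_[lid] = false;            // mark lid free, then wake all
       pthread_cond_broadcast(&c_);     // waiters so they re-check
       pthread_mutex_unlock(&m_);
       r = 0;
       return lock_protocol::OK;
     }
   };

A single condition variable shared by all locks is the simplest thing that works, at the cost of waking every waiter on each release. Note also that each blocked acquire occupies a thread in the server's pool, a point the question in step two returns to.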

Step Two: implement at-most-once delivery in RPC

After your lock server passes lock_tester under a perfect network, enable lossy networking by typing "export RPC_LOSSY=5", restart your lock_server, and run lock_tester again. If you implemented lock_server in the simple way described previously, you will see lock_tester fail (or hang indefinitely). Try to understand exactly why it fails when re-transmissions cause duplicate RPC delivery.

Read the RPC source code in rpc/rpc.{cc,h} and try to grasp the overall structure of the RPC library as much as possible first by yourself without reading the hints below.

The rpcc class implements the client side of the RPC library. At its core lies the rpcc::call1 function, which accepts a marshalled RPC request for transmission to the RPC server. Note that call1 attaches additional RPC fields to each marshalled request:

   // add RPC fields before the RPC request data
   req_header h(ca.xid, proc, clt_nonce_, srv_nonce_, xid_rep_window_.front());
   req.pack_req_header(h);
What is the purpose of each field in req_header? (Hint: several of them will help you implement at-most-once delivery.) After call1 has prepared the final RPC request, it sits in a while(1) loop, repeatedly updating the timeout value for the next re-transmission and waiting for the corresponding RPC reply or a timeout. If the underlying (TCP) connection to the server fails, rpcc automatically re-connects (in the function get_refconn) so that re-transmissions can proceed.
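
To make the hint concrete, here is one plausible reading of the header fields, inferred from the constructor arguments above; verify it against the definitions in rpc.h before relying on it:

   // Our annotated reading of req_header; rpc.h is authoritative.
   struct req_header {
     int xid;                 // per-request id; a re-transmission carries the
                              // original xid, which is how duplicates are detected
     int proc;                // procedure number; dispatch uses it to look up
                              // the registered handler
     unsigned int clt_nonce;  // random id for this client instance, so xids
                              // from different clients don't collide
     unsigned int srv_nonce;  // the server instance the client believes it is
                              // talking to; exposes a server reboot
     int xid_rep;             // roughly, the xid up to which the client has
                              // received every reply; lets the server discard
                              // remembered replies (see the sliding window below)
   };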

The rpcs class implements the server side of the RPC library. When an underlying connection receives an RPC request message, the function rpcs::got_pdu is invoked to hand the request to a thread pool. The thread pool (class ThrPool) consists of a fixed number of threads that execute rpcs::dispatch to route each RPC request to its registered handler. The dispatch function extracts various RPC fields from the request, including the RPC procedure number, which is used to find the corresponding handler; the remaining fields give you enough information to eliminate all duplicate requests at the server.

Question: our suggested implementation of the lock server uses "blocking" RPC handlers, i.e. server-side RPC handler functions that can block while waiting for external events from clients. With blocking RPC handlers, how many concurrent blocking lock acquire requests can the server handle? (Hint: our implementation of rpcs currently uses a thread pool of 10 threads.)

How do you ensure at-most-once delivery? A strawman approach is to make the server remember every unique RPC it has ever received. Each unique RPC is identified by both its xid (unique within a client instance) and its clt_nonce (unique across all client instances). In addition to the RPC ids, the server must also remember the actual values of the corresponding replies, so that it can re-send a (potentially lost) reply upon receiving a duplicate request without re-executing the RPC handler. This strawman guarantees at-most-once execution, but it is not ideal: the memory holding the RPC ids and replies can grow indefinitely. A better alternative is a sliding window of remembered RPCs at the server. This approach requires the client to generate xids in a strict sequence, i.e. 0, 1, 2, 3, ... When can the server safely forget about a received RPC and its reply, i.e. slide the window forward?
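
One plausible answer, sketched below as self-contained code for a single client (identified elsewhere by its clt_nonce): the server can forget an RPC once the client acknowledges, via the xid_rep field, that it has received the reply. All names and types here are ours, and the sketch assumes xid_rep means "the client has received every reply with xid at or below this value"; verify that reading against the RPC code:

   // Sketch only: sliding-window duplicate suppression for one client.
   #include <map>
   #include <string>
   #include <utility>

   enum rpcstate { NEW, INPROGRESS, DONE, FORGOTTEN };

   // Per-client window: xid -> (handler finished?, remembered reply).
   typedef std::map<int, std::pair<bool, std::string> > reply_window;

   // Called for every incoming request from this client.
   rpcstate check(reply_window &w, int xid, int xid_rep, std::string &reply_out)
   {
     // Slide the window: the client has the replies for all xids <= xid_rep,
     // so it will never re-transmit those requests and their remembered
     // replies can be dropped.
     w.erase(w.begin(), w.upper_bound(xid_rep));

     if (xid <= xid_rep)
       return FORGOTTEN;              // already acknowledged; drop silently

     reply_window::iterator it = w.find(xid);
     if (it == w.end()) {
       w[xid] = std::make_pair(false, std::string());  // record as in-progress
       return NEW;                    // first copy: caller runs the handler
     }
     if (!it->second.first)
       return INPROGRESS;             // duplicate while handler runs: drop
     reply_out = it->second.second;   // duplicate of a finished RPC:
     return DONE;                     // caller re-sends the remembered reply
   }

   // Called once the handler finishes, to cache its marshalled reply.
   void remember_reply(reply_window &w, int xid, const std::string &reply)
   {
     w[xid] = std::make_pair(true, reply);
   }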

Once you have figured out the basic design for at-most-once delivery, go ahead and implement it in rpc.cc (rpc.cc is the only file you should be modifying for this step). Hints: you need to add code in two places: rpcs::add_reply, to remember the RPC reply values, and rpcs::checkduplicate_and_update, to eliminate duplicate xids and update the information that lets the server safely forget about certain received RPCs.
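
Continuing the sketch above, the server-side flow around those two hooks could look roughly like this; run_handler and send_reply are stand-ins for the library's real marshalling and connection code, not functions that exist in rpc.cc:

   // Sketch only: how a dispatch path could consume the four states.
   #include <string>

   std::string run_handler(const std::string &req);   // stand-in
   void send_reply(int xid, const std::string &rep);  // stand-in

   void handle_request(reply_window &w, int xid, int xid_rep,
                       const std::string &req)
   {
     std::string saved;
     switch (check(w, xid, xid_rep, saved)) {
     case NEW: {                      // first copy: execute and remember
       std::string rep = run_handler(req);
       remember_reply(w, xid, rep);
       send_reply(xid, rep);
       break;
     }
     case DONE:                       // duplicate of a finished RPC:
       send_reply(xid, saved);        // re-send the old reply, don't re-run
       break;
     case INPROGRESS:                 // handler still executing: drop it
     case FORGOTTEN:                  // client already acknowledged: drop it
       break;
     }
   }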

After you are done with step two, test your RPC implementation with ./rpctest and RPC_LOSSY set to 0 ("export RPC_LOSSY=0"). Make sure ./rpctest passes all tests. Then test your lock server again in a lossy environment by restarting your lock_server and lock_tester after setting RPC_LOSSY to 5 ("export RPC_LOSSY=5").

Handin procedure

Prepare a tar file by executing these commands:
% cd ~/ds-class/lab
% make clean
% cd ..
% tar czvf yfs-lab1.tgz lab/

That should produce a file called yfs-lab1.tgz in your ~/ds-class/ directory. Go to the submit site to upload yfs-lab1.tgz.

You will receive full credit if your software passes the same tests we gave you when we run your software on our machines.