In this series of labs, you will implement a fully functional distributed file server with the following architecture:
The architecture involves multiple file clients, labeled yfs above, each running on a different machine. Because we use the FUSE user-level file system toolkit, yfs will appear to local applications as a mounted file system. Instead of storing file system data locally, all yfs clients store data with a single extent server, which allows data to be shared across multiple yfs clients.
This architecture is appealing because, in principle, it shouldn't slow down very much as you add client hosts. Most of the complexity is in the yfs program, so new clients make use of their own CPUs rather than competing with existing clients for the server's CPU. The extent server is shared, but hopefully it's simple and fast enough to handle a large number of clients. In contrast, a conventional NFS server is pretty complex (it has a complete file system implementation) so it's more likely to be a bottleneck when shared by many NFS clients. The only fly in the ointment is that the yfs servers need a locking service to avoid inconsistent updates.
In this lab, you'll implement the lock service. The core logic of the lock service is quite simple: it consists of two modules, the lock client and the lock server, that communicate via RPCs. A client requests a specific lock from the lock server by sending an acquire request. The lock server grants each lock to one client at a time. When a client is done with a granted lock, it sends a release request to the server so that the server can grant the lock to another client that has also requested it.
In addition to implementing the lock service, you'll also augment the provided RPC library to ensure at-most-once execution by eliminating duplicate RPC requests. Duplicates arise because the RPC system must re-transmit lost RPCs over lossy network connections, and such re-transmissions often lead to duplicate RPC delivery when the original request turns out not to be lost, or when the server reboots. Duplicate RPC delivery, when not handled properly, often violates application semantics. Here's an example of duplicate RPCs causing incorrect lock server behavior. A client sends an acquire request for lock x; the server grants the lock; the client releases the lock with a release request; a duplicate of the original acquire request then arrives at the server; the server grants the lock again; but the client will never release the lock a second time, since from its point of view the second acquire was just a duplicate. Such behavior is clearly incorrect.
For this lab, you should be able to use any Linux/BSD/MacOS machines. First, create a directory for your labs (in our example, we'll call it "yfs"), and download the lab1 code skeleton from http://www.news.cs.nyu.edu/~jinyang/fa08/labs/yfs-lab1.tgz and RPC library from http://www.news.cs.nyu.edu/~jinyang/fa08/labs/yfs-rpc.tgz.
% mkdir yfs
% cd yfs
% wget -nc http://www.news.cs.nyu.edu/~jinyang/fa08/labs/yfs-lab1.tgz
% tar xzvf yfs-lab1.tgz
% wget -nc http://www.news.cs.nyu.edu/~jinyang/fa08/labs/yfs-rpc.tgz
% tar xzvf yfs-rpc.tgz
In directory l1/, we provide you with a skeleton RPC-based lock server, a lock client interface, a sample application that uses the lock client interface, and a tester. Now compile and start up the lock server, giving it a port number on which to listen to RPC requests. You'll need to choose a port number that other programs aren't using. For example:
% cd l1
% make
% ./lock_server 3772

Now open a second terminal on the same machine and run lock_demo, giving it the port number on which the server is listening:
% cd ~/yfs/l1
% ./lock_demo 3772
stat request from clt 16283 for lock a
stat returned 0
%
lock_demo asks the server for the number of times a given lock has been acquired, using the stat RPC that we have provided. In the skeleton code, this always returns 0. The lock client skeleton does not do anything yet for the acquire and release operations; similarly, the lock server does not implement any form of lock granting or releasing. Your job in this lab is to fill in the client and server functions and the RPC protocol between the two processes.
In addition to being correct, we also demand that the RPC handlers at lock_server run to completion without blocking. That is, a server thread should not block on condition variables or remote RPCs. A thread may wait to acquire a mutex, as long as that mutex is never held by another thread across an RPC, and once it has acquired the mutex it should run to completion. This requirement ensures that your lock server can be replicated correctly in a later lab.
We will use the program lock_tester to check the correctness invariant, i.e. whether the server grants each lock just once at any given time, under a variety of conditions. You run lock_tester with the same arguments as lock_demo. A successful run of lock_tester (with a correct lock server) will look like this:
% ./lock_tester 3772
simple lock client
acquire a release a acquire a release a
acquire a acquire b release b release a
test2: client 0 acquire a release a
test2: client 2 acquire a release a
. . .
./lock_tester: passed all tests successfully

If your lock server isn't correct, lock_tester will print an error message. For example, if lock_tester complains "error: server granted a twice!", the problem is probably that lock_tester sent two simultaneous requests for the same lock, and the server granted the lock twice (once for each request). A correct server would have sent one grant, waited for a release, and only then sent a second grant.
Your second job is to augment the RPC library in directory yfs/rpc to guarantee at-most-once execution. We simulate lossy networks on a local machine by setting the environment variable RPC_LOSSY. If you can pass both the RPC system tester and lock_tester, you are done. Here's a successful run of both testers:
% ./rpctest
simple test
. . .
rpctest OK
% killall lock_server
% export RPC_LOSSY=5
% ./lock_server 3772 &
% ./lock_tester 3772
simple lock client
acquire a release a acquire a release a
. . .
./lock_tester: passed all tests successfully
For this lab, your lock server and RPC augmentation must pass both rpctest and lock_tester; you should make sure they pass several times in a row to catch rare bugs. You should only modify the files rpc.{cc,h}, lock_client.{cc,h}, lock_server.{cc,h} and lock_smain.cc. We will test your code with our own copy of the rest of the source files and testers. You are free to add new files to the directory as long as the Makefile compiles them appropriately, but you should not need to.
For this lab, you will not have to worry about server failures or client failures. You also need not be concerned about security such as malicious clients releasing locks that they don't hold.
The RPC library's source code is in directory yfs/rpc. To use it, the lock_server creates an RPC server object (rpcs) listening on a port and registers various RPC handlers (see the example in lock_smain.cc). The lock_client creates an RPC client object (rpcc), binds it to the lock_server's address (127.0.0.1) and port, and invokes RPC calls (see the example in lock_client.cc).
Each RPC procedure is identified by a unique procedure number. We have defined all the RPC numbers you will need in lock_protocol.h: acquire, release, subscribe, grant, stat. However, you must still register handlers for these RPCs with the RPC server object.
You can learn how to use the RPC system by studying the provided stat call implementation across lock_client and lock_server. All RPC procedures have a standard interface with x+1 arguments (x must be less than 6) and an integer return value (see the example in the lock_server::stat function). The last argument, a reference to an arbitrary type, is always there so that an RPC handler can use it to return results (e.g. lock_server::stat returns the number of acquires for a lock). The RPC handler also returns an integer status code; the convention is to return zero for success and positive numbers for various errors. If the RPC fails in the RPC library itself (e.g. timeouts), the RPC client gets a negative return value instead. The various reasons for RPC failures in the RPC library are defined in rpc.h under rpc_const.
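To make the pattern concrete, here is a rough sketch of the stat round trip. It is modeled on the provided code rather than copied from it, so treat the exact signatures and names (lock_protocol::status, lock_protocol::OK, clt_id, the reg() call) as assumptions and consult lock_client.cc, lock_server.cc and lock_smain.cc for the authoritative versions.

    // Client side (lock_client.cc): call() marshalls the arguments, sends the
    // request, and unmarshalls the reply into the last (reference) argument.
    lock_protocol::status
    lock_client::stat(std::string name)
    {
      int r;
      // clt_id stands in for whatever client identifier the skeleton passes
      // along (it shows up as "clt 16283" in the lock_demo output).
      int ret = cl->call(lock_protocol::stat, clt_id, name, r);
      // ret is the RPC status; r holds the handler's result on success.
      return r;
    }

    // Server side (lock_server.cc): the handler fills in r and returns a status.
    lock_protocol::status
    lock_server::stat(int clt, std::string name, int &r)
    {
      printf("stat request from clt %d for lock %s\n", clt, name.c_str());
      r = 0;                        // the skeleton has no acquire count yet
      return lock_protocol::OK;
    }

    // Registration (lock_smain.cc): bind the procedure number to the handler.
    //   rpcs server(port);
    //   server.reg(lock_protocol::stat, &ls, &lock_server::stat);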
The RPC system must know how to marshall arbitrary objects into a stream of bytes to transmit over the network and unmarshall them at the other end. The RPC library already provides marshall/unmarshall methods for standard C++ types such as std::string, int and char (see file rpc/rpc.cc). If your RPC call uses argument types other than these, you must provide your own marshalling methods. You should be able to complete this lab with the existing marshall/unmarshall methods.
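You should not need this for lab 1, but for illustration, marshalling a hypothetical compound type would follow the same operator pattern the library uses for its built-in types. This is only a sketch; check the marshall and unmarshall classes in rpc/rpc.{cc,h} for the real interface.

    // Hypothetical compound argument type -- not part of the lab skeleton.
    struct lock_request {
      int clt;              // client id
      std::string name;     // lock name
    };

    // Pack the fields by reusing the built-in int/string marshallers.
    marshall &
    operator<<(marshall &m, const lock_request &req)
    {
      m << req.clt << req.name;
      return m;
    }

    // Unpack the fields in exactly the same order they were packed.
    unmarshall &
    operator>>(unmarshall &u, lock_request &req)
    {
      u >> req.clt >> req.name;
      return u;
    }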
The lock server can manage many distinct locks. Each lock has a name of type std::string. The set of locks is open-ended: if a client asks for a lock that the server has never seen before, the server should create the lock and grant it to the client. When multiple clients simultaneously request a given lock, the lock server must grant the lock to the requesting clients one at a time. We require that all of lock_server's RPC handlers be non-blocking. This requirement ensures that your lock server can be replicated correctly in a later lab.
You will need to modify the lock server skeleton implementation in files lock_server.{cc,h} to accept acquire/release RPCs from the lock client and to keep track of the state of the locks. Here is our suggested implementation plan; convince yourself that all of lock_server's RPC handlers are non-blocking under this plan.
On the server, a lock can be in one of two states: free or locked.
The RPC handler for acquire puts the client id on the corresponding lock's waitqueue. If the lock state is free, it puts the lock name on the granter's workqueue and signals the granter thread to run.
The RPC handler for release changes the lock state to free. If the waitqueue for this lock is non-empty, it puts the lock name on the granter's workqueue and signals the granter thread to run.
The granter thread does the following in a loop: for each lock on its workqueue, it checks whether the lock state is free; if so, it sends a grant RPC to the first client on the lock's waitqueue and changes the lock state to locked. When the workqueue is empty, the granter waits on its condition variable.
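As a concrete starting point, the server-side state suggested above might look roughly like the following. The type and field names are purely illustrative, not part of the provided skeleton:

    #include <map>
    #include <list>
    #include <string>
    #include <pthread.h>

    // Hypothetical per-lock record kept by the lock server.
    struct lock_entry {
      enum state_t { FREE, LOCKED } state;
      std::list<int> waitqueue;           // client ids waiting for this lock
    };

    // Hypothetical shared state for the suggested plan.
    struct server_state {
      std::map<std::string, lock_entry> locks;   // every lock ever requested
      std::list<std::string> workqueue;          // lock names for the granter thread
      pthread_mutex_t m;                         // protects all of the above
      pthread_cond_t granter_cv;                 // the granter sleeps here when idle
    };

The RPC handlers only touch these structures under the mutex and signal granter_cv; only the granter thread issues grant RPCs, so no handler ever blocks on a remote call.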
The class lock_client is a client-side interface to the lock server (see files lock_client.{cc,h}). The interface provides acquire() and release() functions that are supposed to take care of sending and receiving RPCs. Multiple threads in the client program can use the same lock_client object and request the same lock name. See lock_demo.cc for an example of how an application uses the interface. Note that a basic requirement of the client interface is that lock_client::acquire must not return until the lock has been granted.
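In rough outline, an application thread uses the interface like this (a sketch of the calling pattern only; the constructor argument is a placeholder for whatever destination lock_demo actually passes in):

    lock_client lc(dst);   // dst: the lock server's address/port, as in lock_demo.cc
    lc.acquire("a");       // must not return until the server grants lock "a"
    // ... work that must be done while holding lock "a" ...
    lc.release("a");       // lets the server grant "a" to another client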
Here is our suggested implementation plan for lock_client:
On the client, a lock can be in one of several states: absent (this client does not hold the lock and has not asked the server for it), acquiring (an acquire RPC has been sent but the grant has not arrived yet), free (the server has granted the lock to this client and no local thread is using it), or locked (a local thread is holding the lock).
The basic acquire() logic is like this:
check the state of the lock in a loop:
if free, change the lock state to locked, break out of the loop, and proceed (i.e. return from acquire);
else if absent, send an acquire RPC to the lock server, change the lock state to acquiring, and wait on the condition variable associated with the lock;
else (either acquiring or locked), wait on the condition variable associated with the lock.
The release() logic is like this: send a release RPC to the lock server and change the lock state to absent. If any threads are waiting on the lock, wake one up by signaling the corresponding condition variable.
The lock client also listens on a local port using an RPC server object (see the rpcs member of the lock_client class in lock_client.h) to receive grant RPC requests from the lock server (it tells the lock server which port it is listening on by sending the port number in a subscribe RPC request). The RPC handler for grant() is like this: change the lock state to free and wake up a thread waiting for the lock (if there is one) by signaling the corresponding condition variable.
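Putting the client-side plan together, acquire() might be structured roughly as follows. This is only a sketch: the member names (the locks map, the mutex m, the per-lock condition variable cv, the client identifier id) and the exact call() signature are assumptions, not the skeleton's actual code.

    // Hypothetical sketch of lock_client::acquire under the suggested plan.
    lock_protocol::status
    lock_client::acquire(std::string name)
    {
      pthread_mutex_lock(&m);
      while (1) {
        if (locks[name].state == FREE) {
          locks[name].state = LOCKED;        // claim the granted lock for this thread
          break;
        } else if (locks[name].state == ABSENT) {
          locks[name].state = ACQUIRING;
          pthread_mutex_unlock(&m);          // never hold the mutex across an RPC
          int r;
          cl->call(lock_protocol::acquire, id, name, r);
          pthread_mutex_lock(&m);            // loop around and re-check the state
        } else {                             // ACQUIRING or LOCKED
          pthread_cond_wait(&locks[name].cv, &m);   // re-check the state on wake-up
        }
      }
      pthread_mutex_unlock(&m);
      return lock_protocol::OK;
    }

Note that after the acquire RPC returns, the loop re-checks the state instead of waiting unconditionally: if the grant RPC has already arrived and set the state to free, the thread claims the lock immediately rather than sleeping on a signal that has already been delivered.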
Both lock_client and lock_server's functions will be invoked by multiple threads concurrently. In particular, the RPC library always launches a new thread to invoke the RPC handler at the rpc server. Many different threads might also call lock_client's acquire() and release() functions simultaneously.
To protect access to shared data in the lock_client and lock_server, you need to use pthread mutexes. Please refer to the general tips for programming using threads. As the suggested implementation plan shows, you also need to use pthread condition variables to synchronize the actions of multiple threads. Condition variables go hand in hand with mutexes; please see here for more details on programming with pthreads.
For robustness, when a thread waiting on a condition variable wakes up, it should re-check the boolean predicate associated with the wake-up condition. This protects against spurious wake-ups from the pthread_cond_wait() and pthread_cond_timedwait() functions. The suggested logic described above lends itself to such an implementation (note how, on the lock_client, a thread that wakes up re-checks the state of the lock).
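The standard pattern looks like this (a generic sketch, not code from the lab skeleton):

    #include <pthread.h>

    pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    bool ready = false;                 // the predicate, protected by m

    void wait_until_ready()
    {
      pthread_mutex_lock(&m);
      while (!ready)                    // re-check the predicate after every wake-up
        pthread_cond_wait(&cv, &m);     // atomically releases m while sleeping
      // ... use the protected state here ...
      pthread_mutex_unlock(&m);
    }

    void make_ready()
    {
      pthread_mutex_lock(&m);
      ready = true;                     // change the predicate under the mutex
      pthread_cond_signal(&cv);         // then wake one waiter
      pthread_mutex_unlock(&m);
    }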
In this and later labs, we try to adhere to a simple (coarse-grained) locking convention: we acquire the subsystem/protocol lock at the beginning of a function and release it before returning. This convention works because we don't require atomicity across functions, and we don't share data structures between different subsystems/protocols. You will have an easier life by sticking to this convention.
Read the RPC source code in rpc/rpc.{cc,h} and try to grasp the overall structure of the RPC library on your own before reading the hints below.
The rpcc class implements the client side of the RPC library. At its core lies the rpcc::call1 function, which accepts a marshalled RPC request for transmission to the RPC server. We can see that call1 attaches additional RPC fields to each marshalled request:
// add RPC fields before req
m1 << clt_nonce << svr_nonce << proc << myxid << xid_rep_window.front() << req.str();

What is the purpose of each of these fields? (Hint: most of them will help you implement at-most-once delivery.) After call1 has finished preparing the final RPC request, it sits in a while(1) loop that repeatedly updates the timeout value for the next retransmission and waits for the corresponding RPC reply or a timeout.
The rpcs class implements the server side of the RPC library. It creates a separate thread (executing rpcs::loop) that continuously tries to read RPC requests from the underlying channel (e.g. a TCP connection). Once a request is read successfully, it spawns a new thread to dispatch the request to the registered RPC handler. The function rpcs::dispatch implements the dispatch logic. It extracts various RPC fields from the request, including the RPC procedure number, which is used to find the corresponding handler. These fields also provide enough information for you to ensure that the server eliminates all duplicate requests.
How do you ensure at-most-once delivery? A strawman approach is to make the server remember every unique RPC it has ever received. Each unique RPC is identified by its xid (unique within a client instance) together with its clt_nonce (which distinguishes client instances). In addition to the RPC ids, the server must also remember the actual values of the corresponding replies, so that it can re-send a (potentially lost) reply upon receiving a duplicate request without re-executing the RPC handler. This strawman guarantees at-most-once execution, but it is not ideal, since the memory holding the RPC ids and replies can grow indefinitely. A better alternative is to have the server keep a sliding window of remembered RPCs. Such an approach requires the client to generate xids in a strict sequence, i.e. 0, 1, 2, 3... When can the server safely forget about a received RPC and its response, i.e. slide the window forward?
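One way to picture the bookkeeping (the names below are illustrative, not the actual fields of the rpcs class): for each clt_nonce, the server keeps the replies for a window of xids and discards everything the client has acknowledged via the xid_rep value it sends with each request.

    #include <map>
    #include <string>

    // Hypothetical per-client reply window for at-most-once bookkeeping.
    struct reply_window {
      unsigned int xid_rep;                          // the client has received all replies with xid <= xid_rep
      std::map<unsigned int, std::string> replies;   // xid -> remembered (marshalled) reply
    };

    // The server would keep something like std::map<unsigned int, reply_window>,
    // keyed by clt_nonce. On a request carrying (clt_nonce, xid, xid_rep),
    // a plausible policy is:
    //   - if xid <= the stored xid_rep, the client has already seen the reply: drop the request;
    //   - if a reply for xid is remembered, re-send it without executing the handler again;
    //   - otherwise the request is new: execute the handler, then remember its reply;
    //   - finally, erase every remembered reply with xid <= the request's xid_rep,
    //     since the client will never retransmit those requests.
    //   (A duplicate can also arrive while the original is still executing;
    //    you will need to decide how to handle that case.)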
Once you have figured out the basic design for at-most-once delivery, go ahead and implement it in rpc.cc (rpc.cc is the only file you should be modifying). Hints: you need to add code in three places: the rpcc::rpcc constructor, to create a thread that handles retransmissions; rpcs::add_reply, to remember RPC reply values; and rpcs::checkduplicate_and_update, to eliminate duplicate xids and to update the information that lets the server safely forget about certain received RPCs.
After you are done with step two, test your rpc implementation with RPC_LOSSY set to zero first ("export RPC_LOSSY=0"). Make sure ./rpctest passes all tests. Test with rpctest again after enabling loss ("export RPC_LOSSY=5"). Once your rpc implementation passes all these tests, test your lock server again in a lossy environment by restarting your lock_server and lock_tester after setting "RPC_LOSSY=5".
Tar your l1/ and rpc/ files together as follows:
cd ~/yfs/l1
make clean
cd ..
tar czvf yfs-lab1.tgz rpc/ l1/

Go to the submit site to upload yfs-lab1.tgz.