In labs 7 and 8, you will replicate the lock service using the replicated state machine approach. See Schneider's RSM paper for a good, but non-required, reference. In the replicated state machine approach, one machine is the master; the master receives requests from clients and executes them on all replicas in the same order.
When the master fails, any of the replicas can take over its job, because they all should have the same state as the failed master. One of the key challenges is ensuring that everyone agrees on which replica is the master and which of the slaves are alive, despite arbitrary sequences of crashes or network partitions. We use Paxos to reach such an agreement.
In this lab, you will implement Paxos and use it to agree to a sequence of membership changes (i.e., view changes). We will implement the replicated lock server in lab 8. We have modified lock_smain.cc in this lab to start the RSM instead of the lock server; however, we will not actually replicate locks until lab 8.
When you complete this lab and the next you will have a replicated state machine that manages a group of lock servers. You should be able to start new lock servers, which will contact the master and ask to join the replica group. Nodes can also be removed from the replica group when they fail. The set of nodes in the group at a particular time is a view, and each time the view changes, you will run Paxos to agree on the new view.
The design we have given you consists of three layered modules. The RSM and config layers make downcalls to tell the layers below them what to do. The config and Paxos modules also make upcalls to the layers above them to inform them of significant events (e.g., Paxos agreed to a value, or a node became unreachable).
Each module has threads and internal locks. As described above, a thread may call down through the layers. For instance, the RSM could tell the config module to add a node, and the config module tells Paxos to agree to a new view. When Paxos finishes, a thread will invoke an upcall to inform higher layers of the completion. To avoid deadlock, we suggest that you use the rule that a module releases its internal locks before it upcalls, but can keep its locks when calling down.
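A minimal sketch of this rule, using pthread mutexes and illustrative class names (the real interfaces are in config.{cc,h} and rsm.{cc,h}; nothing here is the required code), might look like:

// Sketch only: class and method names are illustrative. The point is the locking
// discipline: mutate internal state while holding the module's mutex, but release
// the mutex before making an upcall into the layer above.
#include <pthread.h>

class config_view_change {          // upcall interface implemented by the RSM layer
 public:
  virtual void commit_change() = 0;
  virtual ~config_view_change() {}
};

class cfg_sketch {
  pthread_mutex_t cfg_mutex;
  config_view_change *vc;           // the layer above (RSM)
 public:
  cfg_sketch(config_view_change *_vc) : vc(_vc) {
    pthread_mutex_init(&cfg_mutex, NULL);
  }
  void paxos_commit_upcall() {      // invoked when Paxos agrees on a new view
    pthread_mutex_lock(&cfg_mutex);
    // ... update the locally cached view under the lock ...
    pthread_mutex_unlock(&cfg_mutex);   // release internal locks before upcalling
    vc->commit_change();                // upcall into the RSM layer with no locks held
  }
};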
Begin by initializing your Lab 7 branch with your implementation from Lab 6.
% cd ~/lab
% git commit -am 'my solution to lab6'
Created commit ...
% git pull
remote: Generating pack...
...
% git checkout -b lab7 origin/lab7
Branch lab7 set up to track remote branch refs/remotes/origin/lab7.
Switched to a new branch "lab7"
% git merge lab6
This will add new files (paxos_protocol.h, paxos.{cc,h}, log.{cc,h}, rsm_tester.pl, config.{cc,h}, rsm.{cc,h}, and rsm_protocol.h) to your lab/ directory and update the makefile from your previous lab. It will also incorporate minor changes into your lock_smain.cc to initialize the RSM module when the lock server starts. Note that since the RSM and the lock server both bind on the same port, this will actually disable your lock server until lab 8, unless you change the relevant line in lock_smain.cc back. The lock server now takes two command-line arguments: the port the master binds to and the port the lock server you are starting should bind to.
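For example (the port numbers here are made up, and rsm_tester.pl normally starts the servers for you), the first node, which acts as the master, and a second node might be started as:

% ./lock_server 3772 3772
% ./lock_server 3772 3773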
In rsm.{cc,h}, we have provided you with code to set up the appropriate RPC handlers and manage recovery in this lab. You will need to write the code to handle nodes joining and leaving.
In files paxos.{cc,h}, you will find a sketch implementation of the acceptor and proposer classes that will use the Paxos protocol to agree on view changes. The file paxos_protocol.h defines the suggested RPC protocol between instances of Paxos running on different replicas, including structures for arguments and return types, and marshall code for those structures. The lion's share of the work in this lab is implementing Paxos.
The files log.{cc,h} provide a full implementation of a log class, which should be used by your acceptor and proposer classes to log important Paxos events to disk. Then, if the node fails and later re-joins, it has some memory about past views of the system. Please do not make any changes to this class, as we will use our own original versions of these files during testing.
config.cc maintains views using Paxos. You will need to understand how it interacts with the Paxos and RSM layers, but you should not need to make any changes to it for this lab. (You may do so if you wish, however.)
In the next lab we will test if the replicated lock service maintains the state of replicated locks correctly, but in this lab we will just test if view changes happen correctly. The lab tester rsm_tester.pl will automatically start several lock servers, kill and restart some of them, and check that you have implemented the Paxos protocol and used it correctly.
There are two classes that together implement the Paxos protocol: acceptor and proposer. Each replica runs both classes. The proposer class leads the Paxos protocol by proposing new values and sending requests to all replicas. The acceptor class processes the requests from the proposer and sends responses. The method proposer::run() kicks off a round of the protocol to get the replicas to agree on a value; in this lab, the value is the next view.
The config module performs view changes among the set of participating nodes. The first view of the system is specified manually. Subsequent view changes rely on Paxos to agree on a unique next view to displace the current view.
When the system starts from scratch, the first node creates view 1 containing only itself, i.e., view_1={1}. When node 2 joins after the first node, node 2's RSM module contacts node 1 and fetches view 1 from it (in which node 1 is the only member). Node 2 then asks its config module to add itself to view 1. The config module will use Paxos to propose to the nodes in view_1={1} a new view_2 containing nodes 1 and 2. When Paxos succeeds, view_2 is formed, i.e., view_2={1,2}. When node 3 joins, its RSM module will download the latest view from the first node (view 2) and will then attempt to propose to the nodes in view 2 a new view_3={1,2,3}. And so on.
The config module will also initiate view changes when it discovers that some nodes in the current view are not responding. In particular, the node with the smallest id periodically sends heartbeat RPCs to all others (and all other servers periodically send heartbeats to the node with the smallest id). If a heartbeat RPC times out, the config module calls the proposer's run() method to start a round of Paxos that proposes a new view excluding the unresponsive node, as sketched below.
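A minimal sketch of that reaction follows; the interface names are illustrative (the actual proposer entry point and its signature are in paxos.{cc,h}), but the idea is to propose the next view number with the unresponsive node removed:

// Illustrative sketch only: assumes the proposer exposes run(instance, members),
// matching the pseudocode later in this lab; the real signature is in paxos.{cc,h}.
#include <string>
#include <vector>
#include <algorithm>

class proposer_iface {
 public:
  virtual bool run(unsigned instance, std::vector<std::string> newview) = 0;
  virtual ~proposer_iface() {}
};

// Called by the config module when a heartbeat RPC to `dead` times out.
void on_heartbeat_timeout(proposer_iface *pro, unsigned current_vid,
                          std::vector<std::string> mems, const std::string &dead) {
  std::vector<std::string> newview = mems;                    // current members...
  newview.erase(std::remove(newview.begin(), newview.end(), dead),
                newview.end());                               // ...minus the dead node
  pro->run(current_vid + 1, newview);   // next instance number == next view number
}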
The proposer keeps track of whether the current view is stable or not (using the proposer::stable variable). If the current view is stable, there are no ongoing Paxos view change attempts by this node. When the current view is not stable, the node is in the middle of initiating the Paxos protocol.
The acceptor logs to disk important Paxos events as well as a complete history of all agreed values. At any time a node can reboot, and when it re-joins, it may be many views behind. Unless the node brings itself up to date on the current view, it won't be able to participate in Paxos. By remembering all views, the other nodes can bring this re-joined node up to date.
The Paxos Made Simple paper describes a protocol that agrees on a value and then terminates. Since we want to run another instance of Paxos every time there is a view change, we need to ensure that messages from different instances are not confused. We do this by adding instance numbers (which are not the same as proposal numbers) to all messages. Since we are using Paxos to agree on view changes, the instance numbers in our use of Paxos are the same as the view numbers in the config module.
Paxos can't guarantee that every node learns the chosen value right away; some of them may be partitioned or crashed. Therefore, some nodes may be behind, stuck in an old instance of Paxos while the rest of the system has moved on to a new instance. If a node's acceptor gets an RPC request for an old instance, it should reply to the proposer with a special RPC response (set oldinstance to true). This response informs the calling proposer that it is behind and tells it what value was chosen for that instance.
Below is the pseudocode for Paxos. The acceptor and proposer skeleton classes contain member variables, RPCs, and RPC handlers corresponding to this code. Except for the additions to handle instances as described above, it mirrors the protocol described in the paper.
proposer run(instance, v):
 choose n, unique and higher than any n seen so far
 send prepare(instance, n) to all servers including self
 if prepare_ok(n_a, v_a) from majority:
   v' = v_a with highest n_a; choose own v otherwise
   send accept(instance, n, v') to all
   if accept_ok(n) from majority:
     send decided(instance, v') to all

acceptor state:
 must persist across reboots
 n_h (highest prepare seen)
 instance_h (highest instance accepted)
 n_a, v_a (highest accept seen)

acceptor prepare(instance, n) handler:
 if instance <= instance_h
   reply oldinstance(instance, instance_value)
 else if n > n_h
   n_h = n
   reply prepare_ok(n_a, v_a)

acceptor accept(instance, n, v) handler:
 if n >= n_h
   n_a = n
   v_a = v
   reply accept_ok(n)

acceptor decide(instance, v) handler:
 paxos_commit(instance, v)
For a given instance of Paxos, potentially many nodes can make proposals, and each of these proposals has a unique proposal number. When comparing different proposals, the highest proposal number wins. To ensure that each proposal number is unique, each proposal consists of a number and the node's identifier. We provide you with a struct prop_t in paxos_protocol.h that you should use for proposal numbers; we also provide the > and >= operators for the class.
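The following is only an illustration of the idea; use the prop_t and operators already provided in paxos_protocol.h rather than defining your own (the field names here are assumptions). The comparison looks at the number first and breaks ties with the node identifier:

// Illustration of unique, totally ordered proposal numbers. Do not copy this into
// your lab: prop_t and its > and >= operators are already in paxos_protocol.h.
#include <string>

struct prop_sketch {
  unsigned n;        // monotonically increasing proposal number
  std::string m;     // identifier of the proposing node, used to break ties
};

inline bool operator>(const prop_sketch &a, const prop_sketch &b) {
  return a.n > b.n || (a.n == b.n && a.m > b.m);
}

inline bool operator>=(const prop_sketch &a, const prop_sketch &b) {
  return a.n > b.n || (a.n == b.n && a.m >= b.m);
}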
Each replica must log certain changes to its Paxos state (in particular the n_a, v_a, and n_h fields), as well as log every agreed value. The provided log class does this for you; please use it without modification, as the test program depends on its output being in a particular format.
In an ideal implementation of Paxos, the leader would multicast its messages to all the members of the current view at the same time. To simplify your implementation and make debugging easier, it's acceptable to send RPCs one at a time. Make sure you add the extra parameter rpcc::to(1000) to the end of your RPC calls, or the RPC library will spend a long time attempting to contact crashed nodes.
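For example, a prepare phase that contacts the members one at a time might look roughly like the sketch below. It assumes the rpcc client class from the earlier labs' RPC library, a preparereq RPC number with preparearg/prepareres structs in paxos_protocol.h, and a handler that takes a single argument; treat those details as assumptions and adapt the call to the protocol you actually use (you may also want to cache connections instead of re-binding on every call).

// Rough sketch of sending prepare RPCs one node at a time with a 1000 ms timeout.
// The RPC number and argument list are assumptions; adjust them to match
// paxos_protocol.h and your own design.
#include <string>
#include <vector>
#include "rpc.h"               // adjust the include path to match your tree
#include "paxos_protocol.h"

static bool
send_prepares(const std::vector<std::string> &nodes,      // "host:port" strings
              const paxos_protocol::preparearg &a,
              std::vector<paxos_protocol::prepareres> &replies)
{
  for (unsigned i = 0; i < nodes.size(); i++) {
    sockaddr_in dst;
    make_sockaddr(nodes[i].c_str(), &dst);
    rpcc cl(dst);
    if (cl.bind() < 0)                                     // couldn't reach the node
      continue;
    paxos_protocol::prepareres r;
    if (cl.call(paxos_protocol::preparereq, a, r,
                rpcc::to(1000)) == paxos_protocol::OK)     // 1000 ms timeout
      replies.push_back(r);
  }
  return replies.size() > nodes.size() / 2;                // did a majority respond?
}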
% ./rsm_tester.pl 0 1 2 3 4 5 6 7
test0...
...
test1...
...
test2...
...
test3...
...
test4...
...
test5...
...
test6...
...
test7...
...
tests done
OK
Important: If rsm_tester.pl fails in the middle of a test, the remaining lock_server processes are not killed and the log files are not cleaned up (so you can debug the causes). Make sure you run 'killall lock_server; rm -f *.log' to clean up the lingering processes and log files before running rsm_tester.pl again.
We guide you through a series of steps to get this lab working incrementally.
Implement the Paxos protocol given in the pseudocode above and log view changes to disk, but do not worry about failures yet. Also implement the RSM code needed for nodes to join: commit_change() and joinreq().
When starting from scratch (with blank on-disk logs), the first node initializes view 1 to contain only itself (without going through Paxos) and logs view 1 to disk. When the second node starts, it also initializes view 1 to contain only the first node (as specified in its constructor's argument) and logs view 1 to disk. However, since the second node does not find itself in view 1, it sends a joinreq RPC to the master, and the master asks its proposer to propose a view 2 that includes the second node, roughly as sketched below. And so on for the third node.
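A very rough sketch of that join path follows; the names and signatures are illustrative assumptions, not the interfaces you are required to use (the real joinreq RPC and the config layer's entry points are declared in rsm_protocol.h, rsm.h, and config.h):

// Illustrative sketch of the join flow described above. A node that is absent from
// the view it fetched sends joinreq to the master, and the master asks its config
// layer to propose a new view that includes the joiner.
#include <string>
#include <vector>
#include <algorithm>

struct cfg_sketch {
  unsigned vid;                     // current view number
  std::vector<std::string> mems;    // members of the current view
  bool add(const std::string &m) {
    // In the real lab this kicks off Paxos for view vid+1; stubbed out here.
    return true;
  }
};

// Master-side handling of a join request from `joiner`.
bool handle_joinreq(cfg_sketch &cfg, const std::string &joiner) {
  if (std::find(cfg.mems.begin(), cfg.mems.end(), joiner) != cfg.mems.end())
    return true;                    // already a member of the current view
  return cfg.add(joiner);           // propose view vid+1 = current members + joiner
}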
Next, fill in the Paxos implementation. Try to follow the pseudocode provided above, and use the RPC protocol we provide in paxos_protocol.h. Note that though the pseudocode shows different types of responses to each kind of RPC, our protocol combines these responses into one type of return structure. For example, the prepareres struct can act as a prepare_ok, an oldinstance, or a reject message, depending on the situation.
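For example, the proposer-side handling of the combined reply might look like the sketch below. The field names are assumptions (check the actual prepareres definition in paxos_protocol.h); the point is the three-way branching:

// Sketch of interpreting a combined prepare reply: oldinstance, prepare_ok, or reject.
// Field names are illustrative; use the prepareres actually defined in paxos_protocol.h,
// which also carries the accepted proposal number n_a alongside v_a.
#include <string>

struct prepareres_sketch {
  bool oldinstance;     // this instance was already decided; v_a holds the chosen value
  bool accept;          // prepare_ok if true, reject if false
  std::string v_a;      // highest accepted value (or the decided value for oldinstance)
};

enum prepare_outcome { P_OLDINSTANCE, P_OK, P_REJECT };

prepare_outcome classify(const prepareres_sketch &r) {
  if (r.oldinstance)
    return P_OLDINSTANCE;   // learn r.v_a for that instance and abandon this round
  if (r.accept)
    return P_OK;            // count toward the majority; track the highest (n_a, v_a)
  return P_REJECT;          // someone has already seen a higher proposal number
}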
Whenever Paxos has successfully agreed on a new view, log the new view to disk. We have provided acceptor::commit_wo, which calls the loginstance() method (in log.cc) for you.
The log class writes its content to a file in the current directory called paxos-[port].log. Note that rsm_tester.pl will remove these logs when a test finishes successfully, unless you comment out the second line of the cleanup() subroutine in the script. rsm_tester.pl also re-directs the stdout and stderr of your configuration server to lock_server-[arg1]-[arg2].log. You might find these logs useful for debugging.
Upon completing this step, you should be able to pass 'rsm_tester.pl 0'. This test starts three configuration servers one after another and checks that all servers go through the same three views.
Next you should handle the simple failure cases of a single configuration server failing. Recall that when dealing with failed nodes, the config module's heartbeat function calls proposer::run() to begin a Paxos protocol round.
Once this works, you should be able to run 'rsm_tester.pl 0 1 2'.
In addition to logging new views, modify your Paxos implementation to use the log class to log changes to n_h, n_a, and v_a when they are updated. Convince yourself why these three values must be logged to disk if we want to restart a previously crashed node correctly. We have provided the code to write and read logs in log.cc (see log::loghigh() and log::logprop()), so you just have to make sure to call the appropriate methods at the right times.
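In terms of the pseudocode above, the logging calls fit in roughly as follows (a sketch; check log.{cc,h} for the exact method signatures):

acceptor prepare(instance, n) handler:
 ...
 else if n > n_h
   n_h = n
   log n_h to disk (log::loghigh) before replying
   reply prepare_ok(n_a, v_a)

acceptor accept(instance, n, v) handler:
 if n >= n_h
   n_a = n
   v_a = v
   log n_a and v_a to disk (log::logprop) before replying
   reply accept_ok(n)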
Now you can run tests that involve restarting a node after it fails. In particular, you should be able to pass 'rsm_tester.pl 3 4'. In test 4, rsm_tester.pl starts three servers, kills the third server (the remaining two nodes should be able to agree on a new view), kills the second server (the remaining node tries to run Paxos, but cannot succeed since no majority of nodes are present in the current view), restarts the third server (which does not help with agreement, since the third server is not in the current view), kills the third server again, restarts the second server (now agreement can be reached), and finally restarts the third server.
Finally, you need to verify that your code handles some of the tricky corner cases that Paxos is supposed to deal with. Our test scripts do not test all possible corner cases, so you could still have a buggy Paxos implementation after this step, but you will have a good feel for the protocol.
In paxos.cc, we use two methods, breakpoint1() and breakpoint2(), to induce complex failures. The proposer::run function calls breakpoint1() just after completing Phase 1, but before starting Phase 2. Similarly, it calls breakpoint2() between Phases 2 and 3. The RSM layer runs a small RPC server that accepts the rsm_test_protocol RPCs defined in rsm_protocol.h. The tester uses rsm_tester to send RPCs that cause the server to exit at the respective breakpoint.
Test 5: This test starts three nodes and kills the third node. The first node will become the leader to initiate Paxos, but the test will cause it to crash at breakpoint 1 (at the end of Phase 1). Then the test will restart the killed third node, which together with the remaining node should be able to finish Paxos (ignoring the failed first node) and complete the view change successfully. The script will verify that the Paxos logs show the correct view changes.
Test 6: This test starts four nodes one by one and kills the fourth node. The first node initiates Paxos as the leader, but the test causes it to fail at breakpoint 2 (after Phase 2). When the fourth node re-joins the system, the rest of the nodes should finish agreeing on the view originally proposed by the first node before making a new view of their own.
Test 7: This test is identical to test 6, except that it kills all the remaining nodes after the first node exits. Then it restarts all slaves and checks that they first agree on the first node's proposed view before making a new view of their own.
By now, your code should reliably pass all required tests, i.e., 'rsm_tester.pl 0 1 2 3 4 5 6 7'.
% cd ~/ds-class/lab
% make clean
% rm core*
% rm *log
% cd ..
% tar czvf yfs-lab7.tgz lab/

That should produce a file called yfs-lab7.tgz in your ds-class/ directory. Go to the submit site to upload yfs-lab7.tgz.