Two-Phase Commit on a Persistent Key-Value Store with Data Replication
6.824 Distributed Systems Final Project
Xiangyao Yu and Shuotao Xu

Abstract

We implemented a distributed transaction processing system in which data is partitioned and mapped to different shards. All shards are replicated to enhance the availability and durability of the system. The database model is a simple key-value store inherited from Lab 4. The objectives of the project are 1) to process transactions using two-phase commit and guarantee atomic execution, and 2) to persistently store data and server status on disk to tolerate server failures. The requests of a transaction consist of GET, PUT, and ADD. To simplify the implementation, the only scenario in which a transaction aborts is when it contains an ADD request that results in a negative value. First, we implemented a coarse-grained locking scheme in which a transaction locks the groups it touches. To achieve better performance, we later refined our system with a fine-grained locking scheme in which a transaction locks the keys it touches. Second, we implemented test cases to verify the correctness of the system under different failure scenarios, including unreliable networks and server failures. Lastly, we benchmarked our system with synthetic workloads that vary with parameters such as the distribution of the number of groups a transaction accesses, the read/write ratio of a transaction, and the number of clients.

1. IMPLEMENTATION DETAILS

Our persistent key-value store maps each key of type string to a value of type integer. We confined values to integers to allow arithmetic operations on the key-value pairs. The data of our system is partitioned into different shards. A group of servers is responsible for a distinct subset of shards, and all the data within a group is replicated. Our system supports atomic transactions, where a transaction is an ordered group of requests such as PUTs and GETs. We also support persistency: all committed transactions are recorded on disk. Our persistent key-value store can model applications such as an inventory management system or a bank account system.

1.1 Transaction Support

Our system supports transactions consisting of PUT, GET, and ADD requests:

• a PUT updates the value associated with the key
• a GET returns the value associated with the key
• an ADD adds a value (possibly negative) to the previous value associated with the key

In particular, the transaction support of our system ensures the following properties.

Consistency. Our system allows multiple clients to send transactions concurrently. Specifically, our database enforces sequential consistency: the effects of concurrent transactions appear as if they occurred in some sequential order. To enable sequential consistency, the client locks the groups that a transaction touches before execution. Since there are multiple servers within each group, we ensure consistency inside the group with Paxos, which guarantees a global ordering of the operations within a group.

Atomicity. Transactions must also be atomic, i.e., either all of the requests of a transaction are executed or none are. We ensure atomicity with the two-phase commit protocol; see the sketch at the end of this subsection.

Persistency. All the effects of a committed transaction on the key-value store are written to disk.

Fault tolerance. Our system ensures that transaction processing is fault-tolerant. On the server side, the Paxos implementation from Lab 3 already tolerates network failures; in this project, we modified it so that Paxos can also handle server crashes. On the client side, we also record the client's transaction processing status on disk, so that after a crash the client knows at which point it was processing the transaction.
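To make the commit flow concrete, below is a minimal Go sketch of a client-side two-phase commit driver under the fine-grained scheme described above. The types and helpers (Request, groupsOf, prepare, commit, abort) are hypothetical stand-ins for the project's Paxos-replicated RPCs, not its actual API.

```go
package main

import "fmt"

// Request types supported by a transaction (illustrative names).
type OpType int

const (
	GET OpType = iota
	PUT
	ADD
)

type Request struct {
	Op    OpType
	Key   string
	Value int // ignored for GET
}

// The helpers below stand in for the real Paxos-replicated RPCs;
// their signatures are assumptions made for this sketch.
func groupsOf(reqs []Request) []int64          { return []int64{100, 101} }
func prepare(gid, txn int64, r []Request) bool { return true }
func commit(gid, txn int64)                    {}
func abort(gid, txn int64)                     {}

// runTransaction drives one transaction through two-phase commit.
func runTransaction(txnID int64, reqs []Request) bool {
	gids := groupsOf(reqs) // distinct replica groups touched

	// Phase 1 (prepare): each group locks the keys it owns,
	// tentatively applies the requests, and votes. A group votes
	// "no" if, for example, an ADD would make a value negative.
	for _, gid := range gids {
		if !prepare(gid, txnID, reqs) {
			// Any single "no" vote aborts the whole transaction.
			for _, g := range gids {
				abort(g, txnID)
			}
			return false
		}
	}

	// Phase 2 (commit): every group voted yes, so the decision
	// stands; tell each group to apply its writes and unlock.
	for _, gid := range gids {
		commit(gid, txnID)
	}
	return true
}

func main() {
	ok := runTransaction(1, []Request{{Op: ADD, Key: "acct", Value: -50}})
	fmt.Println("committed:", ok)
}
```

The point the sketch illustrates is the atomicity property above: a single "no" vote in the prepare phase aborts the transaction at every group it touched, so either all groups apply the requests or none do.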
1.1.1 Persistent Paxos

We made Paxos persistent so that it can tolerate server crashes. Whenever any of the proposal number (n_p), accepted number (n_a), or accepted value (v_a) is modified, all of them are written to the file system under a file name of the form SERVERNAME_INS#.txt. When the Paxos log is truncated, the corresponding files are also deleted. When a server reboots after a failure, it reconstructs its Paxos instances by reading from the file system. We use a lazy reconstruction approach: an instance is read and reconstructed only when it is touched. Moreover, we added a new function call, Poll(seq int), to Paxos to peek at the status of a Paxos instance at some sequence number.
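As a rough illustration of this persistence scheme, the following Go sketch writes all three fields to a per-instance file whenever any of them changes, and lazily reads the file back the first time an instance is touched after a reboot. Only the SERVERNAME_INS#.txt naming and the persist-on-modify rule come from the text; the struct layout, gob encoding, and error handling are assumptions.

```go
package main

import (
	"encoding/gob"
	"fmt"
	"os"
)

// Per-instance acceptor state. The field names mirror the paper's
// n_p, n_a, v_a; the concrete types are assumptions (the lab code
// stores v_a as interface{}, simplified to string here).
type instanceState struct {
	Np int    // highest proposal number seen
	Na int    // proposal number of the accepted value
	Va string // accepted value
}

// persist writes the full state of one Paxos instance to
// SERVERNAME_INS#.txt; it is called whenever n_p, n_a, or v_a
// is modified, so the on-disk copy never lags the in-memory one.
func persist(server string, seq int, st instanceState) error {
	f, err := os.Create(fmt.Sprintf("%s_%d.txt", server, seq))
	if err != nil {
		return err
	}
	defer f.Close()
	return gob.NewEncoder(f).Encode(st)
}

// recoverInstance lazily reconstructs one instance after a reboot:
// it is invoked only the first time that instance is touched.
func recoverInstance(server string, seq int) (instanceState, bool) {
	var st instanceState
	f, err := os.Open(fmt.Sprintf("%s_%d.txt", server, seq))
	if err != nil {
		return st, false // no file: instance was never persisted
	}
	defer f.Close()
	if err := gob.NewDecoder(f).Decode(&st); err != nil {
		return st, false
	}
	return st, true
}

func main() {
	_ = persist("srv0", 42, instanceState{Np: 3, Na: 3, Va: "put x 7"})
	st, ok := recoverInstance("srv0", 42)
	fmt.Println(st, ok)
}
```

Deleting the per-instance file when the Paxos log is truncated keeps disk usage bounded, and the lazy read on recovery avoids paying the reconstruction cost for instances that are never consulted again.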
