OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage… (More)
This paper addresses the problem of churn—the continuous process of node arrival and departure—in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current… (More)
• "OceanStore is an Internet-scale, persistent data store" • "for the first time, one can imagine providing truly durable, self-maintaining storage to every computer user." • "vision" of a highly available, reliable, and persistent data store: utility model, Amazon S3?!
OceanStore, a global storage infrastructure, automatically recovers from server and network failures, incorporates new resources, and adjusts to usage patterns. The computing world is experiencing a transition from desktop PCs to connected information appliances, which — like the earlier transition from mainframes to PCs — will profoundly change the way… (More)
We have developed a new replay debugging tool, liblog, for distributed C/C++ applications. It logs the execution of deployed application processes and replays them deterministically, faithfully reproducing race conditions and non-deterministic failures, enabling careful offline analysis. To our knowledge, liblog is the first replay tool to address the… (More)
OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Finally, monitoring of… (More)
Debugging and profiling large-scale distributed applications is a daunting task. We present Friday, a system for debugging distributed applications that combines deterministic replay of components with the power of symbolic, low-level debugging and a simple language for expressing higher-level distributed conditions and actions. Friday allows the… (More)
One of the key reasons overlay networks are seen as an excellent platform for large-scale distributed systems is their resilience in the presence of node failures. This resilience relies on accurate and timely detection of node failures. Despite the prevalent use of keep-alive algorithms in overlay networks to detect node failures, their tradeoffs and the… (More)
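The keep-alive mechanism the abstract above refers to can be sketched in a few lines: each node records when it last heard from a peer and declares peers failed once no keep-alive has arrived within a timeout. This is a minimal illustrative sketch, not the paper's algorithm; the class name, timeout value, and node identifiers are assumptions for the example.

```python
class KeepAliveDetector:
    """Illustrative timeout-based keep-alive failure detector."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout   # seconds of silence before a peer is declared failed
        self.last_seen = {}      # peer id -> time of last keep-alive message

    def heard_from(self, node_id, now):
        # Record a keep-alive (or any message) from this peer.
        self.last_seen[node_id] = now

    def failed_nodes(self, now):
        # A peer is suspected failed if its silence exceeds the timeout.
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

# Usage: node "a" last replied at t=0, node "b" at t=2; at t=5 only "a"
# has been silent longer than the 3-second timeout.
d = KeepAliveDetector(timeout=3.0)
d.heard_from("a", now=0.0)
d.heard_from("b", now=2.0)
print(d.failed_nodes(now=5.0))  # prints ['a']
```

The tradeoff the paper studies lives in the `timeout` parameter: a short timeout detects failures quickly but misclassifies slow or congested peers, while a long timeout wastes resilience by routing to dead nodes.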
We believe that large-scale replica management solutions should be based on an economic model. In this paper, we discuss the benefits provided by an economic approach and outline important directions for future research.