Corpus ID: 10717645

Distributed Wear levelling of Flash Memories

Srimugunthan and K. Gopinath
For large-scale distributed storage systems, flash memories are an excellent choice: they consume less power, take less floor space for a target throughput, and provide faster access to data. In a traditional distributed filesystem, even data distribution is required to ensure load balancing, balanced space utilisation and failure tolerance. In the presence of flash memories, we should additionally ensure that the number of writes to the different flash storage nodes is…
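As a toy illustration of balancing writes across flash nodes (a minimal sketch under assumed per-node endurance figures, not the paper's distributed wear-levelling algorithm), one can route each write to the node that has consumed the smallest fraction of its endurance:

```python
class WearAwarePlacer:
    """Illustrative allocator that spreads writes across flash nodes in
    proportion to their (assumed) endurance budgets."""

    def __init__(self, endurance_per_node):
        self._endurance = dict(endurance_per_node)   # node -> write endurance
        self._writes = {n: 0 for n in self._endurance}

    def pick_node(self):
        # Choose the node with the lowest fraction of endurance consumed.
        node = min(self._writes,
                   key=lambda n: self._writes[n] / self._endurance[n])
        self._writes[node] += 1
        return node

# Hypothetical two-node cluster: node-a has twice node-b's endurance.
placer = WearAwarePlacer({"node-a": 100_000, "node-b": 50_000})
counts = {"node-a": 0, "node-b": 0}
for _ in range(3000):
    counts[placer.pick_node()] += 1
# node-a should receive roughly twice as many writes as node-b
```

The endurance-ratio heuristic here is only one of many possible policies; a real system would also weigh load, free space and failure domains.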
NVMCache: Wear-Aware Load Balancing NVM-based Caching for Large-Scale Storage Systems
  • Zhenhua Cai, Jiayun Lin, Fang Liu, Zhiguang Chen, Hongtao Li
  • Computer Science
  • 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)
  • 2020
NVMCache aims to ensure I/O load balancing and avoid access bottlenecks in large-scale storage systems, to exploit NVM's read/write asymmetry by prioritizing accesses so as to reduce write-request blocking and improve overall access performance, and to extend the overall service lifetime of the NVM-based cache.


Migrating server storage to SSDs: analysis of tradeoffs
An automated tool is described that, given device models and a block-level trace of a workload, determines the least-cost storage configuration that will support the workload's performance, capacity, and fault-tolerance requirements.
Efficient data center architectures using non-volatile memory and reliability techniques
A distributed, energy-efficient data center architecture is proposed that replaces hard disk drives and DRAM main memory with non-volatile memristors or PCM, along with a novel on-chip cache fault-tolerance scheme that yields more than a 30% improvement in energy efficiency.
A self-balancing striping scheme for NAND-flash storage systems
This work proposes to encode popular data with redundancy by means of erasure codes, improving the read response time of flash-memory storage systems at the cost of 10% extra redundant space.
Gordon: using flash memory to build fast, power-efficient clusters for data-intensive applications
The paper presents an exhaustive analysis of the design space of Gordon systems, focusing on the trade-offs between power, energy, and performance that Gordon must make, and describes a novel flash translation layer tailored to data-intensive workloads and large flash storage arrays.
Integrating NAND flash devices onto servers
A survey of current and potential Flash usage models in a data center is provided; the authors advocate using Flash as an extended system memory (an OS-managed disk cache) and describe the necessary architectural changes.
CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data
CRUSH is a scalable pseudorandom data distribution function designed for distributed object-based storage systems that efficiently maps data objects to storage devices without relying on a central directory.
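The idea of directory-free placement can be illustrated with rendezvous (highest-random-weight) hashing, a simple decentralized scheme in the same spirit as CRUSH but NOT the CRUSH algorithm itself: any client computes the same object-to-device mapping from the object id and the device list alone.

```python
import hashlib

def place_replicas(obj_id, devices, n_replicas=3):
    """Rendezvous hashing: rank devices by a pseudorandom per-object
    score and take the top n. Deterministic, with no central directory."""
    def score(dev):
        h = hashlib.sha256(f"{obj_id}:{dev}".encode()).hexdigest()
        return int(h, 16)
    return sorted(devices, key=score, reverse=True)[:n_replicas]

# Hypothetical device names; every client derives the same mapping.
devs = [f"osd.{i}" for i in range(8)]
replicas = place_replicas("object-42", devs)
```

Unlike this flat sketch, CRUSH additionally walks a weighted hierarchy of failure domains so that replicas land on distinct racks or hosts.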
Quota enforcement for high-performance distributed storage systems
This work presents a solution that has less than 0.2% performance overhead while the system is below saturation, compared with not enforcing quotas at all, and that provides byte-level accuracy at all times in the absence of failures and cheating.
Differential RAID: Rethinking RAID for SSD reliability
Diff-RAID is proposed, a parity-based redundancy solution that creates an age differential in an array of SSDs and distributes parity blocks unevenly across the array, leveraging their higher update rate to age devices at different rates.
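The uneven parity distribution at the heart of this idea can be sketched as a weighted assignment of parity ownership per stripe; the weights below are made up for illustration and this is not the published Diff-RAID layout.

```python
from collections import Counter

def parity_assignment(stripe_index, weights):
    """Assign the parity block of a stripe to a device, with device i
    owning parity for a share of stripes proportional to weights[i].
    Devices with larger shares absorb more parity updates and age faster."""
    total = sum(weights)
    r = stripe_index % total
    for dev, w in enumerate(weights):
        if r < w:
            return dev
        r -= w

# Hypothetical 4-SSD array: device 0 holds 70% of parity, the rest 10% each.
counts = Counter(parity_assignment(s, [70, 10, 10, 10]) for s in range(100))
```

Because parity blocks are updated on every write to their stripe, skewing parity ownership like this deliberately ages the devices at different rates, so they do not all approach their erase limits at the same time.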
Measurements of a distributed file system
This work analyzed the user-level file access patterns and caching behavior of the Sprite distributed file system and found that client cache consistency is needed to prevent stale data errors, but that it is not invoked often enough to degrade overall system performance.
Ceph: a scalable, high-performance distributed file system
Performance measurements under a variety of workloads show that Ceph has excellent I/O performance and scalable metadata management, supporting more than 250,000 metadata operations per second.