Erasure coding in Windows Azure Storage
Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability
to store seemingly limitless amounts of data for any duration of time. WAS customers have …
XORing elephants: Novel erasure codes for big data
Distributed storage systems for large clusters typically use replication to provide reliability.
Recently, erasure codes have been used to reduce the large storage overhead of three …
Availability in globally distributed storage systems
D Ford, F Labelle, FI Popovici, M Stokely… - … USENIX Symposium on …, 2010 - usenix.org
Highly available cloud storage is often implemented with complex, multi-tiered distributed
systems built on top of clusters of commodity servers and disk drives. Sophisticated …
Row-diagonal parity for double disk failure correction
P Corbett, B English, A Goel, T Grcanac… - Proceedings of the 3rd …, 2004 - usenix.org
Abstract Row-Diagonal Parity (RDP) is a new algorithm for protecting against double disk
failures. It stores all data unencoded, and uses only exclusive-or operations to compute …
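The RDP entry above notes that parity is computed with exclusive-or operations only. A minimal sketch of that XOR primitive is shown below; it is illustrative only, not the actual RDP layout (real RDP arranges blocks in a (p-1) x (p+1) array for prime p with both row and diagonal parity), and the block sizes, disk count, and helper names here are assumptions made for the example.

```python
# Sketch of XOR-based row parity, the primitive RDP builds on.
# Assumption: one stripe of equal-length data blocks plus a single
# parity block; RDP itself adds a second, diagonal parity to survive
# two simultaneous disk failures.

import functools


def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return functools.reduce(
        lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks
    )


def make_row_parity(data_blocks):
    """Row parity is simply the XOR of all data blocks in the stripe."""
    return xor_blocks(data_blocks)


def recover_block(surviving_blocks, parity):
    """Any single missing block equals the XOR of the parity and the survivors."""
    return xor_blocks(surviving_blocks + [parity])


if __name__ == "__main__":
    stripe = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
    parity = make_row_parity(stripe)
    # Simulate losing the second block and rebuilding it from the rest.
    rebuilt = recover_block([stripe[0], stripe[2]], parity)
    assert rebuilt == stripe[1]
```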
Health status assessment and failure prediction for hard drives with recurrent neural networks
Recently, in order to improve reactive fault tolerance techniques in large scale storage
systems, researchers have proposed various statistical and machine learning methods …
RADOS: a scalable, reliable storage service for petabyte-scale storage clusters
Brick and object-based storage architectures have emerged as a means of improving the
scalability of storage clusters. However, existing systems continue to treat storage nodes as …
Dynamic metadata management for petabyte-scale file systems
SA Weil, KT Pollack, SA Brandt… - SC'04: Proceedings of …, 2004 - ieeexplore.ieee.org
In petabyte-scale distributed file systems that decouple read and write from metadata
operations, behavior of the metadata server cluster will be critical to overall system …
Simple regenerating codes: Network coding for cloud storage
Network codes designed specifically for distributed storage systems have the potential to
provide dramatically higher storage efficiency for the same availability. One main challenge …
Rcmp: Reconstructing RDMA-based memory disaggregation via CXL
Memory disaggregation is a promising architecture for modern datacenters that separates
compute and memory resources into independent pools connected by ultra-fast networks …
Optimal recovery of single disk failure in RDP code storage systems
L Xiang, Y Xu, JCS Lui, Q Chang - ACM SIGMETRICS Performance …, 2010 - dl.acm.org
Modern storage systems use thousands of inexpensive disks to meet the storage
requirement of applications. To enhance the data availability, some form of redundancy is …