Matthias Grawinkel

Although domain-specific digital archives are growing in number and size, there is a lack of studies describing their architectures and runtime characteristics. This paper investigates the storage landscape of the European Centre for Medium-Range Weather Forecasts (ECMWF), whose storage capacity has reached 100 PB and which experiences an annual growth rate of …
Today's network file systems consist of a variety of complex subprotocols and backend storage classes. The data is typically spread over multiple data servers to achieve higher levels of performance and reliability. A metadata server is responsible for creating the mapping of a file to these data servers. It is hard to map application-specific access …
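The file-to-data-server mapping mentioned above can be illustrated with a simple hash-based placement function. This is a hypothetical sketch (all names are invented; production systems use far more elaborate placement algorithms): the point is that a deterministic function lets any client or metadata server compute the same mapping without a per-file lookup table.

```python
import hashlib

def place(path: str, stripe_count: int, servers: list) -> list:
    """Map a file to `stripe_count` data servers by hashing its path.

    Deterministic: every node computes the same mapping independently,
    so the metadata server need not store one entry per file.
    """
    h = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    start = h % len(servers)
    return [servers[(start + i) % len(servers)] for i in range(stripe_count)]

servers = ["ds0", "ds1", "ds2", "ds3"]
mapping = place("/home/alice/data.bin", stripe_count=2, servers=servers)
assert len(mapping) == 2 and all(s in servers for s in mapping)
```

Repeated calls with the same path always return the same server list, which is what makes the mapping cheap to recompute instead of store.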
Exponentially growing disk-drive capacities have exacerbated the problem that not only can a complete disk fail, but individual small groups of sectors can also become erroneous. These sector errors are especially critical during RAID rebuilds because they are only detected when the corresponding sectors are read. Mechanisms to cope with sector errors, …
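Why a latent sector error is fatal during a rebuild can be shown with a minimal RAID-5-style XOR reconstruction. This is an illustrative sketch (function names are invented, not from the paper): in RAID-5 the XOR of all blocks in a stripe is zero, so a failed disk's block is the XOR of the survivors, but if any surviving sector is unreadable the stripe cannot be recovered.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a sequence of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def rebuild_stripe(surviving_blocks):
    """Reconstruct the block of a failed disk from the survivors.

    The missing block is the XOR of all surviving blocks. If any
    survivor is unreadable (None = latent sector error), the stripe
    is lost: the error only surfaces once the rebuild reads it.
    """
    if any(b is None for b in surviving_blocks):
        raise IOError("latent sector error during rebuild: stripe unrecoverable")
    return xor_blocks(surviving_blocks)

# A 3+1 stripe where the failed disk held d2; parity = d0 ^ d1 ^ d2.
d0, d1, d2 = b"\x01\x02", b"\x04\x08", b"\x10\x20"
parity = xor_blocks([d0, d1, d2])
assert rebuild_stripe([d0, d1, parity]) == d2
```

Replacing any survivor with `None` makes `rebuild_stripe` raise, mirroring how a single bad sector on a surviving disk turns a routine rebuild into data loss.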
Admission Control (AC) avoids network congestion by determining whether a quantified request for resources can be approved without interfering with the resource allocation of already accepted traffic flows. For time-sensitive networked multimedia applications, conventional AC schemes often prove to be unnecessarily strict. The crisp binary admission decision and …
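The crisp binary decision criticized above can be sketched as a plain capacity check. This is a hypothetical minimal sketch (class and parameter names are invented): a flow is admitted only if its requested rate fits the residual capacity, and rejected outright otherwise, no matter how small the overshoot.

```python
class AdmissionController:
    """Crisp (binary) admission control over a single link."""

    def __init__(self, capacity_mbps: float):
        self.capacity = capacity_mbps
        self.allocated = 0.0

    def request(self, rate_mbps: float) -> bool:
        """Admit the flow iff it fits into the remaining capacity."""
        if self.allocated + rate_mbps <= self.capacity:
            self.allocated += rate_mbps
            return True   # admitted
        return False      # rejected, however marginal the overshoot

ac = AdmissionController(capacity_mbps=100.0)
assert ac.request(60.0) is True
assert ac.request(50.0) is False  # 60 + 50 > 100: hard rejection
```

The all-or-nothing outcome of `request` is exactly the strictness the abstract points at: a multimedia flow that could gracefully degrade is still refused entirely.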
The need for huge storage archives rises with the ever-growing creation of data. With today's big data and data analytics applications, some of these huge archives become active in the sense that all stored data can be accessed at any time. Running and evolving these archives is a constant tradeoff between performance, capacity, and price. We present …
We present the architecture of a disk-based archival storage system and propose a new RAID scheme designed for "write once, read sometimes" workloads. By intertwining parity groups into a multi-dimensional RAID and improving single-disk reliability with intra-disk redundancy, the system achieves an elastic fault tolerance that can at least …
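One way to intertwine parity groups into a multi-dimensional layout is a 2D grid in which every data block belongs to both a row and a column parity group. This is an illustrative sketch of the general idea, not the specific scheme proposed in the paper: a lost block then has two independent recovery paths.

```python
from functools import reduce

def parity(blocks):
    """XOR all blocks of a parity group together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

# 2x2 data grid; each block is covered by one row and one column group.
grid = [[b"\x01", b"\x02"],
        [b"\x04", b"\x08"]]
row_parity = [parity(row) for row in grid]
col_parity = [parity(col) for col in zip(*grid)]

# Lose grid[0][0]: recover it via its row group *or* its column group.
from_row = parity([grid[0][1], row_parity[0]])
from_col = parity([grid[1][0], col_parity[0]])
assert from_row == from_col == b"\x01"
```

Because each block sits in two groups, some multi-block failure patterns that defeat a one-dimensional RAID remain recoverable by alternating between row and column reconstruction.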
As the number of client machines in high-end computing clusters increases, a file system that relies on a centralized metadata server cannot keep up with the resulting volume of requests. This problem will become even more prominent with the advent of the exascale computing age. In this context, the centralized metadata server represents a bottleneck for the scaling …
The performance gap between processors and I/O represents a serious scalability limitation for applications running on computing clusters. Parallel file systems often provide mechanisms that allow programmers to disclose their I/O-pattern knowledge to the lower layers of the I/O stack through a hints API. This information can be used by the file system to …
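Disclosing an access pattern through hints can be sketched as key-value metadata attached at open time. This is a toy sketch with invented names (real parallel file systems expose such hints through interfaces like MPI-IO's info objects): a `sequential` hint lets the layer below choose a much larger prefetch unit.

```python
class HintedFile:
    """Toy file wrapper that uses an access-pattern hint to size prefetches."""

    DEFAULT_PREFETCH = 4096          # conservative default (bytes)
    SEQUENTIAL_PREFETCH = 64 * 1024  # aggressive prefetch for streaming reads

    def __init__(self, data: bytes, hints: dict = None):
        self.data = data
        self.hints = hints or {}
        # The hint does not change correctness, only the I/O strategy below.
        if self.hints.get("access_pattern") == "sequential":
            self.prefetch = self.SEQUENTIAL_PREFETCH
        else:
            self.prefetch = self.DEFAULT_PREFETCH

f = HintedFile(b"x" * 10, hints={"access_pattern": "sequential"})
assert f.prefetch == 64 * 1024
```

The key property is that hints are advisory: omitting them leaves behavior correct but falls back to the conservative default.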