Drew S. Roselli

In this paper, we describe the collection and analysis of file system traces from a variety of environments, including both UNIX and NT systems, clients and servers, and instructional and production systems. Our goal is to understand how modern workloads affect the ability of file systems to provide high performance to users. Because of the …
We propose a new paradigm for network file system design: serverless network file systems. While traditional network file systems rely on a central server machine, a serverless system utilizes workstations cooperating as peers to provide all file system services. Any machine in the system can store, cache, or control any block of data. Our …
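As a rough sketch of that idea (assuming a hash-based placement policy the abstract does not specify, with hypothetical peer names), every workstation can independently derive which peer controls a given block, so no central server is consulted:

```python
import hashlib

# Illustrative only: spread control of blocks across cooperating
# workstations by hashing the block ID. The peer names and the
# hash-based placement are assumptions, not the paper's protocol.
PEERS = ["ws01", "ws02", "ws03", "ws04"]

def manager_for(block_id: int) -> str:
    """Return the workstation that controls this block's metadata."""
    digest = hashlib.sha256(str(block_id).encode()).digest()
    return PEERS[int.from_bytes(digest[:4], "big") % len(PEERS)]

for blk in (0, 1, 42, 4096):
    print(f"block {blk} -> manager {manager_for(blk)}")
```

Because placement is a pure function of the block ID, adding a client adds another peer that can answer lookups, rather than more load on one server.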
File system designers today face a dilemma. A log-structured file system (LFS) can offer superior performance for many common workloads such as those with frequent small writes, read traffic that is predominantly absorbed by the cache, and sufficient idle time to clean the log. However, an LFS has poor performance for other workloads, such as random updates …
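A minimal sketch of both sides of that dilemma, as an illustration rather than any paper's implementation: in an LFS every write is a sequential append to the log, which is why small writes are cheap, but each overwrite leaves a dead copy behind that the cleaner must eventually reclaim.

```python
# Toy log-structured store: every write is a sequential append; an
# in-memory map tracks each block's latest copy. Illustration only.
class TinyLog:
    def __init__(self):
        self.log = []        # append-only list of (block_id, data)
        self.latest = {}     # block_id -> index of newest copy

    def write(self, block_id, data):
        self.latest[block_id] = len(self.log)
        self.log.append((block_id, data))  # no seek, just append

    def read(self, block_id):
        return self.log[self.latest[block_id]][1]

    def live_fraction(self):
        # Random updates lower this ratio, raising cleaning cost.
        return len(self.latest) / max(len(self.log), 1)

log = TinyLog()
for version in range(3):
    log.write(7, f"v{version}")   # repeated updates to one block
print(log.read(7), f"{log.live_fraction():.2f}")  # v2 0.33
```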
In this report, we describe the collection of file system traces from three different environments. By using the auditing system to collect traces on client machines, we are able to get detailed traces with minimal kernel changes. We then present results of traffic analysis on the traces, contrasting them with those from previous studies. Based on these …
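For a sense of what such traffic analysis involves, the sketch below tallies operations from a one-record-per-line trace; the CSV schema here is a hypothetical stand-in, since the report's actual trace format is not reproduced in this snippet.

```python
import csv
from collections import Counter

# Hypothetical schema for illustration: one CSV row per audited call,
# "timestamp,pid,op,path,bytes". Not the report's actual trace format.
def summarize(trace_path: str) -> Counter:
    """Count operations (read, write, open, ...) in a trace file."""
    ops = Counter()
    with open(trace_path, newline="") as f:
        for _ts, _pid, op, _path, _nbytes in csv.reader(f):
            ops[op] += 1
    return ops

# Example: compare the read/write mix across traced machines.
# counts = summarize("client01.trace")
# print(counts["read"] / max(counts["write"], 1))
```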
We demonstrate that high-level file system events exhibit self-similar behaviour, but only over short time scales of roughly a day or less. We do so through the analysis of four sets of traces that span time scales of milliseconds through months, and that differ in the trace collection method, the file systems being traced, and the chronological …
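One common way to test a count series for self-similarity (an assumption here; the paper's exact estimators are not reproduced) is the aggregated-variance method: for a self-similar series, the variance of the m-aggregated series scales as m^(2H-2), so the Hurst parameter H falls out of a log-log slope.

```python
import numpy as np

# Aggregated-variance estimate of the Hurst parameter H. For a
# self-similar count series, Var of the m-aggregated series scales
# as m**(2H - 2); H near 1 means strong self-similarity, H = 0.5
# means none. Illustration only, not the paper's estimator.
def hurst(x, scales=(1, 2, 4, 8, 16, 32, 64)):
    logs_m, logs_v = [], []
    for m in scales:
        n = len(x) // m
        agg = x[: n * m].reshape(n, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(agg.var()))
    slope, _ = np.polyfit(logs_m, logs_v, 1)
    return 1 + slope / 2

rng = np.random.default_rng(0)
events = rng.poisson(10, 1 << 14).astype(float)  # per-interval counts
print(f"H ~ {hurst(events):.2f}")  # ~0.5 for a memoryless process
```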