Steffen Viken Valvåg

The complexity of implementing large-scale distributed computations has motivated new programming models. Google's MapReduce model has gained widespread use and aims to hide the complex details of data partitioning and distribution, scheduling, synchronization, and fault tolerance. However, our experiences from the enterprise search business indicate that …
MapReduce has become a widely employed programming model for large-scale data-intensive computations. Traditional MapReduce engines employ dynamic routing of data as a core mechanism for fault tolerance and load balancing. An alternative mechanism is static routing, which reduces the need to store temporary copies of intermediate data, but requires a …
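The following is a minimal sketch, not the engine described in the abstract, contrasting dynamic and static routing of intermediate MapReduce data. All names (NODES, partition_for, dynamic_route, static_route) are illustrative assumptions.

```python
# Illustrative sketch of intermediate-data routing in a MapReduce engine.
# The names below are hypothetical and not taken from the paper.

NUM_REDUCERS = 4
NODES = ["node-0", "node-1", "node-2", "node-3"]

def partition_for(key):
    """Hash-partition an intermediate key, as typical MapReduce engines do."""
    return hash(key) % NUM_REDUCERS

# Dynamic routing: the destination for a partition is chosen at run time,
# e.g. the currently least-loaded live node. Intermediate data must be
# buffered in temporary copies until a destination is picked.
def dynamic_route(partition, live_nodes, load):
    return min(live_nodes, key=lambda n: load[n])

# Static routing: the partition-to-node mapping is fixed before the job
# starts, so mappers can stream records straight to their destination
# without storing temporary copies of intermediate data.
STATIC_MAP = {p: NODES[p % len(NODES)] for p in range(NUM_REDUCERS)}

def static_route(partition):
    return STATIC_MAP[partition]
```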
Key/value databases are popular abstractions for applications that require synchronous single-key look-ups. However, such databases invariably have a random I/O access pattern, which is inefficient on traditional storage media. To maximize throughput, an alternative is to rely on asynchronous batch processing of requests. As applications evolve, changing …
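A hedged sketch of the two access styles mentioned above, assuming a simple dictionary-like backing store; the class and method names (SynchronousStore, BatchStore, get_async, flush) are invented for illustration and are not the paper's API.

```python
# Hypothetical contrast between synchronous single-key look-ups and
# asynchronous batch processing of key/value requests.

class SynchronousStore:
    """Synchronous single-key look-ups: one random access per request."""
    def __init__(self, backing):
        self.backing = backing  # e.g. an on-disk index

    def get(self, key):
        return self.backing.get(key)  # random I/O per call

class BatchStore:
    """Asynchronous batch processing: requests are queued and resolved
    together, so the store can order them and read data sequentially."""
    def __init__(self, backing):
        self.backing = backing
        self.pending = []

    def get_async(self, key, callback):
        self.pending.append((key, callback))

    def flush(self):
        # Sorting the batch by key lets an on-disk store answer all
        # requests in one sequential pass instead of many random reads.
        for key, callback in sorted(self.pending, key=lambda p: p[0]):
            callback(self.backing.get(key))
        self.pending.clear()
```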
The omni-kernel architecture is designed around pervasive monitoring and scheduling. Motivated by new requirements in virtualized environments, this architecture ensures that all resource consumption is measured, that resource consumption resulting from a scheduling decision is attributable to an activity, and that scheduling decisions are fine-grained.
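A loose sketch of the attribution idea described above, under the assumption that each request dispatched to a resource is timed and charged to the activity that caused it; MeteredResource and dispatch are illustrative names, not the omni-kernel implementation.

```python
# Illustrative sketch: measure resource consumption per scheduling decision
# and attribute it to an activity. Not the actual omni-kernel code.

from collections import defaultdict
import time

class MeteredResource:
    """Wraps a resource so every dispatched request is measured and its
    consumption is attributed to the activity that issued it."""
    def __init__(self, name):
        self.name = name
        self.usage = defaultdict(float)  # activity id -> seconds consumed

    def dispatch(self, activity_id, request):
        start = time.perf_counter()
        request()  # perform the work (e.g. a disk or network operation)
        self.usage[activity_id] += time.perf_counter() - start

# Example: two activities (say, two VMs) issue requests against one resource;
# the accumulated per-activity usage can then inform later scheduling decisions.
disk = MeteredResource("disk")
disk.dispatch("vm-1", lambda: time.sleep(0.01))
disk.dispatch("vm-2", lambda: time.sleep(0.02))
print(dict(disk.usage))
```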