Cagri Balkesen

The architectural changes introduced with multi-core CPUs have triggered a redesign of main-memory join algorithms. In the last few years, two diverging views have appeared. One approach advocates careful tailoring of the algorithm to the architectural parameters (cache sizes, TLB, and memory bandwidth). The other approach argues that modern hardware is …
In this paper we experimentally study the performance of main-memory, parallel, multi-core join algorithms, focusing on sort-merge and (radix-)hash join. The relative performance of these two join approaches has been a topic of discussion for a long time. With the advent of modern multicore architectures, it has been argued that sort-merge join is now a …
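To make the comparison concrete, here is a minimal, single-threaded sketch of the two join strategies the paper studies. This is illustrative only, not the paper's parallel implementation; function names and tuple layout are assumptions.

```python
def hash_join(r, s):
    """Build a hash table on r, then probe it with s.

    Both inputs are lists of (key, payload) tuples.
    """
    table = {}
    for key, payload in r:
        table.setdefault(key, []).append(payload)
    # Probe: for each s-tuple, emit one output tuple per matching r-payload.
    return [(key, rp, sp) for key, sp in s for rp in table.get(key, [])]

def sort_merge_join(r, s):
    """Sort both inputs on the key, then merge them in one pass.

    Assumes unique keys on both sides, for brevity.
    """
    r, s = sorted(r), sorted(s)
    out, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        if r[i][0] < s[j][0]:
            i += 1
        elif r[i][0] > s[j][0]:
            j += 1
        else:
            out.append((r[i][0], r[i][1], s[j][1]))
            i += 1
            j += 1
    return out
```

The debate the paper addresses is which of these two skeletons, once parallelized and tuned, wins on modern multicore hardware: the hash join does random accesses into a (potentially cache-resident) table, while the sort-merge join pays an up-front sorting cost in exchange for sequential, SIMD-friendly merging.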
Recognition of patterns in event streams has become important in many application areas of Complex Event Processing (CEP) including financial markets, electronic health-care systems, and security monitoring systems. In most applications, patterns have to be detected continuously and in real-time over streams that are generated at very high rates, imposing …
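As a hedged illustration of continuous pattern detection (not the engine described in the paper), the sketch below matches a simple sequence pattern such as "A then B then C" over an arriving stream, carrying partial matches forward as each event is consumed. All names are illustrative.

```python
def detect_sequence(stream, pattern):
    """Yield completed matches of `pattern` (a list of event types).

    Events are (type, data) tuples; intervening events of other types
    are skipped, so matches need not be contiguous in the stream.
    """
    partial = []  # each entry: (index of next pattern step, events so far)
    for event in stream:
        etype, _ = event
        advanced = []
        for idx, collected in partial:
            if etype == pattern[idx]:
                if idx + 1 == len(pattern):
                    yield collected + [event]   # pattern completed
                else:
                    advanced.append((idx + 1, collected + [event]))
            else:
                advanced.append((idx, collected))  # keep waiting
        if etype == pattern[0]:
            if len(pattern) == 1:
                yield [event]
            else:
                advanced.append((1, [event]))   # start a new partial match
        partial = advanced
```

A production CEP engine must additionally bound the set of partial matches (e.g. with time windows), which is exactly the kind of high-rate, real-time constraint the abstract refers to.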
In this paper, we propose a framework for adaptive admission control and management of a large number of dynamic input streams in parallel stream processing engines. The framework takes as input any available information about input stream behaviors and the requirements of the query processing layer, and adaptively decides how to adjust the entry points of …
Complex event processing (CEP) is an essential functionality for cross-reality environments. Through CEP, we can turn raw sensor data generated in the real world into more meaningful information that has some significance for the virtual world. In this article, the authors present DejaVu, a general-purpose event processing system built at ETH Zurich.
Existing main-memory hash join algorithms for multi-core can be classified into two camps. Hardware-oblivious hash join variants do not depend on hardware-specific parameters. Rather, they consider qualitative characteristics of modern hardware and are expected to achieve good performance on any technologically similar platform. The assumption behind these …
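The hardware-conscious camp is typified by radix partitioning: tuples are first scattered into partitions by the low-order bits of the join key, so that each partition's hash table fits in cache and TLB reach. The sketch below shows the idea in its simplest form; it is a minimal illustration under assumed names and sizes, not the paper's tuned implementation.

```python
def radix_partition(tuples, bits):
    """Scatter (key, payload) tuples into 2**bits partitions by low key bits."""
    fanout = 1 << bits
    mask = fanout - 1
    parts = [[] for _ in range(fanout)]
    for key, payload in tuples:
        parts[key & mask].append((key, payload))
    return parts

def radix_hash_join(r, s, bits=4):
    """Partition both relations on the same bits, then join each pair.

    Tuples with equal keys land in the same partition on both sides, so
    every partition pair can be joined independently with a small,
    cache-sized hash table.
    """
    out = []
    for pr, ps in zip(radix_partition(r, bits), radix_partition(s, bits)):
        table = {}
        for key, payload in pr:
            table.setdefault(key, []).append(payload)
        for key, sp in ps:
            for rp in table.get(key, []):
                out.append((key, rp, sp))
    return out
```

In a real hardware-conscious join, `bits` is chosen from the cache and TLB sizes of the target machine, and the partitioning itself is done in multiple passes to avoid TLB thrashing; that tuning dependence is precisely what separates this camp from the hardware-oblivious one.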
For decades, the "dividend" of Moore's Law made it remarkably easy to build ever-faster computer systems that sped up application software almost automatically. The limits of this approach are becoming ever more apparent, and it is by now clear that significant performance gains in the future can only be achieved through a high …
For many years, the highest energy cost in processing has been data movement rather than computation, and energy is the limiting factor in processor design [21]. As the data needed for a single application grows to exabytes [56], there is clearly an opportunity to design a bandwidth-optimized architecture for big data computation by specializing hardware …
Contemporary frameworks for data analytics, such as Hadoop, Spark, and Flink, seek to allow applications to scale performance flexibly by adding hardware nodes. However, we find that when the computation on each individual node is optimized, peripheral activities such as creating data partitions, messaging, and synchronizing between nodes diminish the speedup …