Kurt Mehlhorn

Combinatorial and geometric computing is a core area of computer science (CS). In fact, most CS curricula contain a course in data structures and algorithms. The area deals with objects such as graphs, sequences, dictionaries, trees, shortest paths, flows, matchings, points, segments, lines, convex hulls, and Voronoi diagrams and forms the basis for …
In this article, we propose a family of efficient kernels for large graphs with discrete node labels. Key to our method is a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs, whose node attributes capture topological and label information. A family of kernels can …
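As an illustration of the idea (not the authors' implementation), the following sketch performs Weisfeiler-Lehman relabelling and turns the resulting label counts into a simple kernel; the graph representation and function names are assumptions made for the example.

```python
from collections import Counter

def wl_features(adj, labels, iterations=2):
    """Sketch of Weisfeiler-Lehman feature extraction for one labelled graph.

    adj    : dict node -> list of neighbour nodes
    labels : dict node -> discrete node label
    Returns a Counter over all labels produced in all iterations.
    Using the (own label, sorted neighbour labels) signature itself as the
    new label keeps labels comparable across different graphs.
    """
    feats = Counter(labels.values())                     # iteration 0: original labels
    for _ in range(iterations):
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}                          # one WL relabelling step
        feats.update(labels.values())
    return feats

def wl_kernel(g1, g2, iterations=2):
    """Kernel value = dot product of the two graphs' WL feature vectors."""
    f1 = wl_features(*g1, iterations=iterations)
    f2 = wl_features(*g2, iterations=iterations)
    return sum(count * f2[label] for label, count in f1.items())

# toy usage: two labelled triangles that differ in one node label
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(wl_kernel((adj, {0: 'A', 1: 'A', 2: 'B'}),
                (adj, {0: 'A', 1: 'B', 2: 'B'})))
```

In practice the signatures would be compressed into small integer labels for speed; the sketch keeps them as tuples only to stay short.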
State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Since exhaustive enumeration of all graphlets is prohibitively expensive, we introduce two theoretically grounded speedup schemes, one …
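As a hedged illustration of a sampling-based speedup (restricted here to k = 3 and not taken from the paper), one can estimate the graphlet distribution from uniformly sampled node triples and compare two graphs via the resulting histograms:

```python
import random

def sample_3graphlet_distribution(adj, samples=10_000, rng=random):
    """Estimate the distribution of induced 3-node subgraphs by sampling.

    adj : dict node -> set of neighbour nodes (undirected graph, >= 3 nodes)
    Returns relative frequencies of the four 3-node graphlet types,
    identified by their number of edges (0, 1, 2 or 3).
    """
    nodes = list(adj)
    counts = [0, 0, 0, 0]
    for _ in range(samples):
        u, v, w = rng.sample(nodes, 3)                  # uniform 3-node subset
        edges = (v in adj[u]) + (w in adj[u]) + (w in adj[v])
        counts[edges] += 1
    return [c / samples for c in counts]

def graphlet_kernel(adj1, adj2, samples=10_000):
    """Kernel value = dot product of the two estimated graphlet histograms."""
    d1 = sample_3graphlet_distribution(adj1, samples)
    d2 = sample_3graphlet_distribution(adj2, samples)
    return sum(a * b for a, b in zip(d1, d2))
```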
  • Kurt Mehlhorn
  • EATCS Monographs on Theoretical Computer Science
  • 1984
Data Structures and Algorithms 3: Multi-dimensional Searching and Computational Geometry is the third volume of Mehlhorn's data structures and algorithms monograph in the EATCS series; it covers multi-dimensional searching and computational geometry.
In this paper we explore the use of weak B-trees to represent sorted lists. In weak B-trees each node has at least a and at most b sons, where 2a ≤ b. We analyse the worst-case cost of sequences of insertions and deletions in weak B-trees. This leads to a new data structure (level-linked weak B-trees) for representing sorted lists when the access pattern …
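To see why the condition 2a ≤ b matters, here is a small illustrative check (not the paper's data structure): splitting a node that has overflowed to b + 1 children must leave both halves with at least a children.

```python
def split_sizes(b):
    """Split a node that has overflowed to b + 1 children into two nodes."""
    total = b + 1
    return total // 2, total - total // 2

def split_restores_invariant(a, b):
    """Both halves must again have at least a children; this needs 2a <= b."""
    left, right = split_sizes(b)
    return left >= a and right >= a

# 2a <= b: a split always restores the node-size invariant
assert split_restores_invariant(a=2, b=4)
# 2a > b: a split can produce a node that is too small
assert not split_restores_invariant(a=3, b=4)
```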
In this paper we describe a new method for proving lower bounds on the complexity of VLSI computations and, more generally, distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (accepting) nondeterministic computations as well as to …
Efficient implementations of Dijkstra's shortest path algorithm are investigated. A new data structure, called the radix heap, is proposed for use in this algorithm. On a network with n vertices, m edges, and nonnegative integer arc costs bounded by C, a one-level form of radix heap gives a …
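The radix heap itself is not reproduced here; as a simpler relative that also exploits the integer cost bound C, the sketch below runs Dijkstra's algorithm with a one-level bucket queue (Dial's scheme). The graph representation is an assumption made for the example.

```python
def dijkstra_bucket(adj, source, C):
    """Dijkstra's algorithm with a one-level bucket queue (Dial's scheme).

    adj    : dict node -> list of (neighbour, nonnegative integer cost <= C)
    source : start node
    Distances are at most (n - 1) * C, so that many buckets suffice
    (a radix heap reduces this space to O(m + log C)).
    """
    INF = float('inf')
    n = len(adj)
    dist = {v: INF for v in adj}
    dist[source] = 0
    buckets = [[] for _ in range((n - 1) * C + 1)]
    buckets[0].append(source)
    for d in range(len(buckets)):            # scan buckets in increasing distance
        for u in buckets[d]:
            if d != dist[u]:                 # stale entry: u was settled earlier
                continue
            for v, cost in adj[u]:
                if d + cost < dist[v]:
                    dist[v] = d + cost
                    buckets[d + cost].append(v)
    return dist

# toy usage
adj = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}
print(dijkstra_bucket(adj, 0, C=5))          # {0: 0, 1: 2, 2: 3}
```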
The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes O(1) worst-case time for lookups and O(1) amortized expected time for insertions and deletions; it …
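The dynamic scheme is built on top of the static two-level (FKS) construction; the sketch below shows only that static part, with a simple multiplicative universal hash family chosen for illustration (class and function names are mine, not from the paper).

```python
import random

PRIME = (1 << 61) - 1              # a Mersenne prime larger than any key we hash

def universal_hash(m, rng=random):
    """Draw h(x) = ((a*x + b) mod p) mod m from a universal family."""
    a, b = rng.randrange(1, PRIME), rng.randrange(PRIME)
    return lambda x: ((a * x + b) % PRIME) % m

class StaticPerfectHash:
    """Static two-level (FKS-style) perfect hashing for a fixed set of distinct
    nonnegative integer keys below PRIME."""

    def __init__(self, keys):
        n = max(len(keys), 1)
        self.h = universal_hash(n)                     # first-level hash into n buckets
        buckets = [[] for _ in range(n)]
        for k in keys:
            buckets[self.h(k)].append(k)
        self.tables = []
        for bucket in buckets:
            m = max(len(bucket) ** 2, 1)               # secondary table of quadratic size
            while True:                                # retry until collision-free
                g = universal_hash(m)
                table = [None] * m
                if self._fill(table, g, bucket):
                    self.tables.append((g, table))
                    break

    @staticmethod
    def _fill(table, g, bucket):
        for k in bucket:
            i = g(k)
            if table[i] is not None:                   # collision: pick a new hash function
                return False
            table[i] = k
        return True

    def __contains__(self, k):
        g, table = self.tables[self.h(k)]
        return table[g(k)] == k                        # O(1) worst-case lookup

# toy usage
s = StaticPerfectHash([3, 17, 42, 1000003])
print(42 in s, 5 in s)                                 # True False
```

The dynamic version of the paper additionally rebuilds a secondary table (or, occasionally, the whole structure) when its size bound is violated, which is what yields the O(1) amortized expected update time.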
The present paper provides a comprehensive study of the following problem. Consider algorithms which are designed for shared memory models of parallel computation (PRAMs), in which processors are allowed to have fairly unrestricted access patterns to the shared memory. Consider also parallel machines in which the shared memory is organized in modules, where …
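As a toy illustration of the contention issue studied (not the paper's simulation), the snippet below hashes the addresses requested in one PRAM step onto memory modules and reports the maximum module congestion, i.e., the number of requests the busiest module must serve sequentially in that step.

```python
import random
from collections import Counter

PRIME = (1 << 61) - 1

def module_congestion(addresses, num_modules, rng=random):
    """Map each requested shared-memory address to a module via a random
    multiplicative hash and return the maximum load of any single module."""
    a = rng.randrange(1, PRIME)
    loads = Counter(((a * x) % PRIME) % num_modules for x in addresses)
    return max(loads.values())

# one PRAM step: 1024 processors access distinct cells, 64 memory modules
step = random.sample(range(10**6), 1024)
print(module_congestion(step, 64))   # the balanced ideal is 1024 / 64 = 16; expect somewhat more
```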