Heiko Röglin

We study the impact of combinatorial structure in congestion games on the complexity of computing pure Nash equilibria and the convergence time of best response sequences. In particular, we investigate which properties of the strategy spaces of individual players ensure a polynomial convergence time. We show that if the strategy space of each player …
Unlike standard congestion games, weighted congestion games and congestion games with player-specific delay functions do not necessarily possess pure Nash equilibria. It is known, however, that there exist pure equilibria for both of these variants in the case of singleton congestion games, i.e., if the players’ strategy spaces contain only sets of …
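As a rough illustration of the dynamics studied in the two abstracts above, the following sketch runs best-response dynamics in a singleton congestion game with non-decreasing delay functions. It is not taken from the papers; the function names and the linear delays in the example are purely hypothetical.

# Best-response dynamics in a singleton congestion game: each player
# uses exactly one resource, and a resource's delay grows with the
# number of players on it. Illustrative sketch, not the papers' code.

def best_response_dynamics(num_players, delay_fns):
    """delay_fns[r](load) is the delay of resource r under `load` players."""
    choice = [0] * num_players              # every player starts on resource 0
    load = [0] * len(delay_fns)
    load[0] = num_players

    improved, steps = True, 0
    while improved:
        improved = False
        for i in range(num_players):
            current = choice[i]
            def cost(r):
                # Delay player i would experience after moving to resource r.
                extra = 0 if r == current else 1
                return delay_fns[r](load[r] + extra)
            best = min(range(len(delay_fns)), key=cost)
            if cost(best) < cost(current):   # strict improvement: move
                load[current] -= 1
                load[best] += 1
                choice[i] = best
                improved = True
                steps += 1
    return choice, steps

# Example: 4 players, 2 resources with hypothetical linear delays x and 2x.
eq, steps = best_response_dynamics(4, [lambda x: x, lambda x: 2 * x])
print(eq, steps)

Because singleton congestion games with common delay functions admit an exact potential, every strict improvement step decreases the potential, so this loop terminates.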
Various economic interactions can be modeled as two-sided markets. A central solution concept for these markets is the stable matching, introduced by Gale and Shapley. It is well known that stable matchings can be computed in polynomial time, but many real-life markets lack a central authority to match agents. In those markets, matchings are formed by actions …
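For context, here is a compact sketch of the Gale-Shapley deferred-acceptance algorithm mentioned above, which computes a stable matching in polynomial time. The preference lists and identifiers are illustrative; the proposing side is labeled "men" purely by convention.

def gale_shapley(men_prefs, women_prefs):
    """men_prefs[m] / women_prefs[w] are preference lists, best first."""
    n = len(men_prefs)
    # rank[w][m] = position of m in w's list, for O(1) comparisons.
    rank = [{m: i for i, m in enumerate(prefs)} for prefs in women_prefs]
    next_proposal = [0] * n      # index of the next woman m will propose to
    fiance = [None] * n          # fiance[w] = man currently matched to w
    free = list(range(n))

    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if fiance[w] is None:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])   # w trades up; her old partner is free again
            fiance[w] = m
        else:
            free.append(m)           # w rejects m
    return fiance

# Tiny made-up instance with 3 agents on each side.
men = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
women = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
print(gale_shapley(men, women))      # fiance[w] = man matched to woman w

Each man proposes to each woman at most once, so the algorithm runs in O(n^2) time, and the resulting matching is stable.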
2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on “real world” Euclidean instances with respect to both running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. …
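The following is a bare-bones sketch of the 2-Opt heuristic described above, assuming Euclidean instances: repeatedly replace two tour edges by the two cheaper reconnecting ones until no improving exchange exists. It is illustrative only and not tuned for speed.

import math, random

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points):
    n = len(points)
    tour = list(range(n))
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip the adjacent edge pair that wraps around when i == 0.
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Gain of replacing edges (a,b),(c,d) by (a,c),(b,d).
                delta = (math.dist(points[a], points[c])
                         + math.dist(points[b], points[d])
                         - math.dist(points[a], points[b])
                         - math.dist(points[c], points[d]))
                if delta < -1e-12:
                    # Apply the 2-Opt move by reversing the segment between them.
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(50)]
print(round(tour_length(pts, two_opt(pts)), 3))

The smoothed analysis referenced above concerns exactly the number of iterations of this improvement loop, which can be exponential in the worst case.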
We prove that the number of Pareto-optimal solutions in any multiobjective binary optimization problem with a finite number of linear objective functions is polynomial in the model of smoothed analysis. This resolves a conjecture of René Beier. Moreover, we give polynomial bounds on all finite moments of the number of Pareto-optimal solutions, which yields …
We consider bicriteria optimization problems and investigate the relationship between two standard approaches to solving them: (i) computing the Pareto curve and (ii) the so-called decision maker’s approach in which both criteria are combined into a single (usually non-linear) objective function. Previous work by Papadimitriou and Yannakakis showed how to …
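As a concrete illustration of approach (i), computing the Pareto curve, here is a small sketch in the spirit of the Nemhauser-Ullmann recursion for a bicriteria binary problem with one profit to maximize and one weight to minimize. The item values are made up, and the code is not taken from either paper.

def pareto_filter(points):
    """Keep (profit, weight) pairs with profit maximized, weight minimized."""
    points = sorted(set(points), key=lambda s: (-s[0], s[1]))
    front, best_w = [], float("inf")
    for p, w in points:
        if w < best_w:           # strictly lighter than all higher-profit points
            front.append((p, w))
            best_w = w
    return front

def pareto_set(items):
    """items: list of (profit, weight) contributions of picking each item."""
    front = [(0, 0)]             # the empty solution
    for p, w in items:
        extended = [(fp + p, fw + w) for fp, fw in front]
        front = pareto_filter(front + extended)
    return front

items = [(3, 1), (1, 4), (2, 2), (5, 1)]
print(pareto_set(items))         # Pareto curve, sorted by decreasing profit

The running time of this recursion is polynomial in the number of Pareto-optimal solutions, which is why smoothed bounds on that number, as in the abstract above, translate into smoothed running-time bounds.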
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But …
The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii [3] aimed at closing this gap, and they proved a bound of poly(n^√k, σ^−1) on the smoothed running time …
The k-means algorithm is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice. Most of the theoretical work is restricted to the case that squared Euclidean distances are used as similarity measure. In many applications, however, data is to be clustered with respect to other measures like, e.g., relative …
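For reference, a minimal sketch of the k-means method (Lloyd's algorithm) with squared Euclidean distances, the setting the last three abstracts analyze. The data, seed, and parameters are illustrative only.

import random

def kmeans(points, k, iters=100):
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            j = min(range(k), key=lambda c: (x - centers[c][0]) ** 2
                                            + (y - centers[c][1]) ** 2)
            clusters[j].append((x, y))
        # Update step: each center moves to the mean of its cluster.
        new_centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:   # no center moved: local optimum reached
            break
        centers = new_centers
    return centers

random.seed(1)
data = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
        + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(100)])
print(kmeans(data, 2))

The running-time results quoted above bound the number of iterations of this assign-then-update loop; extending the analysis beyond squared Euclidean distances, e.g., to Bregman-type measures, is the subject of the last abstract.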