Unlike standard congestion games, weighted congestion games and congestion games with player-specific delay functions do not necessarily possess pure Nash equilibria. It is known, however, that there exist pure equilibria for both of these variants in the case of singleton congestion games, i.e., if the players' strategy spaces contain only sets of …
We study the impact of combinatorial structure in congestion games on the complexity of computing pure Nash equilibria and the convergence time of best response sequences. In particular, we investigate which properties of the strategy spaces of individual players ensure a polynomial convergence time. We show that if the strategy space of each player …
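The two abstracts above concern pure Nash equilibria and best-response dynamics in congestion games. As a purely illustrative sketch (not taken from these papers), the snippet below runs best-response dynamics in a singleton congestion game with affine delay functions; the resources, delay coefficients, and player count are made up.

```python
import random

# Illustrative sketch: best-response dynamics in a singleton congestion game.
# Each player picks exactly one resource; the delay of resource r under load x
# is a[r] * x + b[r] (affine, nondecreasing). All numbers below are invented.

def best_response_dynamics(num_players, a, b, max_rounds=1000, seed=0):
    rng = random.Random(seed)
    resources = range(len(a))
    choice = [rng.randrange(len(a)) for _ in range(num_players)]  # initial strategies

    def load(r):
        return sum(1 for c in choice if c == r)

    for _ in range(max_rounds):
        improved = False
        for p in range(num_players):
            cur = choice[p]
            best_r = cur
            best_delay = a[cur] * load(cur) + b[cur]
            for r in resources:
                if r == cur:
                    continue
                # delay player p would experience after moving to r (its load grows by one)
                new_delay = a[r] * (load(r) + 1) + b[r]
                if new_delay < best_delay:
                    best_r, best_delay = r, new_delay
            if best_r != cur:
                choice[p] = best_r
                improved = True
        if not improved:
            return choice  # pure Nash equilibrium: no player can improve unilaterally
    return choice

print(best_response_dynamics(num_players=5, a=[1, 2, 1], b=[0, 0, 3]))
```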
Various economic interactions can be modeled as two-sided markets. A central solution concept for these markets is that of stable matchings, introduced by Gale and Shapley. It is well known that stable matchings can be computed in polynomial time, but many real-life markets lack a central authority to match agents. In those markets, matchings are formed by actions …
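For context on the stable-matching abstract above, here is a minimal sketch of the Gale-Shapley deferred-acceptance procedure for a two-sided market; the preference lists are hypothetical toy data.

```python
# Minimal sketch of Gale-Shapley deferred acceptance.
# proposer_prefs[p] / reviewer_prefs[r] list the other side in order of preference, best first.

def gale_shapley(proposer_prefs, reviewer_prefs):
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                 # proposers who are not yet matched
    next_idx = {p: 0 for p in proposer_prefs}   # next reviewer each proposer will ask
    match = {}                                  # reviewer -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if r not in match:
            match[r] = p                        # r is free, accept provisionally
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])               # r prefers p, reject current partner
            match[r] = p
        else:
            free.append(p)                      # r rejects p, p stays free
    return {p: r for r, p in match.items()}

proposer_prefs = {"a": ["x", "y"], "b": ["x", "y"]}
reviewer_prefs = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(proposer_prefs, reviewer_prefs))
```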
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But …
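As a reference point for the k-means abstracts, the following is a minimal Lloyd-style sketch of the method: assign each point to its nearest center under squared Euclidean distance, then recompute each center as the mean of its cluster. The data and parameters are invented for illustration.

```python
import numpy as np

# Minimal Lloyd-style k-means sketch (toy data, illustrative only).
def kmeans(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center under squared Euclidean distance
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # update step: each center becomes the mean of its cluster
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

points = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, labels = kmeans(points, k=2)
print(centers)
```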
2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on “real world” Euclidean instances both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited.
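To make the 2-Opt heuristic concrete: a 2-Opt move removes two edges of the tour and reconnects the two resulting paths the other way around, i.e., reverses the segment between them, whenever this shortens the tour. The sketch below is only an illustration on a random toy instance, not an implementation from the paper.

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    rng.shuffle(tour)
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                if i == 0 and j == len(tour) - 1:
                    continue  # reversing the whole rest of the tour gives an equivalent cycle
                # remove edges (i, i+1) and (j, j+1), reverse the segment in between
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, pts) < tour_length(tour, pts):
                    tour, improved = candidate, True
    return tour

pts = [(random.random(), random.random()) for _ in range(30)]
print(tour_length(two_opt(pts), pts))
```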
The k-means algorithm is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice. Most of the theoretical work is restricted to the case that squared Euclidean distances are used as the dissimilarity measure. In many applications, however, data is to be clustered with respect to other measures like, e.g., relative …
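The abstract above concerns clustering under measures other than squared Euclidean distance. As one hedged illustration (not the authors' algorithm), the sketch below runs a k-means-style iteration with the Kullback-Leibler divergence as the dissimilarity; for Bregman divergences such as KL, the cluster mean still minimizes the within-cluster cost, so the update step is unchanged. The probability vectors used as data are made up.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # KL divergence D(p || q), row-wise, with a small epsilon for numerical safety
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def bregman_kmeans(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center under KL divergence
        dists = np.stack([kl(points, c) for c in centers], axis=1)
        labels = dists.argmin(axis=1)
        # update step: cluster means (optimal centers for Bregman divergences)
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

# toy data: rows are probability distributions over 4 outcomes
raw = np.random.default_rng(1).random((200, 4))
points = raw / raw.sum(axis=1, keepdims=True)
print(bregman_kmeans(points, k=3)[0])
```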
A sequence of objects that are characterized by their color has to be processed. Their processing order influences how efficiently they can be processed: each color change between two consecutive objects produces costs. A reordering buffer, which is a random access buffer with storage capacity for k objects, can be used to rearrange this sequence …
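To illustrate the reordering-buffer model described above, the sketch below applies a simple greedy rule (keep emitting the current color as long as it is in the buffer, otherwise switch to the most frequent buffered color) and reports the number of color changes in the output. The rule and the input sequence are purely illustrative and not an algorithm from the paper.

```python
from collections import deque

def reorder(sequence, k):
    buffer, output = [], []
    stream = deque(sequence)
    current = None
    while stream or buffer:
        while stream and len(buffer) < k:    # keep the buffer filled up to capacity k
            buffer.append(stream.popleft())
        if current not in buffer:            # must switch colors: pick the most frequent one
            current = max(set(buffer), key=buffer.count)
        buffer.remove(current)               # output one object of the current color
        output.append(current)
    cost = sum(1 for a, b in zip(output, output[1:]) if a != b)  # number of color changes
    return output, cost

seq = list("aabbaacbbcca")
print(reorder(seq, k=3))
```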
The k-means algorithm is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice despite its exponential worst-case running time. To narrow the gap between theory and practice, k-means has been studied in the semi-random input model of smoothed analysis, which often leads to more realistic conclusions than mere …
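As a hedged illustration of the smoothed-analysis input model mentioned above, the sketch below perturbs an arbitrary point set with independent Gaussian noise of standard deviation sigma and counts how many Lloyd iterations a plain k-means run needs on the perturbed input. The data, sigma values, and k are made up; this is a toy experiment, not the analysis from the paper.

```python
import numpy as np

def lloyd_iterations(points, k, seed=0, max_iter=10_000):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = None
    for it in range(1, max_iter + 1):
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        new_labels = d.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return it                        # assignments stabilized
        labels = new_labels
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return max_iter

# an arbitrary point set standing in for an adversarially chosen instance
base = np.random.default_rng(42).random((500, 2))
for sigma in (0.0, 0.01, 0.1):
    perturbed = base + np.random.default_rng(7).normal(scale=sigma, size=base.shape)
    print(sigma, lloyd_iterations(perturbed, k=10))
```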