
- Yun Kuen Cheung, Richard Cole, Nikhil R. Devanur
- STOC
- 2013

Tatonnement is a simple and natural rule for updating prices in Exchange (Arrow-Debreu) markets. In this paper we define a class of markets for which tatonnement is equivalent to gradient descent: the class of markets for which there is a convex potential function whose gradient is always equal to the negative of the excess demand, and we call it…
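As a toy illustration of the price dynamic described in this abstract, the sketch below runs additive tatonnement, p ← p + λ·z(p), on a small Cobb-Douglas Fisher market. The budgets, spending fractions, and step size are hypothetical, and this simplified setting is not the paper's Arrow-Debreu model.

```python
# Toy Cobb-Douglas Fisher market (hypothetical numbers, not from the paper).
# Buyer j spends fraction a[j][g] of budget B[j] on good g, so demand for
# good g at prices p is sum_j a[j][g]*B[j] / p[g]; supply of each good is 1.
B = [2.0, 1.0]                      # buyer budgets
a = [[0.5, 0.5], [0.25, 0.75]]      # spending fractions, rows sum to 1

def excess_demand(p):
    return [sum(a[j][g] * B[j] for j in range(len(B))) / p[g] - 1.0
            for g in range(len(p))]

def tatonnement(p, step=0.2, iters=200):
    # p_{t+1} = p_t + step * z(p_t): raise prices of over-demanded goods,
    # lower prices of under-demanded ones.
    for _ in range(iters):
        z = excess_demand(p)
        p = [max(1e-9, p[g] + step * z[g]) for g in range(len(p))]
    return p

p = tatonnement([1.0, 1.0])
# For this market the equilibrium prices are p*[g] = sum_j a[j][g]*B[j],
# i.e. [1.25, 1.75], and the iterates converge to them.
```

For Cobb-Douglas buyers the equilibrium is available in closed form, which makes the convergence of the dynamic easy to verify numerically.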

This paper continues the study, initiated by Cole and Fleischer [Cole and Fleischer 2008], of the behavior of a tatonnement price update rule in Ongoing Fisher Markets. The prior work showed fast convergence toward an equilibrium when the goods satisfied the weak gross substitutes property and had bounded demand and income elasticities. The current work…

- Yun Kuen Cheung, Gramoz Goranci, Monika Henzinger
- ICALP
- 2016

Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We…
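The distortion notion in this abstract can be made concrete with a brute-force check: compute terminal-to-terminal shortest paths in the original and compressed graphs, then take the maximum ratio over terminal pairs. A minimal sketch with hypothetical toy graphs, using a tiny Floyd-Warshall:

```python
def shortest_paths(n, edges):
    # All-pairs shortest paths on an undirected weighted graph (Floyd-Warshall).
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def distortion(dG, dH, terminals):
    # Maximum multiplicative blow-up of terminal distances, as defined above.
    return max(dH[s][t] / dG[s][t]
               for s in terminals for t in terminals if s != t)

# Toy example: G is the path 0-1-2 (unit weights) with terminals {0, 2};
# H contracts away vertex 1 and connects 0-2 with weight 3.
dG = shortest_paths(3, [(0, 1, 1), (1, 2, 1)])
dH = shortest_paths(3, [(0, 2, 3)])
# distortion(dG, dH, [0, 2]) -> 1.5
```

The graphs and weights here are our own illustration; the paper's results concern how small this quantity can be kept for general graphs.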

- Yun Kuen Cheung, Philippe Flajolet, Mordecai J. Golin, C. Y. James Lee
- ANALCO
- 2009

This paper studies two functions arising separately in the analysis of algorithms. The first function is the solution to the Multidimensional Divide-and-Conquer (MDC) recurrence that arises when solving problems involving points in d-dimensional space. The second function concerns weighted digital sums. Let n = (b_i b_{i−1} ⋯ b_1 b_0)_2 and S_M(n) = Σ_{t=0}^{i} t(t +…

- Yun Kuen Cheung, Richard Cole
- ArXiv
- 2014

Gradient descent is an important class of iterative algorithms for minimizing convex functions. Classically, gradient descent has been a sequential and synchronous process. Distributed and asynchronous variants of gradient descent have been studied since the 1980s, and they have been experiencing a resurgence due to demand from large-scale machine learning…
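To give a flavor of the asynchrony discussed here, the following is a single-threaded sketch in which gradient reads may be stale by a fixed delay. The objective, step size, and delay schedule are all hypothetical and far simpler than the settings the paper analyzes.

```python
# Minimize the separable quadratic f(x) = sum_i (x_i - c_i)^2 with
# coordinate-wise updates computed from a stale iterate, mimicking (in one
# thread) an asynchronous process where workers read old shared state.
c = [3.0, -1.0, 0.5]   # hypothetical minimizer

def grad(x):
    return [2 * (x[i] - c[i]) for i in range(len(x))]

def async_style_descent(x, step=0.1, rounds=100, delay=2):
    history = [list(x)]
    for t in range(rounds):
        stale = history[max(0, len(history) - 1 - delay)]  # read an old iterate
        g = grad(stale)
        i = t % len(x)            # one coordinate is updated per "event"
        x[i] -= step * g[i]
        history.append(list(x))
    return x

x = async_style_descent([0.0, 0.0, 0.0])
# Despite the stale reads, x approaches c for this small step size.
```

For a separable objective the stale reads are harmless; the interesting regimes studied in the paper involve coupling between coordinates and unbounded or adversarial delays.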

- Yun Kuen Cheung
- ArXiv
- 2017

Given a weighted graph G = (V, E, w) with a set of k terminals T ⊂ V, the Steiner Point Removal problem seeks a minor of the graph with vertex set T, such that the distance between every pair of terminals is preserved within a small multiplicative distortion. Kamma, Krauthgamer and Nguyen (SODA 2014, SICOMP 2015) used a ball-growing algorithm to show…

- Yun Kuen Cheung
- IJCAI
- 2016

We revisit the problem of designing strategyproof mechanisms for allocating divisible items among two agents who have linear utilities, where payments are disallowed and there is no prior information on the agents’ preferences. The objective is to design strategyproof mechanisms which are competitive against the most efficient (but not strategyproof)…
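Strategyproofness, as used in this abstract, can be checked numerically for a toy rule: search over a grid of misreports for one that beats truth-telling. The sketch below uses a hypothetical "proportional to reports" rule, which is not a mechanism from the paper, and the search confirms it is not strategyproof.

```python
import itertools

def mechanism(r1, r2):
    # Toy rule: agent 1 receives share r1[g] / (r1[g] + r2[g]) of item g.
    # This rule is NOT strategyproof, which the search below detects.
    return [r1[g] / (r1[g] + r2[g]) for g in range(2)]

def utility(values, shares):
    # Linear utility: value-weighted sum of received shares.
    return sum(v * s for v, s in zip(values, shares))

def find_beneficial_misreport(true1, r2, grid):
    # Return a report for agent 1 that strictly beats truth-telling
    # against a fixed report r2, or None if the grid contains none.
    honest = utility(true1, mechanism(true1, r2))
    for lie in itertools.product(grid, repeat=2):
        if utility(true1, mechanism(list(lie), r2)) > honest + 1e-9:
            return list(lie)
    return None

grid = [0.1 * k for k in range(1, 11)]
lie = find_beneficial_misreport([0.9, 0.1], [0.5, 0.5], grid)
# `lie` is not None: exaggerating reports pays off under this rule.
```

A strategyproof mechanism is exactly one for which this search returns None for every true valuation and every opponent report; the paper asks how efficient such mechanisms can be.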

- Yun Kuen Cheung, Richard Cole
- ArXiv
- 2016

This paper concerns asynchrony in iterative processes, focusing on gradient descent and tatonnement, a fundamental price dynamic. Gradient descent is an important class of iterative algorithms for minimizing convex functions. Classically, gradient descent has been a sequential and synchronous process, although distributed and asynchronous variants have been…

In this lecture, we will construct constant-degree D vertex expanders with expansion (1 − ε)D (a.k.a. constant-degree lossless expanders). This beautiful construction is due to Capalbo-Reingold-Vadhan-Wigderson. Concretely, for every constant ε > 0 and every N, we will construct a bipartite graph (L, R, E), |L| = N, |R| = M = poly(ε)·N, with left degree D…

Consider the following weighted digital sum (WDS) variant: write integer n as n = 2^{i_1} + 2^{i_2} + ⋯ + 2^{i_k} with i_1 > i_2 > ⋯ > i_k ≥ 0 and set W_M(n) := Σ_{t=1}^{k} t^M 2^{i_t}. This type of weighted digital sum arises (when M = 1) in the analysis of bottom-up mergesort but is not “smooth” enough to permit a clean analysis. We therefore analyze its average T_{W_M}(n) := 1…
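A small sketch of computing this weighted digital sum directly from the binary expansion. The exact exponent pattern in the sum, t^M · 2^{i_t}, is our reading of the abstract's notation and should be treated as an assumption.

```python
def wds(n, M=1):
    # Exponents i_1 > i_2 > ... > i_k of the set bits of n, descending.
    exps = [i for i in range(n.bit_length() - 1, -1, -1) if (n >> i) & 1]
    # W_M(n) = sum_{t=1..k} t^M * 2^{i_t}; enumerate() is 0-based, hence t+1.
    return sum((t + 1) ** M * (1 << e) for t, e in enumerate(exps))

# Example: 6 = 2^2 + 2^1, so W_1(6) = 1*4 + 2*2 = 8;
#          5 = 2^2 + 2^0, so W_1(5) = 1*4 + 2*1 = 6.
```

The M = 1 case weights later (smaller) set bits more heavily, which is the shape of the cost term that appears in bottom-up mergesort analyses.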