The study of online convex optimization in the bandit setting was initiated by Kleinberg (2004) and Flaxman et al. (2005). Such a setting models a decision maker that has to make decisions in the face of adversarially chosen convex loss functions. Moreover, the only information the decision maker receives is the losses. The identities of the loss…
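This line of work hinges on estimating a gradient from a single loss evaluation per round. Below is a minimal sketch of one-point bandit gradient descent in the style of Flaxman et al. (2005), not a reproduction of any specific paper's algorithm; the function names, step size `eta`, perturbation radius `delta`, and the ball-shaped feasible set are illustrative assumptions.

```python
import numpy as np

def bandit_gradient_descent(loss, x0, T, delta=0.1, eta=0.01, radius=1.0):
    """One-point bandit gradient descent (Flaxman et al. 2005 style sketch).

    Each round, the only feedback is the scalar loss at the point actually
    played; a gradient estimate is formed from that single evaluation.
    """
    x = np.array(x0, dtype=float)
    d = x.size
    cumulative_loss = 0.0
    for t in range(T):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                 # random unit direction
        y = x + delta * u                      # perturbed point that is played
        f = loss(y, t)                         # bandit feedback: one number
        cumulative_loss += f
        g = (d / delta) * f * u                # one-point gradient estimate
        x = x - eta * g
        norm = np.linalg.norm(x)               # project back into a shrunken ball
        if norm > radius * (1.0 - delta):
            x *= radius * (1.0 - delta) / norm
    return x, cumulative_loss

# usage: time-varying quadratics revealed only through their values
x_hat, total = bandit_gradient_descent(
    loss=lambda y, t: float(np.sum((y - 0.1 * np.sin(t)) ** 2)),
    x0=np.zeros(3), T=1000)
```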
Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in signal processing, statistics, and machine learning. Reasons for this renewed interest include the simplicity, speed, and stability of the method, as well as its competitive performance on ℓ1 regularized smooth optimization problems. Surprisingly, very…
A Support Vector Method for multivariate performance measures was recently introduced by Joachims (2005). The underlying optimization problem is currently solved using cutting plane methods such as SVM-Perf and BMRM. One can show that these algorithms converge to an ε accurate solution in O(1/(λε)) iterations, where λ is the trade-off parameter between the…
We present the first measurements of the e⃗p → epγ cross section in the deeply virtual Compton scattering (DVCS) regime and the valence quark region. The Q² dependence (from 1.5 to 2.3 GeV²) of the helicity-dependent cross section indicates the twist-2 dominance of DVCS, proving that generalized parton distributions (GPDs) are accessible to…
Structured output prediction is an important machine learning problem both in theory and practice, and the max-margin Markov network (M³N) is an effective approach. All state-of-the-art algorithms for optimizing M³N objectives take at least O(1/ε) iterations to find an ε accurate solution. [?] broke this barrier by proposing an excessive gap…
This paper considers the stability of online learning algorithms and its implications for learnability (bounded regret). We introduce a novel quantity called forward regret that intuitively measures how good an online learning algorithm is if it is allowed a one-step look-ahead into the future. We show that given stability, bounded forward regret is…
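For concreteness, the standard regret and the one-step look-ahead quantity described above can be written as follows; the notation (losses ℓ_t, iterates w_t, decision set 𝒲, and the symbol R_T⁺ for forward regret) is assumed here for illustration, not taken verbatim from the paper.

```latex
% Standard regret over T rounds with iterates w_t and convex losses \ell_t:
R_T \;=\; \sum_{t=1}^{T} \ell_t(w_t) \;-\; \min_{w \in \mathcal{W}} \sum_{t=1}^{T} \ell_t(w)
% Forward regret: the same comparison, but crediting the algorithm with the
% iterate it would play after a one-step look-ahead at \ell_t:
R_T^{+} \;=\; \sum_{t=1}^{T} \ell_t(w_{t+1}) \;-\; \min_{w \in \mathcal{W}} \sum_{t=1}^{T} \ell_t(w)
```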
As massive repositories of real-time human commentary, social media platforms have arguably evolved far beyond passive facilitation of online social interactions. Rapid analysis of the information content of online social media streams (news articles, blogs, tweets, etc.) is the need of the hour, as it allows businesses and government bodies to understand public…
Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in machine learning. Reasons for this include its simplicity, speed and stability, as well as its competitive performance on ℓ1 regularized smooth optimization problems. Surprisingly, very little is known about its finite time convergence behavior on…
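As a concrete instance of the method discussed in the two abstracts above, here is a minimal sketch of cyclic coordinate descent for an ℓ1-regularized least-squares (lasso) objective; the choice of objective, the function names, and the fixed epoch count are illustrative assumptions, not the papers' setup.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator, the exact solution of each 1-D ℓ1 subproblem."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def cyclic_coordinate_descent_lasso(X, y, lam, n_epochs=100):
    """Cyclic coordinate descent for:  minimize_w  0.5 * ||y - X w||^2 + lam * ||w||_1

    Coordinates are updated one at a time in a fixed cyclic order; each update
    solves its one-dimensional subproblem in closed form via soft-thresholding.
    """
    n, d = X.shape
    w = np.zeros(d)
    residual = y - X @ w
    col_sq = (X ** 2).sum(axis=0)                # ||X_j||^2 for each coordinate j
    for _ in range(n_epochs):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            # correlation of column j with the partial residual that excludes w_j
            rho = X[:, j] @ residual + col_sq[j] * w[j]
            w_new = soft_threshold(rho, lam) / col_sq[j]
            residual += X[:, j] * (w[j] - w_new)  # keep the residual in sync
            w[j] = w_new
    return w

# usage on a small synthetic problem with a sparse ground truth
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.01 * rng.standard_normal(50)
w_hat = cyclic_coordinate_descent_lasso(X, y, lam=0.1)
```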
In a recent paper Joachims [1] presented SVM-Perf, a cutting plane method (CPM) for training linear Support Vector Machines (SVMs) which converges to an ε accurate solution in O(1/ε²) iterations. By tightening the analysis, Teo et al. [2] showed that O(1/ε) iterations suffice. Given the impressive convergence speed of CPM on a number of practical problems,…
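To make the cutting plane idea concrete, below is a rough bundle-style sketch for a linear SVM with hinge loss, in the spirit of SVM-Perf/BMRM but not a reproduction of either; the master problem is solved approximately in its dual by projected gradient ascent over the simplex, and all names, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def solve_master(A, b, lam, n_steps=500, lr=0.1):
    """Approximately solve the cutting-plane master problem in its dual:
        max_{alpha in simplex}  b @ alpha - (1 / (2 lam)) * ||A.T @ alpha||^2
    by projected gradient ascent, then recover w = -(1/lam) * A.T @ alpha.
    """
    alpha = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(n_steps):
        grad = b - (A @ (A.T @ alpha)) / lam
        alpha = project_simplex(alpha + lr * grad)
    return -(A.T @ alpha) / lam

def cutting_plane_svm(X, y, lam, n_iters=50):
    """Cutting-plane training of a linear SVM with hinge loss (sketch).

    Each iteration takes a subgradient of the empirical risk at the current w,
    adds one cutting plane to a piecewise-linear lower bound on the risk, and
    re-solves the regularized master problem over all planes collected so far.
    """
    n, d = X.shape
    w = np.zeros(d)
    A, b = [], []                                  # planes: risk(w') >= a @ w' + b_off
    for _ in range(n_iters):
        margins = y * (X @ w)
        active = margins < 1.0
        risk = np.maximum(0.0, 1.0 - margins).mean()
        a = -(y[active, None] * X[active]).sum(axis=0) / n   # subgradient of the risk
        A.append(a)
        b.append(risk - a @ w)
        w = solve_master(np.array(A), np.array(b), lam)
    return w

# usage on synthetic linearly separable-ish data
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(200))
w_svm = cutting_plane_svm(X, y, lam=0.01)
```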