The study of online convex optimization in the bandit setting was initiated by Kleinberg (2004) and Flaxman et al. (2005). Such a setting models a decision maker that has to make decisions in the face of adversarially chosen convex loss functions. Moreover, the only information the decision maker receives is the losses. The identities of the loss…
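The single-point feedback idea behind Flaxman et al. (2005) can be sketched as follows: query the loss at one randomly perturbed point and scale that single value into a gradient estimate. The function name, step sizes, and toy loss below are illustrative choices, not details taken from the paper.

```python
import numpy as np

def one_point_grad(f, x, delta, rng):
    """Single-point gradient estimate: perturb x along a random unit
    direction u, observe ONE loss value, and scale it into an estimate
    of the gradient of a delta-smoothed version of f."""
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                 # uniform direction on the sphere
    return (d / delta) * f(x + delta * u) * u

# Toy usage: descend f(x) = ||x||^2 using only loss values, never gradients.
rng = np.random.default_rng(0)
x = np.ones(5)
for _ in range(5000):
    g = one_point_grad(lambda z: float(z @ z), x, delta=0.5, rng=rng)
    x -= 0.002 * g                         # small step to tame estimator noise
```

For a quadratic loss the smoothing adds only a constant, so averaging many such estimates recovers the true gradient; the price of one-point feedback is the large variance, which forces the small step size above.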

Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in Signal Processing, Statistics and Machine Learning. Reasons for this renewed interest include the simplicity, speed, and stability of the method, as well as its competitive performance on ℓ1 regularized smooth optimization problems. Surprisingly, very…

As massive repositories of real-time human commentary, social media platforms have arguably evolved far beyond passive facilitation of online social interactions. Rapid analysis of information content in online social media streams (news articles, blogs, tweets, etc.) is the need of the hour, as it allows business and government bodies to understand public…

Structured output prediction is an important machine learning problem both in theory and practice, and the max-margin Markov network (M³N) is an effective approach. All state-of-the-art algorithms for optimizing M³N objectives take at least O(1/ε) iterations to find an ε-accurate solution. [?] broke this barrier by proposing an excessive gap…

This paper considers the stability of online learning algorithms and its implications for learnability (bounded regret). We introduce a novel quantity called forward regret that intuitively measures how good an online learning algorithm is if it is allowed a one-step look-ahead into the future. We show that given stability, bounded forward regret is…
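One plausible way to write down the two quantities being contrasted (the paper's exact definitions may differ in details such as the comparator set) is:

```latex
% Standard regret: the learner commits to x_t before seeing f_t.
R_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x)

% Forward regret: the learner is charged at x_{t+1}, i.e. with a
% one-step look-ahead into the future.
R_T^{+} \;=\; \sum_{t=1}^{T} f_t(x_{t+1}) \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x)
```

Intuitively, stability says consecutive iterates \(x_t\) and \(x_{t+1}\) are close, which is exactly what lets one relate the two sums above.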

Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in machine learning. Reasons for this include its simplicity, speed and stability, as well as its competitive performance on ℓ1 regularized smooth optimization problems. Surprisingly, very little is known about its finite time convergence behavior on…
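A minimal sketch of the method on an ℓ1 regularized least-squares (lasso) objective, using the standard per-coordinate soft-thresholding update; this illustrates the algorithm class, not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Shrinkage operator: argmin_x 0.5*(x - z)^2 + t*|x|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(A, b, lam, n_sweeps=100):
    """Cyclic coordinate descent for (1/2)||Ax - b||^2 + lam*||x||_1.
    Each pass minimizes the objective exactly in one coordinate at a time."""
    n, d = A.shape
    x = np.zeros(d)
    r = b - A @ x                       # residual, maintained incrementally
    col_sq = np.sum(A**2, axis=0)       # squared column norms
    for _ in range(n_sweeps):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]   # partial correlation
            new_xj = soft_threshold(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - new_xj)         # cheap residual update
            x[j] = new_xj
    return x
```

The incremental residual update is what makes each coordinate step O(n) rather than O(nd), a key reason for the method's speed on sparse problems.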

A Support Vector Method for multivariate performance measures was recently introduced by Joachims (2005). The underlying optimization problem is currently solved using cutting plane methods such as SVM-Perf and BMRM. One can show that these algorithms converge to an ε-accurate solution in O(1/λε) iterations, where λ is the trade-off parameter between the…

In a recent paper, Joachims [1] presented SVM-Perf, a cutting plane method (CPM) for training linear Support Vector Machines (SVMs) which converges to an ε-accurate solution in O(1/ε²) iterations. By tightening the analysis, Teo et al. [2] showed that O(1/ε) iterations suffice. Given the impressive convergence speed of CPM on a number of practical problems,…

Given *n* points in a *d* dimensional Euclidean space, the Minimum Enclosing Ball (MEB) problem is to find the ball with the smallest radius which contains all *n* points. We give two approximation algorithms for producing an enclosing ball whose radius is at most ε away from the optimum. The first requires…
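The flavor of such approximation guarantees can be illustrated with the classic Bădoiu–Clarkson iteration, which achieves a (1+ε)-approximate MEB in ⌈1/ε²⌉ steps; this is a well-known baseline in this area, not necessarily either of the two algorithms the abstract refers to.

```python
import numpy as np

def approx_meb_center(points, eps=0.1):
    """Badoiu-Clarkson iteration: start at any input point and repeatedly
    move a shrinking step toward the current farthest point. After
    ceil(1/eps^2) steps, the ball centered at c with radius
    max_i ||p_i - c|| is a (1 + eps)-approximate minimum enclosing ball."""
    c = points[0].astype(float).copy()
    for k in range(1, int(np.ceil(1.0 / eps**2)) + 1):
        dists = np.linalg.norm(points - c, axis=1)
        farthest = points[np.argmax(dists)]
        c += (farthest - c) / (k + 1)    # step size 1/(k+1) shrinks over time
    return c
```

Each iteration costs O(nd), so the total running time is O(nd/ε²), independent of any exact solver.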

- Ankan Saha, Vikas Sindhwani, Zenglin Xu, Irwin King, Shenghuo Zhu, Yuan Qi +2 others
- 2010

Learning a dictionary of basis elements with the objective of building compact data representations is a problem of fundamental importance in statistics, machine learning and signal processing. In many settings, data points appear as a stream of high dimensional feature vectors. Streaming datasets present new twists to the dictionary learning problem. On…
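The basic streaming pattern can be sketched as: for each arriving point, sparse-code it against the current dictionary, then take a small corrective step on the dictionary itself. This is a generic online alternating scheme under assumed update rules (ISTA sparse coding, one gradient step, atom renormalization), not the specific method of the paper.

```python
import numpy as np

def sparse_code(D, x, lam, n_iter=50):
    """ISTA sparse coding: approx. argmin_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-12      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)                  # gradient of the smooth part
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return a

def streaming_dictionary_update(D, x, lam, eta):
    """One streaming step: code the incoming point x, then move the
    dictionary by a gradient step and keep atom norms bounded by 1."""
    a = sparse_code(D, x, lam)
    residual = D @ a - x
    D -= eta * np.outer(residual, a)           # gradient of 0.5*||x - D a||^2 in D
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1.0)
    return D, a
```

The point of the streaming setting is that each update touches only the current point x, so memory stays O(dictionary size) regardless of how long the stream runs.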