This paper describes the BID Data Suite, a collection of hardware, software and design patterns that enable fast, large-scale data mining at very low cost. By co-designing all of these elements we achieve single-machine performance levels that equal or exceed reported *cluster* implementations for common benchmark problems. A key design criterion is…

Gibbs sampling is a workhorse for Bayesian inference but has several limitations when used for parameter estimation, and is often much slower than non-sampling inference methods. SAME (State Augmentation for Marginal Estimation) [15, 8] is an approach to MAP parameter estimation which gives improved parameter estimates over direct Gibbs sampling. SAME can…
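
The core SAME idea is to replicate the latent (augmented) variables m times and condition the parameter update on all m copies, which sharpens the parameter conditional around the MAP estimate as m grows. A minimal sketch on a toy two-component Gaussian mixture (known unit variances, equal weights, flat prior on the means — all assumptions for illustration, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two-component mixture with known variance 1 and equal weights,
# unknown component means (true values -2 and +2).
true_mu = np.array([-2.0, 2.0])
x = np.concatenate([rng.normal(true_mu[0], 1, 200),
                    rng.normal(true_mu[1], 1, 200)])

def same_gibbs(x, m=1, iters=200):
    """Gibbs sampling with SAME replication: keep m independent copies
    of the latent assignments z; conditioning the mean update on all m
    copies shrinks its variance by ~1/m, concentrating near the MAP."""
    mu = np.array([-1.0, 1.0])
    n = len(x)
    for _ in range(iters):
        # Assignment probabilities under the current means.
        logp = -0.5 * (x[:, None] - mu[None, :]) ** 2        # shape (n, 2)
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        counts = np.zeros(2)
        sums = np.zeros(2)
        for _ in range(m):                                   # m replicated copies of z
            z = (rng.random(n) < p[:, 1]).astype(int)
            for k in (0, 1):
                counts[k] += np.sum(z == k)
                sums[k] += x[z == k].sum()
        # Conditional for each mean given all m copies (flat prior):
        # variance 1/counts[k], so larger m => tighter conditional.
        for k in (0, 1):
            var = 1.0 / (counts[k] + 1e-9)
            mu[k] = rng.normal(sums[k] * var, np.sqrt(var))
    return np.sort(mu)

mu_hat = same_gibbs(x, m=10)
```

With m = 1 this reduces to ordinary Gibbs sampling; increasing m trades extra latent-variable sampling for lower-variance parameter estimates.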

Many systems have been developed for machine learning at scale. Performance has steadily improved, but there has been relatively little work on explicitly defining or approaching the limits of performance. In this paper we describe the application of roofline design, an approach borrowed from computer architecture, to large-scale machine learning. In…
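
The roofline model bounds attainable throughput by the minimum of peak compute and memory bandwidth times arithmetic intensity (flops per byte moved). A minimal sketch with hypothetical machine numbers (not figures from the paper):

```python
# Roofline bound: throughput is capped either by peak compute or by
# memory bandwidth multiplied by arithmetic intensity (flops per byte).
def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Hypothetical machine: 4000 GFLOP/s peak compute, 200 GB/s memory bandwidth.
# Sparse matrix-vector multiply (~0.25 flops/byte) hits the bandwidth roof;
# dense matrix multiply (~50 flops/byte) hits the compute roof.
spmv_bound = roofline(4000, 200, 0.25)   # bandwidth-bound: 50 GFLOP/s
gemm_bound = roofline(4000, 200, 50.0)   # compute-bound: 4000 GFLOP/s
```

Kernels with low arithmetic intensity (common in sparse, power-law data) sit far left on the roofline, so bandwidth rather than raw compute sets their performance limit.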

- John Canny, Huasha Zhao
- 2013

This paper describes recent work on the BIDMach toolkit for large-scale machine learning. BIDMach has demonstrated single-node performance that exceeds that of published cluster systems for many common machine-learning tasks. BIDMach makes full use of both CPU and GPU acceleration (through a sister library, BIDMat), and requires only modest hardware…

Allreduce is a basic building block for parallel computing. Our target here is "Big Data" processing on commodity clusters (mostly sparse power-law data). Allreduce can be used to synchronize models, to maintain distributed datasets, and to perform operations on distributed data such as sparse matrix multiply. We first review a key constraint on cluster…
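
After an allreduce, every node holds the same reduction (here, the sum) of all nodes' values. A minimal single-process simulation of a recursive-doubling allreduce among a power-of-two number of simulated nodes (an illustrative sketch, not the paper's implementation):

```python
# Simulated recursive-doubling allreduce (sum): in log2(n) rounds, each
# node exchanges its partial sum with the partner whose id differs in
# one bit; afterwards every node holds the global sum.
def allreduce_sum(values):
    n = len(values)
    assert n & (n - 1) == 0, "power-of-two node count for simplicity"
    vals = list(values)
    step = 1
    while step < n:
        new = [vals[i] + vals[i ^ step] for i in range(n)]
        vals = new
        step *= 2
    return vals

result = allreduce_sum([1, 2, 3, 4])   # every "node" ends with 10
```

The bit-flip partner pattern (`i ^ step`) is what keeps the round count logarithmic in the number of nodes.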

Many large datasets exhibit power-law statistics: the web graph, social networks, text data, clickthrough data, etc. Their adjacency graphs are termed natural graphs, and are known to be difficult to partition. As a consequence most distributed algorithms on these graphs are communication-intensive. Many algorithms on natural graphs involve an Allreduce: a…

Incremental model-update strategies are widely used in machine learning and data mining. By "incremental update" we refer to models that are updated many times using small subsets of the training data. Two well-known examples are stochastic gradient and MCMC. Both provide fast sequential performance and have generated many of the best-performing methods…
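
Stochastic gradient is the simplest incremental update: the model is revised after each small minibatch rather than after a full pass over the data. A minimal sketch fitting linear regression by minibatch SGD (the problem, learning rate, and batch size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression problem: y = X @ true_w + small noise.
X = rng.normal(size=(1000, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1000)

# Incremental updates: many small steps, each using a 32-row minibatch.
w = np.zeros(3)
lr, batch = 0.05, 32
for epoch in range(50):
    idx = rng.permutation(len(y))          # reshuffle each pass
    for start in range(0, len(y), batch):
        b = idx[start:start + batch]
        grad = X[b].T @ (X[b] @ w - y[b]) / len(b)   # minibatch MSE gradient
        w -= lr * grad
```

Each update touches only 32 rows, which is what makes the strategy attractive at scale: progress is made long before the full dataset has been read.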

Search advertising is increasingly extending into verticals. Vertical ads, including product ads and local search ads, are proliferating at an ever-increasing pace. They typically offer better ROI to advertisers as a result of better user engagement. However, campaigns and bids in vertical ads are not set at the keyword level. As a result, the matching between…
