- Lihong Li, Jason D. Williams, Suhrid Balakrishnan
- INTERSPEECH
- 2009

Reinforcement learning (RL) is a promising technique for creating a dialog manager. RL accepts features of the current dialog state and seeks to find the best action given those features. Although it is often easy to posit a large set of potentially useful features, in practice, it is difficult to find the subset which is large enough to contain useful…

In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which…

This paper shows that the location of screen taps on modern smartphones and tablets can be identified from accelerometer and gyroscope readings. Our findings have serious implications, as we demonstrate that an attacker can launch a background process on commodity smartphones and tablets, and silently monitor the user's inputs, such as keyboard presses and…

- Suhrid Balakrishnan, Sumit Chopra
- WSDM
- 2012

Typical recommender systems use the root mean squared error (RMSE) between the predicted and actual ratings as the evaluation metric. We argue that RMSE is not an optimal choice for this task, especially when only a few (top) items will be recommended to any user. Instead, we propose using a ranking metric, namely normalized discounted cumulative gain (NDCG),…
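NDCG can be illustrated with a minimal computation (a generic sketch with made-up ratings, not the paper's code):

```python
import math

def dcg(rels):
    # Discounted cumulative gain: the relevance at rank i is discounted by log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels))

def ndcg_at_k(rels, k):
    # Normalize the DCG of the predicted ordering by the DCG of the ideal
    # (descending-relevance) ordering, so a perfect ranking scores 1.0.
    ideal = dcg(sorted(rels, reverse=True)[:k])
    return dcg(rels[:k]) / ideal if ideal > 0 else 0.0

# Hypothetical true ratings of the top-3 items, in the order the system ranked them:
print(ndcg_at_k([3, 1, 2], k=3))
```

A perfect ordering gives exactly 1.0, and swapping top items is penalized more heavily than swaps further down the list, which is precisely why it suits top-N recommendation better than RMSE.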

- Suhrid Balakrishnan, David Madigan
- Journal of Machine Learning Research
- 2008

Classifiers favoring sparse solutions, such as support vector machines, relevance vector machines, LASSO-regression based classifiers, etc., provide competitive methods for classification problems in high dimensions. However, current algorithms for training sparse classifiers typically scale quite unfavorably with respect to the number of training…
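For a concrete sense of what training such a sparse classifier involves, here is a minimal proximal-gradient (ISTA) solver for L1-penalized logistic regression on synthetic data. It is a generic sketch, not the paper's algorithm; every constant, size, and name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic high-dimensional problem: only the first 3 of 50 features matter.
n, p = 200, 50
X = rng.standard_normal((n, p))
true_w = np.zeros(p)
true_w[:3] = [2.0, -1.5, 1.0]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrink toward zero, clip at zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA (proximal gradient) for L1-penalized logistic regression. The step
# size uses the Lipschitz constant of the logistic loss gradient.
lam = 10.0
step = 1.0 / (0.25 * np.linalg.norm(X, 2) ** 2)
w = np.zeros(p)
for _ in range(500):
    p_hat = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p_hat - y)
    w = soft_threshold(w - step * grad, step * lam)

print(np.count_nonzero(w))  # many of the 50 coefficients end up exactly zero
```

Each iteration touches every training example, which hints at the scaling problem the abstract raises: the per-iteration cost grows linearly with the number of training points.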

For Bayesian analysis of massive data, Markov chain Monte Carlo (MCMC) techniques often prove infeasible due to computational resource constraints. Standard MCMC methods generally require a complete scan of the dataset for each iteration. Ridgeway and Madigan (2002) and Chopin (2002b) recently presented importance sampling algorithms that combined…
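The two-stage idea sketched in the abstract, run a sampler on a small subset and then importance-reweight using the remaining data, can be illustrated on a toy Gaussian-mean problem. This is a hedged sketch, not the cited algorithms; the stage-1 MCMC is replaced by exact conjugate draws, and all sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "massive" dataset: normal observations with unknown mean and known unit variance.
data = rng.normal(2.0, 1.0, size=100_000)
subset, rest = data[:1_000], data[1_000:]

# Stage 1 (stand-in for an MCMC run): draw from the posterior of the mean
# given only the small subset, using a flat prior.
n0 = len(subset)
draws = rng.normal(subset.mean(), 1.0 / np.sqrt(n0), size=5_000)

# Stage 2: importance-reweight each draw by the likelihood of the remaining
# data, so the weighted draws target the full-data posterior without ever
# running the sampler on the full dataset.
s, m = len(rest), rest.sum()
log_w = -0.5 * (s * draws**2 - 2.0 * draws * m)  # theta-independent terms dropped
log_w -= log_w.max()                             # stabilize before exponentiating
w = np.exp(log_w)
w /= w.sum()

posterior_mean = np.sum(w * draws)
print(posterior_mean)  # close to the full-data mean
```

The point of the construction is that the expensive likelihood evaluations over `rest` are simple sums that can be computed in one streaming pass, rather than once per MCMC iteration.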

A computationally efficient means for propagation of uncertainty in computational models is provided by the Stochastic Response Surface Method (SRSM), which facilitates uncertainty analysis through the determination of statistically equivalent reduced models. SRSM expresses random outputs in terms of a "polynomial chaos expansion" of Hermite…
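The SRSM idea described above can be sketched numerically: fit Hermite-polynomial ("polynomial chaos") coefficients to a small number of model runs by least squares, then propagate uncertainty through the cheap surrogate. The model function, sample sizes, and truncation order below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expensive model of one uncertain (standard normal) input;
# a cheap stand-in function is used here for illustration.
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Probabilists' Hermite polynomials He_0, He_1, He_2 evaluated at xi.
def hermite_basis(xi):
    return np.column_stack([np.ones_like(xi), xi, xi**2 - 1.0])

# Fit the expansion coefficients to a modest number of model runs by least
# squares; this regression step is the heart of a stochastic response surface.
xi_train = rng.standard_normal(200)
coeffs, *_ = np.linalg.lstsq(hermite_basis(xi_train), model(xi_train), rcond=None)

# The surrogate now replaces the model for cheap uncertainty propagation.
xi_mc = rng.standard_normal(100_000)
surrogate = hermite_basis(xi_mc) @ coeffs
print(surrogate.mean())  # approximates E[model(xi)]
```

Because the Hermite basis is orthogonal under the standard normal measure, the leading coefficient directly estimates the output mean, which is what makes the reduced model "statistically equivalent" in the sense the abstract describes.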

- Suhrid Balakrishnan, Amit Roy, Marianthi G. Ierapetritou, Gregory P. Flach, Panos G. Georgopoulos
- Subsurface Hydrology
- 2003

In this work, a computationally efficient Bayesian framework for the reduction and characterization of parametric uncertainty in computationally demanding environmental 3-D numerical models has been developed. The framework is based on the combined application of the Stochastic Response Surface Method (SRSM, which generates accurate and computationally…

We explore the use of proper priors for variance parameters of certain sparse Bayesian regression models. This leads to a connection between sparse Bayesian learning (SBL) models (Tipping, 2001) and the recently proposed Bayesian Lasso (Park and Casella, 2008). We outline simple modifications of existing algorithms to solve this new variant which…
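The SBL-Bayesian Lasso connection mentioned here rests on a standard fact: the Laplace (Lasso) prior is a scale mixture of normals with exponentially distributed variances. A quick numerical check of that representation (a generic sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(3)

# Scale-mixture representation (Park and Casella, 2008): if
#   tau2 ~ Exponential(rate = lam**2 / 2)   and   beta | tau2 ~ N(0, tau2),
# then marginally beta ~ Laplace(0, 1/lam), the Lasso prior.
lam = 1.5
tau2 = rng.exponential(scale=2.0 / lam**2, size=1_000_000)  # numpy uses scale = 1/rate
beta = rng.normal(0.0, np.sqrt(tau2))

# Direct Laplace draws for comparison; both should have std sqrt(2)/lam.
direct = rng.laplace(0.0, 1.0 / lam, size=1_000_000)
print(beta.std(), direct.std())
```

This representation is what lets SBL-style algorithms, which place independent normal priors with per-coefficient variances, be repurposed for the Bayesian Lasso with only small modifications.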

- Rui Chen, Christos Faloutsos, +47 authors Maria Rifqi
- 2013