
- Yaakov Engel, Shie Mannor, Ron Meir
- IEEE Transactions on Signal Processing
- 2004

We present a nonlinear version of the recursive least squares (RLS) algorithm. Our algorithm performs linear regression in a high-dimensional feature space induced by a Mercer kernel and can therefore be used to recursively construct minimum mean-squared-error solutions to nonlinear least-squares problems that are frequently encountered in signal processing…
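As a rough illustration of the idea this abstract describes, here is a minimal recursive kernel least-squares regressor: it grows the inverse of the regularized Gram matrix one sample at a time via a block-matrix (Schur-complement) update. This is a sketch only, not the paper's algorithm — the sparsification step of kernel RLS is omitted, and the RBF kernel, regularization constant, and data handling below are illustrative assumptions.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Illustrative Mercer kernel: Gaussian RBF."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class RecursiveKernelLS:
    def __init__(self, lam=0.1, gamma=1.0):
        self.lam, self.gamma = lam, gamma
        self.X, self.y = [], []
        self.A_inv = None  # inverse of (K + lam * I), maintained incrementally

    def update(self, x, t):
        """Incorporate one (input, target) pair in O(t^2) time."""
        k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
        d = rbf(x, x, self.gamma) + self.lam
        if self.A_inv is None:
            self.A_inv = np.array([[1.0 / d]])
        else:
            # Grow A_inv by one row/column using the block-matrix inverse.
            Ak = self.A_inv @ k
            s = 1.0 / (d - k @ Ak)            # inverse Schur complement
            top = self.A_inv + s * np.outer(Ak, Ak)
            self.A_inv = np.block([[top, -s * Ak[:, None]],
                                   [-s * Ak[None, :], np.array([[s]])]])
        self.X.append(x)
        self.y.append(t)

    def predict(self, x):
        k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
        alpha = self.A_inv @ np.array(self.y)  # regression coefficients
        return k @ alpha
```

After each update the coefficients agree with the batch kernel ridge solution `solve(K + lam*I, y)`, but each step costs only O(t^2) instead of refactoring from scratch.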

- Yaakov Engel, Shie Mannor, Ron Meir
- ICML
- 2005

Gaussian Process Temporal Difference (GPTD) learning offers a Bayesian solution to the policy evaluation problem of reinforcement learning. In this paper we extend the GPTD framework by addressing two pressing issues, which were not adequately treated in the original GPTD paper (Engel et al., 2003). The first is the issue of stochasticity in the state…

- Ron Meir, Gunnar Rätsch
- Machine Learning Summer School
- 2002

We provide an introduction to theoretical and practical aspects of Boosting and Ensemble learning, offering a useful reference both for researchers in the field of Boosting and for those seeking to enter this fascinating area of research. We begin with a short background concerning the necessary learning-theoretic foundations of weak learners and their…
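The core boosting loop the tutorial covers can be sketched in a few lines: reweight the training examples after each round so the next weak learner concentrates on previous mistakes. The following minimal AdaBoost with exhaustive decision stumps is an illustrative sketch, not code from the tutorial; the stump learner, number of rounds, and data are all assumptions.

```python
import numpy as np

def fit_stump(X, y, w):
    """Exhaustively pick (feature, threshold, sign) minimizing weighted error."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)          # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak learner's vote weight
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```

On a 1D "interval" target (positive only inside (0.3, 0.7)) no single stump is accurate, but the weighted vote of a few stumps fits it well — the weak-to-strong effect the tutorial analyzes.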

- Yaakov Engel, Shie Mannor, Ron Meir
- 2004

We present a non-linear kernel-based version of the Recursive Least Squares (RLS) algorithm. Our Kernel-RLS algorithm performs linear regression in the feature space induced by a Mercer kernel, and can therefore be used to recursively construct the minimum mean-squared-error regressor. Sparsity (and therefore regularization) of the solution is achieved by an…

- Yaakov Engel, Shie Mannor, Ron Meir
- ICML
- 2003

We present a novel Bayesian approach to the problem of value function estimation in continuous state spaces. We define a probabilistic generative model for the value function by imposing a Gaussian prior over value functions and assuming a Gaussian noise model. Due to the Gaussian nature of the random processes involved, the posterior distribution of the…
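The Gaussian-process machinery this abstract relies on reduces, under a Gaussian prior and Gaussian noise, to a closed-form posterior. The sketch below computes that posterior mean and covariance for generic GP regression — it is not the paper's temporal-difference formulation; the RBF kernel, noise level, and data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF Gram matrix from pairwise squared distances."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def gp_posterior(X, y, X_star, noise=0.1, gamma=1.0):
    """Posterior mean and covariance at test inputs X_star."""
    K = rbf_kernel(X, X, gamma) + noise**2 * np.eye(len(X))
    k_star = rbf_kernel(X_star, X, gamma)
    alpha = np.linalg.solve(K, y)
    mean = k_star @ alpha
    v = np.linalg.solve(K, k_star.T)
    cov = rbf_kernel(X_star, X_star, gamma) - k_star @ v
    return mean, cov
```

The posterior variance behaves as a Bayesian treatment promises: it is small near observed states and reverts to the prior variance far from the data — the uncertainty information that distinguishes this approach from point estimates of the value function.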

- Ron Meir
- Machine Learning
- 2000

We consider the problem of one-step-ahead prediction for time series generated by an underlying stationary stochastic process obeying the condition of absolute regularity, which describes the mixing nature of the process. We make use of recent results from the theory of empirical processes, and adapt the uniform convergence framework of Vapnik and Chervonenkis to…

- Ron Meir, Tong Zhang
- Journal of Machine Learning Research
- 2003

Bayesian approaches to learning and estimation have played a significant role in the Statistics literature over many years. While they are often provably optimal in a frequentist setting, and lead to excellent performance in practical applications, there have not been many precise characterizations of their performance for finite sample sizes under general…

- Yaakov Engel, Shie Mannor, Ron Meir
- ECML
- 2002

We present a novel algorithm for sparse online greedy kernel-based nonlinear regression. This algorithm improves current approaches to kernel-based regression in two aspects. First, it operates online: at each time step it observes a single new input sample, performs an update, and discards it. Second, the solution maintained is extremely sparse. This is…
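The sparsification idea behind this line of work can be sketched as an approximate-linear-dependence (ALD) test: a new input joins a small "dictionary" only if its feature-space image cannot be well approximated by a linear combination of the dictionary's images. The code below is an illustrative sketch of that test alone — the kernel, threshold, and the omitted surrounding regression updates are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Illustrative Mercer kernel: Gaussian RBF."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class SparseDictionary:
    def __init__(self, nu=0.1, gamma=1.0):
        self.nu, self.gamma = nu, gamma
        self.D = []        # dictionary inputs
        self.Kinv = None   # inverse of the dictionary Gram matrix

    def admit(self, x):
        """Grow the dictionary (and return True) iff x passes the ALD test."""
        if not self.D:
            self.D.append(x)
            self.Kinv = np.array([[1.0 / rbf(x, x, self.gamma)]])
            return True
        k = np.array([rbf(d, x, self.gamma) for d in self.D])
        a = self.Kinv @ k                       # best approximation coefficients
        delta = rbf(x, x, self.gamma) - k @ a   # squared residual in feature space
        if delta <= self.nu:
            return False                        # approximately dependent: skip
        # Grow Kinv by one row/column via the block-matrix inverse.
        s = 1.0 / delta
        top = self.Kinv + s * np.outer(a, a)
        self.Kinv = np.block([[top, -s * a[:, None]],
                              [-s * a[None, :], np.array([[s]])]])
        self.D.append(x)
        return True
```

Because redundant samples are rejected, the dictionary size depends on the geometry of the input distribution rather than on the number of samples seen — the source of the extreme sparsity the abstract mentions.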

- George Leifman, Ron Meir, Ayellet Tal
- The Visual Computer
- 2005

Shape-based retrieval of 3D models has become an important challenge in computer graphics. Object similarity, however, is a subjective matter, dependent on the human viewer, since objects have semantics and are not mere geometric entities. Relevance feedback aims at addressing the subjectivity of similarity. This paper presents a novel relevance feedback…

- Dorit Baras, Ron Meir
- Neural Computation
- 2007

Learning agents, whether natural or artificial, must update their internal parameters in order to improve their behavior over time. In reinforcement learning, this plasticity is influenced by an environmental signal, termed a reward, that directs the changes in appropriate directions. We apply a recently introduced policy learning algorithm from machine…