
- J. F. G. de Freitas, M. Niranjan, A. H. Gee, A. Doucet
- 1998

We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent/sampling importance resampling algorithm (HySIR). In terms of both computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be…
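The sampling importance resampling (SIR) step at the core of these sequential Monte Carlo methods can be sketched as follows; this is a minimal generic illustration, not the paper's HySIR algorithm, and all names are illustrative:

```python
import numpy as np

def sir_resample(particles, log_weights, rng=None):
    """Sampling importance resampling: redraw particles with
    probability proportional to their normalised importance weights."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.exp(log_weights - log_weights.max())  # subtract max for stability
    w /= w.sum()                                 # normalise to a distribution
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy usage: particles near 0 carry higher Gaussian log-likelihood,
# so they dominate the population after resampling.
particles = np.linspace(-3, 3, 100)
log_w = -0.5 * particles ** 2
resampled = sir_resample(particles, log_w, rng=np.random.default_rng(0))
```

After resampling, the population concentrates where the likelihood is high, which is what lets a gradient step (the "hybrid" part of HySIR) be combined with the sampler.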

- J. F. G. de Freitas, M. Niranjan, A. H. Gee
- 1998

In this paper, we show that a hierarchical Bayesian modelling approach to sequential learning leads to many interesting attributes such as regularisation and automatic relevance determination. We identify three inference levels within this hierarchy, namely model selection, parameter estimation and noise estimation. In environments where data arrives…
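The automatic relevance determination behaviour mentioned here can be illustrated with MacKay-style fixed-point updates for a linear-in-the-parameters model. This is a batch sketch under simplifying assumptions, not the paper's sequential hierarchical scheme, and all names are illustrative:

```python
import numpy as np

def ard_weights(Phi, t, n_iter=50):
    """Automatic relevance determination for linear regression: each
    weight gets its own prior precision alpha_i; irrelevant inputs are
    pruned as their alpha grows large (MacKay-style fixed-point updates)."""
    n, d = Phi.shape
    alpha = np.ones(d)                        # per-weight prior precisions
    beta = 1.0                                # observation noise precision
    for _ in range(n_iter):
        S = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        m = beta * S @ Phi.T @ t              # posterior mean of the weights
        gamma = 1.0 - alpha * np.diag(S)      # effective number of parameters
        alpha = gamma / (m ** 2 + 1e-12)      # re-estimate precisions
        beta = (n - gamma.sum()) / np.sum((t - Phi @ m) ** 2)
    return m, alpha

# Toy data: only the first of three inputs is relevant.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 3))
t = 2.0 * Phi[:, 0] + 0.1 * rng.normal(size=200)
m, alpha = ard_weights(Phi, t)  # alpha[1], alpha[2] grow large; m[0] near 2
```

The growing precisions on the two irrelevant inputs are the "relevance determination": their posterior weights are driven towards zero without any explicit feature selection step.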

- J. F. G. de Freitas, M. Niranjan, A. H. Gee
- 1998

In this paper, we derive an EM algorithm for nonlinear state space models. We use it to estimate jointly the neural network weights, the model uncertainty and the noise in the data. In the E-step we apply a forward-backward Rauch-Tung-Striebel smoother to compute the network weights. For the M-step, we derive expressions to compute the model uncertainty and…
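For a linear-Gaussian state space model, the forward-backward Rauch-Tung-Striebel smoother used in the E-step reduces to the following scalar sketch; the paper applies nonlinear extensions to neural-network weights, and the parameter names here are illustrative:

```python
import numpy as np

def rts_smoother(y, a, c, q, r, m0, p0):
    """Scalar Kalman filter (forward pass) followed by a
    Rauch-Tung-Striebel smoother (backward pass) for the model
    x_t = a*x_{t-1} + w_t,  y_t = c*x_t + v_t,  w~N(0,q), v~N(0,r)."""
    n = len(y)
    mf, pf = np.zeros(n), np.zeros(n)          # filtered mean/variance
    mp, pp = np.zeros(n), np.zeros(n)          # predicted mean/variance
    m, p = m0, p0
    for t in range(n):                         # forward (filter) pass
        mp[t], pp[t] = a * m, a * a * p + q    # predict
        k = pp[t] * c / (c * c * pp[t] + r)    # Kalman gain
        m = mp[t] + k * (y[t] - c * mp[t])     # measurement update
        p = (1.0 - k * c) * pp[t]
        mf[t], pf[t] = m, p
    ms, ps = mf.copy(), pf.copy()              # backward (smoother) pass
    for t in range(n - 2, -1, -1):
        g = pf[t] * a / pp[t + 1]              # smoother gain
        ms[t] = mf[t] + g * (ms[t + 1] - mp[t + 1])
        ps[t] = pf[t] + g * g * (ps[t + 1] - pp[t + 1])
    return ms, ps

# Toy usage: smooth noisy observations of an AR(1) state.
rng = np.random.default_rng(0)
n = 50
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.3)
y = x + rng.normal(scale=0.5, size=n)
ms, ps = rts_smoother(y, a=0.9, c=1.0, q=0.09, r=0.25, m0=0.0, p0=1.0)
```

Because the backward pass conditions each state on the whole observation sequence, the smoothed estimates have lower error than either the raw observations or the forward filter alone, which is what makes the E-step statistics useful for the M-step parameter updates.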

- Hongyu Li, M. Niranjan
- 2007 IEEE Workshop on Machine Learning for Signal…
- 2007

In this paper, we report on an empirical study of several high dimensional classification problems and show that much of the discriminant information may lie in low dimensional subspaces. Feature subset selection is achieved either by forward selection or backward elimination from the full feature space with support vector machines (SVMs) as base…

The analysis of a speech segment is conventionally performed through linear prediction and the subsequent minimisation of a data error term in the least squares sense. The parameters derived as such maximise the likelihood of the data. In a learning problem, the addition of penalty terms, or regularisers, to the data term facilitates the estimation of the…
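The regularised least-squares formulation described here can be sketched as a ridge-penalised linear predictor; the quadratic penalty is a generic illustration of adding a regulariser to the data term, not necessarily the paper's specific choice:

```python
import numpy as np

def regularised_lpc(x, order, lam):
    """Linear prediction with a quadratic (ridge) penalty: minimise
    ||b - A a||^2 + lam * ||a||^2, where row t of A holds the
    `order` past samples used to predict b[t] = x[t]."""
    n = len(x)
    # Column k holds x[t-k-1] for t = order .. n-1 (the lagged samples).
    A = np.column_stack([x[order - k - 1: n - k - 1] for k in range(order)])
    b = x[order:]
    # Regularised normal equations: (A^T A + lam I) a = A^T b
    return np.linalg.solve(A.T @ A + lam * np.eye(order), A.T @ b)

# Toy usage: recover the coefficients of a synthetic AR(2) signal.
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.normal(scale=0.1)
a = regularised_lpc(x, order=2, lam=1e-3)  # a close to [0.6, -0.2]
```

With `lam = 0` this reduces to the conventional least-squares (maximum-likelihood) solution; a positive `lam` trades a small bias for better-conditioned estimates on short or noisy segments.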


- Mahendra Singh Niranjan, Sunil Jha
- IJMTM
- 2016
