Pattern Recognition and Machine Learning

  • Radford M. Neal
  • pp. 366
…the selection of symmetric factorial designs, that is, designs in which all factors have the same number of levels. Chapter 3 focuses on the selection of two-level factorial designs and discusses complementary design theory and related topics in design selection. Chapter 4 covers the selection of three-level designs, followed by the general case of s-level designs. Chapter 5 discusses estimation capacity, presenting the connections with complementary designs, followed by the estimation capacity for…
New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables
This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product…
Product Portfolio Selection of Designs Through an Analysis of Lower-Dimensional Manifolds and Identification of Common Properties
This work introduces a product family hierarchy in which designs can be classified into a phenomenological design family, a functional part family and an embodiment part family, and uses multi-objective optimisation to identify the non-dominated solutions, i.e. the Pareto front.
Joint multitask feature learning and classifier design
  • S. Gutta, Qi Cheng
  • Computer Science
    2013 47th Annual Conference on Information Sciences and Systems (CISS)
  • 2013
This paper proposes a new multitask learning approach in which feature selection and classifier design for all the binary classification tasks are carried out simultaneously, and considers probabilistic nonlinear kernel classifiers for binary classification.
Mixtures of Gaussian Distributions under Linear Dimensionality Reduction
This paper presents a mixture model for dimensionality reduction based on a linear transformation that is not restricted to be orthogonal, and compares the classification performance of the proposed method with that of other popular classifiers, including the mixture of probabilistic principal component analyzers and the Gaussian mixture model.
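The general idea behind such methods can be sketched in a few lines (a hypothetical illustration, not the paper's algorithm): project the data through a linear map whose columns are deliberately not orthogonal, then classify by Gaussian class-conditional likelihoods in the reduced space. The paper learns the transformation jointly with the mixture; here the map `W` is fixed for brevity, and all names are invented for the example.

```python
import numpy as np

def fit_class_gaussians(Z, y):
    """Fit one Gaussian (mean, covariance) per class in the projected space."""
    params = {}
    for c in np.unique(y):
        Zc = Z[y == c]
        cov = np.cov(Zc, rowvar=False) + 1e-6 * np.eye(Z.shape[1])  # regularized
        params[c] = (Zc.mean(axis=0), cov)
    return params

def log_gauss(Z, mean, cov):
    """Log-density of each row of Z under N(mean, cov)."""
    d = Z - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', d, inv, d)  # per-row quadratic form
    return -0.5 * (quad + logdet + Z.shape[1] * np.log(2 * np.pi))

# Toy data: two 4-D Gaussian classes with different means.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# A fixed linear map whose columns are NOT orthogonal (their dot product is 0.53).
W = np.array([[1.0, 0.2], [0.2, 1.0], [0.5, 0.5], [0.3, -0.4]])
Z = X @ W                                   # reduce 4-D -> 2-D

params = fit_class_gaussians(Z, y)
scores = np.column_stack([log_gauss(Z, *params[c]) for c in sorted(params)])
pred = scores.argmax(axis=1)                # class with the highest likelihood
```

Because the projection is not constrained to be orthogonal, the induced covariance in the reduced space is `W.T @ W`, which the per-class Gaussians absorb directly.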
Pattern Recognition and Machine Learning
…density, say, whereas least-squares cross-validation can be considered a universally applicable method. The authors present an example for a single set of simulated data (from a bimodal density) to…
Local-Learning-Based Feature Selection for High-Dimensional Data Analysis
This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major…
On efficient methods for high-dimensional statistical estimation
The first main contribution of the thesis is the development of moment-matching techniques for multi-index non-linear regression problems; the thesis also proposes the averaging of moment parameters, which are called prediction functions, for finite-dimensional models.
Large-deviation analysis and applications Of learning tree-structured graphical models
It is proved that among all unlabeled trees, the star and the chain are the worst and best for learning, respectively, and scaling laws on the number of samples and the number of variables required for structure learning to remain consistent in high dimensions are established.
Bernoulli Mixture Models for Markov Blanket Filtering and Classification
  • M. Saeed
  • Computer Science
    WCCI Causation and Prediction Challenge
  • 2008
The use of Bernoulli mixture models for Markov blanket filtering and classification of binary data is presented, overcoming the shortcomings of their algorithm and increasing its efficiency considerably.
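The Bernoulli mixture model underlying this entry can be fit with a few lines of EM; the sketch below is illustrative only (not the paper's filtering algorithm), and all names in it are invented for the example. Each component j has a mixing weight pi_j and a vector mu_j of per-feature success probabilities.

```python
import numpy as np

def em_bernoulli_mixture(X, k, n_iter=50, seed=0):
    """Fit a k-component Bernoulli mixture to binary data X (n x d) via EM.
    Returns mixing weights pi (k,) and success probabilities mu (k, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)
    mu = rng.uniform(0.25, 0.75, size=(k, d))   # init away from 0/1
    for _ in range(n_iter):
        # E-step: log responsibilities, r[i, j] proportional to
        # pi_j * prod_f mu_jf^x_if * (1 - mu_jf)^(1 - x_if)
        logp = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)  # stabilize before exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and per-component probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        mu = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, mu

# Toy binary data from two well-separated Bernoulli prototypes.
rng = np.random.default_rng(1)
mu_true = np.vstack([np.r_[np.full(5, 0.9), np.full(5, 0.1)],
                     np.r_[np.full(5, 0.1), np.full(5, 0.9)]])
comp = rng.integers(0, 2, size=400)
X = (rng.random((400, 10)) < mu_true[comp]).astype(float)

pi, mu = em_bernoulli_mixture(X, k=2)
```

On data this separable, the recovered `mu` rows approximate the two generating prototypes (up to component relabeling) and the weights settle near 0.5 each.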
Prototype Classification: Insights from Machine Learning
This work sheds light on the discrimination between patterns belonging to two different classes by casting the decoding problem into a generalized prototype framework. It relates mean-of-class prototype classification to other classification algorithms by showing that the prototype classifier is a limit of any soft-margin classifier and that boosting a prototype classifier yields the support vector machine.
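The mean-of-class prototype classifier that anchors this framework is simple to state in code. The numpy sketch below is a minimal illustration (not the paper's implementation): each class is represented by the mean of its training points, and a test point is assigned to the class of the nearest prototype.

```python
import numpy as np

def fit_prototypes(X, y):
    """Compute the mean-of-class prototype for each label."""
    labels = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in labels])
    return labels, protos

def predict(X, labels, protos):
    """Assign each point to the class of its nearest prototype
    (squared Euclidean distance)."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return labels[d.argmin(axis=1)]

# Two well-separated 2-D clusters as toy data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(3.0, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

labels, protos = fit_prototypes(X, y)
pred = predict(X, labels, protos)
```

The decision boundary is the perpendicular bisector between the two class means, which is exactly the zero-margin limit the paper relates to soft-margin classifiers.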