Incremental Sparse Bayesian Ordinal Regression

@article{Li2018IncrementalSB,
  title={Incremental Sparse Bayesian Ordinal Regression},
  author={Chang Li and M. de Rijke},
  journal={Neural Networks},
  year={2018},
  volume={106},
  pages={294--302}
}
Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning. An important class of approaches to OR models the problem as a linear combination of basis functions that map features to a high-dimensional non-linear space. However, most basis-function-based algorithms are time-consuming. We propose an incremental sparse Bayesian approach to OR tasks and introduce an algorithm to sequentially learn the…
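The basis-function view described in the abstract can be sketched as follows. This is a minimal illustrative sketch, assuming Gaussian RBF basis functions and a ridge penalty as a crude stand-in for the Bayesian prior; it is not the paper's incremental algorithm, and all function names are hypothetical:

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Map features into a non-linear space with Gaussian RBF basis functions."""
    # Squared Euclidean distance between every sample and every basis centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_ordinal_ridge(X, y, centers, gamma=1.0, alpha=1e-2):
    """Fit w for f(x) = Phi(x) @ w by ridge regression on the rank labels.

    The ridge penalty plays the role of a single shared Gaussian prior over
    the weights; a sparse Bayesian treatment would learn per-weight priors.
    """
    Phi = rbf_features(X, centers, gamma)
    A = Phi.T @ Phi + alpha * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def predict_rank(X, centers, w, thresholds, gamma=1.0):
    """Assign each sample the ordinal label of the latent-score bin it falls in."""
    f = rbf_features(X, centers, gamma) @ w
    # Ordered thresholds cut the latent axis into K bins; searchsorted
    # maps each latent score to its bin index, i.e. its predicted rank.
    return np.searchsorted(thresholds, f)
```

Since every training point contributes a basis function, the design matrix grows with the data, which is why dense basis-function methods become time-consuming and why sparsity over the weights matters.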
Probabilistic Feature Selection and Classification Vector Machine
TLDR
This work proposes a novel sparse Bayesian embedded feature selection algorithm that adopts truncated Gaussian distributions as both sample and feature priors and derives a generalization error bound for PFCVMLP, which demonstrates the importance of feature selection.
Solving large-scale support vector ordinal regression with asynchronous parallel coordinate descent algorithms
TLDR
A special SVOR formulation is highlighted whose thresholds are described implicitly, so that the dual formulation is concise enough to apply a state-of-the-art asynchronous parallel coordinate descent algorithm such as AsyGCD.
Feature Relevance Determination for Ordinal Regression in the Context of Feature Redundancies and Privileged Information
TLDR
This contribution focuses on feature selection paradigms, which enable us to uncover relevant factors of a given regularity based on a sparse model by determining feature relevance bounds, which correspond to the minimum and maximum feature relevance when searched over all equivalent models.
Online Learning to Rank with List-level Feedback for Image Filtering
TLDR
This paper proposes two methods that enable list-level feedback in online learning to rank (OLTR), and evaluates the proposed methods on the image filtering task, in which deep neural networks are used to rank images in response to a set of standing queries.
Broad Learning System Based on Nonlinear Transformation and Feedback Adjustment
TLDR
In this paper, weather data sets are used to demonstrate the rationality and effectiveness of the algorithm framework, and the best non-linear mapping function for the data sets is found from the perspective of data skew.
Personalized broad learning system for facial expression
TLDR
This paper proposes a transfer learning model that uses emotional information entropy as the mapping feature layer to ensure mapping accuracy under small sample sizes, and exhibits the superiority of the approach on multiple facial expression datasets.

References

Showing 1-10 of 42 references
Ordinal Regression with Sparse Bayesian
TLDR
A probabilistic framework for ordinal prediction is proposed, which can be used in modeling ordinal regression, and a sparse Bayesian treatment is given, in which an automatic relevance determination prior over weights is used.
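The automatic relevance determination (ARD) prior mentioned in this snippet can be illustrated with the classic RVM-style type-II maximum likelihood updates. This is a hedged sketch for a Gaussian (regression) likelihood rather than an ordinal likelihood; `ard_regression` is an illustrative name, not code from the paper:

```python
import numpy as np

def ard_regression(Phi, t, beta=25.0, n_iter=50, prune=1e6):
    """Type-II ML updates for an RVM-style ARD prior (Gaussian likelihood).

    Each weight w_i gets its own prior precision alpha_i; weights whose
    alpha_i diverges are effectively pruned, which is where the sparsity
    comes from. The noise precision `beta` is held fixed for simplicity.
    """
    n_basis = Phi.shape[1]
    alpha = np.ones(n_basis)
    for _ in range(n_iter):
        # Posterior over weights given the current ARD precisions.
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        # MacKay's fixed-point update: gamma_i measures how well weight i
        # is determined by the data; alpha_i -> infinity prunes basis i.
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = np.clip(gamma / np.maximum(mu ** 2, 1e-12), 1e-6, prune)
    return mu, alpha
```

After convergence, most `alpha` entries saturate at the pruning cap, leaving only a few active basis functions with finite precision.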
Semi-supervised Gaussian Process Ordinal Regression
TLDR
Experimental results demonstrate that the proposed GP-based approach makes effective use of the unlabeled data to give better generalization performance than the supervised approach, and is a useful approach for probabilistic semi-supervised ordinal regression problems.
A Maximum Margin Approach for Semisupervised Ordinal Regression Clustering
TLDR
The proposed M2SORC method is formulated to maximize the margins of the closest neighboring clusters while minimizing the loss associated with the sample-ranking constraints, and outperforms the traditional semisupervised clustering methods considered.
Probabilistic Feature Selection and Classification Vector Machine
TLDR
This work proposes a novel sparse Bayesian embedded feature selection algorithm that adopts truncated Gaussian distributions as both sample and feature priors and derives a generalization error bound for PFCVMLP, which demonstrates the importance of feature selection.
Kernel Discriminant Learning for Ordinal Regression
TLDR
This paper proposes a novel regression method that extends Kernel Discriminant Learning with a rank constraint, and demonstrates experimentally that the proposed method is capable of preserving the rank of data classes in a projected data space.
Sparse Bayesian approach for feature selection
  • Chang Li, H. Chen
  • Mathematics, Computer Science
  • 2014 IEEE Symposium on Computational Intelligence in Big Data (CIBD)
  • 2014
TLDR
Improved Gaussian priors, termed truncated Gaussian priors, are introduced into the feature space for feature selection and into the sample space to induce sparsity in the weight parameters.
Ordinal regression based on learning vector quantization
TLDR
This paper extends the previous approach, termed ordinal generalized matrix learning vector quantization, with a more suitable and natural cost function, leading to more intuitive parameter update rules; in this approach the bandwidth of the prototype weights is automatically adapted.
Validation Based Sparse Gaussian Processes for Ordinal Regression
TLDR
A variant of the Gaussian process ordinal regression (GPOR) approach, leave-one-out GPOR (LOO-GPOR), performs model selection using the leave-one-out cross-validation technique and provides an approach to designing a sparse model for GPOR.
Gaussian Processes for Ordinal Regression
We present a probabilistic kernel approach to ordinal regression based on Gaussian processes. A threshold model that generalizes the probit function is used as the likelihood function for ordinal…
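The threshold model mentioned in this reference can be made concrete: with ordered cut points b_1 < … < b_{K-1} on the latent axis (and implicit b_0 = −∞, b_K = +∞), the probit likelihood of rank k given latent value f is Φ(b_k − f) − Φ(b_{k−1} − f). A minimal sketch, with illustrative function names:

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordinal_probit_likelihood(f, y, thresholds):
    """P(y | f) under the threshold-probit model for ordinal regression.

    `thresholds` holds b_1 < ... < b_{K-1}; b_0 = -inf and b_K = +inf are
    added implicitly, so the K bins cover the whole latent axis. `y` is
    the rank label in {0, ..., K-1}.
    """
    b = np.concatenate(([-np.inf], thresholds, [np.inf]))
    return std_normal_cdf(b[y + 1] - f) - std_normal_cdf(b[y] - f)
```

Because the bins partition the latent axis, the likelihoods of the K ranks sum to one for any latent value f, which is what makes this a proper ordinal likelihood for a GP over f.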
Multiple-Instance Ordinal Regression
TLDR
Compared with existing single-instance OR work, MIOR is able to learn a more accurate OR classifier on multiple-instance data, where only the bag label is available and the instance labels are unknown.