Multi-Kernel Correntropy for Robust Learning

@article{Chen2021MultiKernelCF,
  title={Multi-Kernel Correntropy for Robust Learning},
  author={Badong Chen and Xin Wang and Zejian Yuan and Pengju Ren and Jing Qin},
  journal={IEEE Transactions on Cybernetics},
  year={2021},
  volume={PP}
}
  • Medicine, Mathematics, Computer Science
As a novel similarity measure that is defined as the expectation of a kernel function between two random variables, correntropy has been successfully applied in robust machine learning and signal processing to combat large outliers. The kernel function in correntropy is usually a zero-mean Gaussian kernel. In a recent work, the concept of mixture correntropy (MC) was proposed to improve the learning performance, where the kernel function is a mixture Gaussian kernel, namely, a linear…
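The definition in the abstract can be illustrated with a small sketch. This is a minimal sample estimate of correntropy and of mixture correntropy (a convex combination of Gaussian kernels with different bandwidths); the function names and parameter choices are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Zero-mean Gaussian kernel evaluated at the error e."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def correntropy(x, y, sigma=1.0):
    """Sample estimate of correntropy: the mean of a Gaussian
    kernel applied to the elementwise errors x_i - y_i."""
    return np.mean(gaussian_kernel(x - y, sigma))

def mixture_correntropy(x, y, sigmas=(0.5, 2.0), weights=(0.5, 0.5)):
    """Sample estimate of mixture correntropy: the kernel is a
    convex combination of Gaussian kernels with different widths."""
    e = x - y
    return sum(w * np.mean(gaussian_kernel(e, s))
               for w, s in zip(weights, sigmas))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x + 0.1 * rng.normal(size=1000)   # small noise
y[:10] += 50.0                         # a few large outliers
print(correntropy(x, y), mixture_correntropy(x, y))
```

Because the Gaussian kernel saturates, the ten large outliers contribute almost nothing to either estimate, which is the boundedness property that makes correntropy robust.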

Robust Maximum Mixture Correntropy Criterion-Based Semi-Supervised ELM With Variable Center
This brief explores a more effective semi-supervised ELM learning algorithm based on an optimization scheme built on the robust maximum mixture correntropy criterion (MMCC).
Error Loss Networks
The proposed ELN provides a unified model for a large class of error loss functions, including some information-theoretic learning (ITL) losses as special cases, and introduces a new machine learning paradigm in which the learning process is divided into two stages.

References

Showing 1-10 of 66 references
Mixture correntropy for robust learning
Experimental results show that learning algorithms under MMCC perform very well, achieving better performance than conventional MCC-based algorithms as well as several other state-of-the-art algorithms.
Kernel adaptive filtering with maximum correntropy criterion
A new kernel adaptive algorithm, the kernel maximum correntropy (KMC) filter, is developed; it combines the advantages of the KLMS and the maximum correntropy criterion (MCC), and its convergence and self-regularization properties are studied using the energy conservation relation.
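The core idea behind MCC-based adaptive filtering can be sketched in a few lines. This is a linear LMS-style filter under the MCC (the kernel variant, KMC, applies the same update rule in an RKHS); the function name, step size, and data are illustrative assumptions, not from the reference:

```python
import numpy as np

def mcc_lms(X, d, sigma=1.0, eta=0.1):
    """LMS-style filter trained under the maximum correntropy
    criterion (MCC): the usual LMS update eta * e * x is scaled by
    a Gaussian factor exp(-e^2 / (2 sigma^2)), so large (outlier)
    errors contribute almost nothing to the weight update."""
    w = np.zeros(X.shape[1])
    for x, y in zip(X, d):
        e = y - w @ x
        w += eta * np.exp(-e**2 / (2.0 * sigma**2)) * e * x
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = X @ w_true + 0.01 * rng.normal(size=500)
d[::50] += 100.0   # impulsive outliers in the desired signal
w_hat = mcc_lms(X, d, sigma=2.0, eta=0.2)
print(w_hat)       # close to w_true despite the outliers
```

With a plain LMS update the impulsive samples would drag the weights far off; here the Gaussian factor effectively gates them out.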
Robust C-Loss Kernel Classifiers
This paper studies the C-loss kernel classifier with a Tikhonov regularization term, used to avoid overfitting, and applies the representer theorem to improve the sparseness of the resulting classifiers.
Learning with the maximum correntropy criterion induced losses for regression
This paper focuses on the connections between the regression model associated with the correntropy-induced loss and the least squares regression model, and on its convergence properties.
Correntropy in Data Classification
This chapter addresses the usability of the correntropy-based similarity measure in the paradigm of statistical data classification and considers the issues related to the non-convexity of the correntropic loss function while proposing new classification methods.
Multiple Kernel Learning Algorithms
Overall, using multiple kernels instead of a single one is useful. Combining kernels in a nonlinear or data-dependent way appears more promising than linear combination for fusing information provided by simple linear kernels, whereas linear methods are more reasonable when combining complex Gaussian kernels.
Robust Principal Component Analysis Based on Maximum Correntropy Criterion
Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the L1 norm when outliers occur; it requires no zero-mean assumption on the data and can estimate the data mean during optimization.
The C-loss function for pattern classification
Training a neural network by optimizing the proposed loss function, which lies in the neighborhood of the ideal 0-1 loss, yields a discriminant function that is immune to overfitting, more robust to outliers, and has consistent and better generalization performance than other commonly used loss functions, even after prolonged training.
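One common form of the correntropy-induced C-loss can be sketched to show why it is robust: it behaves like a scaled squared loss for small errors but saturates for large ones. The normalization constant and function name below are illustrative assumptions, not taken from the reference:

```python
import numpy as np

def c_loss(e, sigma=1.0):
    """Correntropy-induced loss (one common form): bounded and
    smooth. beta normalizes the loss so that c_loss(1) == 1."""
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma**2)))
    return beta * (1.0 - np.exp(-e**2 / (2.0 * sigma**2)))

errors = np.array([0.0, 1.0, 10.0])
print(c_loss(errors))   # saturates near beta for |e| >> sigma
print(errors**2)        # squared loss grows without bound
```

The boundedness is what limits the influence of any single outlier on the trained classifier, in contrast to the unbounded squared loss.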
Correntropy-Based Hypergraph Regularized NMF for Clustering and Feature Selection on Multi-Cancer Integrated Data
A novel method called correntropy-based hypergraph regularized NMF (CHNMF) is proposed to solve the complex optimization problem of non-negative matrix factorization. Extensive experimental results indicate that the proposed method is superior to other state-of-the-art methods for clustering and feature selection.
MultiK-MHKS: A Novel Multiple Kernel Learning Algorithm
A new and effective multiple kernel learning algorithm that maximally correlates the m views in the transformed coordinates; it introduces a special term called the Inter-Function Similarity Loss (RIFSI) into the existing regularization framework so as to guarantee the agreement of multi-view outputs.