Inductive Semi-supervised Learning Through Optimal Transport

@inproceedings{Hamri2021InductiveSL,
  title={Inductive Semi-supervised Learning Through Optimal Transport},
  author={Mourad El Hamri and Youn{\`e}s Bennani and Issam Falih},
  booktitle={ICONIP},
  year={2021}
}
In this paper, we tackle the inductive semi-supervised learning problem, which aims to obtain label predictions for out-of-sample data. The proposed approach, called Optimal Transport Induction (OTI), efficiently extends an optimal transport based transductive algorithm (OTP) to inductive tasks in both binary and multi-class settings. A series of experiments is conducted on several datasets to compare the proposed approach with state-of-the-art methods. Experiments demonstrate the…
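The abstract does not spell out OTI's inductive rule, so the following is only a minimal generic sketch of out-of-sample prediction once a transductive method (such as OTP) has labeled the training points: a kernel-weighted vote over nearby training points. The Gaussian kernel and the bandwidth gamma are assumptions of this sketch, not details taken from the paper.

import numpy as np

def induce_labels(X_train, y_train, X_new, gamma=1.0):
    # Generic out-of-sample extension: a kernel-weighted vote over the training
    # points (an illustration only, not the OTI rule itself).
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)                          # Gaussian affinities (assumed kernel)
    classes = np.unique(y_train)
    scores = np.stack([W[:, y_train == c].sum(axis=1) for c in classes], axis=1)
    return classes[scores.argmax(axis=1)]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 2))
y_train = (X_train[:, 0] > 0).astype(int)            # e.g. labels produced by a transductive step
X_new = rng.normal(size=(5, 2))
print(induce_labels(X_train, y_train, X_new))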

Incremental Unsupervised Domain Adaptation Through Optimal Transport

The proposed approach, called DA-OTP, aims to learn a gradual subspace alignment of the source and target domains through Supervised Locality Preserving Projection, so that projected data in the joint low-dimensional latent subspace can be domain-invariant and easily separable.
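DA-OTP itself is not reproduced here; the sketch below only illustrates a plain Supervised Locality Preserving Projection step of the kind it builds on. The binary same-class affinity matrix and the small ridge term for numerical stability are choices of this sketch, not details from the paper.

import numpy as np
from scipy.linalg import eigh

def slpp(X, y, k=2, ridge=1e-6):
    # Supervised Locality Preserving Projection: keep same-class points close
    # in the projected subspace.
    S = (y[:, None] == y[None, :]).astype(float)     # same-class affinity (assumed choice)
    np.fill_diagonal(S, 0.0)
    D = np.diag(S.sum(axis=1))
    L = D - S                                        # Laplacian of the supervised graph
    A = X.T @ L @ X
    B = X.T @ D @ X + ridge * np.eye(X.shape[1])     # ridge keeps B positive definite
    _, vecs = eigh(A, B)                             # generalized eigenproblem A v = lambda B v
    return vecs[:, :k]                               # smallest eigenvectors span the subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = rng.integers(0, 2, size=30)
Z = X @ slpp(X, y, k=2)                              # 30 x 2 projected data
print(Z.shape)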

References


Label Propagation Through Optimal Transport

The proposed approach, Optimal Transport Propagation (OTP), performs label propagation incrementally through the edges of a complete bipartite edge-weighted graph whose affinity matrix is constructed from the optimal transport plan between empirical measures defined on the labeled and unlabeled data.
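A condensed sketch of that idea, assuming the POT library (import ot) for the exact transport plan; the incremental, confidence-driven schedule of the full OTP algorithm is omitted and a single propagation step is shown.

import numpy as np
import ot  # POT: Python Optimal Transport (assumed installed)

def otp_step(X_l, y_l, X_u, n_classes):
    # One non-incremental propagation step: the optimal transport plan between
    # the labeled and unlabeled empirical measures plays the role of the
    # bipartite affinity matrix.
    a = np.full(len(X_l), 1.0 / len(X_l))            # uniform measure on labeled points
    b = np.full(len(X_u), 1.0 / len(X_u))            # uniform measure on unlabeled points
    M = ot.dist(X_l, X_u)                            # squared Euclidean cost matrix
    G = ot.emd(a, b, M)                              # optimal transport plan (l x u)
    P = G / G.sum(axis=0, keepdims=True)             # class distribution per unlabeled point
    F = P.T @ np.eye(n_classes)[y_l]                 # u x c class scores
    return F.argmax(axis=1)

rng = np.random.default_rng(0)
X_l = rng.normal(size=(10, 2)); y_l = (X_l[:, 0] > 0).astype(int)
X_u = rng.normal(size=(15, 2))
print(otp_step(X_l, y_l, X_u, n_classes=2))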

A survey on semi-supervised learning

This survey aims to provide researchers and practitioners new to the field as well as more advanced readers with a solid understanding of the main approaches and algorithms developed over the past two decades, with an emphasis on the most prominent and currently relevant work.

Efficient Non-Parametric Function Induction in Semi-Supervised Learning

The proposed non-parametric algorithms, which provide an estimated continuous label for the given unlabeled examples, are extended to function induction algorithms that minimize a regularization criterion applied to an out-of-sample example and turn out to have the form of a Parzen windows regressor.
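A minimal sketch of the Parzen-windows form of such an out-of-sample rule, assuming a Gaussian window and continuous label estimates already obtained on the training points.

import numpy as np

def parzen_induction(X_train, y_hat, x_new, sigma=1.0):
    # Out-of-sample prediction as a window-weighted average of the estimated
    # continuous labels on the training points (Parzen windows regressor).
    d2 = ((X_train - x_new) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian window (assumed choice)
    return (w @ y_hat) / w.sum()

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 3))
y_hat = np.tanh(X_train[:, 0])                       # stand-in for estimated continuous labels
print(parzen_induction(X_train, y_hat, x_new=np.zeros(3)))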

Label Propagation through Linear Neighborhoods

A novel graph-based semi-supervised learning approach is proposed, based on a linear neighborhood model which assumes that each data point can be linearly reconstructed from its neighborhood; labels are then propagated from the labeled points to the whole data set through these linear neighborhoods with sufficient smoothness.
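A rough sketch of linear neighborhood propagation under two simplifying assumptions of this sketch: the nonnegativity constraint on the reconstruction weights is dropped so they admit a closed-form solve, and propagation runs for a fixed number of iterations.

import numpy as np

def reconstruction_weights(X, k=5, reg=1e-3):
    # Each point is reconstructed from its k nearest neighbors with sum-to-one
    # weights (the nonnegativity constraint of the original method is dropped here).
    n = len(X)
    W = np.zeros((n, n))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]            # skip the point itself
        Z = X[nbrs] - X[i]                           # neighbors centered at x_i
        G = Z @ Z.T + reg * np.eye(k)                # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                     # enforce the sum-to-one constraint
    return W

def propagate(W, y, labeled, alpha=0.99, iters=200):
    # Spread labels through the linear neighborhoods while pulling labeled
    # points back toward their known labels.
    classes = np.unique(y[labeled])
    Y0 = np.zeros((len(y), len(classes)))
    Y0[labeled] = np.eye(len(classes))[np.searchsorted(classes, y[labeled])]
    F = Y0.copy()
    for _ in range(iters):
        F = alpha * (W @ F) + (1 - alpha) * Y0
    return classes[F.argmax(axis=1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] > 0).astype(int)
labeled = np.zeros(40, dtype=bool); labeled[:10] = True
print(propagate(reconstruction_weights(X), y, labeled))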

Graph-based semi-supervised learning

First, a method called linear neighborhood propagation is proposed, which can automatically construct the optimal graph, and a novel multilevel scheme is introduced to make the algorithm scalable for large data sets.

Learning from labeled and unlabeled data with label propagation

A simple iterative algorithm is proposed that propagates labels through the dataset along high-density areas defined by the unlabeled data; its solution and its connections to several other algorithms are analyzed.
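A compact sketch of that iterative scheme, assuming a Gaussian affinity and a row-stochastic transition matrix; labeled points are clamped back to their known labels after every step.

import numpy as np

def label_propagation(X, y, labeled, gamma=1.0, iters=100):
    # Propagate labels along high-density regions: Y <- T Y, then re-clamp the
    # labeled points to their known labels.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)                          # Gaussian affinities (assumed choice)
    T = W / W.sum(axis=1, keepdims=True)             # row-stochastic transition matrix
    classes = np.unique(y[labeled])
    Y = np.zeros((len(X), len(classes)))
    Y[labeled] = np.eye(len(classes))[np.searchsorted(classes, y[labeled])]
    clamp = Y[labeled].copy()
    for _ in range(iters):
        Y = T @ Y
        Y[labeled] = clamp
    return classes[Y.argmax(axis=1)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
labeled = np.zeros(40, dtype=bool); labeled[[0, 20]] = True
print(label_propagation(X, y, labeled))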

Label Propagation and Quadratic Criterion

This chapter shows how different graph-based algorithms for semi-supervised learning can be cast into a common framework where one minimizes a quadratic cost criterion whose closed-form solution is found by solving a linear system of size n.
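A sketch of that closed form, assuming a soft-constraint cost of the form ||f - y||^2 on the labeled points plus mu * f^T L f over the graph; the particular weights and the small eps term are choices of this sketch. The minimizer is obtained by solving a single n x n linear system.

import numpy as np

def quadratic_criterion_solve(W, y_obs, labeled, mu=1.0, eps=1e-6):
    # Minimize ||f - y||^2 on labeled points + mu * f^T L f over the graph:
    # the minimizer is the solution of one n x n linear system.
    n = len(y_obs)
    L = np.diag(W.sum(axis=1)) - W                   # combinatorial graph Laplacian
    S = np.diag(labeled.astype(float))               # selects the labeled points
    A = S + mu * L + eps * np.eye(n)                 # eps keeps the system well-posed
    return np.linalg.solve(A, S @ y_obs)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
labeled = np.zeros(30, dtype=bool); labeled[:6] = True
f = np.sign(quadratic_criterion_solve(W, y * labeled, labeled))
print(f[:10])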

An Information-Theoretic External Cluster-Validity Measure

In this paper we propose a measure of clustering quality or accuracy that is appropriate in situations where it is desirable to evaluate a clustering algorithm by comparing the clusters it produces with a ground-truth class assignment.
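As an illustration of an information-theoretic external validity measure, the sketch below computes normalized mutual information from the contingency table between a clustering and ground-truth classes; it is a common member of that family and is not claimed to be the specific measure proposed in this paper.

import numpy as np

def nmi(labels_pred, labels_true):
    # Normalized mutual information between a clustering and ground-truth
    # classes, computed from the joint contingency table.
    cp, ct = np.unique(labels_pred), np.unique(labels_true)
    n = len(labels_pred)
    C = np.array([[np.sum((labels_pred == a) & (labels_true == b)) for b in ct] for a in cp])
    P = C / n
    pi, pj = P.sum(axis=1), P.sum(axis=0)            # marginal cluster/class frequencies
    nz = P > 0
    mi = (P[nz] * np.log(P[nz] / (pi[:, None] * pj[None, :])[nz])).sum()
    ent = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()
    return mi / max(np.sqrt(ent(pi) * ent(pj)), 1e-12)

print(nmi(np.array([0, 0, 1, 1, 2, 2]), np.array([0, 0, 1, 1, 1, 1])))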

Computational Optimal Transport: With Applications to Data Science

Computational Optimal Transport presents an overview of the main theoretical insights that support the practical effectiveness of OT before explaining how to turn these insights into fast computational schemes.

Sinkhorn Distances: Lightspeed Computation of Optimal Transport

This work smooths the classic optimal transport problem with an entropic regularization term, and shows that the resulting optimum is also a distance which can be computed through Sinkhorn's matrix scaling algorithm at a speed that is several orders of magnitude faster than that of transport solvers.
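A compact numpy sketch of Sinkhorn's matrix scaling iterations for the entropically regularized problem; the regularization strength and iteration count below are arbitrary illustrative choices.

import numpy as np

def sinkhorn(a, b, M, reg=0.5, iters=500):
    # Entropic OT: alternately rescale K = exp(-M/reg) so its marginals match a and b.
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                            # fit the column marginal
        u = a / (K @ v)                              # fit the row marginal
    P = u[:, None] * K * v[None, :]                  # regularized transport plan
    return P, (P * M).sum()                          # plan and its transport cost <P, M>

rng = np.random.default_rng(0)
x, y = rng.normal(size=(8, 2)), rng.normal(size=(10, 2)) + 1.0
M = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
a, b = np.full(8, 1 / 8), np.full(10, 1 / 10)
P, cost = sinkhorn(a, b, M)
print(round(cost, 3), np.allclose(P.sum(axis=1), a, atol=1e-6))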