# Semi-supervised learning, causality, and the conditional cluster assumption

```bibtex
@inproceedings{vonKgelgen2020SemisupervisedLC,
  title     = {Semi-supervised learning, causality, and the conditional cluster assumption},
  author    = {Julius von K{\"u}gelgen and M. Loog and Alexander Mey and Bernhard Sch{\"o}lkopf},
  booktitle = {UAI},
  year      = {2020}
}
```

While the success of semi-supervised learning (SSL) is still not fully understood, Schölkopf et al. (2012) have established a link to the principle of independent causal mechanisms. They conclude that SSL should be impossible when predicting a target variable from its causes, but possible when predicting it from its effects. Since both of these cases are somewhat restrictive, we extend their work by considering classification using cause and effect features at the same time, such as predicting…
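The causal/anticausal distinction above can be made concrete with a toy simulation (not from the paper; all variable names and distributions are illustrative). A cause feature X_C generates the label Y, which in turn generates an effect feature X_E. In the anticausal direction, the marginal distribution of X_E is a mixture whose clusters align with the classes, so unlabeled samples of X_E carry information about Y:

```python
# Toy simulation of the cause-effect classification setting: X_C -> Y -> X_E.
# Illustrative sketch only; the specific distributions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    x_c = rng.normal(size=n)                 # cause feature
    p_y = 1.0 / (1.0 + np.exp(-2.0 * x_c))   # P(Y=1 | X_C): label caused by X_C
    y = (rng.random(n) < p_y).astype(int)
    # Effect feature: a class-conditional mixture, so P(X_E) clusters by label.
    x_e = np.where(y == 1, 2.0, -2.0) + rng.normal(scale=0.5, size=n)
    return x_c, y, x_e

x_c, y, x_e = sample(10_000)
# The two mixture components of the effect feature are well separated,
# which is what lets unlabeled x_e help in the anticausal direction.
print(x_e[y == 1].mean(), x_e[y == 0].mean())
```

Predicting Y from X_C alone would gain nothing from unlabeled data under the independent-mechanisms view; the mixed cause-and-effect setting studied in the paper sits between these two extremes.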

## 10 Citations

Toward Causal Representation Learning

- Computer Science, Philosophy · Proceedings of the IEEE
- 2021

Fundamental concepts of causal inference are reviewed and related to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research.

Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP

- Computer Science · EMNLP
- 2021

This work argues that the causal direction of the data collection process bears nontrivial implications that can explain a number of published NLP findings, such as differences in semi-supervised learning and domain adaptation performance across different settings.

Revisiting Deep Semi-supervised Learning: An Empirical Distribution Alignment Framework and Its Generalization Bound

- Computer Science · ArXiv
- 2022

This work revisits the semi-supervised learning problem from a new perspective of explicitly reducing empirical distribution mismatch between labeled and unlabeled samples, proposes a new deep Semi-supervised Learning by Empirical Distribution Alignment (SLEDA) framework, and develops a new theoretical generalization bound to help the research community better understand the problem.

Causality for Machine Learning

- Computer Science
- 2021

It is argued that the hard open problems of machine learning and AI are intrinsically related to causality, and how the field is beginning to understand them is explained.

Causality for Machine Learning

- Computer Science · ArXiv
- 2019

It is argued that the hard open problems of machine learning and AI are intrinsically related to causality, and how the field is beginning to understand them is explained.

Reimplementation of FixMatch and Investigation on Noisy (Pseudo) Labels and Confirmation Errors of FixMatch

- Computer Science
- 2021

This work reimplements FixMatch and achieves reasonably comparable performance with the official implementation, which supports that FixMatch outperforms semi-supervised learning benchmarks and demonstrates that the authors' choices with respect to those ablations were experimentally sound.

From Statistical to Causal Learning

- Computer Science, Philosophy
- 2022

We describe basic ideas underlying research to build and understand artificially intelligent systems: from symbolic approaches via statistical learning to interventional models relying on concepts of…

From Statistical to Causal Learning

- Computer Science, Philosophy
- 2022

We describe basic ideas underlying research to build and understand artificially intelligent systems: from symbolic approaches via statistical learning to interventional models relying on concepts of…

Independent mechanism analysis, a new concept?

- Computer Science · NeurIPS
- 2021

This work provides theoretical and empirical evidence that its approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation, by thinking of each source as independently influencing the mixing process.

Nonlinear Invariant Risk Minimization: A Causal Approach

- Computer Science · ArXiv
- 2021

Invariant Causal Representation Learning is proposed, a learning paradigm that enables out-of-distribution (OOD) generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers) and builds upon a practical and general assumption: the prior over the data representation factorizes when conditioning on the target and the environment.

## References

Showing 1–10 of 53 references

Semi-Generative Modelling: Covariate-Shift Adaptation with Cause and Effect Features

- Computer Science · AISTATS
- 2019

This work argues that covariate-shift adaptation requires learning with both causes and effects of a target variable Y, and shows how this setting leads to what is called a semi-generative model, P(Y,X_E|X_C,θ).

Improvability Through Semi-Supervised Learning: A Survey of Theoretical Results

- Computer Science · ArXiv
- 2019

This survey explores different types of theoretical results arising when one uses unlabeled data in classification and regression tasks, and discusses the biggest bottleneck of semi-supervised learning, namely the assumptions such methods make.

Contrastive Pessimistic Likelihood Estimation for Semi-Supervised Classification

- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2016

A general way to perform semi-supervised parameter estimation for likelihood-based classifiers is presented, for which, on the full training set, the estimates are never worse than the supervised solution in terms of the log-likelihood.

Invariant Models for Causal Transfer Learning

- Computer Science · J. Mach. Learn. Res.
- 2018

This work relaxes the usual covariate shift assumption and assumes that it holds true for a subset of predictor variables: the conditional distribution of the target variable given this subset of predictors is invariant over all tasks.

On causal and anticausal learning

- Computer Science · ICML
- 2012

The problem of function estimation is considered in the case where an underlying causal model can be inferred; a hypothesis for when semi-supervised learning can help is formulated and corroborated with empirical results.

Learning Independent Causal Mechanisms

- Computer Science · ICML
- 2018

This work develops an algorithm to recover a set of independent (inverse) mechanisms from a set of transformed data points, based on a set of experts that compete for the data generated by the mechanisms, driving specialization.

Projected estimators for robust semi-supervised classification

- Computer Science · Machine Learning
- 2017

It is proved that, measured on the labeled and unlabeled training data, this semi-supervised procedure never gives a higher quadratic loss than the supervised alternative; it is the first approach that offers such strong guarantees for improvement over the supervised solution.

Domain Adaptation by Using Causal Inference to Predict Invariant Conditional Distributions

- Computer Science · NeurIPS
- 2018

This work proposes an approach for solving causal domain adaptation problems that exploits causal inference and does not rely on prior knowledge of the causal graph, the type of interventions, or the intervention targets, and demonstrates a possible implementation on simulated and real-world data.

Does Unlabeled Data Provably Help? Worst-case Analysis of the Sample Complexity of Semi-Supervised Learning

- Computer Science · COLT
- 2008

It is proved that for basic hypothesis classes over the real line, if the distribution of unlabeled data is ‘smooth’, knowledge of that distribution cannot improve the labeled sample complexity by more than a constant factor.

Inferring deterministic causal relations

- Computer Science · UAI
- 2010

This paper considers two variables that are related to each other by an invertible function, and shows that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference.