• Corpus ID: 238744421

Domain Generalization via Domain-based Covariance Minimization

@article{Wu2021DomainGV,
  title={Domain Generalization via Domain-based Covariance Minimization},
  author={Anqi Wu},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.06298}
}
  • Anqi Wu
  • Published 12 October 2021
  • Computer Science, Mathematics
  • ArXiv
Researchers face a difficult problem: data generation mechanisms can be influenced by internal or external factors, so the training and test data may follow quite different distributions; consequently, a traditional classifier or regressor fitted on the training set fails to achieve satisfactory results on the test data. In this paper, we address this nontrivial domain generalization problem by finding a central subspace in which domain-based covariance is minimized while the…
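The abstract only sketches the construction, so a minimal linear analogue may help fix ideas: project the data onto directions along which the per-domain means vary little while the class structure (a stand-in for the functional relationship) is preserved. Everything below (the function name, the scatter matrices, and the generalized-eigenproblem formulation) is our illustration, not the paper's actual algorithm.

```python
# Illustrative sketch only: a linear analogue of "minimize domain-based
# covariance while preserving the input-output relationship". The
# formulation is an assumption, not the paper's exact method.
import numpy as np

def domain_covariance_projection(X, y, domains, n_components=2, eps=1e-6):
    """Find directions where class means differ a lot but domain means
    differ little, via a generalized eigenproblem."""
    X = X - X.mean(axis=0)                       # center the pooled data
    d = X.shape[1]

    # Between-domain scatter: spread of the per-domain means.
    D = np.zeros((d, d))
    for dom in np.unique(domains):
        mu = X[domains == dom].mean(axis=0)
        D += np.outer(mu, mu)

    # Between-class scatter: a crude proxy for "preserving the
    # functional relationship" when y is a discrete label.
    C = np.zeros((d, d))
    for cls in np.unique(y):
        mu = X[y == cls].mean(axis=0)
        C += np.outer(mu, mu)

    # Maximize class scatter relative to domain scatter: solve
    # (D + eps*I)^{-1} C b = lambda b and keep the top eigenvectors.
    evals, evecs = np.linalg.eig(np.linalg.solve(D + eps * np.eye(d), C))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]   # columns span the subspace
```

Given arrays X of shape (n, d), labels y, and domain indices from several training domains, the returned columns play the role of a central-subspace basis onto which both training data and data from unseen test domains can be projected.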


References

SHOWING 1-10 OF 38 REFERENCES
Central Subspace Dimensionality Reduction Using Covariance Operators
TLDR: This work proposes a novel method, Covariance Operator Inverse Regression (COIR), that generalizes inverse regression (IR) to nonlinear input/output spaces without explicit target slicing, and demonstrates the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.
Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces
TLDR: This work treats dimensionality reduction as the problem of finding a low-dimensional "effective subspace" of X that retains the statistical relationship between X and Y, and establishes a general nonparametric characterization of conditional independence using covariance operators on a reproducing kernel Hilbert space.
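As a rough picture of the objective optimized in this line of work, the dependence of Y on a candidate projection Z = BᵀX can be scored by the trace of a regularized conditional-covariance estimate built from centered Gram matrices; the kernel choice and regularization below are our assumptions, not the paper's prescribed settings.

```python
# Sketch of a kernel dimension-reduction style objective: the trace of a
# regularized conditional covariance estimate computed from centered Gram
# matrices. Kernel and regularization choices here are assumptions.
import numpy as np

def centered_gram(A, gamma=1.0):
    n = len(A)
    sq = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                  # RBF Gram matrix
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return H @ K @ H

def kdr_objective(Z, Y, eps=1e-3):
    """Smaller values suggest Z = B^T X retains more information about Y."""
    n = len(Z)
    Gz = centered_gram(Z.reshape(n, -1))
    Gy = centered_gram(Y.reshape(n, -1))
    return np.trace(np.linalg.solve(Gz + n * eps * np.eye(n), Gy))
```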
A Survey on Transfer Learning
TLDR: The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, is discussed.
Domain Generalization via Invariant Feature Representation
TLDR: Domain-Invariant Component Analysis (DICA) is proposed: a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains while preserving the functional relationship between input and output variables.
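The quantity such methods drive toward zero is the spread of the per-domain distributions in the feature space. A small sketch of that distributional variance, estimated from kernel mean embeddings (the RBF base kernel and the unweighted estimator are standard choices we are assuming, not details taken from the paper):

```python
# Sketch: variance of per-domain kernel mean embeddings, the kind of
# cross-domain dissimilarity a DICA-style method minimizes. The RBF
# kernel and plain averaging are our assumptions.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def distributional_variance(domain_samples, gamma=1.0):
    """domain_samples: list of (n_i, d) arrays, one per training domain."""
    m = len(domain_samples)
    # G[i, j] approximates the inner product of mean embeddings mu_i, mu_j.
    G = np.array([[rbf_kernel(Xi, Xj, gamma).mean()
                   for Xj in domain_samples] for Xi in domain_samples])
    # Mean squared norm of the embeddings minus squared norm of their mean.
    return np.trace(G) / m - G.mean()
```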
Regularized multi-task learning
TLDR: An approach to multi-task learning is presented, based on the minimization of regularization functionals similar to those, such as the one for Support Vector Machines, that have been successfully used in the past for single-task learning.
Learning from Distributions via Support Measure Machines
TLDR: A kernel-based discriminative learning framework on probability measures is presented that learns from a collection of probability distributions constructed to meaningfully represent the training data, and a flexible SVM (Flex-SVM) is proposed that places a different kernel function on each training example.
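The central ingredient is a kernel between probability measures rather than between points; a minimal sketch, assuming the usual empirical mean-embedding estimator with an RBF base kernel:

```python
# Sketch of a kernel on distributions: two sample sets are compared via
# the inner product of their empirical kernel mean embeddings. Details
# (RBF base kernel, plain averaging) are our assumptions.
import numpy as np

def mean_embedding_kernel(X1, X2, gamma=1.0):
    """K(P1, P2) ~ average RBF kernel value over all cross-pairs."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq).mean()
```

Plugging such a kernel into a standard SVM is what lets the classifier take whole distributions (for example, whole domains or groups of samples) as its training examples.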
Multi-Task Learning for Classification with Dirichlet Process Priors
TLDR: Experimental results on two real-life MTL problems indicate that the proposed algorithms, which automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions, are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to the Dirichlet process (DP).
Logistic regression with an auxiliary data source
TLDR: This paper proposes a method, called "Migratory-Logit" or M-Logit, that relaxes the requirement that training examples be drawn from the same source distribution in the context of logistic regression, and demonstrates it successfully on simulated as well as real data sets.
Sliced Inverse Regression for Dimension Reduction
Abstract: Modern advances in computing power have greatly widened scientists' scope in gathering and investigating information from many variables, information which might have been ignored in the…
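SIR itself is compact enough to sketch from its standard description: sort the responses, slice them, average the whitened inputs within each slice, and take the principal directions of those slice means. Parameter names and the small regularizer below are our choices.

```python
# Sketch of Sliced Inverse Regression (SIR) in its standard form.
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_components=2):
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    # Whiten X so the slice means live in isotropic coordinates.
    cov = Xc.T @ Xc / n
    W = np.linalg.cholesky(np.linalg.inv(cov + 1e-8 * np.eye(d)))
    Z = Xc @ W
    # Slice the sorted responses and accumulate weighted slice means.
    M = np.zeros((d, d))
    for idx in np.array_split(np.argsort(y), n_slices):
        mu = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(mu, mu)
    # Top eigenvectors of M, mapped back to the original coordinates.
    evals, evecs = np.linalg.eigh(M)
    return W @ evecs[:, ::-1][:, :n_components]
```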
Boosting for transfer learning
TLDR: This paper presents a novel transfer learning framework called TrAdaBoost, which extends boosting-based learning algorithms and shows that this method can allow us to learn an accurate model using only a tiny amount of new data and a large amount of old data, even when the new data are not sufficient to train a model alone.
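The heart of TrAdaBoost is an asymmetric weight update: misclassified old-domain (source) examples are shrunk by a fixed factor so they gradually lose influence, while misclassified new-domain (target) examples are up-weighted as in ordinary AdaBoost. A hedged sketch of one round of that update, with variable names of our choosing:

```python
# Sketch of one TrAdaBoost weight update. err_src / err_tgt are 0/1
# arrays marking the weak learner's mistakes on source / target data.
import numpy as np

def tradaboost_reweight(w_src, w_tgt, err_src, err_tgt, n_rounds):
    n_src = len(w_src)
    # Weighted target error, clipped so both factors stay well defined.
    eps = np.clip(np.sum(w_tgt * err_tgt) / np.sum(w_tgt), 1e-10, 0.499)
    beta_t = eps / (1.0 - eps)                        # AdaBoost factor
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_rounds))
    w_src = w_src * beta ** err_src        # shrink misclassified source
    w_tgt = w_tgt * beta_t ** (-err_tgt)   # boost misclassified target
    return w_src, w_tgt
```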