Quantum subspace alignment for domain adaptation

Xi He and Xiaoting Wang
Domain adaptation (DA) aims to infer labels for an unlabelled data set using a related, but differently distributed, labelled data set. Subspace alignment (SA), a representative DA algorithm, finds a linear transformation that aligns the subspaces of the two data sets. A classifier trained on the aligned labelled data set can then be transferred to the unlabelled data set to predict the target labels. In this paper, two quantum versions of SA are proposed to…
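For context, the classical SA step the abstract describes can be sketched as follows. This is a minimal NumPy sketch, not the paper's quantum algorithm; the function name, the fixed subspace dimension `d`, and the use of SVD for the PCA bases are illustrative assumptions.

```python
import numpy as np

def subspace_alignment(source, target, d=2):
    """Classical subspace alignment sketch (after Fernando et al., 2013).

    source, target: (n_samples, n_features) arrays.
    d: subspace dimensionality (illustrative choice).
    Returns source data in the aligned subspace and target data
    projected onto its own subspace.
    """
    def pca_basis(X, d):
        # Top-d principal directions: right singular vectors of centred data.
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return vt[:d].T                      # shape (n_features, d)

    Xs = pca_basis(source, d)                # source subspace basis
    Xt = pca_basis(target, d)                # target subspace basis
    M = Xs.T @ Xt                            # linear map aligning Xs to Xt
    source_aligned = source @ Xs @ M         # source coords in aligned subspace
    target_proj = target @ Xt                # target coords in its own subspace
    return source_aligned, target_proj
```

A classifier would then be trained on `source_aligned` (with the source labels) and evaluated on `target_proj`.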
2 Citations


Learning Equality Constraints for Motion Planning on Manifolds

This work considers the problem of learning representations of constraints from demonstrations with a deep neural network, called the Equality Constraint Manifold Neural Network (ECoMaNN), which learns a level-set function of the constraint suitable for integration into a constrained sampling-based motion planner.

Quantum correlation alignment for unsupervised domain adaptation

  • Xi He
  • Computer Science
  • 2020
The simulation results show that the variational quantum correlation alignment algorithm (VQCORAL) can achieve competitive performance compared with the classical CORAL.

Subspace Distribution Alignment for Unsupervised Domain Adaptation

A unified view of existing subspace-mapping-based methods is presented, and a generalized approach is developed that aligns the distributions as well as the subspace bases, showing improved results over published approaches.

Unsupervised domain adaptation using parallel transport on Grassmann manifold

A novel framework based on the parallel transport of union of the source subspaces on the Grassmann manifold is developed, which allows for multiple domain shifts between the source and target domains.

Domain adaptation for object recognition: An unsupervised approach

This paper presents one of the first studies on unsupervised domain adaptation in the context of object recognition, where data are labeled only in the source domain (and there are therefore no correspondences between object categories across domains).

Unsupervised Visual Domain Adaptation Using Subspace Alignment

This paper introduces a new domain adaptation algorithm where the source and target domains are represented by subspaces described by eigenvectors, and seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one.

Subspace Interpolation via Dictionary Learning for Unsupervised Domain Adaptation

This work proposes to interpolate subspaces through dictionary learning to link the source and target domains, capturing the intrinsic domain shift and forming a shared feature representation for cross-domain recognition.

Quantum locally linear embedding

This paper presents two implementations of the quantum locally linear embedding algorithm (qLLE) to perform nonlinear dimensionality reduction on quantum devices, achieving an exponential speedup with runtime $O(\mathrm{poly}(\log N))$.

Connecting the Dots with Landmarks: Discriminatively Learning Domain-Invariant Features for Unsupervised Domain Adaptation

This paper automatically discovers the existence of landmarks and uses them to bridge the source to the target by constructing provably easier auxiliary domain adaptation tasks, and shows how this composition can be optimized discriminatively without requiring labels from the target domain.

Geodesic flow kernel for unsupervised domain adaptation

This paper proposes a new kernel-based method that takes advantage of low-dimensional structures that are intrinsic to many vision datasets, and introduces a metric that reliably measures the adaptability between a pair of source and target domains.

Quantum variational autoencoder

A quantum variational autoencoder (QVAE) is introduced: a VAE whose latent generative process is implemented as a quantum Boltzmann machine (QBM), which can be trained end-to-end by maximizing a well-defined loss-function: a ‘quantum’ lower-bound to a variational approximation of the log-likelihood.