Flipped-Adversarial AutoEncoders

@inproceedings{Zhang2018FlippedAdversarialA,
  title={Flipped-Adversarial AutoEncoders},
  author={Jiyi Zhang and Hung Vi Dang and Hwee Kuan Lee and Ee-Chien Chang},
  year={2018}
}
We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G, which maps an arbitrary latent code distribution to a data distribution, and an encoder E, which embodies an "inverse mapping" that encodes a data sample into a latent code vector. Unlike previous hybrid approaches that leverage an adversarial training criterion in constructing autoencoders, FAAE minimizes re-encoding errors in the latent space and exploits the adversarial criterion in the data space…
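
The abstract's two training signals can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch rendering of the stated objectives: an adversarial loss in the data space and a re-encoding loss in the latent space. All module shapes, names, and hyperparameters here are illustrative assumptions, not the authors' reference implementation.

# Minimal FAAE training sketch (illustrative assumptions throughout:
# network sizes, learning rates, and the flattened-image data shape).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # assumed sizes, e.g. flattened MNIST

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())        # generator z -> x
E = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                  nn.Linear(256, latent_dim))                 # encoder x -> z
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                          # data-space critic

bce = nn.BCEWithLogitsLoss()
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_GE = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)

def train_step(x_real):
    batch = x_real.size(0)
    z = torch.randn(batch, latent_dim)  # samples from an arbitrary latent prior
    x_fake = G(z)

    # 1) Adversarial criterion in the DATA space: D separates real samples
    #    from generated ones.
    opt_D.zero_grad()
    loss_D = bce(D(x_real), torch.ones(batch, 1)) + \
             bce(D(x_fake.detach()), torch.zeros(batch, 1))
    loss_D.backward()
    opt_D.step()

    # 2) G tries to fool D, while E minimizes the re-encoding error in the
    #    LATENT space: E(G(z)) should recover the code z that produced x_fake.
    opt_GE.zero_grad()
    loss_adv = bce(D(x_fake), torch.ones(batch, 1))
    loss_reenc = ((E(x_fake) - z) ** 2).mean()
    (loss_adv + loss_reenc).backward()
    opt_GE.step()

Note the reversal relative to the adversarial autoencoder: here the discriminator operates on data samples, while the reconstruction-style loss lives in the latent space, which is what "flipped" refers to.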