Distribution Preserving Network Embedding

@inproceedings{Qin2019DistributionPN,
  title={Distribution Preserving Network Embedding},
  author={Anyong Qin and Zhaowei Shang and Taiping Zhang and Yuan Yan Tang},
  booktitle={ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2019},
  pages={3562-3566}
}
A deep autoencoder network with non-negative weight constraints can learn a low-dimensional, part-based representation. Meanwhile, the inherent structure of each data cluster can be described by the distribution of its intraclass samples. One therefore hopes to learn a new low-dimensional feature that preserves the intrinsic structure embedded in the high-dimensional data space. In this paper, by preserving the data distribution, a deep part-based representation…
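The abstract describes an autoencoder whose weights are constrained to be non-negative, which is what yields the part-based encoding. As a rough illustration only, and not the paper's model (the truncated excerpt does not show the distribution-preserving loss term), the sketch below shows one common way such a constraint can be enforced: projecting the weights onto the non-negative orthant after each optimizer step. All class names, dimensions, and hyperparameters here are hypothetical.

```python
# Minimal sketch of a non-negativity-constrained autoencoder.
# NOT the paper's exact method: the distribution-preserving term
# is absent; only the non-negative-weight idea is illustrated.
import torch
import torch.nn as nn

class NonNegativeAutoencoder(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        z = torch.sigmoid(self.encoder(x))  # low-dimensional code
        return self.decoder(z), z

    def clamp_weights(self):
        # Project weights onto the non-negative orthant; applied
        # after every optimizer step to maintain the constraint.
        with torch.no_grad():
            self.encoder.weight.clamp_(min=0.0)
            self.decoder.weight.clamp_(min=0.0)

model = NonNegativeAutoencoder(in_dim=784, hid_dim=64)  # hypothetical sizes
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 784)                    # dummy batch
opt.zero_grad()
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)    # reconstruction loss only
loss.backward()
opt.step()
model.clamp_weights()                      # re-impose non-negativity
```

With non-negative weights, each hidden unit can only add contributions of input features rather than cancel them, which is why such codes tend to decompose the data into additive parts.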
