Auto-encoders are a popular representation learning technique that captures the generative structure of data through an encoding and decoding procedure, typically trained in an unsupervised way by minimizing reconstruction error. In this paper, we propose a semi-supervised manifold learning based auto-encoder (named semAE). semAE builds on a regularized auto-encoder framework, leveraging semi-supervised manifold learning to impose regularization on the encoded representation. The proposed approach suits practical scenarios in which a small amount of labeled data is available alongside a large amount of unlabeled data. Experiments on several well-known benchmark datasets validate the efficacy of semAE in terms of both representation quality and classification performance. Comparisons with state-of-the-art representation learning methods in semi-supervised classification settings demonstrate the superiority of our approach.
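To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of an objective of this form: reconstruction error plus a graph-Laplacian manifold regularizer tr(H^T L H) on the encoded representation H, where the neighborhood graph is built from all points and edges between points sharing a known label are reinforced. All function and variable names (`semae_loss`, `build_laplacian`, the weights `We`, `Wd`, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, We, be):
    """Single-layer encoder with sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(X @ We + be)))

def decode(H, Wd, bd):
    """Single-layer linear decoder."""
    return H @ Wd + bd

def build_laplacian(X, y, k=3, same_label_weight=1.0):
    """Graph Laplacian L = D - W from k-NN similarities.
    Edges between points sharing a known label are reinforced
    (the semi-supervised part); y uses -1 for unlabeled points."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]      # k nearest neighbors (skip self)
        W[i, nbrs] = np.exp(-d2[i, nbrs])
    W = np.maximum(W, W.T)                     # symmetrize
    labeled = y >= 0
    same = labeled[:, None] & labeled[None, :] & (y[:, None] == y[None, :])
    W[same] += same_label_weight               # strengthen same-label edges
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    return D - W

def semae_loss(X, y, We, be, Wd, bd, lam=0.1):
    """Reconstruction error plus manifold regularizer on the encodings."""
    H = encode(X, We, be)
    X_hat = decode(H, Wd, bd)
    recon = ((X - X_hat) ** 2).sum()
    L = build_laplacian(X, y)
    manifold = np.trace(H.T @ L @ H)           # penalizes encodings that vary across graph edges
    return recon + lam * manifold

# Tiny demo: 6 points in 4-D, only two of them labeled.
X = rng.standard_normal((6, 4))
y = np.array([0, 1, -1, -1, 0, -1])            # -1 marks unlabeled points
We = rng.standard_normal((4, 2)) * 0.1
Wd = rng.standard_normal((2, 4)) * 0.1
be = np.zeros(2)
bd = np.zeros(4)
print(semae_loss(X, y, We, be, Wd, bd))
```

Because tr(H^T L H) equals half the weighted sum of squared distances between the encodings of connected points, minimizing this loss pulls same-label and nearby points together in the latent space while still reconstructing the input.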