Weiyang Liu

Sparse representation classification (SRC) plays an important role in pattern recognition. Recently, a more generic method named collaborative representation classification (CRC) has greatly improved the efficiency of SRC. Building on these recent developments in CRC, this paper explores how to apply the kernel technique smoothly to further improve its…
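The CRC decision rule referred to above can be sketched as follows: code the query over all training samples with a closed-form ridge solution, then assign the class with the smallest (norm-regularized) residual. This is a minimal illustrative sketch; the function and variable names are not from the paper.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """Collaborative representation classification (CRC) sketch.

    X      : (d, n) matrix of training samples as columns.
    labels : (n,) class label of each column.
    y      : (d,) query sample.
    lam    : ridge regularization weight.
    """
    # Closed-form ridge coding: alpha = (X^T X + lam*I)^-1 X^T y
    n = X.shape[1]
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    # Class-wise regularized residual: ||y - X_c alpha_c|| / ||alpha_c||
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        m = labels == c
        r = np.linalg.norm(y - X[:, m] @ alpha[m])
        residuals.append(r / (np.linalg.norm(alpha[m]) + 1e-12))
    return classes[int(np.argmin(residuals))]
```

The closed-form solve is what makes CRC efficient relative to the l1-minimization used in SRC: coding a query costs one linear system rather than an iterative sparse solver.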
We consider the image classification problem via kernel collaborative representation classification with a locality-constrained dictionary (KCRC-LCD). Specifically, we propose a kernel collaborative representation classification (KCRC) approach in which the kernel method is used to improve the discriminative ability of collaborative representation classification…
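A kernelized variant of the same coding step can be sketched by replacing inner products with kernel evaluations: code the query in the RBF feature space with ridge regularization, then compare class-wise residuals expanded entirely in kernel terms. This is an assumption-labeled sketch of the generic kernel trick, not the paper's exact KCRC-LCD formulation (in particular, the locality-constrained dictionary is omitted).

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kcrc_classify(X, labels, y, lam=1e-3, gamma=1.0):
    """Kernel collaborative representation classification sketch."""
    n = X.shape[1]
    K = np.array([[rbf(X[:, i], X[:, j], gamma) for j in range(n)]
                  for i in range(n)])
    ky = np.array([rbf(X[:, i], y, gamma) for i in range(n)])
    # Ridge coding in feature space: alpha = (K + lam*I)^-1 k_y
    alpha = np.linalg.solve(K + lam * np.eye(n), ky)
    classes = np.unique(labels)
    res = []
    for c in classes:
        m = labels == c
        # ||phi(y) - Phi_c alpha_c||^2 expanded with kernel evaluations
        r2 = (rbf(y, y, gamma) - 2 * alpha[m] @ ky[m]
              + alpha[m] @ K[np.ix_(m, m)] @ alpha[m])
        res.append(r2)
    return classes[int(np.argmin(res))]
```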
Transitive distance is an ultrametric with elegant properties for clustering. The conventional transitive distance can be computed by referring to the minimum spanning tree (MST). We show that this distance metric can be generalized to a minimum spanning random forest (MSRF) with element-wise max pooling over the set of transitive distance matrices from the MSRF.…
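The conventional MST-based transitive distance that the abstract generalizes is the minimax path distance: the largest edge weight on the unique MST path between two points. A minimal self-contained sketch (Prim's MST plus a tree DFS; names are illustrative):

```python
import numpy as np

def mst_edges(D):
    """Prim's algorithm on a dense distance matrix D; returns MST edges."""
    n = D.shape[0]
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and (best is None or D[u, v] < best[2]):
                    best = (u, v, D[u, v])
        in_tree.append(best[1])
        edges.append(best)
    return edges

def transitive_distance(D, i, j):
    """Max edge weight on the unique MST path between i and j."""
    n = D.shape[0]
    adj = {u: [] for u in range(n)}
    for u, v, w in mst_edges(D):
        adj[u].append((v, w))
        adj[v].append((u, w))
    # DFS from i to j, tracking the largest edge seen so far on the path
    stack, seen = [(i, 0.0)], {i}
    while stack:
        u, mx = stack.pop()
        if u == j:
            return mx
        for v, w in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append((v, max(mx, w)))
```

The ultrametric property follows because the largest edge on the path i-to-j is at most the max of the largest edges on i-to-k and k-to-j for any k.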
We present a locality-preserving K-SVD (LP-KSVD) algorithm for joint dictionary and classifier learning, and further incorporate kernels into our framework. In LP-KSVD, we construct a locality-preserving term based on the relations between input samples and dictionary atoms, and use nearest neighborhoods to enforce the locality of…
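One common way to encode sample-to-atom locality of the kind described above is a nearest-neighbor-restricted Gaussian affinity. The sketch below is purely illustrative of that idea and is not the paper's actual locality-preserving term; all names and the exact weighting are assumptions.

```python
import numpy as np

def locality_weights(X, D, k=2, sigma=1.0):
    """Illustrative locality weights between samples and dictionary atoms.

    X : (d, n) input samples as columns;  D : (d, m) dictionary atoms.
    w[i, j] = exp(-||x_i - d_j||^2 / (2 sigma^2)) if atom d_j is among
    the k nearest atoms to sample x_i, and 0 otherwise.
    """
    n, m = X.shape[1], D.shape[1]
    W = np.zeros((n, m))
    for i in range(n):
        dist2 = np.sum((D - X[:, [i]]) ** 2, axis=0)
        nearest = np.argsort(dist2)[:k]   # keep only the k nearest atoms
        W[i, nearest] = np.exp(-dist2[nearest] / (2 * sigma ** 2))
    return W
```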
Occlusion in face recognition is a common yet challenging problem. While sparse representation based classification (SRC) has shown promising performance under laboratory conditions (i.e., noiseless or randomly pixel-corrupted images), it performs much worse in practical scenarios. In this paper, we consider the practical face recognition problem, where the…
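The SRC baseline discussed here codes a query sparsely over all training samples and classifies by class-wise reconstruction residual. The sketch below solves the l1 coding step with ISTA, which is one standard solver choice rather than necessarily the one used in the paper; names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def src_classify(A, labels, y, lam=0.01, n_iter=500):
    """SRC sketch: solve min_x 0.5*||y - A x||^2 + lam*||x||_1 via ISTA,
    then pick the class whose columns give the smallest residual."""
    # Step size 1/L, with L = largest eigenvalue of A^T A
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    classes = np.unique(labels)
    res = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
           for c in classes]
    return classes[int(np.argmin(res))]
```

The sparsity of x is what gives SRC its robustness to random pixel corruption: a corrupted query is still mostly explained by a few same-class training samples.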
Sparse coding with dictionary learning (DL) has shown excellent classification performance. Despite the considerable number of existing works, how to obtain features on top of which dictionaries can be better learned remains an open and interesting question. Many current prevailing DL methods directly adopt well-performing handcrafted features. While such…
Cross-entropy loss together with softmax is arguably one of the most commonly used supervision components in convolutional neural networks (CNNs). Despite its simplicity, popularity, and excellent performance, this component does not explicitly encourage discriminative feature learning. In this paper, we propose a generalized large-margin softmax…
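The large-margin idea can be sketched by modifying only the target-class logit of a standard softmax cross-entropy: replace ||W_y||·||x||·cos(θ) by ||W_y||·||x||·cos(mθ), which shrinks the target logit and forces a larger angular separation. This minimal sketch covers only the simple range θ ≤ π/m; the full method extends cos(mθ) piecewise beyond that range.

```python
import numpy as np

def softmax_ce(logits, target):
    """Standard softmax cross-entropy for one sample."""
    z = logits - logits.max()                # numerical stability
    logp = z - np.log(np.exp(z).sum())
    return -logp[target]

def lsoftmax_ce(W, x, target, m=2):
    """Large-margin softmax sketch: harden the target logit by
    replacing cos(theta) with cos(m*theta) for theta <= pi/m."""
    logits = (W @ x).copy()
    wy = W[target]
    cos_t = (wy @ x) / (np.linalg.norm(wy) * np.linalg.norm(x))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    logits[target] = np.linalg.norm(wy) * np.linalg.norm(x) * np.cos(m * theta)
    return softmax_ce(logits, target)
```

With m = 1 this reduces exactly to the standard loss; with m > 1 the loss is larger for the same parameters, so minimizing it pushes features toward a larger inter-class angular margin.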