
- Xu-Cheng Yin, Xuwang Yin, Kaizhu Huang, Hongwei Hao
- IEEE Transactions on Pattern Analysis and Machine…
- 2014

Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose an accurate and robust method for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates using the…
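The pruning step above can be pictured as discarding candidate regions that are geometrically implausible as characters. A minimal sketch, with illustrative heuristics and thresholds that are not the paper's actual criteria (the MSER extraction itself is also not shown):

```python
# Hypothetical pruning of character-candidate regions by simple
# geometric heuristics (aspect ratio, fill ratio). Thresholds are
# illustrative, not the paper's actual pruning rules.

def prune_candidates(regions,
                     min_aspect=0.1, max_aspect=10.0,
                     min_fill=0.2):
    """Keep regions whose bounding-box aspect ratio and pixel fill
    ratio fall inside plausible ranges for single characters."""
    kept = []
    for r in regions:  # each region: dict with width, height, area
        aspect = r["width"] / r["height"]
        fill = r["area"] / (r["width"] * r["height"])
        if min_aspect <= aspect <= max_aspect and fill >= min_fill:
            kept.append(r)
    return kept

candidates = [
    {"width": 12, "height": 20, "area": 140},   # plausible character
    {"width": 200, "height": 3, "area": 500},   # thin line: rejected
    {"width": 15, "height": 15, "area": 20},    # sparse noise: rejected
]
print(len(prune_candidates(candidates)))  # → 1
```

Fast filters of this kind are cheap relative to the classifiers applied afterwards, which is why pruning early pays off.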

- Chu-Hong Hoi, Chi-Hang Chan, Kaizhu Huang, M.R. Lyu, Irwin King
- 2004 IEEE International Joint Conference on…
- 2004

Recently, support vector machines (SVMs) have been applied to relevance feedback tasks in content-based image retrieval. Typical SVM approaches treat relevance feedback as a strict binary classification problem. However, these approaches do not consider an important issue of relevance feedback, i.e. the unbalanced dataset problem, in which the…
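One common remedy for such unbalanced feedback (few relevant examples, many irrelevant ones) is cost-sensitive weighting, where errors on the minority class are penalised more heavily. A hedged sketch of the idea, not the paper's specific method:

```python
# Cost-sensitive class weighting: weight each class inversely to its
# frequency, so the scarce "relevant" class dominates the loss.
from collections import Counter

def class_weights(labels):
    """Per-class weights inversely proportional to class frequency,
    normalised so the average weight over classes is 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

labels = [+1, -1, -1, -1, -1, -1, -1, -1]  # 1 relevant, 7 irrelevant
w = class_weights(labels)
print(w[+1], w[-1])  # minority class gets the larger weight
```

In an SVM these weights would scale each example's slack penalty, so the margin is not dragged toward the majority class.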

- Yiming Ying, Kaizhu Huang, Colin Campbell
- NIPS
- 2009

In this paper we study the problem of learning a low-rank (sparse) distance matrix. We propose a novel metric learning model which can simultaneously conduct dimension reduction and learn a distance matrix. The sparse representation involves a mixed-norm regularization which is non-convex. We then show that it can be equivalently formulated as a convex…
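A plausible reading of the mixed-norm regularizer is the standard (2,1)-norm on the transformation matrix $W$ (the exact formulation is in the paper):

$$\|W\|_{2,1} \;=\; \sum_{l=1}^{d}\Big(\sum_{m=1}^{d} W_{lm}^{2}\Big)^{1/2}$$

Penalising this norm zeroes out entire rows of $W$, so whole input dimensions are dropped, which is how a single regularizer can drive dimension reduction and metric learning at the same time.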

- Kaizhu Huang, Haiqin Yang, Irwin King, Michael R. Lyu, Lai-Wan Chan
- Journal of Machine Learning Research
- 2004

We construct a distribution-free Bayes optimal classifier called the Minimum Error Minimax Probability Machine (MEMPM) in a worst-case setting, i.e., under all possible choices of class-conditional densities with a given mean and covariance matrix. By assuming no specific distributions for the data, our model is thus distinguished from traditional Bayes…
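The worst-case setting can be made concrete: by a generalised Chebyshev bound, a worst-case accuracy guarantee for each class reduces to a second-order-cone constraint. A hedged sketch of the MEMPM-style formulation, with $\alpha, \beta$ the worst-case accuracies of the two classes and $\theta$ their trade-off weight (check the paper for the exact form):

$$\max_{\alpha,\beta,\mathbf{w}\neq 0,\,b}\ \theta\alpha + (1-\theta)\beta
\quad\text{s.t.}\quad
\mathbf{w}^\top\boldsymbol{\mu}_x - b \ \ge\ \kappa(\alpha)\sqrt{\mathbf{w}^\top\Sigma_x\mathbf{w}},
\qquad
b - \mathbf{w}^\top\boldsymbol{\mu}_y \ \ge\ \kappa(\beta)\sqrt{\mathbf{w}^\top\Sigma_y\mathbf{w}},$$

with $\kappa(\alpha) = \sqrt{\alpha/(1-\alpha)}$. Only the class means $\boldsymbol{\mu}$ and covariances $\Sigma$ appear, which is what makes the guarantee distribution-free.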

- Kaizhu Huang, Danian Zheng, Jun Sun, Yoshinobu Hotta, Katsuhito Fujimoto, Satoshi Naoi
- Pattern Recognition Letters
- 2010

This paper provides a sparse learning algorithm for Support Vector Classification (SVC), called Sparse Support Vector Classification (SSVC), which leads to sparse solutions by automatically setting the irrelevant parameters exactly to zero. SSVC adopts the L0-norm regularization term and is trained by an iteratively reweighted learning algorithm. We show…
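The iteratively reweighted idea can be sketched on a toy problem. Assuming an orthonormal design so the penalised update decouples per parameter (a simplification; SSVC's actual updates differ, and `lam`/`eps` are illustrative), each parameter's penalty weight grows as the parameter shrinks, collapsing irrelevant ones toward zero:

```python
# Iteratively reweighted approximation of an L0 penalty. Per-parameter
# update under an orthonormal design:
#   w_i    = 1 / (beta_i^2 + eps)          (reweighting step)
#   beta_i = beta_hat_i / (1 + lam * w_i)  (shrinkage step)
# Small parameters attract ever larger penalties and go to ~0;
# relevant ones barely move.

def irl0(beta_hat, lam=0.1, eps=1e-6, iters=100):
    beta = list(beta_hat)                        # start at least squares
    for _ in range(iters):
        w = [1.0 / (b * b + eps) for b in beta]
        beta = [bh / (1.0 + lam * wi) for bh, wi in zip(beta_hat, w)]
    return beta

beta = irl0([2.0, 0.05])   # one relevant, one near-irrelevant parameter
print(beta[0] > 1.9, beta[1] < 1e-5)  # → True True
```

The contrast with a plain L1 penalty is that the reweighting concentrates shrinkage on already-small parameters instead of biasing all of them equally.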

- Kaizhu Huang, Haiqin Yang, Irwin King, Michael R. Lyu
- ICML
- 2004

A new large margin classifier, named the Maxi-Min Margin Machine (M<sup>4</sup>), is proposed in this paper. This new classifier is constructed from both a "local" and a "global" view of the data, whereas the most popular large margin classifier, the Support Vector Machine (SVM), and the recently proposed Minimax Probability Machine (MPM) consider data…
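The local/global combination can be sketched as follows: like an SVM, every training point constrains the margin (local), but the margin is measured relative to each class's covariance (global). A hedged sketch of the M<sup>4</sup>-style optimisation, for points $\mathbf{x}_i$ of one class and $\mathbf{y}_j$ of the other (see the paper for the exact form):

$$\max_{\rho,\,\mathbf{w}\neq 0,\,b}\ \rho
\quad\text{s.t.}\quad
\frac{\mathbf{w}^\top\mathbf{x}_i + b}{\sqrt{\mathbf{w}^\top\Sigma_x\mathbf{w}}} \ \ge\ \rho,\qquad
\frac{-(\mathbf{w}^\top\mathbf{y}_j + b)}{\sqrt{\mathbf{w}^\top\Sigma_y\mathbf{w}}} \ \ge\ \rho .$$

Setting both covariances to the identity recovers an SVM-like margin, while dropping the per-point constraints in favour of mean-based ones recovers an MPM-like model, which is why M<sup>4</sup> is described as unifying the two views.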

- Bo Xu, Kaizhu Huang, Cheng-Lin Liu
- ICFHR
- 2010

We consider the problem of similar Chinese character recognition in this paper. Using the Average Symmetric Uncertainty (ASU) criterion to measure the correlation between different image regions and the class label, we detect the most critical regions for each pair of similar characters. These critical regions are shown to contain more…
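Symmetric uncertainty itself is a standard normalised mutual-information score; a minimal sketch for one discretised region feature against the class label (the paper's ASU averages such scores over regions):

```python
# Symmetric uncertainty between a discretised feature X and label Y:
#   SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)),  in [0, 1].
from collections import Counter
from math import log2

def entropy(values):
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def sym_uncertainty(x, y):
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))      # joint entropy H(X, Y)
    mi = hx + hy - hxy                  # mutual information I(X; Y)
    return 2 * mi / (hx + hy) if hx + hy else 0.0

labels = [0, 0, 1, 1]
# A region feature perfectly predictive of the label scores 1.0;
# an uninformative one scores 0.0.
print(sym_uncertainty([5, 5, 9, 9], labels))   # → 1.0
print(sym_uncertainty([5, 9, 5, 9], labels))   # → 0.0
```

Because SU is normalised by the entropies, regions with many distinct pixel patterns do not get an unfair advantage over simple ones, unlike raw mutual information.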

- Kaizhu Huang, Yiming Ying, Colin Campbell
- 2009 Ninth IEEE International Conference on Data…
- 2009

There has been significant recent interest in sparse metric learning (SML), in which we simultaneously learn both a good distance metric and a low-dimensional representation. Unfortunately, the performance of existing sparse metric learning approaches is usually limited because they either assume certain problem relaxations or target the SML objective…

Discriminative classifiers such as Support Vector Machines directly learn a discriminant function or a posterior probability model to perform classification. On the other hand, generative classifiers often learn a joint probability model and then use Bayes rules to construct a posterior classifier from this model. In general, generative classifiers are not…
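The generative route described above can be made concrete with a toy model: fit one-dimensional Gaussian class-conditionals plus class priors, then classify via Bayes' rule. This is an illustrative sketch, not any particular paper's model:

```python
# Minimal generative classifier: fit p(x | c) as a 1-D Gaussian per
# class plus priors p(c), then classify by the Bayes-rule numerator
# p(x | c) * p(c).
from math import exp, pi, sqrt

def fit(xs, ys):
    model = {}
    for c in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == c]
        mu = sum(pts) / len(pts)
        var = sum((p - mu) ** 2 for p in pts) / len(pts) or 1e-9
        model[c] = (mu, var, len(pts) / len(xs))
    return model

def predict(model, x):
    def score(c):
        mu, var, prior = model[c]
        lik = exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)
        return lik * prior              # Bayes-rule numerator
    return max(model, key=score)

m = fit([0.0, 0.2, 0.1, 2.0, 2.1, 1.9], [0, 0, 0, 1, 1, 1])
print(predict(m, 0.05), predict(m, 2.05))  # → 0 1
```

A discriminative classifier would instead fit the decision boundary directly, without ever modelling how each class generates its inputs.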

- Yan-Ming Zhang, Kaizhu Huang, Guanggang Geng, Cheng-Lin Liu
- ECML/PKDD
- 2013

The k nearest neighbors (kNN) graph, perhaps the most popular graph in machine learning, plays an essential role for graph-based learning methods. Despite its many elegant properties, the brute force kNN graph construction method has computational complexity of O(n<sup>2</sup>), which is prohibitive for large scale data sets. In this paper, based on the…
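For reference, the brute-force baseline the paper improves on looks like this; the double loop over all point pairs is what makes it quadratic in the number of points:

```python
# Brute-force kNN graph: for each point, link to its k nearest
# neighbours by squared Euclidean distance. The all-pairs scan is
# O(n^2), which is the cost faster construction methods avoid.

def knn_graph(points, k):
    n = len(points)
    graph = []
    for i in range(n):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
            for j in range(n) if j != i)          # O(n) per point
        graph.append([j for _, j in dists[:k]])
    return graph

pts = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
print(knn_graph(pts, 1))  # → [[1], [0], [3], [2]]
```

Note the result is a directed graph: i linking to j does not force j to link back to i, though it happens to here because the points form mutual nearest-neighbour pairs.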