Iterative Double Clustering for Unsupervised and Semi-supervised Learning

Abstract

We present a powerful meta-clustering technique called Iterative Double Clustering (IDC). The IDC method is a natural extension of the recent Double Clustering (DC) method of Slonim and Tishby, which exhibited impressive performance on text categorization tasks [12]. Using synthetically generated data, we find empirically that whenever the DC procedure succeeds in recovering some of the structure hidden in the data, the extended IDC procedure can incrementally compute a significantly more accurate classification. IDC is especially advantageous when the data exhibits high attribute noise. Our simulation results also show the effectiveness of IDC on text categorization problems. Surprisingly, this unsupervised procedure can be competitive with a (supervised) SVM trained on a small training set. Finally, we propose a simple and natural extension of IDC for semi-supervised and transductive learning, where we are given both labeled and unlabeled examples.
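
The abstract describes IDC only at a high level; below is a minimal sketch of that iterative loop, under stated assumptions. The function name `idc`, its parameters, and the use of scikit-learn's k-means are illustrative choices, not the paper's: the DC and IDC steps in the paper rely on information-bottleneck (distributional) clustering rather than k-means, so this sketch captures only the control flow of alternately clustering words by their distributions over document clusters and documents by their distributions over word clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

def idc(doc_word, n_doc_clusters, n_word_clusters, n_iterations=5, seed=0):
    """doc_word: (n_docs, n_words) co-occurrence count matrix.
    Returns a cluster label for each document."""
    n_docs, n_words = doc_word.shape
    doc_labels = None
    for _ in range(n_iterations):
        if doc_labels is None:
            # First pass (plain DC): represent each word by its
            # distribution over the documents themselves.
            word_profiles = doc_word.T.astype(float)
        else:
            # Later passes: represent each word by its distribution over
            # the document clusters found in the previous round.
            word_profiles = np.zeros((n_words, n_doc_clusters))
            for c in range(n_doc_clusters):
                word_profiles[:, c] = doc_word[doc_labels == c].sum(axis=0)
        word_profiles /= word_profiles.sum(axis=1, keepdims=True) + 1e-12
        # K-means stands in here for the information-bottleneck clustering
        # used in the paper.
        word_labels = KMeans(n_clusters=n_word_clusters, n_init=10,
                             random_state=seed).fit_predict(word_profiles)
        # Cluster documents represented as distributions over word clusters.
        doc_profiles = np.zeros((n_docs, n_word_clusters))
        for c in range(n_word_clusters):
            doc_profiles[:, c] = doc_word[:, word_labels == c].sum(axis=1)
        doc_profiles /= doc_profiles.sum(axis=1, keepdims=True) + 1e-12
        doc_labels = KMeans(n_clusters=n_doc_clusters, n_init=10,
                            random_state=seed).fit_predict(doc_profiles)
    return doc_labels
```

The first pass of this loop corresponds to a single round of double clustering; subsequent rounds re-cluster the words against the newly formed document clusters, which is the iteration the abstract credits with improved accuracy under high attribute noise.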

DOI: 10.1007/3-540-44795-4_11

Cite this paper

@inproceedings{ElYaniv2001IterativeDC,
  title     = {Iterative Double Clustering for Unsupervised and Semi-supervised Learning},
  author    = {Ran El-Yaniv and Oren Souroujon},
  booktitle = {ECML},
  year      = {2001}
}