# Compression and machine learning: a new perspective on feature space vectors

```bibtex
@article{Sculley2006CompressionAM,
  title   = {Compression and machine learning: a new perspective on feature space vectors},
  author  = {D. Sculley and Carla E. Brodley},
  journal = {Data Compression Conference (DCC'06)},
  year    = {2006},
  pages   = {332-341}
}
```

The use of compression algorithms in machine learning tasks such as clustering and classification has appeared in a variety of fields, sometimes with the promise of reducing problems of explicit feature selection. [...] To underscore this point, we find theoretical and empirical connections between traditional machine learning vector models and compression, encouraging cross-fertilization in future work.

## 113 Citations

An investigation of implicit features in compression-based learning for comparing webpages

- Computer Science · Pattern Analysis and Applications
- 2014

This work performs feature selection in the feature space induced by a well-known compression algorithm and finds that a subset of the features is sufficient for a near-perfect classification of these webpages.

Text Mining Using Data Compression Models

- Computer Science
- 2012

A compression-based method for instance selection that extracts a diverse subset of documents representative of a larger collection, useful for initializing k-means clustering and as a pool-based active learning strategy for supervised training of text classifiers.

Compression-Based Data Mining

- Mathematics, Computer Science · Encyclopedia of Data Warehousing and Mining
- 2009

Compression-based data mining is a universal approach to clustering, classification, dimensionality reduction, and anomaly detection. It is motivated by results in bioinformatics, learning, and…

Compressive Feature Learning

- Computer Science · NIPS
- 2013

This paper addresses the problem of unsupervised feature learning for text data by using a dictionary-based compression scheme to extract a succinct feature set and finds a set of word k-grams that minimizes the cost of reconstructing the text losslessly.

An Efficient Algorithm for Large Scale Compressive Feature Learning

- Computer Science · AISTATS
- 2014

The recently proposed Compressive Feature Learning (CFL) framework is expanded: CFL is shown to be NP-complete, and a novel, efficient approximation algorithm based on a homotopy that transforms a convex relaxation of CFL into the original problem is provided.

Text Classification Using Compression-Based Dissimilarity Measures

- Computer Science · Int. J. Pattern Recognit. Artif. Intell.
- 2015

Experimental evaluation of the proposed efficient methods for text classification based on information-theoretic dissimilarity measures reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, despite being much simpler in that they require no text pre-processing or feature engineering.

Text Classification with Compression Algorithms

- Computer Science · ArXiv
- 2012

A kernel function is defined that estimates the similarity between two objects from their compressed lengths, which is important because compression algorithms can detect arbitrarily long dependencies within text strings.
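The idea of a similarity score built from compressed lengths can be sketched with off-the-shelf tools. A minimal illustration, with `zlib` standing in for the compressor; the function names are ours, not from the cited paper:

```python
import zlib


def clen(s: str) -> int:
    """Length in bytes of the zlib-compressed string (stand-in compressor)."""
    return len(zlib.compress(s.encode("utf-8"), 9))


def compression_similarity(x: str, y: str) -> int:
    """Illustrative similarity score: bytes saved by compressing the
    concatenation versus compressing each string separately.  Higher
    values mean the compressor found more shared structure."""
    return clen(x) + clen(y) - clen(x + y)
```

Because the compressor's window spans the whole concatenation, repeated material anywhere in `x + y` is rewarded, which is what lets such scores capture arbitrarily long dependencies.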

Verification based on Compression-Models

- 2018

Compression models represent an interesting approach for different classification tasks and have been used widely across many research fields. We adapt compression models to the field of authorship…

PyLZJD: An Easy to Use Tool for Machine Learning

- Computer Science · Proceedings of the 18th Python in Science Conference
- 2019

PyLZJD is introduced, a library that implements LZJD in a manner meant to be easy to use and apply for novice practitioners, followed by examples of how to use it on problems of disparate data types.

Construction of Efficient V-Gram Dictionary for Sequential Data Analysis

- Computer Science · CIKM
- 2018

A new method for constructing an optimal feature set from sequential data that creates a dictionary of n-grams of variable length, based on the minimum description length principle, which shows competitive results on standard text classification collections without using the text structure.

## References

Showing 1-10 of 35 references

Clustering by compression

- Computer Science, Physics · IEEE Transactions on Information Theory
- 2005

Evidence of successful application in areas as diverse as genomics, virology, languages, literature, music, handwritten digits, astronomy, and combinations of objects from completely different domains, using statistical, dictionary, and block sorting compressors is reported.

Text categorization using compression models

- Computer Science · Proceedings DCC 2000. Data Compression Conference
- 2000

Text categorization is the assignment of natural language texts to predefined categories; compression models provide an overall judgement on the document as a whole, rather than discarding information by pre-selecting features.

The similarity metric

- Mathematics, Computer Science · IEEE Transactions on Information Theory
- 2004

A new "normalized information distance" is proposed, based on the noncomputable notion of Kolmogorov complexity, and it is demonstrated that it is a metric and called the similarity metric.
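The normalized information distance is uncomputable because it is defined via Kolmogorov complexity, but it is commonly approximated by the normalized compression distance (NCD), which substitutes a real compressor for K. A minimal sketch, again using `zlib` as the stand-in compressor (function names are ours):

```python
import zlib


def c(data: bytes) -> int:
    """Compressed length in bytes, approximating Kolmogorov complexity."""
    return len(zlib.compress(data, 9))


def ncd(x: str, y: str) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    Values near 0 indicate similar strings; values near 1, dissimilar ones."""
    xb, yb = x.encode("utf-8"), y.encode("utf-8")
    cx, cy, cxy = c(xb), c(yb), c(xb + yb)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

In practice the choice of compressor matters: `zlib`'s small window limits how much shared structure it can detect in long inputs, which is why block-sorting or statistical compressors are often preferred.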

Introduction to Information Theory and Data Compression

- Computer Science
- 1998

This pioneering textbook serves two independent courses-in information theory and in data compression-and also proves valuable for independent study and as a reference.

Spam Filtering Using Compression Models

- 2005

Spam filtering poses a special problem in text categorization, of which the defining characteristic is that filters face an active adversary, which constantly attempts to evade filtering. Since spamâ€¦

Text mining: a new frontier for lossless compression

- Computer Science · Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)
- 1999

This paper aims to promote text compression as a key technology for text mining, allowing databases to be created from formatted tables such as stock-market information on Web pages.

Kernel Methods for Pattern Analysis

- Computer Science · ICTAI
- 2003

This book provides an easy introduction for students and researchers to the growing field of kernel-based pattern analysis, demonstrating with examples how to handcraft an algorithm or a kernel for a new specific application, and covering all the necessary conceptual and mathematical tools to do so.

A repetition based measure for verification of text collections and for text categorization

- Computer Science · SIGIR
- 2003

The results show that the method outperforms SVM at multi-class categorization, and interestingly, that results correlate strongly with compression-based methods.

Towards parameter-free data mining

- Computer Science · KDD
- 2004

This work shows that recent results in bioinformatics and computational theory hold great promise for a parameter-free data-mining paradigm, and shows that this approach is competitive or superior to the state-of-the-art approaches in anomaly/interestingness detection, classification, and clustering with empirical tests on time series/DNA/text/video datasets.

Data Compression Using Adaptive Coding and Partial String Matching

- Computer Science · IEEE Trans. Commun.
- 1984

This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.