Compression and machine learning: a new perspective on feature space vectors

@article{Sculley2006CompressionAM,
  title={Compression and machine learning: a new perspective on feature space vectors},
  author={D. Sculley and Carla E. Brodley},
  journal={Data Compression Conference (DCC'06)},
  year={2006},
  pages={332-341}
}
The use of compression algorithms in machine learning tasks such as clustering and classification has appeared in a variety of fields, sometimes with the promise of reducing problems of explicit feature selection. […] To underscore this point, we find theoretical and empirical connections between traditional machine learning vector models and compression, encouraging cross-fertilization in future work.
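
The paper's central point is that compression algorithms implicitly operate in feature spaces familiar from vector-space models. As a minimal sketch of that connection (an illustration, not the paper's own construction), an LZ78-style parse maps a string to a bag of phrases, on which an ordinary cosine similarity can be computed:

```python
# Minimal sketch: an LZ78-style parse induces an implicit feature space over
# substrings; counting the parsed phrases gives a sparse vector that standard
# vector-space similarity can use.
from collections import Counter
import math

def lz78_phrases(text):
    """Parse text into LZ78 phrases: each phrase extends a previously seen
    phrase by one new character."""
    dictionary = set()
    phrases = []
    current = ""
    for ch in text:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate
        else:
            dictionary.add(candidate)
            phrases.append(candidate)
            current = ""
    if current:
        phrases.append(current)
    return phrases

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

a = Counter(lz78_phrases("the cat sat on the mat"))
b = Counter(lz78_phrases("the cat sat on the hat"))
c = Counter(lz78_phrases("compression induces feature spaces"))
print(cosine(a, b))  # higher: many shared phrases
print(cosine(a, c))  # lower: few shared phrases
```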

Citations

An investigation of implicit features in compression-based learning for comparing webpages

This work performs feature selection in the feature space induced by a well-known compression algorithm and finds that a subset of the features is sufficient for a near-perfect classification of these webpages.

Text Mining Using Data Compression Models

A compression-based method for instance selection extracts a diverse subset of documents that is representative of a larger document collection; this subset is useful for initializing k-means clustering and as a pool-based active learning strategy for supervised training of text classifiers.
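
The paper's exact selection procedure is not reproduced here; as a hypothetical sketch of compression-based instance selection, the snippet below greedily picks documents that are farthest, under a zlib-approximated normalized compression distance, from those already chosen, yielding a diverse representative subset:

```python
# Hypothetical sketch of compression-based instance selection: a greedy
# farthest-point heuristic under NCD (approximated with zlib), illustrating
# the idea rather than the paper's algorithm.
import zlib

def clen(s):
    return len(zlib.compress(s.encode("utf-8")))

def ncd(x, y):
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def select_diverse(docs, k):
    selected = [0]  # arbitrary seed: start from the first document
    while len(selected) < k:
        # add the document whose nearest already-selected document is farthest
        best = max(
            (i for i in range(len(docs)) if i not in selected),
            key=lambda i: min(ncd(docs[i], docs[j]) for j in selected),
        )
        selected.append(best)
    return selected

docs = ["spam spam spam buy now", "meeting agenda for monday",
        "buy cheap meds now", "monday meeting moved to tuesday"]
print(select_diverse(docs, 2))
```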

Compression-Based Data Mining

Compression-based data mining is a universal approach to clustering, classification, dimensionality reduction, and anomaly detection. It is motivated by results in bioinformatics, learning, and computational theory.

Compressive Feature Learning

This paper addresses the problem of unsupervised feature learning for text data by using a dictionary-based compression scheme to extract a succinct feature set and finds a set of word k-grams that minimizes the cost of reconstructing the text losslessly.

An Efficient Algorithm for Large Scale Compressive Feature Learning

The recently proposed Compressive Feature Learning (CFL) framework is expanded: CFL is shown to be NP-complete, and a novel, efficient approximation algorithm is provided, based on a homotopy that transforms a convex relaxation of CFL into the original problem.

Text Classification Using Compression-Based Dissimilarity Measures

Experimental evaluation of the proposed efficient methods for text classification, based on information-theoretic dissimilarity measures, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques despite being much simpler, in the sense that they require no text pre-processing or feature engineering.
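
A minimal sketch of this family of methods, assuming zlib as a stand-in for the compressors evaluated in the paper: the normalized compression distance is computed from compressed lengths, and a test document takes the label of its nearest training document:

```python
# Sketch of compression-based text classification:
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with zlib standing
# in for the compressor; a test document is assigned the label of its nearest
# training document under NCD.
import zlib

def clen(s):
    return len(zlib.compress(s.encode("utf-8")))

def ncd(x, y):
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify_1nn(test_doc, train_docs, train_labels):
    distances = [ncd(test_doc, d) for d in train_docs]
    return train_labels[distances.index(min(distances))]

train = ["win a free prize now", "cheap pills discount offer",
         "project review scheduled friday", "attached is the quarterly report"]
labels = ["spam", "spam", "ham", "ham"]
print(classify_1nn("free discount offer just for you", train, labels))
print(classify_1nn("quarterly report review on friday", train, labels))
```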

Text Classification with Compression Algorithms

A kernel function is defined that estimates the similarity between two objects from their compressed lengths; this is important because compression algorithms can detect arbitrarily long dependencies within text strings.
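
The paper's exact kernel is not reproduced here; as one hedged illustration of the general idea, a similarity can be derived from compressed lengths (here 1 - NCD, which is not guaranteed to be positive semidefinite) and passed to an SVM as a precomputed Gram matrix:

```python
# Hypothetical compression-based kernel: similarity from compressed lengths
# (1 - NCD, not necessarily the kernel defined in the paper) used as a
# precomputed Gram matrix for an SVM.
import zlib
import numpy as np
from sklearn.svm import SVC

def clen(s):
    return len(zlib.compress(s.encode("utf-8")))

def similarity(x, y):
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return 1.0 - (cxy - min(cx, cy)) / max(cx, cy)

train = ["win a free prize now", "cheap pills discount offer",
         "project review scheduled friday", "attached is the quarterly report"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

gram = np.array([[similarity(a, b) for b in train] for a in train])
clf = SVC(kernel="precomputed").fit(gram, labels)

test = ["free discount pills", "review the attached report"]
gram_test = np.array([[similarity(t, b) for b in train] for t in test])
print(clf.predict(gram_test))
```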

Authorship Verification based on Compression-Models

This work proposes an intrinsic authorship verification (AV) method that yields competitive results compared to a number of current state-of-the-art approaches based on support vector machines or neural networks, and that can handle complicated AV cases where the questioned and the reference document are not related to each other in terms of topic or genre.

PyLZJD: An Easy to Use Tool for Machine Learning

PyLZJD is introduced, a library that implements LZJD in a manner meant to be easy to use and apply for novice practitioners, followed by examples of how to use it on problems of disparate data types.
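
The sketch below illustrates the idea behind LZJD (Lempel-Ziv Jaccard Distance) rather than PyLZJD's actual API: an LZ78-style parse turns each byte string into a set of distinct substrings, and similarity is the Jaccard index of two such sets (the real library additionally MinHashes the sets into fixed-size digests for speed):

```python
# Sketch of the idea behind LZJD (not PyLZJD's API): LZ78-style parsing gives
# a set of distinct substrings per byte string; similarity is the Jaccard
# index of the two sets.
def lz_set(data: bytes):
    seen = set()
    current = b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in seen:
            current = candidate
        else:
            seen.add(candidate)
            current = b""
    return seen

def lzjd_similarity(a: bytes, b: bytes):
    sa, sb = lz_set(a), lz_set(b)
    return len(sa & sb) / len(sa | sb)

print(lzjd_similarity(b"malware sample variant one", b"malware sample variant two"))
print(lzjd_similarity(b"malware sample variant one", b"completely unrelated bytes"))
```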

Construction of Efficient V-Gram Dictionary for Sequential Data Analysis

A new method is presented for constructing an optimal feature set from sequential data: it builds a dictionary of variable-length n-grams based on the minimum description length principle and shows competitive results on standard text classification collections without using the text structure.
...

References

Showing 1-10 of 32 references

Text categorization using compression models

Text categorization is the assignment of natural language texts to predefined categories based on their content; compression models provide an overall judgement on the document as a whole, rather than discarding information by pre-selecting features.

The similarity metric

A new "normalized information distance" is proposed, based on the noncomputable notion of Kolmogorov complexity, and it is demonstrated that it is a metric and called the similarity metric.

Introduction to Information Theory and Data Compression

This pioneering textbook serves two independent courses, one in information theory and one in data compression, and also proves valuable for independent study and as a reference.

Spam Filtering Using Compression Models

This paper summarizes the experiments for the TREC 2005 spam track, in which the use of adaptive statistical data compression models is considered for the spam filtering task, and presents experimental results indicating that compression models perform well in comparison to established spam filters.
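
A minimal stand-in for the compression-model approach (the track submissions use adaptive models such as PPM or DMC; zlib is used below only as a crude proxy): a message is assigned to the class whose training corpus it extends with the smaller increase in compressed size:

```python
# Minimal stand-in for compression-model spam filtering: the message goes to
# the class whose corpus it extends with the smallest increase in compressed
# size (zlib is a crude proxy for the adaptive models used in the paper).
import zlib

def clen(s):
    return len(zlib.compress(s.encode("utf-8")))

def classify(message, ham_corpus, spam_corpus):
    ham_cost = clen(ham_corpus + " " + message) - clen(ham_corpus)
    spam_cost = clen(spam_corpus + " " + message) - clen(spam_corpus)
    return "ham" if ham_cost < spam_cost else "spam"

ham = "meeting notes attached please review the project schedule for friday"
spam = "win a free prize now cheap pills limited offer click here"
print(classify("free prize offer click now", ham, spam))
print(classify("please review the attached schedule", ham, spam))
```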

Text mining: a new frontier for lossless compression

This paper aims to promote text compression as a key technology for text mining, allowing databases to be created from formatted tables such as stock-market information on Web pages.

Kernel Methods for Pattern Analysis

This book provides an easy introduction for students and researchers to the growing field of kernel-based pattern analysis, demonstrating with examples how to handcraft an algorithm or a kernel for a new specific application, and covering all the necessary conceptual and mathematical tools to do so.

A repetition based measure for verification of text collections and for text categorization

The results show that the method outperforms SVM at multi-class categorization and, interestingly, that its results correlate strongly with those of compression-based methods.

Towards parameter-free data mining

This work argues that recent results in bioinformatics and computational theory hold great promise for a parameter-free data-mining paradigm, and demonstrates with empirical tests on time series, DNA, text, and video datasets that this approach is competitive with or superior to state-of-the-art approaches in anomaly/interestingness detection, classification, and clustering.

Data Compression Using Adaptive Coding and Partial String Matching

This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
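
A rough way to reproduce the bits-per-character measurement with an off-the-shelf compressor (zlib here, which is not PPM and typically codes English text less tightly than the reported ~2.2 bits/character):

```python
# Rough illustration of the bits-per-character metric; zlib only demonstrates
# the measurement and is not the PPM scheme the paper describes.
import zlib

def bits_per_character(text: str) -> float:
    """Compressed size in bits divided by number of characters."""
    compressed = zlib.compress(text.encode("utf-8"), level=9)
    return 8 * len(compressed) / len(text)

# Longer, natural text gives more meaningful figures; very short strings are
# dominated by the compressor's fixed overhead.
sample = ("The quick brown fox jumps over the lazy dog while the slow grey "
          "cat watches from the warm windowsill and dreams of supper.")
print(round(bits_per_character(sample), 2))
```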

An Introduction to Kolmogorov Complexity and Its Applications

The book presents a thorough treatment of the central ideas of Kolmogorov complexity and their applications, with a wide range of illustrative examples, and will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics.