Efficiently Approximating Weighted Sums with Exponentially Many Terms

@inproceedings{Chawla2001EfficientlyAW,
  title={Efficiently Approximating Weighted Sums with Exponentially Many Terms},
  author={Deepak Chawla and Lin Li and Stephen D. Scott},
  booktitle={COLT/EuroCOLT},
  year={2001}
}
We explore applications of Markov chain Monte Carlo methods for weight estimation over inputs to the Weighted Majority (WM) and Winnow algorithms. This is useful when there are exponentially many such inputs and no apparent means to efficiently compute their weighted sum. The applications we examine are pruning classifier ensembles using WM and learning general DNF formulas using Winnow. These uses require exponentially many inputs, so we define Markov chains over the inputs to approximate the… 
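To make the approach concrete, here is a minimal sketch of estimating such a weighted vote with a Metropolis chain whose stationary distribution is proportional to the input weights. The `neighbors`, `weight`, and `predict` callables and the chain parameters are illustrative placeholders, not the chains constructed in the paper.

```python
import random

def metropolis_weighted_vote(neighbors, weight, predict, start,
                             steps=10000, burn_in=1000):
    """Estimate the weight fraction of inputs that predict 1 under the
    stationary distribution pi(x) proportional to weight(x).

    Assumes a symmetric proposal (every input has the same number of
    neighbors); otherwise a Metropolis-Hastings correction is needed.
    """
    x = start
    ones = 0
    for t in range(burn_in + steps):
        y = random.choice(neighbors(x))        # propose a neighboring input
        if random.random() < min(1.0, weight(y) / weight(x)):
            x = y                              # Metropolis acceptance rule
        if t >= burn_in:
            ones += predict(x)                 # predict(x) in {0, 1}
    return ones / steps                        # ~ weighted fraction voting 1
```

Weighted Majority would then predict 1 exactly when this estimate exceeds 1/2; the quality of the approximation hinges on the chain's mixing time, which is what the paper's analysis must address.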

Making efficient learning algorithms with exponentially many features

This work proposes a Winnow-based algorithm, GMIL-2, with a new grouping strategy that has the same generalization ability as GMIL-1 while saving more than 98% of the time and memory in practice, and also proposes two extensions to further improve its generalization ability.

Online Closure-Based Learning of Relational Theories

This work develops an online algorithm for learning theories formed by disjunctions of existentially quantified conjunctions of atoms, and shows that the number of mistakes depends only logarithmically on the number of features.

Static Optimality and Dynamic Search-Optimality in Lists and Trees

A (computationally inefficient) algorithm is shown that can achieve a 1+ε ratio with respect to the best static list in hindsight, along with a simple efficient algorithm that achieves what is called ``dynamic search optimality'': dynamic optimality when the online algorithm is allowed to make free rotations after each request.

Multiple-Instance Learning of Real-Valued Geometric Patterns

This work defines and studies a real-valued multiple-instance model in which each multiple-instance example (bag) is given a real-valued label in [0, 1] that indicates the degree to which the bag satisfies the target concept.

A Bibliography of Papers in Lecture Notes in Computer Science (2003) (Part 1 of 4)

On approximating weighted sums with exponentially many terms

References

Efficient Learning With Virtual Threshold Gates

We reduce learning simple geometric concept classes to learning disjunctions over exponentially many variables. We then apply an on-line algorithm called Winnow whose number of prediction mistakes grows only logarithmically with the number of variables.

Redundant noisy attributes, attribute errors, and linear-threshold learning using winnow

Weakly learning DNF and characterizing statistical query learning using Fourier analysis

It is proved that an algorithm due to Kushilevitz and Mansour can be used to weakly learn DNF using membership queries in polynomial time, with respect to the uniform distribution on the inputs, and it is shown that DNF expressions and decision trees are not even weakly learnable in the statistical query model, without any unproven assumptions.

The Markov chain Monte Carlo method: an approach to approximate counting and integration

The introduction of analytical tools with the aim of permitting the analysis of Monte Carlo algorithms for classical problems in statistical physics has spurred the development of new approximation algorithms for a wider class of problems in combinatorial enumeration and optimization.

An Efficient Extension to Mixture Techniques for Prediction and Decision Trees

An efficient method for maintaining mixtures of prunings of a prediction or decision tree that extends previous methods for “node-based” pruning to the larger class of edge-based prunings, with a proof that the algorithm correctly maintains the mixture weights for edge-based prunings under any bounded loss function.

A Mildly Exponential Time Algorithm for Approximating the Number of Solutions to a Multidimensional Knapsack Problem

A mildly exponential time randomized algorithm is described that estimates the number of feasible solutions of a multidimensional knapsack problem to within a factor of 1 ± ε of the exact number, for a fixed number of constraints.

How to use expert advice

This work analyzes algorithms that predict a binary value by combining the predictions of several prediction strategies, called ``experts'', and shows how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context.

The weighted majority algorithm

A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in a situation in which a learner faces a sequence of trials, and the goal of the learner is to make few mistakes.
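A minimal sketch of that weighted-voting scheme, assuming binary experts and labels and a multiplicative penalty β for each mistaken expert (the expert interface here is hypothetical):

```python
def weighted_majority(experts, trials, beta=0.5):
    """Predict by weighted vote; multiply the weight of each expert that
    errs by beta. A simplified sketch, not the paper's exact formulation."""
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, label in trials:                        # online sequence of trials
        votes = [e(x) for e in experts]            # each expert outputs 0 or 1
        w1 = sum(w for w, v in zip(weights, votes) if v == 1)
        w0 = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if w1 >= w0 else 0          # weighted majority vote
        mistakes += int(prediction != label)
        weights = [w * (beta if v != label else 1.0)
                   for w, v in zip(weights, votes)]  # demote erring experts
    return mistakes
```

Its mistake bound grows only logarithmically with the number of experts, which is why the main paper can afford exponentially many of them, provided the vote itself can be approximated.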

Random walks on truncated cubes and sampling 0-1 knapsack solutions

  • B. Morris, A. Sinclair
  • Mathematics
    40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)
  • 1999
A fully polynomial randomized approximation scheme for counting the feasible solutions of a 0-1 knapsack problem is obtained using a combinatorial construction called a "balanced almost uniform permutation", which seems to be of independent interest.
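The underlying chain can be sketched very simply, assuming integer item weights `a` and capacity `b` (the lazy-step and mixing-time details of the paper are omitted):

```python
import random

def knapsack_walk(a, b, steps, x=None):
    """Random walk on {x in {0,1}^n : a.x <= b} (the 'truncated cube'):
    pick a coordinate, flip it, and keep the move only if the solution
    stays feasible. The stationary distribution is uniform over feasible
    solutions; bounding the mixing time is the hard part of the paper."""
    n = len(a)
    if x is None:
        x = [0] * n                       # the all-zero solution is feasible
    load = sum(ai * xi for ai, xi in zip(a, x))
    for _ in range(steps):
        i = random.randrange(n)
        delta = a[i] * (1 - 2 * x[i])     # load change if bit i is flipped
        if load + delta <= b:             # accept only feasible moves
            x[i] ^= 1
            load += delta
    return x
```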

Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm

  • N. Littlestone
  • Computer Science
    28th Annual Symposium on Foundations of Computer Science (sfcs 1987)
  • 1987
This work presents a new linear-threshold algorithm that learns disjunctive Boolean functions while making few mistakes even when irrelevant attributes abound, along with variants for learning other classes of Boolean functions.
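For reference, a textbook sketch of Winnow's multiplicative update over n Boolean attributes (the parameter choices α and θ below are conventional defaults, not necessarily those used above):

```python
def winnow(trials, n, alpha=2.0, theta=None):
    """Winnow: predict with a linear threshold, promote/demote weights
    multiplicatively on mistakes; mistakes grow only logarithmically in n."""
    if theta is None:
        theta = n / 2.0                   # a common threshold choice
    w = [1.0] * n
    mistakes = 0
    for x, label in trials:               # x is a 0/1 vector, label in {0, 1}
        prediction = int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)
        if prediction != label:
            mistakes += 1
            if label == 1:                # promotion after a missed positive
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:                         # demotion after a false positive
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes
```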