
High Dimensional Discrete Integration by Hashing and Optimization

@article{Maity2018HighDD,
  title={High Dimensional Discrete Integration by Hashing and Optimization},
  author={Raj Kumar Maity and Arya Mazumdar and Soumyabrata Pal},
  journal={ArXiv},
  year={2018},
  volume={abs/1806.11542}
}
Recently Ermon et al. (2013) pioneered an ingenious way to practically compute approximations to large-scale counting or discrete integration problems by using random hashes. The hashes are used to reduce the counting problem to many separate discrete optimization problems. The optimization problems can be solved by an NP-oracle, and if they exhibit amenable structure, commercial SAT solvers or linear programming (LP) solvers can be used in lieu of the NP-oracle. In particular, Ermon… 
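
The core loop of this approach can be sketched in a few lines of Python. The sketch below is illustrative only: max_oracle is a brute-force stand-in for the MAP/NP-oracle (a SAT or LP solver in practice), and wish_estimate, weight, and theta are hypothetical names for a WISH-style estimator applied to a toy product-form weight function whose exact partition function is known.

import itertools
import math
import random
import statistics


def max_oracle(weight, n, parities):
    # Brute-force stand-in for a MAP/NP-oracle: maximize weight(x) over all
    # x in {0,1}^n satisfying each random parity constraint row.x = b (mod 2).
    best = 0.0
    for x in itertools.product([0, 1], repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) % 2 == b
               for row, b in parities):
            best = max(best, weight(x))
    return best


def wish_estimate(weight, n, repeats=5):
    # WISH-style estimate of Z = sum_x weight(x): take the median optimum
    # M_i under i random XOR constraints, then Z ~ M_0 + sum_i M_i * 2^(i-1).
    medians = []
    for i in range(n + 1):
        vals = []
        for _ in range(repeats):
            parities = [([random.randint(0, 1) for _ in range(n)],
                         random.randint(0, 1)) for _ in range(i)]
            vals.append(max_oracle(weight, n, parities))
        medians.append(statistics.median(vals))
    return medians[0] + sum(m * 2 ** (i - 1)
                            for i, m in enumerate(medians) if i >= 1)


n = 8
theta = [random.uniform(0.5, 2.0) for _ in range(n)]


def weight(x):
    # Product-form weight so the exact Z = prod_i (1 + theta_i) is known.
    return math.prod(t if xi else 1.0 for t, xi in zip(theta, x))


print("estimate:", wish_estimate(weight, n),
      "exact:", math.prod(1.0 + t for t in theta))

Ermon et al. show that an estimator of this shape, with enough repetitions per level, is a constant-factor approximation of Z with high probability; the brute-force oracle here only keeps the sketch self-contained.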

Citations

Multi-resolution Hashing for Fast Pairwise Summations
This work provides a general framework for designing data structures through hashing that reaches far beyond what previous techniques allowed, leading to data structures with sub-linear query time that significantly improve upon random sampling and can be used for kernel density estimation, partition function estimation, and sampling.

References

Taming the Curse of Dimensionality: Discrete Integration by Hashing and Optimization
A randomized algorithm is proposed that gives a constant-factor approximation of a general discrete integral defined over an exponentially large set, demonstrating that with a small number of MAP queries one can efficiently approximate the partition function of discrete graphical models.
Low-density Parity Constraints for Hashing-Based Discrete Integration
This work proposes the use of low-density parity constraints to make inference more tractable in practice, and shows that such sparse constraints belong to a new class of hash functions, called Average Universal, that continue to provide provable accuracy guarantees.
Stochastic Integration via Error-Correcting Codes
Two ideas are introduced for the task of summing a non-negative function f over a discrete set Ω; one is to maximize f over explicitly generated random affine subspaces of Ω, which is equivalent to unconstrained maximization of f over an exponentially smaller domain.
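
That second idea admits a very small illustration. In the toy sketch below (assumed names, not the paper's code), the affine subspace {x0 XOR y.G : y in {0,1}^k} of {0,1}^n is generated explicitly from a random offset x0 and k random basis rows G, and f is maximized by enumerating only the 2^k points of the smaller domain; randomly drawn rows may be linearly dependent, so the subspace dimension is at most k.

import itertools
import random


def max_over_random_affine_subspace(f, n, k):
    # Explicitly generate a random affine subspace {x0 XOR y.G : y in {0,1}^k}.
    x0 = [random.randint(0, 1) for _ in range(n)]
    G = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
    best = None
    for y in itertools.product([0, 1], repeat=k):  # 2^k points, not 2^n
        x = x0[:]
        for yj, row in zip(y, G):
            if yj:
                x = [xi ^ ri for xi, ri in zip(x, row)]  # x <- x XOR row
        v = f(x)
        if best is None or v > best:
            best = v
    return best


# Toy f: number of ones; the unconstrained maximum over {0,1}^16 is 16.
print(max_over_random_affine_subspace(sum, n=16, k=6))
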
Optimization With Parity Constraints: From Binary Codes to Discrete Integration
This work proposes an Integer Linear Programming (ILP) formulation for the problem, enhanced with new sparsification techniques to improve decoding performance, and obtains both lower and upper bounds on the partition function that hold with high probability and are much tighter than those obtained with variational methods.
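
For a sense of what such a formulation involves, the standard way to express one XOR (parity) constraint in an ILP, via an auxiliary integer slack variable, is sketched below; the variable names and brute-force verification are illustrative assumptions, not the paper's formulation.

import itertools

# One parity constraint a.x = b (mod 2) in ILP form:
#     sum_{j in supp(a)} x_j - 2*k = b,   with integer slack k >= 0.
n = 4
a = [1, 0, 1, 1]   # support of the parity constraint
b = 1              # required parity

for x in itertools.product([0, 1], repeat=n):
    s = sum(aj * xj for aj, xj in zip(a, x))
    ilp_satisfied = any(s - 2 * k == b for k in range(n // 2 + 1))
    assert ilp_satisfied == (s % 2 == b)
print("ILP encoding agrees with the XOR constraint on all", 2 ** n, "points")
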
Algorithmic Improvements in Approximate Counting for Probabilistic Inference: From Linear to Logarithmic SAT Calls
This work presents a new approach to hashing-based approximate model counting in which the number of oracle invocations grows logarithmically in n while still providing strong theoretical guarantees, and designs an algorithm for #CNF with strongly probably approximately correct (SPAC) guarantees.
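
The logarithmic behavior comes from replacing a linear scan over the number of XOR constraints with a binary search for the satisfiability threshold. The sketch below is an assumed simplification: is_sat_with would be a SAT-solver call on the hashed formula in practice, and real algorithms repeat each query and take medians, since a single random query is not strictly monotone in m.

import math
import random


def threshold_by_binary_search(is_sat_with, n):
    # Largest m at which the hashed formula is still satisfiable, found in
    # O(log n) oracle calls; the model count is then roughly 2**m.
    lo, hi = 0, n + 1   # satisfiable at lo; treated as unsatisfiable at hi
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if is_sat_with(mid):
            lo = mid
        else:
            hi = mid
    return lo


def make_toy_oracle(S, n):
    # Toy oracle over an explicit solution set S: add m random XOR
    # constraints and report whether any solution survives.
    def is_sat_with(m):
        rows = [([random.randint(0, 1) for _ in range(n)],
                 random.randint(0, 1)) for _ in range(m)]
        return any(all(sum(a * xi for a, xi in zip(row, x)) % 2 == b
                       for row, b in rows) for x in S)
    return is_sat_with


n = 12
S = list({tuple(random.randint(0, 1) for _ in range(n)) for _ in range(200)})
m = threshold_by_binary_search(make_toy_oracle(S, n), n)
print("estimated log2 count ~", m, " true:", round(math.log2(len(S)), 1))
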
On Approximation Algorithms for #P
Any function in Valiant's class #P can be approximated to within any constant factor by a function in the class $\Delta_3^p$ of the polynomial-time hierarchy.
Approximating the Partition Function of the Ferromagnetic Potts Model
It is as hard to approximate the partition function as it is to find approximate solutions to a wide range of counting problems, including that of determining the number of independent sets in a bipartite graph.
A New Approach to Model Counting
ApproxCount, an algorithm that approximates the number of satisfying assignments, or models, of a formula in propositional logic, is introduced; it is based on SampleSat, a new algorithm that samples near-uniformly from the solution space of a propositional logic formula.
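
The counting strategy behind this can be illustrated with a toy multiplier sketch (assumed names; the real algorithm samples with SampleSat rather than from an explicit solution list): estimate the marginal of one variable from near-uniform samples, fix it to its majority value, recurse, and multiply the inverse marginals.

import random


def approx_count(solutions, n, samples=400):
    # Toy stand-in: random.choice over an explicit solution list plays the
    # role of a near-uniform SAT sampler such as SampleSat.
    estimate = 1.0
    current = list(solutions)
    for i in range(n):
        if len(current) <= 1:
            break
        draws = [random.choice(current) for _ in range(samples)]
        frac1 = sum(x[i] for x in draws) / samples
        v = 1 if frac1 >= 0.5 else 0            # follow the majority branch
        frac = frac1 if v == 1 else 1.0 - frac1
        estimate /= frac                        # count ~ count(branch) / frac
        current = [x for x in current if x[i] == v]
    return estimate * len(current)


n = 10
S = list({tuple(random.randint(0, 1) for _ in range(n)) for _ in range(300)})
print("estimate:", round(approx_count(S, n)), "true:", len(S))
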
Model Counting: A New Strategy for Obtaining Good Bounds
This work presents a new approach to model counting that is based on adding a carefully chosen number of so-called streamlining constraints to the input formula in order to cut down the size of its solution space in a controlled manner.
Graphical Models, Exponential Families, and Variational Inference
The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.