Ising Model Optimization Problems on a FPGA Accelerated Restricted Boltzmann Machine

@article{Patel2020IsingMO,
  title={Ising Model Optimization Problems on a FPGA Accelerated Restricted Boltzmann Machine},
  author={Saavan Patel and Lili Chen and Philip Canoza and Sayeef S. Salahuddin},
  journal={arXiv: Hardware Architecture},
  year={2020}
}
Optimization problems, particularly NP-hard combinatorial optimization problems, are among the hardest computing problems, with no known polynomial-time algorithm. Recently there has been interest in using dedicated hardware to accelerate their solution, with physical annealers and quantum adiabatic computers being some of the state of the art. In this work we demonstrate usage of the Restricted Boltzmann Machine (RBM) as a stochastic neural network capable of solving…
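The abstract's core idea, running an RBM's block Gibbs chain and keeping the lowest-energy state seen, can be sketched in a few lines. The weights below are arbitrary toy values (in the paper they would encode an Ising problem instance); this is a minimal illustration, not the authors' FPGA implementation.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    return 1 if random.random() < p else 0

# Tiny hand-chosen RBM (weights W, visible biases b, hidden biases c).
# Assumption: illustrative values only, not a mapped Ising instance.
W = [[ 2.0, -1.0],
     [-1.0,  2.0],
     [ 1.5,  1.5]]
b = [0.0, 0.0, 0.0]
c = [0.0, 0.0]

def energy(v, h):
    """RBM energy E(v, h) = -b.v - c.h - v.W.h"""
    e = -sum(b[i] * v[i] for i in range(3))
    e -= sum(c[j] * h[j] for j in range(2))
    e -= sum(W[i][j] * v[i] * h[j] for i in range(3) for j in range(2))
    return e

random.seed(0)
v = [random.randint(0, 1) for _ in range(3)]
best_v, best_e = None, float("inf")
for _ in range(500):                 # block Gibbs sweeps
    h = [sample(sigmoid(c[j] + sum(W[i][j] * v[i] for i in range(3))))
         for j in range(2)]
    v = [sample(sigmoid(b[i] + sum(W[i][j] * h[j] for j in range(2))))
         for i in range(3)]
    e = energy(v, h)
    if e < best_e:                   # track the best sample, as an optimizer would
        best_e, best_v = e, v[:]
print(best_v, best_e)
```

Because low-energy configurations dominate the RBM's stationary distribution, the sampler visits them often, and keeping the running minimum acts as a stochastic optimizer.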


High-performance combinatorial optimization based on classical mechanics
TLDR
This work proposes an algorithm based on classical mechanics, obtained by modifying the previously proposed simulated-bifurcation algorithm; it achieves both high speed via parallel computing and high solution accuracy on problems with up to one million binary variables.
Increasing ising machine capacity with multi-chip architectures
TLDR
The proposed architectures allow an Ising machine to scale in capacity while maintaining its significant performance advantage (about 2200x speedup over a state-of-the-art computational substrate), and the proposed optimizations for batch-mode operation cut communication demand by about 4-5x without a significant impact on solution quality.
Ising machines as hardware solvers of combinatorial optimization problems
TLDR
This review surveys the current status of various approaches to constructing Ising machines and explains their underlying operational principles, and compares and contrasts their performance using standard metrics.
Simulated bifurcation assisted by thermal fluctuation
Various kinds of Ising machines based on unconventional computing have recently been developed for practically important combinatorial optimization. Among them, the machines implementing a heuristic…
Massively Parallel Probabilistic Computing with Sparse Ising Machines
Scaling advantage of chaotic amplitude control for high-performance combinatorial optimization
The development of physical simulators, called Ising machines, that sample from low energy states of the Ising Hamiltonian has the potential to transform our ability to understand and control complex…
Benchmarking a Probabilistic Coprocessor
TLDR
A probabilistic coprocessor based on p-bits that is naturally suited to solve problems that require large amount of random numbers utilized in Monte Carlo and Markov Chain Monte Carlo algorithms is presented and benchmarked.
Perspective: Probabilistic computing with p-bits
TLDR
It is shown that p-computers based on p-bits can significantly accelerate randomized algorithms used in a wide variety of applications including but not limited to Bayesian networks, optimization, Ising models and quantum Monte Carlo.
Probabilistic computing with p-bits
TLDR
This work makes the case for a probabilistic computer based on p-bits which take on values 0 and 1 with controlled probabilities and can be implemented with specialized compact energy-efficient hardware.
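The p-bit described in these perspectives is a binary stochastic neuron whose output is ±1 with a tanh-shaped probability set by its input. A two-p-bit ferromagnetic pair, below, is a toy assumption (not a circuit from the papers) showing how coupling produces correlated fluctuations.

```python
import math
import random

def p_bit(I):
    """Binary stochastic neuron: +1 with probability (1 + tanh(I)) / 2."""
    return 1 if random.uniform(-1.0, 1.0) < math.tanh(I) else -1

random.seed(1)
J = 1.0                         # ferromagnetic coupling between the two p-bits
m = [1, 1]
agree = 0
N = 20000
for _ in range(N):
    i = random.randrange(2)     # asynchronous (sequential) update
    m[i] = p_bit(J * m[1 - i])  # each p-bit's input is the other's state
    agree += (m[0] == m[1])
print(agree / N)                # fraction of time the two p-bits agree
```

The update rule reproduces the Boltzmann conditional, so the pair agrees with probability e^(2J)/(e^(2J) + 1), roughly 0.88 for J = 1.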
Quadratic Unconstrained Binary Optimisation via Quantum-Inspired Annealing
Joseph Bowles, Alexandre Dauphin, Patrick Huembeli, José Martinez, and Antonio Acín, ICFO Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, Castelldefels
...

References

Showing 1-10 of 61 references
Optimised simulated annealing for Ising spin glasses
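The simulated-annealing baseline named in this entry follows the standard Metropolis scheme. The ring topology, ±1 couplings, and geometric cooling schedule below are toy assumptions for illustration, not the paper's benchmark setup.

```python
import math
import random

random.seed(2)
n = 16
# Random ±1 couplings on a ring: a toy Ising spin-glass instance.
J = {(i, (i + 1) % n): random.choice([-1.0, 1.0]) for i in range(n)}
s = [random.choice([-1, 1]) for _ in range(n)]

def energy(s):
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def delta_e(s, k):
    """Energy change from flipping spin k (only edges touching k matter)."""
    d = 0.0
    for (i, j), Jij in J.items():
        if k in (i, j):
            d += 2.0 * Jij * s[i] * s[j]
    return d

T, cooling = 3.0, 0.995
for _ in range(4000):
    k = random.randrange(n)
    dE = delta_e(s, k)
    # Metropolis acceptance: always take downhill moves, uphill with prob e^(-dE/T)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s[k] = -s[k]
    T *= cooling                 # geometric cooling schedule
print(energy(s))
```

On this unfrustrated-or-nearly-so ring the ground-state energy is -16 or -14 depending on the coupling signs, and the schedule above reliably gets close to it.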
Logically Synthesized, Hardware-Accelerated, Restricted Boltzmann Machines for Combinatorial Optimization and Integer Factorization
TLDR
This work proposes a method of combining RBMs that avoids the need to train large problems in their full form, along with methods for making the RBM more hardware-amenable, allowing the algorithm to be efficiently mapped to an FPGA-based accelerator.
Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks
TLDR
A memristor-based annealing system that uses an analogue neuromorphic architecture based on a Hopfield neural network can solve non-deterministic polynomial-time (NP)-hard max-cut problems in an approach that is potentially more efficient than current quantum, optical and digital approaches.
33.1 A 74 TMACS/W CMOS-RRAM Neurosynaptic Core with Dynamically Reconfigurable Dataflow and In-situ Transposable Weights for Probabilistic Graphical Models
  • W. Wan, R. Kubendran, H. Wong
  • Computer Science
    2020 IEEE International Solid-State Circuits Conference (ISSCC)
  • 2020
TLDR
This paper describes a CIM architecture implemented in a 130nm CMOS/RRAM process, that delivers the highest reported computational energy-efficiency of 74 tera-multiply-accumulates per second per watt (TMACS/W) for RRAM-based CIM architectures while simultaneously offering dataflow reconfigurability to address the limitations of previous designs.
Integer factorization using stochastic magnetic tunnel junctions
TLDR
A proof-of-concept experiment for probabilistic computing using spintronics technology is presented, with integer factorization as an illustrative example of the optimization class of problems addressed by adiabatic and gated quantum computing.
FlexGibbs: Reconfigurable Parallel Gibbs Sampling Accelerator for Structured Graphs
TLDR
FlexGibbs, a reconfigurable parallel Gibbs sampling inference accelerator for structured graphs, is proposed; its architecture solves Markov random field tasks with an array of parallel Gibbs samplers, enabled by chromatic scheduling.
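Chromatic scheduling, as used by FlexGibbs, colors the graph so that no two adjacent variables share a color; all variables of one color are then conditionally independent and can be sampled in parallel. A 2-colorable chain Ising MRF keeps the sketch short; the coupling value is a toy assumption.

```python
import math
import random

random.seed(3)
n = 8
J = 0.8                                   # uniform ferromagnetic coupling on a chain
s = [random.choice([-1, 1]) for _ in range(n)]
colors = [[i for i in range(n) if i % 2 == 0],   # a chain is 2-colorable:
          [i for i in range(n) if i % 2 == 1]]   # even sites, then odd sites

def gibbs_flip(i, s):
    """Sample spin i from its conditional given its chain neighbours."""
    field = sum(J * s[j] for j in (i - 1, i + 1) if 0 <= j < n)
    p_up = 1.0 / (1.0 + math.exp(-2.0 * field))
    return 1 if random.random() < p_up else -1

for _ in range(200):
    for color in colors:
        # All spins of one color are conditionally independent, so they can
        # be updated "in parallel" (here: computed from the same snapshot).
        new = {i: gibbs_flip(i, s) for i in color}
        for i, val in new.items():
            s[i] = val
print(s)
```

On hardware, each color phase maps to one batch of concurrent sampler units, which is the parallelism the accelerator exploits.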
Silicon chip delivers quantum speeds [News]
  • J. Boyd
  • Computer Science
    IEEE Spectrum
  • 2018
Fujitsu has designed a new computer architecture running on silicon, dubbed the Digital Annealer, which the company claims rivals quantum computers in utility. Fujitsu began offering cloud services in…
Analog CMOS deterministic Boltzmann circuits
TLDR
CMOS circuits implementing an analog neural network with on-chip deterministic Boltzmann learning (DBL) and capacitive synaptic weight storage have been designed, fabricated, and tested; the results indicate that deterministic Boltzmann ANNs can be implemented efficiently using analog CMOS circuitry.
On the computational complexity of Ising spin glass models
TLDR
In a spin glass with Ising spins, the problems of computing the magnetic partition function and finding a ground state are studied and shown to be NP-hard, both in the two-dimensional case with a magnetic field and in the three-dimensional case.
...