Integer factorization with a neuromorphic sieve

@article{Monaco2017IntegerFW,
  title={Integer factorization with a neuromorphic sieve},
  author={John V. Monaco and Manuel M. Vindiola},
  journal={2017 IEEE International Symposium on Circuits and Systems (ISCAS)},
  year={2017},
  pages={1-4}
}
The bound to factor large integers is dominated by the computational effort to discover numbers that are smooth, typically performed by sieving a polynomial sequence. On a von Neumann architecture, sieving has log-log amortized time complexity to check each value for smoothness. This work presents a neuromorphic sieve that achieves a constant time check for smoothness by exploiting two characteristic properties of neuromorphic architectures: constant time synaptic integration and massively… 
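The sieving step the abstract refers to can be illustrated with a small sketch (a hypothetical helper `sieve_smooth`, not the paper's code). On a von Neumann machine, each factor-base prime p strikes only every p-th sieve value, so the amortized work per value is roughly the sum of 1/p over the base, about log log B; this is the cost the neuromorphic design replaces with a constant-time synaptic-integration check.

```python
from math import isqrt

def sieve_smooth(n, factor_base, interval):
    """Sieve f(x) = (x + m)^2 - n, m = ceil(sqrt(n)), for values that are
    smooth over factor_base.  Each prime strikes only its own multiples,
    so amortized work per sieve value is sum(1/p) ~ log log B."""
    m = isqrt(n) + 1
    vals = [(x + m) ** 2 - n for x in range(interval)]
    residue = vals[:]                 # divided down as primes are struck
    for p in factor_base:
        # roots of f(x) ≡ 0 (mod p); f is periodic mod p, so scan one period
        roots = [r for r in range(p) if ((r + m) ** 2 - n) % p == 0]
        for r in roots:
            for x in range(r, interval, p):
                while residue[x] % p == 0:
                    residue[x] //= p
    # x is smooth over the base exactly when every prime divided out
    return [x for x in range(interval) if residue[x] == 1]
```

The returned indices are the relations a quadratic-sieve style algorithm collects; everything else about the neuromorphic mapping (one neuron per prime, spikes as strikes) is beyond this sketch.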



Factoring Integers With a Brain-Inspired Computer

A neuromorphic sieve is presented that achieves a constant-time check for smoothness by reversing the roles of space and time from the von Neumann architecture and exploiting two characteristic properties of brain-inspired computation: massive parallelism and constant time synaptic integration.

Solving Vertex Cover via Ising Model on a Neuromorphic Processor

This work demonstrates how a neuromorphic processor can be used to solve the classic vertex cover problem via an Ising spin model and states that space and time efficiency is decreased only by a constant factor without degrading solution quality.

Dynamic Programming with Spiking Neural Computing

It is demonstrated that a broad class of combinatorial and graph problems known as dynamic programs enjoy simple and efficient neuromorphic implementations, by developing a general technique to convert dynamic programs to spiking neuromorphic algorithms.

Shortest Path and Neighborhood Subgraph Extraction on a Spiking Memristive Neuromorphic Implementation

This work demonstrates two graph problems that can be solved using spiking neuromorphic computers (SNCs), discusses the approach for mapping these applications to an SNC, and estimates the performance of a memristive SNC for these applications on three real-world graphs.

The TENNLab Exploratory Neuromorphic Computing Framework

This letter presents the software architecture of the TENNLab framework, a software infrastructure that will enable potential users of spiking, neuromorphic computing systems to develop applications and evaluate computing architectures, and for architecture researchers to develop and evaluate their architectures with a variety of applications.

Spiking Neuromorphic Networks for Binary Tasks

The goal with this work is to enable the composition of multiple spiking neural networks, perhaps trained with other methodologies, without requiring information to leave a neuroprocessor for processing by conventional hardware.

Building a Comprehensive Neuromorphic Platform for Remote Computation

This paper discusses methods, motivated by recent results, to produce a cohesive neuromorphic system that effectively integrates novel and traditional algorithms for context-driven remote computation.

The Case for RISP: A Reduced Instruction Spiking Processor

RISP, a reduced instruction spiking processor, is introduced, and it is demonstrated how it aids in developing hand-built neural networks for simple computational tasks, and how it may be employed to simplify neural networks built with more complicated machine learning techniques.

Efficient CMOS Invertible Logic Using Stochastic Computing

This paper presents a design methodology for invertible stochastic gates, which can be implemented using a small amount of CMOS hardware, and proves that the design not only correctly implements the basic gates with invertible capability but can also be extended to construct invertible stochastic adder and multiplier circuits.

Reducing the Size of Spiking Convolutional Neural Networks by Trading Time for Space

This work designs multiple spiking computational modules, which reduce the size of the networks back to the size of the conventional networks by taking advantage of the temporal nature of spiking neural networks.

References


A Block Lanczos Algorithm for Finding Dependencies Over GF(2)

The Lanczos algorithm is modified to produce a sequence of orthogonal subspaces of GF(2)^n, each having dimension almost N, by applying the given matrix and its transpose to N binary vectors at once.
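The "dependencies over GF(2)" this entry refers to are nonempty subsets of rows whose XOR is zero; in sieve-based factoring, such a dependency among exponent vectors yields a congruence of squares. As a point of reference, here is the plain Gaussian-elimination baseline that block Lanczos is designed to beat on the huge sparse matrices involved (the helper name `find_dependency` and bitmask encoding are illustrative assumptions):

```python
def find_dependency(rows):
    """Given rows of a GF(2) matrix encoded as int bitmasks, return a
    bitmask over row indices marking a nonempty subset whose XOR is
    zero, or None if the rows are linearly independent.  Gaussian
    elimination with bookkeeping: each basis element keeps the combo
    of original rows that produced it."""
    basis = {}                       # pivot bit -> (reduced value, combo)
    for i, r in enumerate(rows):
        combo = 1 << i               # which original rows are XORed into r
        while r:
            pivot = r.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = (r, combo)
                break
            bv, bc = basis[pivot]
            r ^= bv
            combo ^= bc
        else:
            return combo             # r reduced to zero: dependency found
    return None
```

Collecting more relations than factor-base primes guarantees such a dependency exists, which is why the sieving stage aims to over-collect slightly.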

A million spiking-neuron integrated circuit with a scalable communication network and interface

Inspired by the brain’s structure, an efficient, scalable, and flexible non–von Neumann architecture is developed that leverages contemporary silicon technology and is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification.

The Role of Smooth Numbers in Number Theoretic Algorithms

A smooth number is a number with only small prime factors. In particular, a positive integer is y-smooth if it has no prime factor exceeding y. Smooth numbers are a useful tool in number theory…
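The definition quoted here translates directly into a trial-division check (a minimal sketch; `is_smooth` is an illustrative name, not from the reference):

```python
def is_smooth(n, y):
    """Return True if n is y-smooth, i.e. no prime factor of n exceeds y.
    Dividing out every candidate d <= y is safe: by the time a composite d
    is reached, its prime factors have already been divided out, so only
    primes actually divide n here."""
    if n < 1:
        return False
    for d in range(2, y + 1):
        while n % d == 0:
            n //= d
    return n == 1
```

For example, 1512 = 2^3 · 3^3 · 7 is 7-smooth but not 5-smooth.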

The Quadratic Sieve Factoring Algorithm

The quadratic sieve algorithm is currently the method of choice to factor very large composite numbers with no small factors, and some of the improvements suggested for it are described.

Smooth numbers: computational number theory and beyond

The analysis of many number theoretic algorithms turns on the role played by integers which have only small prime factors; such integers are known as "smooth numbers". To be able to determine which…

Prime Numbers: A Computational Perspective

Prime numbers beckon to the beginner, the basic notion of primality being accessible to a child. Yet, some of the simplest questions about primes have stumped humankind for millennia. In this book,…

Algorithmic Number Theory: Lattices, Number Fields, Curves and Cryptography

1. Solving Pell's equation Hendrik Lenstra 2. Basic algorithms in number theory Joe Buhler and Stan Wagon 3. Elliptic curves Bjorn Poonen 4. The arithmetic of number rings Peter Stevenhagen 5. Fast…