Universal Memcomputing Machines

  • Fabio L. Traversa, Massimiliano Di Ventra
  • Published 5 May 2014
  • Computer Science
  • IEEE Transactions on Neural Networks and Learning Systems
We introduce the notion of universal memcomputing machines (UMMs): a class of brain-inspired general-purpose computing machines based on systems with memory, whereby processing and storing of information occur on the same physical location. We analytically prove that the memory properties of UMMs endow them with universal computing power (they are Turing-complete), intrinsic parallelism, functional polymorphism, and information overhead, namely, their collective states can support exponential… 
Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states
This work presents an experimental demonstration of an actual memcomputing architecture that solves the NP-complete version of the subset sum problem in a single step, using a number of memprocessors that scales linearly with the size of the problem.
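For context, the subset sum problem asks whether some subset of a given set of integers sums to a target value. A minimal classical brute-force sketch (purely illustrative, not the memcomputing approach) makes the exponential search space explicit:

```python
from itertools import combinations

def subset_sum(values, target):
    """Brute-force subset sum: try every subset, smallest first.

    The search space has 2^n subsets, which is what makes the
    one-step, linearly-scaling hardware demonstration notable.
    """
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # prints (4, 5)
```

Every classical exhaustive approach of this kind takes exponential time in the worst case; the cited work claims a physical architecture that sidesteps the sequential search.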
On the Universality of Memcomputing Machines
Universal memcomputing machines (UMMs) represent a novel computational model in which memory (time nonlocality) accomplishes both tasks of storing and processing of information. UMMs have been shown…
Memcomputing: Leveraging memory and physics
This work reviews the literature surrounding a novel hybrid analog-digital computing system (a memcomputer) built from memristors, a basic electronic component with variable resistance that was theorized in 1971 and has recently been used in various applications, including fast non-volatile RAM.
Stress-Testing Memcomputing on Hard Combinatorial Optimization Problems
Simulations of DMMs scale linearly in both time and memory up to these very large problem sizes, versus the exponential requirements of state-of-the-art solvers, further reinforcing the advantages of the physics-based memcomputing approach over traditional ones.
Memcomputing: Leveraging memory and physics to compute efficiently
This work discusses how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing, and focuses on digital memcomputing machines that are scalable.
Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines.
It is proved mathematically that periodic orbits and strange attractors cannot coexist with equilibria, and the implications of the DMM realization through SOLCs to the NP = P question related to constraints of poly-resources resolvability are discussed.
Memcomputing for Accelerated Optimization
This work discusses self-organizing gates, namely Self-Organizing Algebraic Gates (SOAGs), aimed to solve linear inequalities and therefore used to solve optimization problems in Integer Linear Programming (ILP) format.
MemComputing: An efficient topological computing paradigm
  • M. Di Ventra, F. Traversa
  • Computer Science
    2017 IEEE International Conference on Microwaves, Antennas, Communications and Electronic Systems (COMCAS)
  • 2017
This work introduces memcomputing, a novel computing paradigm that employs memory (time non-locality) to both store and process information on the same physical location to solve complex problems very efficiently both in hardware and in software.
MemComputing Integer Linear Programming
This work proposes a radically different non-algorithmic approach to ILP based on a novel physics-inspired computing paradigm: Memcomputing, and describes a new circuit architecture of memcomputing machines specifically designed to solve for the linear inequalities representing a general ILP problem.
A Survey and Discussion of Memcomputing Machines
It is argued that the UMM is a physically implausible machine, and that the DMM model, as described by numerical simulations, is no more powerful than Turing-complete computation.


Dynamic computing random access memory
It is shown that DCRAM provides massively parallel and polymorphic digital logic: it performs different logic operations with the same architecture by varying only the control signals, and can therefore serve as a genuine alternative to present computing technology.
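Functional polymorphism of this kind can be sketched in software: the same "hardware" (one function) computes different Boolean operations depending only on a control signal. The control encoding below is an illustrative assumption, not the DCRAM paper's actual scheme:

```python
def polymorphic_gate(a, b, control):
    """Toy model of polymorphic logic: one fixed structure, many operations.

    `a` and `b` are bits (0 or 1); `control` selects which Boolean
    function the gate realizes, loosely analogous to how DCRAM selects
    operations via control signals rather than rewiring.
    """
    ops = {
        "AND": a & b,
        "OR": a | b,
        "XOR": a ^ b,
        "NAND": 1 - (a & b),
    }
    return ops[control]

print(polymorphic_gate(1, 0, "OR"))  # prints 1
```

In conventional hardware each gate's function is fixed at fabrication; the point of the cited work is that a memory-based circuit can switch its logic function at run time.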
Solving mazes with memristors: a massively-parallel approach
  • Y. Pershin, M. Di Ventra
  • Physics
    Physical review. E, Statistical, nonlinear, and soft matter physics
  • 2011
The results demonstrate not only the application of memristive networks to the field of massively parallel computing, but also an algorithm to solve mazes, which could find applications in different fields.
Algorithms for quantum computation: discrete logarithms and factoring
  • P. Shor
  • Computer Science
    Proceedings 35th Annual Symposium on Foundations of Computer Science
  • 1994
Las Vegas algorithms are given for finding discrete logarithms and factoring integers on a quantum computer; they take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
A new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks, based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma
Alan Turing 1912-1954. Computable Numbers: A Guide. 1. On Computable Numbers, with an Application to the Entscheidungsproblem (1936) 2. On Computable Numbers: Corrections and Critiques 3. Systems of
Neuromorphic, Digital, and Quantum Computation With Memory Circuit Elements
Memory effects are ubiquitous in nature, and the class of memory circuit elements (which includes memristive, memcapacitive, and meminductive systems) shows great potential to understand and…
Computers and Intractability: A Guide to the Theory of NP-Completeness
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
  • P. Shor
  • Computer Science
    SIAM Rev.
  • 1999
Efficient randomized algorithms are given for factoring integers and finding discrete logarithms, two problems that are generally thought to be hard on classical computers and that have been used as the basis of several proposed cryptosystems.
Memristor Networks
Top experts in computer science, mathematics, electronics, physics, and computer engineering present the foundations of memristor theory and its applications, demonstrate how to design neuromorphic network architectures based on memristor assemblies, analyse varieties of the dynamic behaviour of memristive networks, and show how to realise computing devices from memristors.
GPU Computing
The background, hardware, and programming model for GPU computing is described, the state of the art in tools and techniques are summarized, and four GPU computing successes in game physics and computational biophysics that deliver order-of-magnitude performance gains over optimized CPU applications are presented.