Parallelism in random access machines

  • Steven Fortune, Jim Wyllie
  • Published 1 May 1978
  • Computer Science
  • Proceedings of the tenth annual ACM symposium on Theory of computing
A model of computation based on random access machines operating in parallel and sharing a common memory is presented. Similar results hold for other classes. The effect of limiting the size of the common memory is also considered.
Parallel random access machines with powerful instruction sets
  • W. Savitch
  • Mathematics
    Mathematical systems theory
  • 2005
It is shown that NP is equal to the class of sets accepted by this model in nondeterministic time O(log n), that PSPACE is equal to the class of sets accepted by this model in deterministic polynomial time, and that P is equal to the class of sets accepted by a restricted version of this model in O(log n) space.
The power of parallel random access machines with augmented instruction sets
It is proved that the class of languages accepted in polynomial time by a parallel random access machine (PRAM) with both multiplication and shifts contains NEXPTIME and is contained in EXPSPACE.
On the Power of Probabilistic Choice in Synchronous Parallel Computations
It is shown that parallelism uniformly speeds up time-bounded probabilistic sequential RAM computations by nearly a quadratic factor, and that probabilistic choice can be eliminated from parallel computation by introducing nonuniformity.
  • Mak
  • Computer Science
  • 2017
This paper presents speedup theorems in which both M and M' use the same kind of storage medium, other than linear tapes, and demonstrates that parallel time is strictly more powerful than deterministic time for Turing machines.
A complexity theory for unbounded fan-in parallelism
New upper bounds on the (unbounded fan-in) circuit complexity of symmetric Boolean functions are proved and several reducibilities and equivalences among problems are given.
Division is good
  • Janos Simon
  • Computer Science
    20th Annual Symposium on Foundations of Computer Science (sfcs 1979)
  • 1979
It is shown that in certain situations parallelism and stochastic features ('distributed random choices') are provably more powerful than either parallelism or randomness alone.
Division in Idealized Unit Cost RAMS
Simultaneous WRITES of parallel random access machines do not help to compute simple arithmetic functions
The ability of the strongest parallel random access machine model WRAM, in which different processors may simultaneously try to write into the same cell of the common memory, is investigated, and a logarithmic lower time bound for WRAMs is proved.
Time-bounded random access machines
This paper introduces a formal model for random access computers and argues that the model is a good one to use in the theory of computational complexity and shows the existence of a time complexity hierarchy which is finer than any standard abstract computer model.
On the Power of Multiplication in Random Access Machines
It is proved that, counting one operation as a unit of time and considering the machines as acceptors, deterministic and nondeterministic polynomial time acceptable languages are the same, and are exactly the languages recognizable in polynomial tape by Turing machines.
Time Bounded Random Access Machines with Parallel Processing
The RAM model of Cook and Reckhow is extended to allow parallel recursive calls, and the elementary theory of such machines is developed. The uniform cost criterion is used. The results include proofs
Parallel and Nondeterministic Time Complexity Classes (Preliminary Report)
It is shown that NP is equal to the class of sets accepted by this model in nondeterministic time O(log n), and this result is generalized to arbitrary time classes.
A characterization of the power of vector machines
Random access machines (RAMs) are usually defined to have registers that hold integers, but their ability to operate bit by bit on the bit vectors that represent those integers is overlooked; a machine equipped with such bitwise operations is called a vector machine.
On parallelism in turing machines
  • D. Kozen
  • Computer Science
    17th Annual Symposium on Foundations of Computer Science (sfcs 1976)
  • 1976
A natural characterization of the polynomial time hierarchy of Stockmeyer and Meyer in terms of parallel machines is given, and a generalization of Savitch's result NONDET-L(n)-SPACE ⊆ L(n)²-SPACE is given.
Fast parallel matrix inversion algorithms
  • L. Csanky
  • Computer Science, Mathematics
    16th Annual Symposium on Foundations of Computer Science (sfcs 1975)
  • 1975
It will be shown in the sequel that the parallel arithmetic complexity of all four of these problems is upper bounded by O(log²n), and that the algorithms establishing this bound use a number of processors polynomial in n; this disproves I. Munro's conjecture.
Parallel algorithms for the transitive closure and the connected component problems
Parallel programs are presented that determine the transitive closure of a matrix using n³ processors and the connected components of an undirected graph using n² processors; in both cases the desired results are obtained in time O(log²n).
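The O(log²n) bound for transitive closure comes from repeated squaring of the boolean adjacency matrix: O(log n) squaring rounds, each a boolean matrix product that the cited algorithm distributes over n³ processors. A minimal sequential sketch of the squaring structure (the function name and representation are illustrative, not from the paper):

```python
def transitive_closure(adj):
    """Transitive closure by repeated squaring of a boolean adjacency matrix.

    Each of the ceil(log2 n) squaring rounds is one boolean matrix product;
    a PRAM evaluates that product in O(log n) time with n^3 processors,
    giving O(log^2 n) overall. This sketch runs the rounds sequentially.
    """
    n = len(adj)
    # Start from reachability in <= 1 step (include each vertex itself).
    reach = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    rounds = 0
    while (1 << rounds) < n:  # ceil(log2 n) squarings suffice
        reach = [[any(reach[i][k] and reach[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        rounds += 1
    return reach
```

After round r, `reach[i][j]` is true exactly when j is reachable from i in at most 2^r steps, so the number of rounds, not the path lengths, drives the running time.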
Parallel Solution of Recurrence Problems
  • P. Kogge
  • Mathematics, Computer Science
    IBM J. Res. Dev.
  • 1974
It is shown that if the recurrence function f has associated with it two other functions satisfying certain composition properties, then elegant and efficient parallel algorithms can be constructed that compute all N elements of the series in time proportional to ⌈log₂N⌉.
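For the common case of a first-order linear recurrence x[i] = a[i]·x[i-1] + b[i], the composition property is that affine maps compose associatively, which permits a recursive-doubling schedule of depth ⌈log₂N⌉. A sketch under that assumption (the function name and the sequential execution of the doubling rounds are illustrative):

```python
def solve_recurrence(a, b, x0):
    """Solve x[i] = a[i]*x[i-1] + b[i] by recursive doubling.

    Each step is the affine map f_i(x) = a[i]*x + b[i]; composing maps is
    associative, so ceil(log2 N) doubling rounds compute every prefix
    composition. A PRAM runs each round in parallel; this sketch runs
    the rounds sequentially.
    """
    n = len(a)
    f = list(zip(a, b))  # f[i] represents the affine map (a_i, b_i)
    d = 1
    while d < n:
        # Compose each map with the one d places earlier (apply f[i-d] first):
        # f[i] o f[i-d] = (a_i * a_{i-d}, a_i * b_{i-d} + b_i)
        f = [((f[i][0] * f[i - d][0], f[i][0] * f[i - d][1] + f[i][1])
              if i >= d else f[i]) for i in range(n)]
        d *= 2
    # f[i] is now the composition of steps 0..i; apply each to x0.
    return [ai * x0 + bi for ai, bi in f]
```

With all a[i] = 1 this degenerates to a parallel prefix sum, the special case most often associated with this doubling scheme.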