Von Neumann Computers
Rudolf Eigenmann and David J. Lilja

The sections in this article are: 1. Historical Perspectives; 2. Organization and Operation of the Von Neumann Architecture; 3. Memory-Access Bottleneck; 4. Alternatives to the Von Neumann Architecture; 5. Current Applications of Von Neumann Computers; 6. Conclusions.

A Survey of Different Approaches for Overcoming the Processor-Memory Bottleneck
A brief review of various memory-centric systems that merge memory with, or place it near, the processing elements, together with a deep analysis of several well-known memory-centric systems.
Performance Evaluation of RISC-Based Memory-Centric Processor Architecture
This paper proposes a memory-centric approach of parallel processing in a distributed system, which includes several RISC-based processor cores that integrate on-chip memory and provide direct access to it, without using registers and cache memory.
Design of Processor in Memory with RISC-modified Memory-Centric Architecture
A novel memory-centric approach of computing in a RISC-modified processor core that includes on-chip memory, which can be directly accessed, without the use of general-purpose registers (GPRs) and cache memory, is proposed.
Timing Analysis of General-Purpose Graphics Processing Units for Real-Time Systems: Models and Analyses
This work addresses GPU timing analysis from probabilistic and measurement-based perspectives, developing both theoretical and practical approaches that can provide exact values and tight upper bounds, marginally optimistic lower bounds, or probabilistic upper bounds on the worst-case temporal behavior of GPU processing.
The RASP (Random Access Stored Program) abstract machine emulator implemented as a plugin for emuStudio, an extendable platform for computer architecture emulation.
Explaining simulated phenomena: a defense of the epistemic power of computer simulations
This work defends the epistemic power of computer simulations by showing how they explain simulated phenomena, elaborating on two central questions: the first regarding the process of explaining a simulated phenomenon by using a computer simulation, and the second concerning the understanding that such an explanation yields.
From MTJ Device to Hybrid CMOS/MTJ Circuits: A Review
The article concludes with the challenges and future prospects of hybrid CMOS/MTJ circuits, which will motivate people in academia to cultivate research in this domain and industry to realize the prototype for a wide range of potential applications.
Use of FPGA-Based Accelerators in Database Applications
The performance of software applications can be measured through various system-level characteristics. System performance is determined by two main factors: data computation and data movement capacity.
FPGA Implementation of RISC-based Memory-centric Processor Architecture
  • D. Efnusheva
  • Computer Science
    International Journal of Advanced Computer Science and Applications
  • 2019
A RISC-based memory-centric processor architecture is proposed that integrates the memory into the same chip die, and thus provides direct access to the on-chip memory, without the use of general-purpose registers (GPRs) and cache memory.
Investigation Performance of Strassen Matrix Multiplication Algorithm on Distributed Systems
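The entry above names Strassen's matrix-multiplication algorithm, which replaces the eight multiplications of the naive 2x2 block product with seven. A minimal sketch of the 2x2 base case (an illustration of the published scheme, not code from the cited paper):

```c
/* Strassen's 2x2 scheme: seven multiplications instead of eight.
   Shown here on scalar entries; the full algorithm applies the
   same identities recursively to matrix blocks. */
void strassen2x2(const double A[2][2], const double B[2][2], double C[2][2]) {
    double m1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    double m2 = (A[1][0] + A[1][1]) * B[0][0];
    double m3 = A[0][0] * (B[0][1] - B[1][1]);
    double m4 = A[1][1] * (B[1][0] - B[0][0]);
    double m5 = (A[0][0] + A[0][1]) * B[1][1];
    double m6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    double m7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
    C[0][0] = m1 + m4 - m5 + m7;
    C[0][1] = m3 + m5;
    C[1][0] = m2 + m4;
    C[1][1] = m1 - m2 + m3 + m6;
}
```

The recursion reduces the asymptotic cost from O(n^3) to roughly O(n^2.81), which is what makes the algorithm interesting to distribute across processors.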


John von Neumann's Contributions to Computing and Computer Science
  • W. Aspray
  • Computer Science
    Annals of the History of Computing
  • 1989
In this essay, Aspray provides a survey of von Neumann's many important contributions to computer architecture, hardware, design and construction, programming, numerical analysis, scientific computation, and the theory of computing.
John von Neumann and the origins of modern computing
John von Neumann, in his 54 years of life, transformed the face and character of many pure/applied mathematical subject areas. And in particular he participated seminally in the creation of the
Advanced computer architecture - parallelism, scalability, programmability
This book deals with advanced computer architecture and parallel programming techniques and is suitable for use as a textbook in a one-semester graduate or senior course, offered by Computer Science, Computer Engineering, Electrical Engineering, or Industrial Engineering programs.
The origins of computer programming
  • B. Randell
  • Economics
    IEEE Annals of the History of Computing
  • 1994
This article discusses early automatic devices, Babbage's contributions set against a background of the technology of his day, the contributions of some of his direct successors, and the genesis of the stored-program idea.
Computer Architecture: A Designer's Text Based on a Generic RISC, McGraw-Hill Computer Science Ser.
Definition of a machine, in which the instruction set is specified: philosophy first, and then the critical details of the instruction set, its formats, and the hardware and software implications.
When Caches Aren't Enough: Data Prefetching Techniques
The authors review three popular prefetching techniques: software-initiated prefetching, sequential hardware-initiated prefetching, and prefetching via reference prediction tables.
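The software-initiated technique reviewed above can be sketched with a compiler prefetch intrinsic. This is a minimal illustration assuming GCC/Clang's `__builtin_prefetch`; the prefetch distance is an assumed tuning value, not a figure from the paper:

```c
#include <stddef.h>

/* PREFETCH_DISTANCE is a hypothetical tuning parameter: how many
   elements ahead of the current access to request from memory. */
#define PREFETCH_DISTANCE 16

double sum_with_prefetch(const double *a, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        /* Hint the hardware to fetch a future element into cache:
           second arg 0 = read access, third arg 1 = low temporal locality. */
        if (i + PREFETCH_DISTANCE < n)
            __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 1);
        sum += a[i];
    }
    return sum;
}
```

The idea is to overlap memory latency with computation; in practice the distance must be tuned to the memory latency and loop body, and compilers or hardware prefetchers often handle simple sequential patterns like this one on their own.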
Cache Memories
Specific aspects of cache memory design that are investigated include the cache fetch algorithm (demand versus prefetch), the placement and replacement algorithms, and line size.
Cache coherence in large-scale shared-memory multiprocessors: issues and comparisons
This paper surveys current cache coherence mechanisms, identifies several issues critical to their design, and presents hybrid strategies that can enhance the performance of the multiprocessor memory system by combining several different coherence mechanisms into a single system.
A survey of cache coherence schemes for multiprocessors
Schemes for cache coherence that exhibit various degrees of hardware complexity, ranging from protocols that maintain coherence in hardware to software policies that prevent the existence of copies of shared, writable data.
The social limits of speed: development and use of supercomputers
The development of supercomputers over the past three decades is described in conjunction with the social relations surrounding the development and use of these machines.