Motorola's 88000 family architecture

@article{Alsup1990Motorolas8F,
  title={Motorola's 88000 family architecture},
  author={Mitch Alsup},
  journal={IEEE Micro},
  year={1990},
  month={May},
  volume={10},
  pages={48--66}
}
The initial members of the 88000 family of high-performance 32-bit microprocessors are the 88100 processor and the 88200 cache and memory management unit (CMMU). The processor manipulates integer and floating-point data and initiates instruction and data memory transactions. The CMMU minimizes the latency of main-memory requests by maintaining a cache for data transactions and a cache for memory-management translations. A typical system consists of one processor and two identical cache chips, one…

ALU design and processor branch architecture

The Microarchitecture of Pipelined and Superscalar Computers

TLDR
The Microarchitecture of Pipelined and Superscalar Computers has been specifically developed as a textbook for advanced undergraduate or graduate instruction and is an invaluable reference for microprocessor design engineers and those pursuing research in the area.

Multiprocessor Cache Coherence Based on Virtual Memory Support

TLDR
It is shown that VM-based cache coherence performs well for scientific applications that require significant aggregate memory bandwidth, and that it trades design simplicity for increased software overheads.

Issues in the Design of High Performance SIMD Architectures

TLDR
This paper develops analytical models of the potential speedup, applies those models to real program traces obtained on a MasPar MP-2 system, and considers the impact of all improvements taken together.

An evaluation of multiprocessor cache coherence based on virtual memory support

  • K. Petersen, Kai Li
  • Computer Science
    Proceedings of the 8th International Parallel Processing Symposium
  • 1994
TLDR
This paper evaluates the impact of several architectural parameters on the performance of virtual memory (VM) based cache coherence schemes for shared-memory multiprocessors, using trace-driven simulations.

A "Neural-RISC" processor and parallel architecture for neural networks

This thesis investigates a RISC microprocessor and a parallel architecture designed to optimise the computation of neural network models. The "Neural-RISC" is a primitive transputer-like…

COMA-BC: a cache only memory architecture multicomputer for non-hierarchical common bus networks

TLDR
The authors propose a multicomputer system, COMA-BC, based on a bus type interconnection network governed by hardware coherence, which provides a programming model with shared, dynamic variable types in which the data migrates to those nodes that need it.

Cache coherence for shared memory multiprocessors based on virtual memory support

  • K. Petersen, Kai Li
  • Computer Science
    Proceedings of the Seventh International Parallel Processing Symposium
  • 1993
TLDR
A software cache coherence scheme that uses virtual memory (VM) support to maintain cache coherence for shared-memory multiprocessors is presented, and two consistency models are evaluated for the VM-based approach: sequential consistency and lazy release consistency.

Control Flow: Branching and Control Hazards

TLDR
This chapter discusses a number of measures for dealing with branch latency, which is arguably the hardest problem in the design of high-performance instruction pipelines.

Dynamically reconfigurable architecture for a class of real-time applications

TLDR
The proposed methodology incorporates into the architectural design the notion of resource sharing as well as techniques for satisfying timing requirements, based upon a new computing system architecture called the Dynamically Reconfigurable Architecture (DRA), which is suitable for the target class of real-time applications.

References


Using cache memory to reduce processor-memory traffic

TLDR
It is demonstrated that a cache exploiting primarily temporal locality (look-behind) can indeed greatly reduce traffic to memory, and an elegant solution to the cache coherency problem is introduced.

Organization and VLSI implementation of MIPS

TLDR
A low-level, streamlined instruction set coupled with a fast pipeline achieves an instruction rate of two million instructions per second and facilitates both processor control and interrupt handling in the pipeline.

A Performance Analysis of MC68020-based Systems

TLDR
A method for estimating the performance of the MC68020, a 32-bit microprocessor, and computer systems based on the 68020 and a model of bus behavior that includes locality, types of accesses, and DMA activity is described.

Cache Memories

Specific aspects of cache memories that are investigated include the cache fetch algorithm (demand versus prefetch), the placement and replacement algorithms, and line size…

The case for the reduced instruction set computer

TLDR
It is argued that the next generation of VLSI computers may be more effectively implemented as RISCs than CISCs, and that added instruction set complexity may in fact do more harm than good.

Reduced instruction set computer architectures for VLSI

TLDR
This dissertation shows that the recent trend in computer architecture towards instruction sets of increasing complexity leads to inefficient use of scarce resources, and investigates the alternative of Reduced Instruction Set Computer (RISC) architectures, which allow effective use of on-chip transistors in functional units that provide fast access to frequently used operands and instructions.

An overview of the MIPS-X-MP project

TLDR
This report surveys the four key components of the MIPS-X-MP project: high performance VLSI processor architecture and design, multiprocessor architectural studies, multiprocessor programming systems, and optimizing compiler technology.

The effect of instruction set complexity on program size and memory performance

TLDR
While the miss ratio is affected by object program size, it appears that this can be corrected by simply increasing the size of the cache; measurements of bus traffic show that even with large caches, machines with simple instruction sets can expect substantially more main-memory reads than machines with dense object programs.

Available instruction-level parallelism for superscalar and superpipelined machines

TLDR
A parameterizable code reorganization and simulation system was developed and used to measure instruction-level parallelism, and the average degree of superpipelining metric is introduced; measurements suggest this metric is already high for many machines.

Implications of structured programming for machine architecture

TLDR
A highly compact instruction encoding scheme is presented, which can reduce program size by a factor of 3.