Cache-based Computer Systems

@article{Kaplan1973CachebasedCS,
  title={Cache-based Computer Systems},
  author={Kenneth R. Kaplan and Robert O. Winder},
  journal={Computer},
  year={1973},
  volume={6},
  pages={30-36}
}
A cache-based computer system employs a fast, small memory, the "cache", interposed between the usual processor and main memory. At any given time the cache contains, as much as possible, the instructions and data the processor needs; as new information is needed it is brought from main memory into the cache, displacing old information. The processor thus tends to operate with a memory of cache speed but with main-memory cost per bit. This configuration has analogies with other systems employing memory…
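To make the displacement mechanism concrete, a minimal sketch follows (not from the paper; the block size, capacity, LRU policy, and toy trace are illustrative assumptions): a small fully associative buffer serves repeat references quickly and evicts the least recently used block when a new one must be brought in from main memory.

    from collections import OrderedDict

    class TinyCache:
        """Toy fully associative cache with LRU replacement."""
        def __init__(self, capacity_blocks=8, block_size=4):
            self.capacity = capacity_blocks   # blocks the cache can hold
            self.block_size = block_size      # addresses per block
            self.blocks = OrderedDict()       # resident blocks, kept in LRU order

        def access(self, address):
            """Return True on a cache hit, False on a miss served by main memory."""
            block = address // self.block_size
            if block in self.blocks:
                self.blocks.move_to_end(block)    # refresh LRU position
                return True
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)   # displace the least recently used block
            self.blocks[block] = True             # bring the block in from main memory
            return False

    cache = TinyCache()
    trace = [0, 1, 2, 3, 0, 1, 64, 65, 0, 1]      # made-up address reference trace
    hits = sum(cache.access(a) for a in trace)
    print(f"hit ratio: {hits / len(trace):.2f}")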
Cache memory performance in a UNIX environment
TLDR
The intent is to credibly quantify the performance implications of cache parameter selection, in a manner that emphasizes implementation tradeoffs, using address reference traces obtained from typical multitasking UNIX workloads.
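A hedged sketch of what such trace-driven exploration looks like: replay an address trace against several cache configurations and compare miss ratios. The direct-mapped organization, the parameter grid, and the synthetic trace below are illustrative assumptions; the study itself used address reference traces from multitasking UNIX workloads.

    def miss_ratio(trace, num_lines, line_size):
        """Direct-mapped cache model: one tag per line, indexed by block number."""
        tags = [None] * num_lines
        misses = 0
        for addr in trace:
            block = addr // line_size
            index = block % num_lines
            if tags[index] != block:
                misses += 1
                tags[index] = block           # fill the line from main memory
        return misses / len(trace)

    trace = [i % 512 for i in range(4096)]    # synthetic stand-in for a real trace
    for num_lines in (16, 64, 256):
        for line_size in (4, 16):
            r = miss_ratio(trace, num_lines, line_size)
            print(f"{num_lines:4d} lines x {line_size:2d} words: miss ratio {r:.3f}")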
The Memory System of a High-Performance Personal Computer
The memory system of the Dorado, a compact high-performance personal computer, has very high I/O bandwidth, a large paged virtual memory, a cache, and heavily pipelined control; this paper discusses…
Using cache memory to reduce processor-memory traffic
TLDR
It is demonstrated that a cache exploiting primarily temporal locality (look-behind) can indeed greatly reduce traffic to memory, and an elegant solution to the cache coherency problem is introduced.
Cache system design in the tightly coupled multiprocessor system
TLDR
System requirements in the multiprocessor environment, as well as the cost-performance trade-offs of the cache system design, are given in detail, and the possibility of sharing the cache system hardware with other multiprocessing facilities (such as dynamic address translation, storage protection, locks, serialization, and the system clocks) is discussed.
Cache memory systems for multiprocessor architecture
TLDR
By appropriate cache system design, adequate memory system speed can be achieved to keep the processors busy, and smaller cache memories are required for dedicated processors than for standard processors.
Lockup-free instruction fetch/prefetch cache organization
TLDR
A cache organization is presented that essentially eliminates a penalty on subsequent cache references following a cache miss and has been incorporated in a cache/memory interface subsystem design, and the design has been implemented and prototyped.
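A rough sketch, with assumptions mine rather than the paper's design, of the bookkeeping such a lockup-free organization needs: each outstanding miss is recorded in a miss-status register so that later references can hit, or merge with a fetch already in flight, instead of stalling until the first miss is filled.

    class LockupFreeCacheSketch:
        def __init__(self, capacity_blocks=8, mshr_entries=4):
            self.resident = set()        # blocks currently in the cache
            self.capacity = capacity_blocks
            self.mshrs = {}              # in-flight misses: block -> waiting request count
            self.mshr_entries = mshr_entries

        def access(self, block):
            if block in self.resident:
                return "hit"
            if block in self.mshrs:
                self.mshrs[block] += 1   # merge with the fetch already in flight
                return "secondary miss, no stall"
            if len(self.mshrs) >= self.mshr_entries:
                return "stall: all miss-status registers busy"
            self.mshrs[block] = 1        # primary miss: record it and start a fetch
            return "primary miss, fetch started"

        def fill(self, block):
            """Main memory returns the block: install it and retire its miss entry."""
            if len(self.resident) >= self.capacity:
                self.resident.pop()      # arbitrary victim; replacement policy elided
            self.resident.add(block)
            self.mshrs.pop(block, None)

    c = LockupFreeCacheSketch()
    print(c.access(7))   # primary miss, fetch started
    print(c.access(7))   # secondary miss, no stall
    c.fill(7)
    print(c.access(7))   # hit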
Stack-Based Single-Pass Cache Simulation
TLDR
This chapter and the following chapter address the problem of simulating cache-based memory systems, which optimally requires measurement of the performance of a large number of cache designs.
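A minimal sketch of the single-pass idea (function names and the toy trace are illustrative; an LRU, fully associative organization is assumed): one traversal of the trace records each reference's LRU stack distance, from which the hit ratio of any cache size can then be read off without re-simulating.

    def stack_distances(trace):
        """One pass over the trace; returns the LRU stack distance of each reference."""
        stack = []                        # most recently used block at the front
        distances = []
        for block in trace:
            if block in stack:
                d = stack.index(block)    # depth in the LRU stack
                stack.pop(d)
            else:
                d = float("inf")          # first reference: misses at any size
            distances.append(d)
            stack.insert(0, block)        # block becomes most recently used
        return distances

    def hit_ratio(distances, cache_blocks):
        """Hit ratio of a fully associative LRU cache holding cache_blocks blocks."""
        return sum(1 for d in distances if d < cache_blocks) / len(distances)

    dists = stack_distances([0, 1, 2, 0, 3, 0, 1, 4, 2, 0])
    for size in (1, 2, 4, 8):
        print(f"{size} blocks: hit ratio {hit_ratio(dists, size):.2f}")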
An efficient flexible buffered memory system
TLDR
A flexible, low-cost multiclass memory system has been evaluated and constructed to accommodate memory sizes from 128 000 to 4 000 000 ten-bit bytes; an efficient directory and update list are achieved with a high-speed segmented memory using memory cells with half the access time of the buffer memory.
Effectiveness of Private Caches in Multiprocessor Systems with Parallel-Pipelined Memories
TLDR
An approximate model is developed to estimate the processor utilization and the speed-up improvement provided by the caches, and it assumes a two-dimensional organization, previously studied under random and word access.
...

References

Considerations in block-oriented systems design
TLDR
The feasibility of transmitting blocks of words between memory and CPU is explored in a simulation model driven by customer-based IBM 7000 series data, and block transfer is seen to be an efficient memory access method which can provide high performance, superior to single-word access.
Structural Aspects of the System/360 Model 85 II: The Cache
The cache, a high-speed buffer establishing a storage hierarchy in the Model 85, is discussed in depth in this part, since it represents the basic organizational departure from other System/360…
Evaluation of multilevel memories
TLDR
Stack processing is described as a replacement for simulation that obtains hit-ratio data 1000 times faster than before, and an example is given to illustrate how to select between two competing technologies, how to design the best hierarchy, and how to determine the information flow which optimizes the total cost-performance of the system.
Slave Memories and Dynamic Storage Allocation
  • M. Wilkes, IEEE Trans. Electron. Comput., 1965
TLDR
The use is discussed of a fast core memory of, say, 32000 words as a slave to a slower core memory in such a way that in practical cases the effective access time is nearer that of the fast memory than that of the slow memory.
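A back-of-the-envelope sketch of why that holds (the timings below are hypothetical, not Wilkes's figures), using the simple weighted-average model t_eff = h * t_fast + (1 - h) * t_slow: with a high hit ratio h, the effective access time lands close to the fast memory's.

    t_fast, t_slow = 0.1, 1.0            # access times in microseconds (hypothetical)
    for h in (0.80, 0.95, 0.99):         # assumed hit ratios in the slave memory
        t_eff = h * t_fast + (1 - h) * t_slow
        print(f"hit ratio {h:.2f}: effective access time {t_eff:.3f} us")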
Study of "Look-Aside" Memory
  • F. Lee, IEEE Transactions on Computers, 1969
A small, but fast, associative memory can be used in a "look-aside" manner to improve the overall memory performance of a computer. For a 128-cell 100-ns associative memory working with a 1-µs main…
A Data Base For Computer Performance Evaluation
TLDR
An RCA Labs team project begun in 1968 is described, with the general goal of predicting the performance of new system architectures being considered within RCA for future computers; the cache-system, or slave memory, idea was the principal subject.