On the capacity of computer memory with defects

@article{Heegard1983OnTC,
  title={On the capacity of computer memory with defects},
  author={Chris Heegard and Abbas El Gamal},
  journal={IEEE Trans. Inf. Theory},
  year={1983},
  volume={29},
  pages={731--739}
}
A computer memory with defects is modeled as a discrete memoryless channel with states that are statistically determined. Arimoto-Blahut type algorithms are used to compute the storage capacity.
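The Arimoto-Blahut iteration mentioned in the abstract alternates between an output distribution and a reweighted input distribution until the mutual information stops increasing. A minimal sketch for a plain discrete memoryless channel (not the defect-channel variant studied in the paper; the function name and tolerances are illustrative):

```python
import numpy as np

def blahut_arimoto(W, tol=1e-12, max_iter=10000):
    """Capacity (in bits) of a DMC with transition matrix W[x, y] = P(y|x)."""
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)            # input distribution, start uniform
    for _ in range(max_iter):
        q = p @ W                            # induced output distribution
        # D[x] = sum_y W[x,y] * log(W[x,y] / q[y]), with the 0 log 0 = 0 convention
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(W > 0, W / q, 1.0)
        D = np.sum(W * np.log(ratio), axis=1)
        p_new = p * np.exp(D)                # reweight inputs toward informative letters
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:  # fixed point reached
            p = p_new
            break
        p = p_new
    return np.sum(p * D) / np.log(2), p      # capacity in bits, optimal input
```

For a binary symmetric channel with crossover 0.1 this converges to 1 - H(0.1) ≈ 0.531 bits, matching the closed form.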


Capacity and coding for memories with real-time noisy defect information at encoder and decoder
TLDR
The paper investigates the problem of information storage in a defective medium where real-time noisy information on the defects is available at both the encoder and the decoder, and shows that this problem can be transformed into Shannon's channel-with-side-information problem.
Capacity and coding for memories with real-time noisy defect information at both sides
  • M. Salehi
  • Computer Science
    [1990] Proceedings. First International Symposium on Uncertainty Modeling and Analysis
  • 1990
TLDR
The paper investigates the problem of information storage in defective media where real-time noisy information about the defects is available at both the encoder and the decoder; it is shown that in some cases of interest the capacity can be described without employing Shannon strategies.
Explicit capacity achieving codes for defective memories
TLDR
It is shown how state-of-the-art capacity-achieving codes, in combination with coset coding and an additional error-correcting code, can be used to asymptotically achieve the capacity of the binary defective memory.
On coding for 'stuck-at' defects
TLDR
Additive linear codes for use on the defect channel (a model for computer memories with stuck-at defects) are studied, and a reasonably practical convolutional coding scheme is described and simulated.
Partitioned linear block codes for computer memory with 'stuck-at' defects
  • C. Heegard
  • Computer Science
    IEEE Trans. Inf. Theory
  • 1983
TLDR
It is shown that partitioned linear block codes achieve the Shannon capacity for a computer memory with symmetric defects and errors.
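The additive-code masking idea behind these stuck-at papers can be sketched in a few lines: the encoder adds a coset vector, chosen from a small linear code, so that the stored word agrees with every stuck cell, and the decoder strips the coset without needing to know the defect locations. A toy Kuznetsov-Tsybakov-style illustration (the matrix `A`, the block sizes, and the brute-force search are illustrative choices, not the paper's construction):

```python
import itertools
import numpy as np

# Toy masking of stuck-at cells: n = K + R bit word, x = [m + d*A, d] (mod 2),
# where the redundancy vector d is chosen so x matches every stuck cell.
K, R = 4, 3
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])  # R x K part of the generator G = [A | I_R]

def encode(m, stuck):
    """m: K message bits; stuck: dict position -> forced bit. Returns stored word."""
    for d in itertools.product([0, 1], repeat=R):
        d = np.array(d)
        x = np.concatenate([(m + d @ A) % 2, d])
        if all(x[i] == b for i, b in stuck.items()):
            return x                      # word agrees with every stuck cell
    raise ValueError("cannot mask this defect pattern")

def decode(x):
    d = x[K:]                             # coset index is stored in the clear
    return (x[:K] + d @ A) % 2            # strip the coset to recover m
```

With R redundancy bits, up to roughly R stuck cells can typically be masked; the decoder never needs the defect locations, which is the point of the scheme.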
Coding with Side Information for Radiation-Tolerant Memory Devices
TLDR
Simulation results show that while coding with complete side information at the encoder offers the most performance gain compared to coding without side information, coding with partial side information can close the gap between the optimal and the current approach without incurring much additional overhead.
On the capacity of sticky storage devices
  • H. Witsenhausen
  • Computer Science
    AT&T Bell Laboratories Technical Journal
  • 1984
TLDR
The problem of finding the maximum error-free, long-term average capacity per cell and cycle for binary cells with either unilateral or symmetric stickiness is solved.
Coding for memory with stuck-at defects
  • Yongjune Kim, B. Kumar
  • Computer Science
    2013 IEEE International Conference on Communications (ICC)
  • 2013
TLDR
An encoding scheme for partitioned linear block codes (PLBC) which mask the stuck-at defects in memories is proposed and an upper bound and the estimate of the probability that masking fails are derived.
Scrubbing with partial side information for radiation-tolerant memory
TLDR
Alternative coding schemes for scrubbing are investigated, where the channel model depends on the cell states, defective or not, and the encoder uses channel state information (CSI) or side information.
Information representation and coding for flash memories
  • Anxiao Jiang, Jehoshua Bruck
  • Computer Science
    2009 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing
  • 2009
TLDR
A focus is placed on rewriting codes and rank modulation in flash memories, with a view to addressing many aspects of a successful storage system.
...

References

SHOWING 1-10 OF 20 REFERENCES
An error correcting scheme for defective memory
TLDR
A scheme for storing information in a memory system with defective memory cells using "additive" codes was proposed by Kuznetsov and Tsybakov; considerably better bounds on the information rate are presented.
An algorithm for computing the capacity of arbitrary discrete memoryless channels
  • S. Arimoto
  • Computer Science
    IEEE Trans. Inf. Theory
  • 1972
TLDR
A systematic and iterative method of computing the capacity of arbitrary discrete memoryless channels is presented and a few inequalities that give upper and lower bounds on the capacity are derived.
Computation of channel capacity and rate-distortion functions
  • R. Blahut
  • Computer Science
    IEEE Trans. Inf. Theory
  • 1972
TLDR
A simple algorithm for computing channel capacity is suggested that consists of a mapping from the set of channel input probability vectors into itself such that the sequence of probability vectors generated by successive applications of the mapping converges to the vector that achieves the capacity of the given channel.
Noiseless coding of correlated information sources
TLDR
The minimum number of bits per character R_X and R_Y needed to encode these sequences so that they can be faithfully reproduced under a variety of assumptions regarding the encoders and decoders is determined.
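The rates R_X and R_Y in this summary are the Slepian-Wolf bounds: R_X >= H(X|Y), R_Y >= H(Y|X), and R_X + R_Y >= H(X,Y). A small worked example for a toy joint source (the joint distribution is an illustrative choice, not from the paper):

```python
import numpy as np

# Joint distribution P(X=i, Y=j) for a correlated binary source pair.
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])

def H(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

H_XY = H(P.ravel())          # joint entropy H(X,Y)
H_X = H(P.sum(axis=1))       # marginal entropy H(X)
H_Y = H(P.sum(axis=0))       # marginal entropy H(Y)
H_X_given_Y = H_XY - H_Y     # minimum rate for X when Y is decoded too
H_Y_given_X = H_XY - H_X
# Achievable iff R_X >= H(X|Y), R_Y >= H(Y|X), and R_X + R_Y >= H(X,Y).
```

Here each marginal is uniform (H(X) = H(Y) = 1 bit), yet the correlation lets the pair be jointly compressed to H(X,Y) ≈ 1.72 bits instead of 2.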
A coding theorem for the discrete memoryless broadcast channel
  • K. Marton
  • Computer Science
    IEEE Trans. Inf. Theory
  • 1979
TLDR
A coding theorem for the discrete memoryless broadcast channel is proved for the case where no common message is to be transmitted, and the result is tight for broadcast channels having one deterministic component.
On source coding with side information via a multiple-access channel and related problems in multi-user information theory
TLDR
A coding theorem is introduced and established for another type of source-channel matching problem, i.e., a system of source coding with side information via a MAC, which can be regarded as an extension of the Ahlswede-Körner-Wyner type noiseless coding system.
Coding Theorems of Information Theory
  • J. Wolfowitz
  • Computer Science
    Ergebnisse der Mathematik und Ihrer Grenzgebiete
  • 1961
TLDR
This monograph treats the discrete memoryless channel, channels with additive Gaussian noise, and the corresponding coding theorems, including the sphere-packing view in which message sequences lie on or near the boundary of a sphere.
Principles of digital communication and coding
  • V. Chan
  • Mathematics
    Proceedings of the IEEE
  • 1981
TLDR
The author has achieved to a considerable degree his stated goals of making the book interesting to practicing engineers, useful as a textbook for graduate students, and a starting point for further investigation by researchers.
A proof of the data compression theorem of Slepian and Wolf for ergodic sources (Corresp.)
  • T. Cover
  • Computer Science, Mathematics
    IEEE Trans. Inf. Theory
  • 1975
TLDR
It is established that the Slepian-Wolf theorem is true without change for arbitrary ergodic processes {(X_i, Y_i)}, i = 1, 2, ..., and countably infinite alphabets.
...