On the distribution of the number of computations in any finite number of subtrees for the stack algorithm

@article{Johannesson1985OnTD,
  title={On the distribution of the number of computations in any finite number of subtrees for the stack algorithm},
  author={Rolf Johannesson and Kamil Sh. Zigangirov},
  journal={IEEE Trans. Inf. Theory},
  year={1985},
  volume={31},
  pages={100-102}
}
Multitype branching processes have been employed to determine the stack algorithm computational distribution for one subtree. These results are extended here to the distribution of the number of computations in any finite number of subtrees. Starting from the computational distribution for K-1 subsequent subtrees, a recurrent equation for the distribution for K subsequent subtrees is determined. 
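The recurrent step can be illustrated numerically. A minimal sketch, assuming (for simplicity, unlike the paper's multitype branching-process analysis) that the computation counts of subsequent subtrees are independent, so the distribution for K subtrees is the discrete convolution of the (K-1)-subtree distribution with the single-subtree one; the toy single-subtree distribution `p_one` and all names are illustrative, not taken from the paper:

```python
import numpy as np

# Toy distribution of the number of computations in ONE subtree, truncated at
# MAX_C; the Pareto-like tail mimics sequential decoding, but the values are
# purely illustrative.
MAX_C = 64
counts = np.arange(1, MAX_C + 1)
p_one = counts.astype(float) ** -2.0
p_one /= p_one.sum()

def dist_for_K_subtrees(p_single, K):
    """Distribution of the total number of computations in K subsequent
    subtrees, built recurrently from the (K-1)-subtree distribution.
    Independence between subtrees is assumed here, so each recurrent
    step reduces to a discrete convolution."""
    dist = p_single.copy()
    for _ in range(K - 1):
        dist = np.convolve(dist, p_single)  # step: K-1 subtrees -> K subtrees
    return dist

p_three = dist_for_K_subtrees(p_one, 3)
# A valid probability distribution over totals 3 .. 3*MAX_C
assert abs(p_three.sum() - 1.0) < 1e-9
```

The paper's actual recurrence accounts for the dependence between subsequent subtrees; the convolution above is only the independent special case.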


Bounds on a probability for the heavy tailed distribution and the probability of deficient decoding in sequential decoding

  • T. Hashimoto
  • Computer Science
    IEEE Transactions on Information Theory
  • 2005
TLDR
A new bound on the tail of the heavy-tailed distribution is given, and this bound is used to prove the long-standing conjecture on $P_G$, namely that $P_G \approx \mathrm{constant} \times 1/(\sigma^{\rho} N^{\rho-1})$ for a large speed factor $\sigma$ of the decoder and a large receive-buffer size $N$, whenever the coding rate $R$ and $\rho$ satisfy $E(\rho) = \rho R$ for $0 \le \rho \le 1$.

Analysis of Sequential Decoding Complexity Using the Berry-Esseen Inequality

TLDR
This study presents a novel technique for estimating the computational complexity of sequential decoding using the Berry-Esseen theorem, and finds that the theoretical upper bound for the simplified GDA almost matches the simulation results once the signal-to-noise ratio (SNR) per information bit ($\gamma_b$) is at least 8 dB.

Sequential Decoding of Convolutional Codes

TLDR
This article surveys many variants of sequential decoding in the literature, presents Algorithm A, a general sequential search algorithm, and outlines classes of convolutional codes that are particularly appropriate for sequential decoding.

New importance sampling methods for simulating sequential decoders

Two importance sampling techniques are presented for estimating the distribution of computation of sequential decoding for specific convolutional codes (not ensemble averages). Only the stack algorithm is considered.

Wiener odd and even indices on BC-Trees

TLDR
It is shown theoretically that the Wiener odd index is not more than the even index for general BC-trees, and closed formulae for the two indices are provided for the path BC-tree, star, k-extending star tree, and caterpillar BC-tree.

References


On the distribution of computation for sequential decoding using the stack algorithm

TLDR
An analytical procedure is presented for generating the computational distribution for the Zigangirov-Jelinek stack algorithm; the calculated computational performance is virtually identical to that obtained by time-consuming simulations.

The distribution of the sequential decoding computation time

  • J. Savage
  • Computer Science
    IEEE Trans. Inf. Theory
  • 1966
TLDR
This paper considers the probability distribution of the computation time per decoded digit for the Fano algorithm on the binary symmetric channel and shows that it behaves as $L^{-\alpha}$, $\alpha > 0$, in the distribution parameter $L$; that is, it is of the Pareto type.
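The Pareto-type tail behavior can be checked numerically. A minimal sketch, assuming an exact Pareto law $P(C > L) = L^{-\alpha}$ with an illustrative exponent (all parameters here are hypothetical, not from Savage's analysis); the slope of the empirical tail on a log-log scale recovers $-\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5  # illustrative Pareto exponent, not from the paper

# NumPy's pareto() draws from the Lomax distribution on [0, inf);
# shifting by 1 gives the classical Pareto law P(C > L) = L**(-alpha), L >= 1.
samples = rng.pareto(alpha, size=200_000) + 1.0

# Empirical tail exceedance probabilities on a logarithmic grid of thresholds
L = np.logspace(0.2, 1.5, 20)
tail = np.array([(samples > threshold).mean() for threshold in L])

# On a log-log scale the tail is linear, with slope approximately -alpha
slope, intercept = np.polyfit(np.log(L), np.log(tail), 1)
```

With enough samples, `slope` is close to `-alpha`, reproducing the straight-line tail that characterizes a Pareto distribution.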

A lower bound to the distribution of computation for sequential decoding

TLDR
The performance of systems using sequential decoding is limited by the computational and buffer capabilities of the decoder, not by the probability of making a decoding error.

An upper bound on moments of sequential decoding effort

  • F. Jelinek
  • Computer Science
    IEEE Trans. Inf. Theory
  • 1969
TLDR
An infinite tree code ensemble upper bound is derived on the moments of the computational effort connected with sequential decoding governed by the Fano algorithm, which agrees qualitatively with the lower bounds of Jacobs and Berlekamp.

Further results on binary convolutional codes with an optimum distance profile (Corresp.)

TLDR
It is shown how the optimum distance profile criterion can be used to limit the search for codes with a large value of $d_{\infty}$, and extensive lists of such robustly optimal codes, including rate $R = 1/2$ nonsystematic codes, are presented.

On the computation time distribution of the sequential decoding

  • IEEE Trans. Inf. Theory

Further results on binary convolutional codes with an optimum distance profile

  • IEEE Trans. Inf. Theory
  • 1972