
We formulate and prove a quantum Shannon-McMillan theorem. The theorem demonstrates the significance of the von Neumann entropy for translation invariant ergodic quantum spin systems on Z-lattices: the entropy gives the logarithm of the essential number of eigenvectors of the system on large boxes. The one-dimensional case covers quantum information sources…

- Igor Bjelaković, Jean-Dominique Deuschel, Tyll Krüger, Ruedi Seiler, Rainer Siegmund-Schultze, Arleta Szkoła
- 2004
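The core statement can be sketched in symbols (notation chosen here, not taken from the listing): for a translation-invariant ergodic state $\psi$ with mean von Neumann entropy $s(\psi)$, the minimal dimension of a subspace carrying all but $\varepsilon$ of the weight of the local density operator on a box $\Lambda_n$ satisfies

$$
\lim_{n\to\infty} \frac{1}{|\Lambda_n|} \log \dim \mathcal{H}_{n,\varepsilon} = s(\psi)
\qquad \text{for every } \varepsilon \in (0,1),
$$

so $e^{|\Lambda_n|\, s(\psi)}$ is, schematically, the "essential number of eigenvectors" referred to above.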

We present a quantum extension of a version of Sanov’s theorem focussing on a hypothesis testing aspect of the theorem: There exists a sequence of typical subspaces for a given set Ψ of stationary quantum product states asymptotically separating them from another fixed stationary product state. Analogously to the classical case, the exponential separating…

- Igor Bjelaković, Jean-Dominique Deuschel, Tyll Krüger, Ruedi Seiler, Rainer Siegmund-Schultze, Arleta Szkoła
- 2008
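One plausible schematic form of the separating rate (our notation, by analogy with the classical Sanov theorem): with $P_n$ the typical projections for the set $\Psi$ and $\varphi$ the fixed reference product state,

$$
-\lim_{n\to\infty} \frac{1}{n} \log \mathrm{tr}\bigl(\varphi^{\otimes n} P_n\bigr)
= \inf_{\psi \in \Psi} S(\psi \,\|\, \varphi),
$$

where $S(\cdot\,\|\,\cdot)$ denotes the quantum relative entropy, so the worst-case (smallest) relative entropy over $\Psi$ governs the exponential separation.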

Discrete stationary classical processes as well as quantum lattice states are asymptotically confined to their respective typical support, the exponential growth rate of which is given by the (maximal ergodic) entropy. In the iid case the distinguishability of typical supports can be asymptotically specified by means of the relative entropy, according to…

In classical information theory, entropy rate and algorithmic complexity per symbol are related by a theorem of Brudno. In this paper, we prove a quantum version of this theorem, connecting the von Neumann entropy rate and two notions of quantum Kolmogorov complexity, both based on the shortest qubit descriptions of qubit strings that, run by a universal…
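In symbols, a Brudno-type identity of the kind described reads (notation ours): for an ergodic quantum source $\rho$ with von Neumann entropy rate $s$,

$$
s = \lim_{n\to\infty} \frac{1}{n}\, QC\bigl(\rho^{(n)}\bigr),
$$

with $QC$ a quantum Kolmogorov complexity of the length-$n$ restriction, i.e. the length of a shortest qubit program that reproduces it on a universal quantum machine.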

We give a self-contained, new proof of the monotonicity of the quantum relative entropy which seems to be natural from the point of view of quantum information theory. It is based on the quantum version of Stein’s lemma which provides an operational interpretation of the quantum relative entropy.

We prove the ergodic version of the quantum Stein’s lemma which was conjectured by Hiai and Petz. The result provides an operational and statistical interpretation of the quantum relative entropy as a statistical measure of distinguishability, and contains as a special case the quantum version of the Shannon-McMillan theorem for ergodic states. A version of…
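The hypothesis-testing content can be sketched in the standard i.i.d. form (notation assumed here): with $\beta_n(\varepsilon)$ the minimal type-II error probability for testing $\rho^{\otimes n}$ against $\sigma^{\otimes n}$ at type-I error at most $\varepsilon$,

$$
\lim_{n\to\infty} -\frac{1}{n} \log \beta_n(\varepsilon) = S(\rho \,\|\, \sigma)
\qquad \text{for every } \varepsilon \in (0,1),
$$

so the quantum relative entropy is exactly the optimal exponential decay rate of the error of the second kind; the ergodic version replaces the product state $\rho^{\otimes n}$ by the restrictions of an ergodic state.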

- Tyll Krüger, Guido Montúfar, Ruedi Seiler, Rainer Siegmund-Schultze
- Kybernetika
- 2013

We lift important results about universally typical sets, typically sampled sets, and empirical entropy estimation in the theory of samplings of discrete ergodic information sources from the usual one-dimensional discrete-time setting to a multidimensional lattice setting. We use techniques of packings and coverings with multidimensional windows to…
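The one-dimensional empirical entropy estimation that this paper lifts to lattice windows can be illustrated with a minimal sketch (function names ours; 1-D sliding windows only, a fair Bernoulli source as test input):

```python
import random
from collections import Counter
from math import log2

def empirical_entropy_rate(seq, k):
    """Per-symbol Shannon entropy of overlapping k-blocks: the 1-D
    sliding-window estimator whose lattice analogue uses
    multidimensional windows."""
    blocks = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    n = sum(blocks.values())
    h_k = -sum(c / n * log2(c / n) for c in blocks.values())
    return h_k / k  # block entropy normalized per symbol

rng = random.Random(1)
sample = [rng.randint(0, 1) for _ in range(100_000)]
h = empirical_entropy_rate(sample, k=5)  # close to 1 bit/symbol for a fair coin
```

For an ergodic source the estimate converges (as the sample and window grow suitably) to the entropy rate; here the fair-coin source has entropy rate 1 bit per symbol.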

- Igor Bjelaković, Tyll Krüger, Rainer Siegmund-Schultze, Arleta Szkoła
- 2003

We give an equivalent finitary reformulation of the classical Shannon-McMillan-Breiman theorem which has an immediate translation to the case of ergodic quantum lattice systems. This version of a quantum Breiman theorem can be derived from the proof of the quantum Shannon-McMillan theorem presented in [2].

In part I (math.PR/0406392) we proved for an arbitrary one-dimensional random walk with independent increments that the probability of crossing a level at a given time n is O(n^{-1/2}). In higher dimensions we call a random walk ‘polygonally recurrent’ (resp. transient) if a.s. infinitely many (resp. finitely many) of the straight lines between two consecutive…

We prove for an arbitrary one-dimensional random walk with independent increments that the probability of crossing a level at a given time n is O(n^{-1/2}). Moment or symmetry assumptions are not necessary. In removing symmetry the (sharp) inequality P(|X+Y| ≤ 1) < 2P(|X−Y| ≤ 1) for independent identically distributed X, Y is used. In part II we…
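The symmetrization inequality quoted in the abstract is easy to probe numerically; a minimal Monte Carlo sketch for one particular distribution (uniform on {−2, …, 2}, our arbitrary choice — the theorem asserts the inequality for all i.i.d. X, Y):

```python
import random

def mc_probability(event, trials=200_000, seed=0):
    """Monte Carlo estimate of P(event) over independent trials."""
    rng = random.Random(seed)
    return sum(event(rng) for _ in range(trials)) / trials

# X, Y i.i.d. uniform on {-2, ..., 2} -- an illustrative choice only.
p_sum = mc_probability(lambda rng: abs(rng.randint(-2, 2) + rng.randint(-2, 2)) <= 1)
p_diff = mc_probability(lambda rng: abs(rng.randint(-2, 2) - rng.randint(-2, 2)) <= 1)

# Check the symmetrization inequality P(|X+Y| <= 1) < 2 P(|X-Y| <= 1).
assert p_sum < 2 * p_diff
```

For this symmetric distribution X+Y and X−Y are identically distributed (both probabilities equal 13/25 exactly), so the inequality holds with a comfortable factor-of-two margin; the interesting cases in the paper are the asymmetric ones, where sharpness matters.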