Finite-State Complexity and the Size of Transducers

@inproceedings{Calude2010FiniteStateCA,
  title={Finite-State Complexity and the Size of Transducers},
  author={Cristian S. Calude and Kai Salomaa and Tania K. Roblot},
  booktitle={DCFS},
  year={2010}
}
Finite-state complexity is a variant of algorithmic information theory obtained by replacing Turing machines with finite transducers. We consider the state-size of transducers needed for minimal descriptions of arbitrary strings and, as our main result, we show that the state-size hierarchy with respect to a standard encoding is infinite. We also consider hierarchies yielded by more general computable encodings.
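In this model a string x is described by a pair (T, p), where T is a finite transducer and p is an input string with T(p) = x; the complexity of x is minimized over the size of an encoding of T plus |p|. A minimal sketch of the transducer side, with a representation and example transducer of my own choosing rather than the paper's standard encoding:

```python
# Illustrative sketch (not the paper's encoding): a finite transducer as a
# mapping (state, input bit) -> (next state, output string). A description
# of x is a pair (T, p) with T(p) = x; finite-state complexity minimizes
# the combined size of an encoding of T plus |p| over all such pairs.

def run_transducer(delta, start, p):
    """Run the transducer delta from state `start` on input p; return the output."""
    state, out = start, []
    for bit in p:
        state, piece = delta[(state, bit)]
        out.append(piece)
    return "".join(out)

# A 1-state transducer that doubles every input bit.
doubler = {
    (0, "0"): (0, "00"),
    (0, "1"): (0, "11"),
}

# "001100111100" is produced from the shorter input "010110",
# so (doubler, "010110") is a description of it.
print(run_transducer(doubler, 0, "010110"))  # -> 001100111100
```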

The Computation of Finite-State Complexity, by Tania K. Roblot

This thesis proposes a new variant of algorithmic information theory constructed around finite-state complexity, a computable counterpart to Kolmogorov complexity based on finite transducers rather than Turing machines, and presents a first attempt at applying finite-state complexity in a practical setting.

A linearly computable measure of string complexity (Theoretical Computer Science)

A measure of string complexity called I-complexity, computable in linear time and space, is presented; it counts the number of different substrings in a given string.
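The substring-counting idea can be made concrete with a naive sketch (quadratic time via a set of slices; the paper's point is that a measure of this kind is computable in linear time and space):

```python
def distinct_substrings(s: str) -> int:
    """Count distinct non-empty substrings of s.

    Naive O(n^2)-space version; the paper computes a related measure
    in linear time and space using suffix-tree-style machinery."""
    return len({s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)})

print(distinct_substrings("aaaa"))  # -> 4: "a", "aa", "aaa", "aaaa"
print(distinct_substrings("abab"))  # -> 7: a, b, ab, ba, aba, bab, abab
```

Highly repetitive strings have few distinct substrings, which is why counts of this kind track compressibility.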

Advanced Topics on State Complexity of Combined Operations

This thesis discusses the state complexities of individual operations on regular languages, including union, intersection, star, catenation, reversal and so on, and introduces the concept of estimation and approximation of state complexity.
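One classical construction behind such results is the product automaton for intersection, which gives the m*n upper bound on state complexity. The sketch below (representation and example DFAs are my own, not the thesis's) builds only the reachable part of the product:

```python
# Product construction for DFA intersection: states of the product are
# pairs of states, so DFAs with m and n states yield at most m*n states,
# which is the classical state-complexity upper bound for intersection.

def product_dfa(dfa1, dfa2, alphabet):
    """Intersection of two DFAs, keeping only reachable pair states.
    Each DFA is (transitions, start, finals) with transitions[(q, a)] = q'."""
    d1, s1, f1 = dfa1
    d2, s2, f2 = dfa2
    start = (s1, s2)
    delta, finals = {}, set()
    seen, todo = {start}, [start]
    while todo:
        p, q = todo.pop()
        if p in f1 and q in f2:
            finals.add((p, q))
        for a in alphabet:
            nxt = (d1[(p, a)], d2[(q, a)])
            delta[((p, q), a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return delta, start, finals, seen

# 2-state DFA: even number of 'a's ('b' is ignored).
d1 = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
# 3-state DFA: number of 'b's divisible by 3 ('a' is ignored).
d2 = {(q, 'b'): (q + 1) % 3 for q in range(3)}
d2.update({(q, 'a'): q for q in range(3)})

delta, start, finals, states = product_dfa((d1, 0, {0}), (d2, 0, {0}), "ab")
print(len(states))  # -> 6, i.e. 2 * 3: here the m*n bound is met exactly
```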

Invariance and Universality of Complexity

  • H. Jürgensen
  • Computer Science, Mathematics
    Computation, Physics and Beyond
  • 2012
Without any assumptions about encodings of functions and arguments, and without any assumptions about computability or computing models, the notion of complexity is introduced, a general invariance theorem is proved, and sufficient conditions for complexity to be computable are stated.

Searching for Compact Hierarchical Structures in DNA by means of the Smallest Grammar Problem

It is proved that the number of smallest grammars can be exponential in the size of the sequence, and the stability of the discovered structures across minimal grammar parsings is then analysed for real-life examples.

A Review of Methods for Estimating Algorithmic Complexity and Beyond: Options, Challenges, and New Directions

It is explained how different approaches to algorithmic complexity can explore the relaxation of different necessary and sufficient conditions in their exploration of numerical applicability with some of those approaches taking greater risks than others in exchange for application relevancy.

References

Showing 1-10 of 19 references

Resource-Bounded Kolmogorov Complexity Revisited

A fresh look is taken at CD complexity, where CD^t(x) is the smallest program that distinguishes x from all other strings in time t(|x|), and CND complexity, a new nondeterministic variant of CD complexity, is introduced.

Automaticity: Properties of a Measure of Descriptional Complexity

The notion of automaticity is explored, which attempts to model how “close” f is to a finite-state function.

Approximating the smallest grammar: Kolmogorov complexity in natural models

The main result is an exponential improvement of the best previously proved approximation ratio: an O(log(n/g*)) approximation algorithm is given for the smallest non-deterministic finite automaton with advice that produces a given string.

Automaticity I: Properties of a Measure of Descriptional Complexity

Let Σ and Δ be nonempty alphabets with Σ finite. Let f be a function mapping Σ* to Δ. We explore the notion of automaticity, which attempts to model how "close" f is to a finite-state function. Formally, the …
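The definition can be made concrete by brute force: the automaticity A_f(n) is the least number of states of a DFA (with outputs attached to states) that agrees with f on all strings of length at most n. A toy sketch, with conventions of my own choosing (binary alphabet, start state 0):

```python
from itertools import product

def run(delta, w):
    """Run a DFA (transition dict, start state 0) on word w; return the final state."""
    q = 0
    for c in w:
        q = delta[(q, c)]
    return q

def automaticity(f, n, max_states=4):
    """Brute-force A_f(n): the least number of states of a complete DFA over
    {0,1} (start state 0, a 0/1 output attached to each state) agreeing with
    the 0/1-valued function f on every string of length <= n.
    Exponential in the state count -- only meant to make the definition concrete."""
    words, frontier = [""], [""]
    for _ in range(n):
        frontier = [w + c for w in frontier for c in "01"]
        words += frontier
    for k in range(1, max_states + 1):
        # Enumerate all transition functions and all state-output assignments.
        for trans in product(range(k), repeat=2 * k):
            delta = {(q, c): trans[2 * q + int(c)] for q in range(k) for c in "01"}
            for outs in product((0, 1), repeat=k):
                if all(outs[run(delta, w)] == f(w) for w in words):
                    return k
    return None

parity = lambda w: w.count("1") % 2
print(automaticity(parity, 3))  # -> 2: parity is computed exactly by a 2-state DFA
```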

Grammar Compression, LZ-Encodings, and String Algorithms with Implicit Input

Grammar compression is more convenient than LZ-encoding; its size differs from that of the LZ-encoding by at most a logarithmic factor, and the constructive proof is based on a concept similar to balanced trees.
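What a grammar (straight-line program) buys over the plain string is easy to see in a toy example of my own: k doubling rules derive a string of length exponential in k.

```python
# A straight-line program (SLP): each nonterminal derives exactly one string.
# Rules map a nonterminal to a pair of symbols (nonterminals or terminals).
rules = {"X0": ("a", "b")}
for i in range(1, 5):
    rules[f"X{i}"] = (f"X{i-1}", f"X{i-1}")  # X_i derives X_{i-1} X_{i-1}

def expand(sym):
    """Expand a symbol of the SLP into the string it derives."""
    if sym not in rules:
        return sym  # terminal symbol
    left, right = rules[sym]
    return expand(left) + expand(right)

s = expand("X4")
print(len(s))  # -> 32: five rules derive a string of length 2**5
```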

𝒫𝒮-regular languages

A class of generalized regular languages, namely 𝒫𝒮-regular languages, is introduced, and some characterizations of such generalized regular languages are given.

Approximation algorithms for grammar-based compression

The approximation ratio, i.e., the maximum over all inputs of the ratio between the size of the generated grammar and that of the smallest possible grammar, is analyzed for four previously proposed grammar-based compression algorithms.
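As an illustration of what such a grammar-based compressor does, here is a minimal sketch in the style of Re-Pair (a well-known algorithm of this family, not necessarily one of the four analyzed), which repeatedly replaces the most frequent adjacent pair with a fresh nonterminal:

```python
from collections import Counter

def repair(s):
    """Re-Pair-style grammar compressor (simplified sketch): while some
    adjacent pair occurs at least twice, replace its occurrences, left to
    right, with a fresh nonterminal and record the rule."""
    seq, rules, next_id = list(s), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Expand a symbol of the produced grammar back into a string."""
    if sym in rules:
        a, b = rules[sym]
        return expand(a, rules) + expand(b, rules)
    return sym

seq, rules = repair("abababab")
# The grammar (start sequence plus rules) is smaller than the string,
# and expanding it recovers the input exactly.
print(seq, rules)
```

The size of the output grammar divided by the size of the smallest grammar for the same string is exactly the quantity whose worst case the approximation-ratio analyses bound.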

Information and Randomness: An Algorithmic Perspective

Algorithmic information theory

This article is a brief guide to the field of algorithmic information theory: its underlying philosophy and major subfields are surveyed, and applications, history, and a map of the field are presented.