Consistent Identification in the Limit of Any of the Classes k-Valued Is NP-hard

@inproceedings{Florncio2001ConsistentII,
  title={Consistent Identification in the Limit of Any of the Classes k-Valued Is NP-hard},
  author={Christophe Costa Flor{\^e}ncio},
  booktitle={LACL},
  year={2001}
}
In [Bus87], [BP90] 'discovery procedures' for CCGs were defined that accept a sequence of structures as input and yield a set of grammars. In [Kan98] it was shown that some of the classes based on these procedures are learnable (in the technical sense of [Gol67]). In [CF00] it was shown that learning some of these classes by means of a consistent learning function is NP-hard. The complexity of learning classes from one particular family, Gk-valued, was still left open. In this paper it is shown…
Consistent Identification in the Limit of Rigid Grammars from Strings Is NP-hard
TLDR
It is shown that the learning functions for these learnable classes are all NP-hard.
Fast Learning from Strings of 2-Letter Rigid Grammars
TLDR
The class of 2-letter rigid grammars is studied and it is shown that grammars in this class can be learned very efficiently, within Gold's paradigm of identification in the limit, from positive examples.
k-Valued Non-Associative Lambek Categorial Grammars are not Learnable from Strings
TLDR
It is shown that the class of rigid and k-valued NL grammars is unlearnable from strings, for each k; this result is obtained by a specific construction of a limit point in the considered class that does not use the product operator.
A Learnable Class of CCGs from Typed Examples
TLDR
A new way of learning Categorial Grammars from semantic knowledge is presented, based on the hypothesis that semantic types, in the usual sense, are general information making a distinction between facts.
A Learnable Class of Classical Categorial Grammars from Typed Examples
TLDR
The main result is the definition, for every CCG, of a new subclass of CCGs with good properties from a language-theoretic point of view.
On Limit Points for Some Variants of Rigid Lambek Grammars
TLDR
It is shown that, in contrast to k-valued classical categorial grammars, different classes of Lambek grammars are not learnable from strings following Gold's model.
Learning Recursive Automata from Positive Examples
TLDR
This theoretical paper studies how to translate finite state automata into categorial grammars and back, and shows that the generalization operators employed in both domains can be compared and that their result can always be represented by generalized automata, called "recursive automata".
When Categorial Grammars Meet Regular Grammatical Inference
TLDR
It is proved that every unidirectional categorial grammar, and thus every context-free language, can be represented by a new kind of finite-state generative model called a recursive automaton.
On Categorial Grammatical Inference and Logical Information Systems
TLDR
This work considers several classes of categorial grammars and discusses their learnability, and considers the Logical Information Systems approach, that allows for navigation, querying, updating, and analysis of heterogeneous data collections where data are given (logical) descriptors.
Proceedings of FGVienna: The 8th conference on Formal Grammar
TLDR
It is proved that the languages of link-structured lists of words associated to rigid link grammars have finite elasticity, and a learning algorithm is shown; this result leads to the learnability of rigid or k-valued link grammars from strings.

References

Learnable Classes of Categorial Grammars
TLDR
The dissertation investigates the learnability of various classes of classical categorial grammars within the Gold paradigm of identification in the limit from positive data, and proves that finite elasticity is preserved under the inverse image of a finite-valued relation, extending results of Wright and of Moriyama and Sato.
Finding patterns common to a set of strings (Extended Abstract)
TLDR
This problem is shown to be effectively solvable in the general case and to lead to correct inference in the limit of the pattern languages, and a polynomial-time algorithm is given for finding minimal one-variable pattern languages compatible with a given set of strings.
Language Identification in the Limit (E. M. Gold, Inf. Control, 1967)
Consistent Polynomial Identification in the Limit
TLDR
It turns out that consistency is the first natural condition having a narrowing effect for arbitrary update boundaries; the existence of arbitrarily hard consistently learnable sets is shown, as well as an infinite chain of harder and harder ones.
On the Complexity of Consistent Identification of Some Classes of Structure Languages
In [5,7] ‘discovery procedures’ for CCGs were defined that accept a sequence of structures as input and yield a set of grammars.
Categorial grammars determined from linguistic data by unification
TLDR
The algorithm presented here extends an earlier one restricted to rigid categorial grammars, introduced in [4] and [5], by admitting non-rigid outputs, and introduces the notion of an optimal unifier, a natural generalization of that of a most general unifier.
On the Complexity of Inductive Inference
Inductive Inference, DFAs, and Computational Complexity
TLDR
The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a number of interesting approaches in computational learning theory.
Learning and Consistency
In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis…
Systems That Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists
Systems That Learn presents a mathematical framework for the study of learning in a variety of domains. It provides the basic concepts and techniques of learning theory as well as a comprehensive…