• Corpus ID: 9467525

The Complexity of Learning Acyclic CP-Nets

Eisa A. Alanazi, Malek Mouhoub, Sandra Zilles
Learning of user preferences has become a core issue in AI research. For example, recent studies investigate learning of Conditional Preference Networks (CP-nets) from partial information. To assess the optimality of learning algorithms as well as to better understand the combinatorial structure of CP-net classes, it is helpful to calculate certain learning-theoretic information complexity parameters. This paper provides theoretical justification for exact values (or in some cases bounds) of… 
The Complexity of Learning Acyclic Conditional Preference Networks
This article focuses on the frequently studied case of learning from so-called swap examples, which express preferences among objects that differ in only one attribute, and presents bounds on or exact values of some well-studied information complexity parameters, namely the VC dimension, the teaching dimension, and the recursive teaching dimension for classes of acyclic CP-nets.
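To make the notion of a swap example concrete, the comparison can be sketched as follows (a minimal illustration over hypothetical binary attributes, not code from the paper): two outcomes that differ in exactly one attribute are ordered by that attribute's conditional preference table.

```python
# Minimal sketch of a swap comparison in an acyclic CP-net over binary
# attributes. A CPT maps the tuple of parent values to the preferred value.

# Hypothetical CP-net: A is unconditional; B's preference depends on A.
cpnet = {
    "A": {"parents": [], "cpt": {(): 1}},       # A=1 preferred over A=0
    "B": {"parents": ["A"], "cpt": {(0,): 0,    # if A=0, prefer B=0
                                    (1,): 1}},  # if A=1, prefer B=1
}

def prefers(o1, o2):
    """Return True if o1 is preferred to o2, assuming the two outcomes
    differ in exactly one attribute (a 'swap' pair)."""
    diff = [v for v in o1 if o1[v] != o2[v]]
    assert len(diff) == 1, "swap examples differ in exactly one attribute"
    var = diff[0]
    node = cpnet[var]
    parent_vals = tuple(o1[p] for p in node["parents"])  # same in o1 and o2
    return o1[var] == node["cpt"][parent_vals]

print(prefers({"A": 1, "B": 1}, {"A": 1, "B": 0}))  # True: given A=1, B=1 wins
```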
Interactive Learning of Acyclic Conditional Preference Networks
This paper determines bounds on or exact values of some of the most central information complexity parameters, namely the VC dimension, the (recursive) teaching dimension, the self-directed learning complexity, and the optimal mistake bound, for classes of acyclic CP-nets.
Query-based learning of acyclic conditional preference networks from contradictory preferences
Conditional preference networks (CP-nets) provide a compact and intuitive graphical tool to represent the preferences of a user. However, learning such a structure is known to be a difficult problem
Query-based learning of acyclic conditional preference networks from noisy data
This paper proposes a new, efficient, and robust query-based learning algorithm for acyclic CP-nets that accounts for incoherences in the user's preferences or in noisy data by searching in a principled way for the variables that condition the others.
Online Learning of Acyclic Conditional Preference Networks from Noisy Data
This is the first algorithm dealing with online learning of CP-nets in the presence of noise, relying on information-theoretic measures defined over the induced preference rules and the Hoeffding bound to define an asymptotically optimal decision criterion.
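The Hoeffding-bound decision criterion can be illustrated with a small sketch (a generic construction, not the paper's exact information-theoretic measures): a preference rule is accepted only once the empirical majority among noisy comparisons is separated from chance by the Hoeffding deviation term.

```python
import math

def hoeffding_eps(n, delta):
    """Hoeffding deviation: with probability >= 1 - delta, the empirical
    mean of n samples bounded in [0, 1] is within eps of its expectation."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def decide_rule(wins, n, delta=0.05):
    """Decide between 'x > y', 'y > x', or defer, from n noisy comparisons
    in which x won `wins` times (a sketch, not the paper's criterion)."""
    p_hat = wins / n
    eps = hoeffding_eps(n, delta)
    if p_hat - eps > 0.5:
        return "x > y"
    if p_hat + eps < 0.5:
        return "y > x"
    return "undecided"

print(decide_rule(80, 100))  # clear majority for x
print(decide_rule(52, 100))  # too close to call at this sample size
```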
An Evolutionary Approach for Learning Conditional Preference Networks from Inconsistent Examples
This work presents an evolutionary-based method for solving the CP-net learning problem from inconsistent examples and indicates that the proposed approach is able to find good-quality CP-nets and outperforms the current state-of-the-art algorithms in terms of both sample agreement and graph similarity.
Cutting Cycles of Conditional Preference Networks with Feedback Set Approach
Classes of parent vertices in ring CP-nets are first defined based on the feedback vertex set (FVS) and the feedback arc set (FAS), and corresponding algorithms for each are then given.
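The feedback-set idea can be sketched generically (this is a naive greedy heuristic for illustration, not the paper's FVS/FAS algorithms): repeatedly detect a cycle in the dependency graph and delete one of its arcs until the graph is acyclic.

```python
# Greedy sketch: delete arcs from a cyclic dependency graph until it is
# acyclic (a generic feedback-arc-set heuristic, not the paper's method).

def find_cycle(graph):
    """Return one cycle as a list of arcs, or None if the graph is a DAG."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    stack = []

    def dfs(v):
        color[v] = GRAY
        stack.append(v)
        for w in graph[v]:
            if color[w] == GRAY:            # back arc closes a cycle
                cyc = stack[stack.index(w):] + [w]
                return list(zip(cyc, cyc[1:]))
            if color[w] == WHITE:
                found = dfs(w)
                if found:
                    return found
        stack.pop()
        color[v] = BLACK
        return None

    for v in graph:
        if color[v] == WHITE:
            found = dfs(v)
            if found:
                return found
    return None

def cut_cycles(graph):
    """Delete one arc per detected cycle; return the deleted arcs."""
    removed = []
    cycle = find_cycle(graph)
    while cycle:
        u, w = cycle[0]                     # naive choice: first arc of the cycle
        graph[u].remove(w)
        removed.append((u, w))
        cycle = find_cycle(graph)
    return removed

g = {"A": ["B"], "B": ["C"], "C": ["A"]}    # a 3-cycle
print(cut_cycles(g))                        # one deleted arc suffices here
print(find_cycle(g) is None)                # the remainder is acyclic
```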
Structure Learning of Conditional Preference Networks Based on Dependent Degree of Attributes From Preference Database
This paper provides theoretical support for the use of a conditional independence test for learning the structure of CP-nets and proposes the dependent degree to calculate the dependency relationship among attributes.
Summarizing Conditional Preference Networks
This thesis proposes an approach to aggregate the preferences of multiple users via a single CP-net, while minimizing disagreement with individual users, and presents two algorithms that assume all the input CP-nets are separable.


Learning conditional preference networks
Learning CP-Net Preferences Online from User Queries
This is the first efficient and resolute CP-net learning algorithm: if a preference order can be represented as a CP-net, the algorithm learns a CP-net in time n^p, where p is a bound on the number of parents a node may have.
Learning Conditional Preference Networks from Inconsistent Examples
This work introduces the model of learning consistent CP-nets from inconsistent examples and presents a method to solve this model, which is verified on both simulated data and real data, and it is compared with existing methods.
Learning Ordinal Preferences on Multiattribute Domains: The Case of CP-nets
This paper focuses on the learnability issue of conditional preference networks, or CP-nets, that have recently emerged as a popular graphical language for representing ordinal preferences in a concise and intuitive manner and provides results in both passive and active learning.
Adaptive Versus Nonadaptive Attribute-Efficient Learning
  • P. Damaschke
  • Mathematics, Computer Science
    Machine Learning
  • 2004
A graph-theoretic characterization of nonadaptive learning families, called r-wise bipartite connected families, is given, and it is proved that the optimal query number O(2^r + r log n) can already be achieved in O(r) stages.
Recursive teaching dimension, VC-dimension and sample compression
It is shown that the recursive teaching dimension, recently introduced by Zilles et al. (2008), is strongly connected to known complexity notions in machine learning, e.g., the self-directed learning complexity and the VC-dimension.
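For a small finite concept class, the VC dimension can be computed by brute force, which makes the parameter concrete (a self-contained illustration with a toy class, unrelated to the specific classes studied in these papers):

```python
from itertools import combinations

def vc_dimension(domain, concepts):
    """Brute-force VC dimension: the largest d such that some d-subset of
    the domain is shattered, i.e. every labelling of it is realized by
    some concept in the class."""
    for d in range(len(domain), -1, -1):
        for subset in combinations(domain, d):
            labellings = {tuple(x in c for x in subset) for c in concepts}
            if len(labellings) == 2 ** d:   # all 2^d labellings realized
                return d
    return 0

# Toy class: the empty set plus all singletons over {0, 1, 2}. It shatters
# any 1-point set but no 2-point set (the labelling (True, True) is never
# realized), so its VC dimension is 1.
domain = [0, 1, 2]
concepts = [frozenset(), frozenset({0}), frozenset({1}), frozenset({2})]
print(vc_dimension(domain, concepts))  # 1
```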
Ceteris Paribus Preference Elicitation with Predictive Guarantees
It is proved that the learning problem is intractable, even under several simplifying assumptions; nevertheless, the proposed algorithm is shown to be a PAC-learner, and thus the CP-networks it induces accurately predict the user's preferences on previously unseen situations.
Structural Results About On-line Learning Models With and Without Queries
We solve an open problem of Maass and Turán, showing that the optimal mistake-bound when learning a given concept class without membership queries is within a constant factor of the optimal number of…
Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm
  • N. Littlestone
  • Mathematics
    28th Annual Symposium on Foundations of Computer Science (sfcs 1987)
  • 1987
Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in…
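Littlestone's linear-threshold algorithm (Winnow) uses multiplicative weight updates, which is what makes its mistake bound depend only logarithmically on the number of irrelevant attributes. A minimal sketch of the update rule, with a toy target disjunction chosen for illustration:

```python
# Sketch of Littlestone's Winnow algorithm for learning a monotone
# disjunction over n Boolean attributes, few of which are relevant.

def winnow(examples, n, alpha=2.0):
    """examples: list of (x, label) with x a 0/1 tuple of length n.
    Returns the weight vector after one pass over the examples."""
    theta = n / 2                            # standard threshold choice
    w = [1.0] * n
    for x, label in examples:
        predict = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if predict == 1 and label == 0:      # false positive: demote active weights
            w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
        elif predict == 0 and label == 1:    # false negative: promote active weights
            w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
    return w

# Toy target: x0 OR x2 over n = 4 attributes.
data = [((1, 0, 0, 0), 1), ((0, 1, 0, 0), 0),
        ((0, 0, 1, 1), 1), ((0, 1, 0, 1), 0)]
print(winnow(data, n=4))  # relevant weights grow, irrelevant ones shrink
```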
Open Problem: Recursive Teaching Dimension Versus VC Dimension
The Recursive Teaching Dimension (RTD) of a concept class C is a complexity parameter referring to the worst-case number of labelled examples needed to learn any target concept in C from a teacher.