# The Complexity of Learning Acyclic CP-Nets

```bibtex
@inproceedings{Alanazi2016TheCO,
  title     = {The Complexity of Learning Acyclic CP-Nets},
  author    = {Eisa A. Alanazi and Malek Mouhoub and Sandra Zilles},
  booktitle = {IJCAI},
  year      = {2016}
}
```

Learning of user preferences has become a core issue in AI research. For example, recent studies investigate learning of Conditional Preference Networks (CP-nets) from partial information. To assess the optimality of learning algorithms as well as to better understand the combinatorial structure of CP-net classes, it is helpful to calculate certain learning-theoretic information complexity parameters. This paper provides theoretical justification for exact values (or in some cases bounds) of…

## 13 Citations

The Complexity of Learning Acyclic Conditional Preference Networks

- Computer Science
- 2018

This article focuses on the frequently studied case of learning from so-called swap examples, which express preferences among objects that differ in only one attribute, and presents bounds on or exact values of some well-studied information complexity parameters, namely the VC dimension, the teaching dimension, and the recursive teaching dimension for classes of acyclic CP-nets.
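A swap example, as described above, is a pair of outcomes that differ in exactly one attribute. A minimal sketch of that check (the helper `is_swap` is illustrative, not from the paper):

```python
# A swap example is a pair of outcomes over the same attributes that
# differ in exactly one attribute. Outcomes are modeled as tuples here.
def is_swap(o1, o2):
    """Return True iff outcomes o1 and o2 differ in exactly one attribute."""
    return sum(a != b for a, b in zip(o1, o2)) == 1

# Outcomes over three binary attributes:
print(is_swap((0, 1, 0), (0, 0, 0)))  # True: only attribute 2 differs
print(is_swap((0, 1, 0), (1, 0, 0)))  # False: two attributes differ
```

A swap example then records which of the two outcomes in such a pair the user prefers.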

Interactive Learning of Acyclic Conditional Preference Networks

- Computer Science, ArXiv
- 2018

This paper determines bounds on or exact values of some of the most central information complexity parameters, namely the VC dimension, the (recursive) teaching dimension, the self-directed learning complexity, and the optimal mistake bound, for classes of acyclic CP-nets.

The complexity of exact learning of acyclic conditional preference networks from swap examples

- Computer Science, Artif. Intell.
- 2020

Query-based learning of acyclic conditional preference networks from contradictory preferences

- Mathematics, EURO Journal on Decision Processes
- 2018

Conditional preference networks (CP-nets) provide a compact and intuitive graphical tool to represent the preferences of a user. However, learning such a structure is known to be a difficult problem…

Query-based learning of acyclic conditional preference networks from noisy data

- Computer Science
- 2016

This paper proposes a new, efficient, and robust query-based learning algorithm for acyclic CP-nets that accounts for incoherences in the user's preferences or in noisy data by searching in a principled way for the variables that condition the others.

Online Learning of Acyclic Conditional Preference Networks from Noisy Data

- Computer Science, 2017 IEEE International Conference on Data Mining (ICDM)
- 2017

This is the first algorithm for online learning of CP-nets in the presence of noise; it relies on information-theoretic measures defined over the induced preference rules and on the Hoeffding bound to define an asymptotically optimal decision criterion.
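The Hoeffding bound mentioned above gives a confidence radius around an empirical mean; a sketch of that standard quantity (the function name is illustrative, not taken from the cited ICDM paper):

```python
import math

# Hoeffding's inequality: for n i.i.d. samples bounded in [0, 1], the
# empirical mean is within eps of the true mean with probability at
# least 1 - delta, where eps = sqrt(ln(2/delta) / (2n)).
def hoeffding_radius(n, delta):
    """Confidence radius eps for n samples at confidence level 1 - delta."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))
```

The radius shrinks as more examples arrive, so two candidate preference rules can be separated with high confidence once their empirical scores differ by more than twice this radius, which is the general idea behind such decision criteria.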

An Evolutionary Approach for Learning Conditional Preference Networks from Inconsistent Examples

- Computer Science, ADMA
- 2017

This work presents an evolutionary method for solving the CP-net learning problem from inconsistent examples and indicates that the proposed approach is able to find good-quality CP-nets and outperforms current state-of-the-art algorithms in terms of both sample agreement and graph similarity.

Cutting Cycles of Conditional Preference Networks with Feedback Set Approach

- Computer Science, Medicine, Comput. Intell. Neurosci.
- 2018

A class of parent vertices in ring CP-nets is first defined, and corresponding algorithms are then given, based respectively on the feedback vertex set (FVS) and the feedback arc set (FAS).

Structure Learning of Conditional Preference Networks Based on Dependent Degree of Attributes From Preference Database

- Computer Science, IEEE Access
- 2018

This paper provides theoretical support for the use of a conditional independent test for learning the structure of CP-nets and proposes the dependent degree to calculate the dependency relationship among attributes.

Summarizing Conditional Preference Networks

- Computer Science
- 2019

This thesis proposes an approach to aggregate the preferences of multiple users via a single CP-net, while minimizing disagreement with individual users, and presents two algorithms that assume all the input CP-nets are separable.

## References

Showing 1–10 of 43 references

Learning conditional preference networks

- Mathematics, Computer Science, Artif. Intell.
- 2010

Learning CP-Net Preferences Online from User Queries

- Computer Science, AAAI
- 2013

This is the first efficient and resolute CP-net learning algorithm: if a preference order can be represented as a CP-net, the algorithm learns a CP-net in time n^p, where p is a bound on the number of parents a node may have.

Learning Conditional Preference Networks from Inconsistent Examples

- Computer Science, IEEE Transactions on Knowledge and Data Engineering
- 2014

This work introduces the model of learning consistent CP-nets from inconsistent examples and presents a method to solve it; the method is verified on both simulated and real data and compared with existing methods.

Learning Ordinal Preferences on Multiattribute Domains: The Case of CP-nets

- Computer Science, Mathematics, Preference Learning
- 2010

This paper focuses on the learnability issue of conditional preference networks, or CP-nets, that have recently emerged as a popular graphical language for representing ordinal preferences in a concise and intuitive manner and provides results in both passive and active learning.

Adaptive Versus Nonadaptive Attribute-Efficient Learning

- Mathematics, Computer Science, Machine Learning
- 2004

A graph-theoretic characterization of nonadaptive learning families, called r-wise bipartite connected families, is given, and it is proved that the optimal query number O(2^r + r log n) can already be achieved in O(r) stages.

Recursive teaching dimension, VC-dimension and sample compression

- Computer Science, J. Mach. Learn. Res.
- 2014

It is shown that the recursive teaching dimension, recently introduced by Zilles et al. (2008), is strongly connected to known complexity notions in machine learning, e.g., the self-directed learning complexity and the VC-dimension.

Ceteris Paribus Preference Elicitation with Predictive Guarantees

- Computer Science, IJCAI
- 2009

It is proved that the learning problem is intractable even under several simplifying assumptions, and that the proposed algorithm is a PAC-learner, so the CP-nets it induces accurately predict the user's preferences in previously unseen situations.

Structural Results About On-line Learning Models With and Without Queries

- Mathematics, Computer Science, Machine Learning
- 2004

We solve an open problem of Maass and Turán, showing that the optimal mistake-bound when learning a given concept class without membership queries is within a constant factor of the optimal number of…

Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm

- Mathematics, 28th Annual Symposium on Foundations of Computer Science (sfcs 1987)
- 1987

Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in…

Open Problem: Recursive Teaching Dimension Versus VC Dimension

- Mathematics, Computer Science, COLT
- 2015

The Recursive Teaching Dimension (RTD) of a concept class C is a complexity parameter referring to the worst-case number of labelled examples needed to learn any target concept in C from a teacher…