
Most logic-based machine learning algorithms rely on an Occamist bias where textual complexity of hypotheses is minimised. Within Inductive Logic Programming (ILP), this approach fails to distinguish between the efficiencies of hypothesised programs, such as quick sort (O(n log n)) and bubble sort (O(n²)). This paper addresses this issue by considering… (More)

Meta-Interpretive Learning (MIL) is an ILP technique which uses higher-order metarules to support predicate invention and learning of recursive definitions. In MIL the selection of metarules is analogous to the choice of refinement operators in a refinement graph search. The metarules determine the structure of permissible rules which in turn defines the… (More)
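A minimal sketch of the idea behind metarules, not the paper's implementation: the well-known "chain" metarule P(A,B) :- Q(A,C), R(C,B) fixes the structure of permissible rules, and learning reduces to finding predicate substitutions for Q and R that cover the examples. The predicates and facts below are illustrative assumptions.

```python
from itertools import product

# Background knowledge: binary predicates as sets of (arg1, arg2) facts.
background = {
    "mother": {("ann", "amy"), ("amy", "bob")},
    "father": {("steve", "amy")},
}
positive = ("ann", "bob")  # target example: grandparent(ann, bob)

def chain_covers(q, r, a, b):
    """Does the chain instance P(A,B) :- q(A,C), r(C,B) prove (a, b)?"""
    return any(x == a and (c, b) in background[r]
               for (x, c) in background[q])

# Search over substitutions for the metarule's predicate variables.
hypotheses = [(q, r) for q, r in product(background, repeat=2)
              if chain_covers(q, r, *positive)]
print(hypotheses)  # each pair is a rule grandparent(A,B) :- Q(A,C), R(C,B)
```

In full MIL systems the search also considers other metarules and can invent new predicates; this sketch only shows how a single metarule constrains the hypothesis space.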

Data Transformation is an important part of data curation, which involves the maintenance of research data on a long-term basis with the aim of allowing its re-use. This activity, widespread in commercial and academic data analytics projects, is labour intensive, involving the manual construction and debugging of large numbers of small, special purpose data… (More)

Many tasks in AI require the design of complex programs and representations, whether for programming robots, designing game-playing programs, or conducting textual or visual transformations. This paper explores a novel inductive logic programming approach to learn such programs from examples. To reduce the complexity of the learned programs, and thus the… (More)

Inductive programming approaches typically rely on an Occamist bias to select hypotheses with minimal textual complexity. This approach, however, fails to distinguish between the efficiencies of hypothesised programs, such as merge sort (O(n log n)) and bubble sort (O(n²)). We address this issue by introducing techniques to learn logic programs with… (More)
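A hedged illustration of the point above, not the paper's method: two sorting programs of comparable textual size can differ sharply in resource complexity. Counting comparisons makes the gap concrete.

```python
def bubble_sort(xs):
    """Bubble sort with a comparison counter: O(n^2) comparisons."""
    xs, comps = list(xs), 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comps += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comps

def merge_sort(xs):
    """Merge sort with a comparison counter: O(n log n) comparisons."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = merge_sort(xs[:mid])
    right, cr = merge_sort(xs[mid:])
    merged, comps, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comps

data = list(range(256, 0, -1))  # reverse-sorted: worst case for bubble sort
_, bubble_comps = bubble_sort(data)
_, merge_comps = merge_sort(data)
print(bubble_comps, merge_comps)  # bubble sort needs far more comparisons
```

A purely textual Occamist bias scores both programs similarly, which is why the abstract argues for biases sensitive to resource complexity.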

Most logic-based machine learning algorithms rely on an Occamist bias where textual simplicity of hypotheses is optimised. This approach, however, fails to distinguish between the efficiencies of hypothesised programs, such as quick sort (O(n log n)) and bubble sort (O(n²)). We address this issue by considering techniques to minimise both the resource… (More)

The challenge today, given the increasing volume and complexity of information, lies in the connections we make with that information, its value, and how it impacts us. This paper interweaves three topics as they relate to communicating scientific knowledge and the use of the arts to heighten our ability to connect with the information, so as to transition from… (More)

Formal verification is increasingly used in industry. A popular technique is interactive theorem proving, used for instance by Intel in HOL Light. The ability to learn and reapply proof strategies from a small set of proofs would significantly increase the productivity of these systems, and make them more cost-effective to use. Previous attempts have had… (More)

Fiction authors rarely provide detailed descriptions of scenes, preferring the reader to fill in the details using their imagination. Therefore, to perform detailed text-to-scene conversion from books, we need to not only identify explicit objects but also infer implicit objects. In this paper, we describe an approach to inferring objects using Wikipedia… (More)
