Publications
Conditional Neural Processes
TLDR
Conditional Neural Processes are inspired by the flexibility of stochastic processes such as Gaussian processes (GPs), but are structured as neural networks and trained via gradient descent, which lets them scale to complex functions and large datasets.
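To make that structure concrete, here is a minimal PyTorch sketch of a conditional neural process under illustrative assumptions (1-D inputs and outputs, small MLPs, a softplus bound on the predictive scale); it sketches the general encode, aggregate, decode idea rather than the paper's exact architecture or hyperparameters.

```python
# Minimal conditional-neural-process sketch (illustrative, not the paper's
# exact model): encode each (x, y) context pair, mean-aggregate the
# embeddings, and decode a Gaussian prediction at each target input.
import torch
import torch.nn as nn

class CNP(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Encoder: embeds each (x, y) context pair into a representation.
        self.encoder = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Decoder: maps (aggregated representation, target x) to mean/scale.
        self.decoder = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Encode each context point, then aggregate with a permutation-invariant mean.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
        r = r.expand(x_tgt.size(0), -1)
        mu, raw_sigma = self.decoder(torch.cat([r, x_tgt], dim=-1)).chunk(2, dim=-1)
        sigma = 0.1 + 0.9 * nn.functional.softplus(raw_sigma)  # keep scale positive
        return mu, sigma

# Training maximizes the log-likelihood of targets under the predicted Gaussians.
model = CNP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_ctx, y_ctx = torch.randn(10, 1), torch.randn(10, 1)  # toy context set
x_tgt, y_tgt = torch.randn(20, 1), torch.randn(20, 1)  # toy target set
mu, sigma = model(x_ctx, y_ctx, x_tgt)
loss = -torch.distributions.Normal(mu, sigma).log_prob(y_tgt).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

The mean aggregation is what makes the prediction invariant to the ordering of the context set, mirroring the exchangeability of the stochastic processes that motivate the model.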
Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders
TLDR
It is shown that a heuristic called the minimum information constraint, previously shown to mitigate over-regularization in VAEs, can also be applied to improve unsupervised clustering performance in a variant of the variational autoencoder with a Gaussian mixture as its prior distribution.
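As a rough illustration of that heuristic, the snippet below applies a free-bits-style floor to the per-dimension KL term of an ELBO, so a dimension is only penalized once its KL exceeds a threshold lambda; the function name, tensor shapes, and threshold value are hypothetical, not taken from the paper.

```python
# Sketch of the minimum information constraint (free-bits style) on a VAE
# ELBO: each latent dimension's KL penalty is floored at lam, so there is no
# gradient pushing already-uninformative dimensions all the way to the prior.
# Function name, shapes, and the value of lam are illustrative assumptions.
import torch

def elbo_with_min_info(recon_log_lik, kl_per_dim, lam=0.5):
    # recon_log_lik: (batch,) log p(x|z); kl_per_dim: (batch, latent_dim).
    kl = torch.clamp(kl_per_dim, min=lam).sum(dim=-1)  # max(lam, KL) per dim
    return (recon_log_lik - kl).mean()                 # objective to maximize
```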
The Event Calculus Explained
  • M. Shanahan
  • Computer Science
    Artificial Intelligence Today
  • 1999
TLDR
The event calculus, a logic-based formalism for representing actions and their effects, is presented; it reduces to monotonic predicate completion and is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with nondeterministic effects, concurrent actions, and continuous change.
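For orientation, a standard simplified axiomatization of the event calculus is sketched below, in the style common to Shanahan's presentations; the exact clause set and notation in the paper may differ.

```latex
% Simplified event calculus: a fluent f holds at time t if it held initially,
% or was initiated by an earlier action, and has not since been clipped
% (terminated) within the interval.
\begin{align*}
\mathit{HoldsAt}(f, t) \;&\leftarrow\; \mathit{InitiallyP}(f) \land \lnot\mathit{Clipped}(0, f, t) \\
\mathit{HoldsAt}(f, t_2) \;&\leftarrow\; \mathit{Happens}(a, t_1) \land \mathit{Initiates}(a, f, t_1) \land t_1 < t_2 \land \lnot\mathit{Clipped}(t_1, f, t_2) \\
\mathit{Clipped}(t_1, f, t_2) \;&\leftrightarrow\; \exists\, a, t \,\big[\mathit{Happens}(a, t) \land t_1 \le t < t_2 \land \mathit{Terminates}(a, f, t)\big]
\end{align*}
```

Completing the Happens, Initiates, and Terminates predicates is what yields the monotonic reduction mentioned in the TLDR.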
The entropic brain: a theory of conscious states informed by neuroimaging research with psychedelic drugs
TLDR
It is argued that the defining feature of “primary states” is elevated entropy in certain aspects of brain function, such as the repertoire of functional connectivity motifs that form and fragment across time, and that the suppression of this entropy furnishes normal waking consciousness with a constrained quality and associated metacognitive functions, including reality-testing and self-awareness.
Solving the frame problem
Murray Shanahan has been actively involved in research on the frame problem since the late eighties. When a scientist looks back at the past of his field, including his own early work, he can see how …
Some Alternative Formulations of the Event Calculus
The Event Calculus is a narrative-based formalism for reasoning about actions and change, originally proposed in logic programming form by Kowalski and Sergot. In this paper we summarise how variants …
Deep reinforcement learning with relational inductive biases
TLDR
The main contribution of this work is to introduce techniques for representing and reasoning about states in model-free deep reinforcement learning agents via relational inductive biases, which can offer advantages in efficiency, generalization, and interpretability.
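The core mechanism can be sketched briefly: multi-head self-attention over a set of entity vectors, so each entity's representation is updated from its relations to all the others. The entity extraction, head count, dimensions, and pooling step below are illustrative assumptions, not the exact configuration used in the paper.

```python
# Sketch of a relational inductive bias for RL: self-attention over entity
# vectors (e.g. one per CNN feature-map position), so the agent can reason
# about pairwise relations between entities before acting. Shapes, head
# count, and the pooling step are illustrative assumptions.
import torch
import torch.nn as nn

entities = torch.randn(1, 36, 64)  # (batch, num_entities, dim), e.g. a 6x6 feature map
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
relational, _ = attn(entities, entities, entities)  # each entity attends to all others
pooled = relational.max(dim=1).values               # per-state summary for policy/value heads
```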
An abductive event calculus planner
  • M. Shanahan
  • Computer Science
    J. Log. Program.
  • 1 July 2000