Convolutional 2D Knowledge Graph Embeddings
TLDR
ConvE, a multi-layer convolutional network model for link prediction, is introduced, and it achieves state-of-the-art Mean Reciprocal Rank across most datasets.
Adversarial Sets for Regularising Neural Link Predictors
TLDR
This method is the first method that can use function-free Horn clauses (as in Datalog) to regularise any neural link predictor, with complexity independent of the domain size.
NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language
TLDR
A model combining neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language, using a Prolog prover that utilises a similarity function over pretrained sentence encoders and fine-tuning the representations for the similarity function via backpropagation.
PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them
TLDR
It is found that PAQ preempts and caches test questions, enabling RePAQ to match the accuracy of recent retrieve-and-read models whilst being significantly faster, and a new QA-pair retriever, RePAQ, is introduced to complement PAQ.
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge
TLDR
This paper reduces the problem of automatically generating adversarial examples that violate a set of given First-Order Logic constraints in Natural Language Inference to an optimisation problem: maximising a quantity that measures the degree of violation of such constraints, while using a language model to generate linguistically plausible examples.
Learning Reasoning Strategies in End-to-End Differentiable Proving
TLDR
Conditional Theorem Provers is presented, an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation, and it shows better link prediction results on standard benchmarks than other neural-symbolic models, while being explainable.
Differentiable Reasoning on Large Knowledge Bases and Natural Language
TLDR
Greedy NTPs are proposed, an extension to NTPs that addresses their complexity and scalability limitations, making them applicable to real-world datasets, together with a novel approach for jointly reasoning over KBs and textual mentions by embedding logic facts and natural language sentences in a shared embedding space.
Complex Query Answering with Neural Link Predictors
TLDR
This work translates each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural link predictor, and analyses two solutions to the optimisation problem: gradient-based and combinatorial search.
Regularizing Knowledge Graph Embeddings via Equivalence and Inversion Axioms
TLDR
A principled and scalable method for leveraging equivalence and inversion axioms during the learning process, by imposing a set of model-dependent soft constraints on the predicate embeddings, which consistently improves the predictive accuracy of several neural knowledge graph embedding models without compromising their scalability properties.
Knowledge Graph Embeddings and Explainable AI
TLDR
The state of the art in knowledge graph embeddings is summarised by describing the approaches that have been introduced to represent knowledge in the vector space, and by considering the problem of explainability.