Corpus ID: 236171028

How to Tell Deep Neural Networks What We Know

@article{Dash2021HowTT,
  title={How to Tell Deep Neural Networks What We Know},
  author={Tirtharaj Dash and Sharad Chitlangia and Aditya Ahuja and Ashwin Srinivasan},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.10295}
}
We present a short survey of ways in which existing scientific knowledge is included when constructing models with neural networks. The inclusion of domain-knowledge is of special interest not just for constructing scientific assistants, but also for many other areas that involve understanding data through human-machine collaboration. In many such instances, machine-based model construction may benefit significantly from being provided with human knowledge of the domain encoded in a sufficiently…
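The survey organises such methods by where the knowledge enters the learning pipeline, broadly: transforming the input data, the loss function, or the model architecture. As a minimal sketch of the loss-function route (the monotonicity rule, model, and trade-off weight below are illustrative assumptions, not the paper's code):

    import torch

    # Hypothetical domain rule: the output should be monotonically
    # non-decreasing in input feature 0. A common loss-based route is to
    # add a differentiable penalty that is positive where the rule breaks.
    def constraint_penalty(model, x, eps=0.1):
        """Penalise violations of the assumed rule f(x + eps*e0) >= f(x)."""
        shifted = x.clone()
        shifted[:, 0] += eps
        violation = model(x) - model(shifted)  # positive where rule is broken
        return torch.relu(violation).mean()

    model = torch.nn.Sequential(
        torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
    x, y = torch.randn(32, 4), torch.randn(32, 1)
    task_loss = torch.nn.functional.mse_loss(model(x), y)
    loss = task_loss + 0.5 * constraint_penalty(model, x)  # 0.5: assumed weight
    loss.backward()

The penalty vanishes wherever the batch satisfies the rule, so the task loss dominates once the constraint is respected.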

Citations

From Statistical Relational to Neural Symbolic Artificial Intelligence: a Survey
TLDR: This survey identifies several parallels across seven different dimensions between the neural-symbolic and statistical relational artificial intelligence fields, which can be used not only to characterize and position neural-symbolic artificial intelligence approaches but also to identify a number of directions for further research.
Inclusion of Domain-Knowledge into GNNs using Mode-Directed Inverse Entailment
TLDR: BotGNNs are presented as capable of combining the computational efficacy of GNNs with the representational versatility of ILP, and are compared to multi-layer perceptrons that use features representing a “propositionalised” form of the background knowledge.
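For context, propositionalisation flattens relational background knowledge into fixed-length feature vectors that a standard MLP can consume. A toy sketch, with entirely hypothetical feature names:

    import numpy as np

    # Hypothetical propositionalisation: each background-knowledge feature
    # (e.g. "has_ring(M)") is tested against an instance and becomes one
    # boolean input to an ordinary MLP.
    CLAUSES = ["has_ring", "has_nitro", "bond_c_o"]  # assumed feature names

    def propositionalise(molecule_facts):
        """Map a set of true ground facts to a fixed-length binary vector."""
        return np.array([float(c in molecule_facts) for c in CLAUSES])

    x = propositionalise({"has_ring", "bond_c_o"})
    print(x)  # [1. 0. 1.] -- ready to feed to a standard MLP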
Using Domain-Knowledge to Assist Lead Discovery in Early-Stage Drug Design
TLDR: The results suggest a way of combining symbolic domain-knowledge and deep generative models to constrain the exploration of the chemical space of molecules when there is limited information on target-inhibitors.

References

Showing 1-10 of 105 references
ProbLog: A Probabilistic Prolog and Its Application in Link Discovery
TLDR: ProbLog, a probabilistic extension of Prolog in which facts can be annotated with probabilities, is introduced together with efficient inference for computing the success probability of a query, and is applied to link discovery in large biological networks.
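As a flavour of the language: probabilistic facts combine with ordinary Prolog rules, and inference returns the success probability of queries. A minimal sketch using the problog Python package (the calls follow its public tutorials; treat the exact API as an assumption):

    from problog import get_evaluatable
    from problog.program import PrologString

    # Probabilistic facts (edges with probabilities) plus ordinary rules.
    model = PrologString("""
    0.6::edge(a, b).
    0.8::edge(b, c).
    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).
    query(path(a, c)).
    """)

    # path(a, c) holds only via two independent edges: 0.6 * 0.8 = 0.48.
    result = get_evaluatable().create_from(model).evaluate()
    print(result)  # {path(a,c): 0.48}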
The Connectionist Inductive Learning and Logic Programming System
TLDR: Comparisons with the results obtained by some of the main neural, symbolic, and hybrid inductive learning systems, using the same domain knowledge, show the effectiveness of C-IL2P.
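C-IL2P compiles a propositional logic program into an initial network, roughly one hidden unit per rule, and then refines it by backpropagation. A toy illustration of the compilation idea; the weight and threshold scheme below is our own simplification, not C-IL2P's exact construction:

    import numpy as np

    # Toy compilation in the C-IL2P/KBANN spirit: the rule "c :- a, b"
    # becomes a hidden unit that activates only when all body literals
    # are true; the resulting weights seed a trainable network.
    def compile_and_rule(n_inputs, body_idx, w=4.0):
        """Weights/bias for an AND-like unit over the given body literals."""
        weights = np.zeros(n_inputs)
        weights[body_idx] = w
        bias = -w * (len(body_idx) - 0.5)  # threshold just below "all true"
        return weights, bias

    weights, bias = compile_and_rule(3, [0, 1])  # inputs: [a, b, d]
    for x in ([1, 1, 0], [1, 0, 1], [0, 1, 1]):
        fires = float(weights @ np.array(x, dtype=float) + bias > 0)
        print(x, "->", fires)  # 1.0 only when a=1 and b=1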
pages 265–281, Springer, 1991.
volume 861866, Boston, MA, 1990.
WHO Technical Report. J. Meigs. The Yale Journal of Biology and Medicine, 1954.
Beyond Graph Neural Networks with Lifted Relational Neural Networks
TLDR: A declarative differentiable programming framework based on the language of Lifted Relational Neural Networks is presented, in which small parameterized logic programs encode relational learning scenarios, showing how contemporary GNN models can be easily extended towards higher relational expressiveness.
DeepStochLog: Neural Stochastic Logic Programming
TLDR: This work introduces neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end in neural-symbolic learning, and shows that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs.
Turning 30: New Ideas in Inductive Logic Programming
TLDR: This work surveys recent work in inductive logic programming (ILP), a form of machine learning that induces logic programs from data and has shown promise at addressing limitations of state-of-the-art machine learning.
WWW ’19, pages 3307–3313, New York, NY, USA, 2019.
DeepProbLog: Neural Probabilistic Logic Programming
TLDR: This work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.
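The paper's canonical example is MNIST addition: a neural predicate classifies digit images while logic defines their sum. A sketch of the program text as a Python string (the nn/4 annotation is DeepProbLog's neural-predicate syntax; training uses the deepproblog library, whose API is not reproduced here):

    # DeepProbLog MNIST-addition program (after the paper's example): the
    # nn/4 annotation declares a neural predicate whose distribution over
    # digits 0..9 is produced by a network registered as m_digit.
    mnist_addition = """
    nn(m_digit, [X], Y, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) :: digit(X, Y).

    addition(X, Y, Z) :- digit(X, X2), digit(Y, Y2), Z is X2 + Y2.
    """
    # During training, gradients flow through the probabilistic-logic layer
    # back into m_digit, so digit recognition is learnt end-to-end from
    # labelled sums alone.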