Discourse Embellishment Using a Deep Encoder-Decoder Network
TLDR: A new NLG task, textual embellishment, is defined: given a text as input, generate a semantically equivalent output with increased lexical and syntactic complexity. The task is approached with LSTM Encoder-Decoder networks trained on the WikiLarge dataset.
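Below is a minimal sketch of the kind of sequence-to-sequence LSTM Encoder-Decoder this TLDR describes: a source sentence is encoded into a hidden state and an embellished paraphrase is decoded from it. This is not the paper's code; the vocabulary size, dimensions, and teacher-forcing setup are illustrative assumptions (PyTorch).

```python
# Minimal sketch (not the paper's code): a sequence-to-sequence LSTM
# encoder-decoder where a "plain" sentence is encoded and an embellished
# paraphrase is decoded token by token with teacher forcing.
import torch
import torch.nn as nn

class EmbellishmentSeq2Seq(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence; keep only the final (h, c) state.
        _, state = self.encoder(self.embed(src_ids))
        # Decode the target with teacher forcing, conditioned on that state.
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)  # (batch, tgt_len, vocab) logits

model = EmbellishmentSeq2Seq()
src = torch.randint(0, 10000, (4, 20))   # toy batch of source token ids
tgt = torch.randint(0, 10000, (4, 25))   # toy batch of target token ids
logits = model(src, tgt)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10000), tgt.reshape(-1))
```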
Cerebral microbleed detection in traumatic brain injury patients using 3D convolutional neural networks
TLDR: A computer-aided detection system based on 3D convolutional neural networks detects cerebral microbleeds (CMBs) in 3D susceptibility-weighted images, outperforming related work on CMB detection in TBI patients.
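A hedged sketch of the typical setup such a computer-aided detection system implies: a small 3D CNN classifying 3D patches of susceptibility-weighted images as CMB candidate versus background. The architecture, patch size, and channel counts below are assumptions, not the paper's network.

```python
# Minimal sketch (assumed architecture, not the paper's exact network):
# a small 3D CNN that classifies 3D SWI patches as "CMB candidate" vs
# "background", the usual candidate-classification stage of a CADe pipeline.
import torch
import torch.nn as nn

class CMBPatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),  # CMB vs. non-CMB
        )

    def forward(self, x):          # x: (batch, 1, 16, 16, 16) SWI patches
        return self.classifier(self.features(x))

patches = torch.randn(8, 1, 16, 16, 16)  # toy batch of 16^3 patches
logits = CMBPatchNet()(patches)
```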
Encoding and Decoding Dynamic Sensory Signals with Recurrent Neural Networks: An Application of Conceptors to Birdsongs
TLDR: This work proposes a model that encodes and decodes sensory stimuli based on the activity patterns of multiple recurrently connected neural populations with different receptive fields, and demonstrates that the model can learn and recognize complex, dynamic stimuli, using birdsongs as exemplary data.
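For orientation, a minimal conceptor sketch in the spirit of this model: a random recurrent reservoir is driven by a stimulus, the evoked state correlation matrix R yields a conceptor C = R(R + a^-2 I)^-1, and a new signal is recognized by how well its states fit each stored conceptor. The reservoir size, aperture, and toy "song" signal are placeholders; this is not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code): computing a conceptor
# matrix for a driven random recurrent network. A stimulus (e.g. a birdsong
# snippet encoded as a 1-D signal) evokes reservoir states whose correlation
# matrix R defines C = R (R + aperture^-2 I)^-1; recognition evidence for a
# test signal is how strongly its states project into C.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # reservoir size (illustrative)
W = 0.8 * rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights
W_in = rng.normal(0, 1.0, (N, 1))                  # input weights

def collect_states(signal):
    """Run the tanh reservoir on a 1-D signal, return states as (N, T)."""
    x = np.zeros(N)
    states = []
    for u in signal:
        x = np.tanh(W @ x + W_in[:, 0] * u)
        states.append(x)
    return np.array(states).T

def conceptor(states, aperture=10.0):
    """C = R (R + aperture^-2 I)^-1 with R the state correlation matrix."""
    R = states @ states.T / states.shape[1]
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(N))

song = np.sin(0.2 * np.arange(500))      # toy periodic "song" signal
C = conceptor(collect_states(song))

# Recognition evidence for a test signal: mean of x^T C x over its states.
test_states = collect_states(song[:200])
evidence = np.mean(np.sum(test_states * (C @ test_states), axis=0))
```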
Visual Attention Through Uncertainty Minimization in Recurrent Generative Models
TLDR: A recurrent generative neural network model is proposed that predicts a visual scene from foveated glimpses and shifts its attention to minimize the uncertainty in its predictions, providing evidence that uncertainty minimization could be a fundamental mechanism for the allocation of visual attention.
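A simplified, assumed illustration of the uncertainty-minimization idea (not the paper's model): sample several stochastic reconstructions of the scene from the glimpses observed so far and fixate where the samples disagree most, i.e. where predictive uncertainty is highest. The reconstruction network, glimpse size, and sample count are placeholders.

```python
# Minimal sketch (an assumed simplification, not the paper's model): pick
# the next fixation at the location of highest predictive uncertainty,
# estimated from the variance of stochastic scene reconstructions.
import torch
import torch.nn as nn

class GlimpsePredictor(nn.Module):
    """Maps the partially observed image (unseen pixels = 0) to a full
    reconstruction; dropout makes repeated forward passes stochastic."""
    def __init__(self, size=28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(size * size, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, size * size),
        )
        self.size = size

    def forward(self, observed):
        return self.net(observed).view(-1, self.size, self.size)

def next_fixation(model, observed, n_samples=20):
    model.train()  # keep dropout active so the samples differ
    with torch.no_grad():
        samples = torch.stack([model(observed) for _ in range(n_samples)])
    uncertainty = samples.var(dim=0)[0]          # per-pixel variance map
    idx = torch.argmax(uncertainty)
    return divmod(idx.item(), model.size)        # (row, col) of next glimpse

model = GlimpsePredictor()
observed = torch.zeros(1, 28, 28)
observed[0, 10:16, 10:16] = torch.rand(6, 6)     # one toy foveated glimpse
row, col = next_fixation(model, observed)
```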
Uncertainty through Sampling: The Correspondence of Monte Carlo Dropout and Spiking in Artificial Neural Networks
TLDR: It is demonstrated that, in cases of incomplete world knowledge (epistemic uncertainty) as well as for noisy observations (aleatoric uncertainty), both neuron models show similar uncertainty representations, providing evidence that sampling could play a fundamental role in representing uncertainty in neural systems.
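The Monte Carlo dropout half of this correspondence can be sketched in a few lines: keep dropout active at test time and aggregate repeated stochastic forward passes into a predictive mean and spread. The network, dropout rate, and number of samples below are illustrative, not the paper's experimental setup.

```python
# Minimal sketch (illustrative, not the paper's experiments): Monte Carlo
# dropout as uncertainty-through-sampling. Each forward pass samples a
# different sub-network; the spread of the predictions reflects uncertainty.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, T=100):
    model.train()  # dropout stays on at prediction time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(T)])
    return preds.mean(dim=0), preds.std(dim=0)   # predictive mean and spread

x = torch.randn(5, 10)            # toy inputs
mean, std = mc_dropout_predict(net, x)
```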
Task-dependent attention allocation through uncertainty minimization in deep recurrent generative models
TLDR: A recurrent generative neural network model is proposed that predicts a visual scene from foveated glimpses and produces naturalistic eye movements focusing on salient stimulus regions, providing evidence that uncertainty minimization could be a fundamental mechanism for the allocation of visual attention.