Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a unified neural network framework which processes input sequences and questions, forms semantic and episodic memories, and generates relevant answers. Questions trigger an iterative attention …
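As a rough illustration of the iterative attention idea, the sketch below runs several attention passes over a set of encoded facts, re-weighting them against the question and the evolving memory. This is a heavily simplified stand-in: the relevance score, the tanh memory update, and all shapes are assumptions for illustration, not the DMN's actual gating and GRU-based updates.

    # Simplified sketch of an episodic (iterative) attention loop, in numpy.
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def episodic_memory(facts, question, passes=3):
        """facts: (num_facts, d) encoded sentences; question: (d,) encoding."""
        memory = question.copy()                        # memory starts from the question
        for _ in range(passes):                         # each pass is one attention episode
            scores = facts @ question + facts @ memory  # toy relevance score (assumption)
            gates = softmax(scores)
            episode = gates @ facts                     # attention-weighted summary of facts
            memory = np.tanh(memory + episode)          # toy update (the DMN uses a GRU)
        return memory

    rng = np.random.default_rng(0)
    m = episodic_memory(rng.normal(size=(5, 8)), rng.normal(size=8))
    print(m.shape)  # (8,)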
We explore alternative acoustic modeling techniques for large vocabulary speech recognition using Long Short-Term Memory recurrent neural networks. For an acoustic frame labeling task, we compare the conventional approach of cross-entropy (CE) training using fixed forced alignments of frames and labels, with the Connectionist Temporal Classification (CTC) …
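To make the contrast concrete, the hedged sketch below computes both criteria on random data using PyTorch (not the paper's toolkit): cross-entropy needs one label per frame from a fixed forced alignment, while CTC needs only the output label sequence and marginalizes over alignments with an extra blank symbol. All sizes are illustrative.

    import torch
    import torch.nn.functional as F

    T, N, C = 50, 4, 20            # frames, batch size, classes (class 0 = CTC blank)
    logits = torch.randn(T, N, C, requires_grad=True)

    # Cross-entropy: one (forced-aligned) label per frame.
    frame_labels = torch.randint(1, C, (T, N))
    ce_loss = F.cross_entropy(logits.reshape(T * N, C), frame_labels.reshape(T * N))

    # CTC: only the output label sequence per utterance; alignment is marginalized out.
    target_lengths = torch.tensor([12, 9, 15, 7])
    targets = torch.randint(1, C, (int(target_lengths.sum()),))
    input_lengths = torch.full((N,), T, dtype=torch.long)
    ctc_loss = torch.nn.CTCLoss(blank=0)(
        logits.log_softmax(dim=2), targets, input_lengths, target_lengths)

    print(ce_loss.item(), ctc_loss.item())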
Recently, deep architectures, such as recurrent and recursive neural networks, have been successfully applied to various natural language processing tasks. Inspired by bidirectional recurrent neural networks, which use representations that summarize the past and future around an instance, we propose a novel architecture that aims to capture the structural …
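For reference, a bidirectional recurrent layer of the kind the abstract takes as inspiration can be sketched in a few lines of PyTorch: each position's output concatenates a forward state (summarizing the past) and a backward state (summarizing the future). Dimensions here are made up for illustration.

    import torch

    seq_len, batch, d_in, d_h = 7, 2, 16, 32
    rnn = torch.nn.LSTM(d_in, d_h, bidirectional=True)  # forward + backward pass
    tokens = torch.randn(seq_len, batch, d_in)
    outputs, _ = rnn(tokens)   # (seq_len, batch, 2 * d_h): [past ; future] per token
    print(outputs.shape)       # torch.Size([7, 2, 64])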
In many bioinformatics applications, it is important to assess and compare the performances of algorithms trained from data, to be able to draw conclusions that are unaffected by chance and are therefore significant. Both the design of such experiments and the analysis of the resulting data using statistical tests should be done carefully for the results to carry …
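As a hedged illustration of the kind of analysis meant here, the snippet below compares two algorithms' per-fold accuracies with a paired t-test and its nonparametric counterpart via scipy.stats. The fold scores are invented, and the appropriate test (and any correction for multiple comparisons) depends on the actual experimental design.

    from scipy.stats import ttest_rel, wilcoxon

    # Accuracies of algorithms A and B on the same ten cross-validation folds.
    acc_a = [0.81, 0.79, 0.84, 0.80, 0.82, 0.78, 0.83, 0.81, 0.80, 0.82]
    acc_b = [0.78, 0.77, 0.80, 0.79, 0.80, 0.76, 0.79, 0.78, 0.77, 0.80]

    t_stat, t_p = ttest_rel(acc_a, acc_b)  # paired t-test over folds
    w_stat, w_p = wilcoxon(acc_a, acc_b)   # nonparametric alternative
    print(f"paired t: p={t_p:.4f}  wilcoxon: p={w_p:.4f}")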
We present the multiplicative recurrent neural network as a general model for compositional meaning in language, and evaluate it on the task of fine-grained sentiment analysis. We establish a connection to the previously investigated matrix-space models for compositionality, and show they are special cases of the multiplicative recurrent net. …
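A minimal sketch of the multiplicative idea, under assumptions: the new hidden state depends on a bilinear interaction between the input and the previous state through a third-order tensor, which is what lets matrix-space models emerge as special cases. The additive terms, the tanh nonlinearity, and all shapes below are illustrative choices, not the paper's exact formulation.

    import numpy as np

    rng = np.random.default_rng(0)
    d_x, d_h = 5, 4
    T = rng.normal(scale=0.1, size=(d_h, d_x, d_h))  # bilinear interaction tensor
    W = rng.normal(scale=0.1, size=(d_h, d_x))       # additive input term
    U = rng.normal(scale=0.1, size=(d_h, d_h))       # additive recurrent term

    def step(x, h):
        # h'_k = tanh( x^T T[k] h + (W x)_k + (U h)_k )
        bilinear = np.einsum('kij,i,j->k', T, x, h)
        return np.tanh(bilinear + W @ x + U @ h)

    h = np.zeros(d_h)
    for x in rng.normal(size=(6, d_x)):              # run over a toy 6-word sequence
        h = step(x, h)
    print(h)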
We discuss an autoencoder model in which the encoding and decoding functions are implemented by decision trees. We use the soft decision tree, where internal nodes realize soft multivariate splits given by a gating function, and the overall output is the average of all leaves weighted by the gating values on their paths. The encoder tree takes the input and …
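The response of such a soft tree can be sketched directly from the description: each internal node computes a sigmoid gate over a multivariate split, and the output is the sum of leaf responses weighted by the product of gate values along each leaf's path. Depth, dimensions, and the random parameters below are illustrative; in the model they would be learned.

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out, depth = 8, 3, 3
    n_internal, n_leaves = 2 ** depth - 1, 2 ** depth
    W = rng.normal(size=(n_internal, d_in))         # multivariate split directions
    b = rng.normal(size=n_internal)
    leaf_vals = rng.normal(size=(n_leaves, d_out))  # leaf responses

    def soft_tree(x):
        # path_prob[m] = probability of reaching node m (heap order, root = 0)
        path_prob = np.ones(n_internal + n_leaves)
        for m in range(n_internal):
            g = 1.0 / (1.0 + np.exp(-(W[m] @ x + b[m])))   # soft split g_m(x)
            path_prob[2 * m + 1] = path_prob[m] * g        # left child
            path_prob[2 * m + 2] = path_prob[m] * (1 - g)  # right child
        leaf_probs = path_prob[n_internal:]                # weights sum to 1
        return leaf_probs @ leaf_vals                      # gate-weighted leaf average

    print(soft_tree(rng.normal(size=d_in)))                # a 3-dim output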
Budding Tree
Softening the notion of being a leaf, a bud node redirects instances to all of its children (as in an internal node) and also makes a contribution itself (as in a leaf node):

F_m(x) = \gamma_m \rho_m + (1 - \gamma_m)\big( F_{mL}(x)\, g_m(x) + F_{mR}(x)\,(1 - g_m(x)) \big)

g_m(x) = \frac{1}{1 + \exp[-(\mathbf{w}_m^\top x + w_{m0})]}

Here \gamma_m \in [0, 1] is the (soft) leafness parameter, g_m(x) \in (0, 1) is the gating function, and F_{mL}, F_{mR} are the responses of the left and right children of node m.
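The bud-node response F_m(x) transcribes into a short recursion, shown below under assumptions (scalar responses, a fixed maximum depth, random placeholder parameters): each node mixes its own contribution rho_m with its children's responses according to its leafness gamma_m, and a node with gamma_m = 1 behaves as a pure leaf.

    import numpy as np

    rng = np.random.default_rng(0)

    class BudNode:
        def __init__(self, d_in, depth):
            self.w = rng.normal(size=d_in)   # gating weights w_m
            self.w0 = rng.normal()           # gating bias w_m0
            self.rho = rng.normal()          # node's own response rho_m
            # leafness gamma_m in [0, 1]; frontier nodes are pure leaves
            self.gamma = 1.0 if depth == 0 else rng.uniform()
            self.left = BudNode(d_in, depth - 1) if depth > 0 else None
            self.right = BudNode(d_in, depth - 1) if depth > 0 else None

        def response(self, x):
            # F_m(x) = gamma_m rho_m + (1 - gamma_m)(F_L(x) g_m(x) + F_R(x)(1 - g_m(x)))
            if self.left is None:            # pure leaf (gamma_m = 1)
                return self.rho
            g = 1.0 / (1.0 + np.exp(-(self.w @ x + self.w0)))  # g_m(x)
            mix = self.left.response(x) * g + self.right.response(x) * (1 - g)
            return self.gamma * self.rho + (1 - self.gamma) * mix

    tree = BudNode(d_in=4, depth=3)
    print(tree.response(rng.normal(size=4)))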