A Probabilistic Computational Model of Cross-Situational Word Learning
A novel computational model of early word learning is presented to shed light on the mechanisms that might be at work in this process, demonstrating that much about word meanings can be learned from naturally occurring child-directed utterances, without using any special biases or constraints.
Representations of language in a model of visually grounded speech signal
An in-depth analysis of the representations used by different components of the trained model shows that encoding of semantic aspects tends to become richer as the authors go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.
A Computational Model of Early Argument Structure Acquisition
A computational model for the representation, acquisition, and use of verbs and constructions is presented, founded on a novel view of constructions as a probabilistic association between syntactic and semantic features.
Representation of Linguistic Form and Function in Recurrent Neural Networks
A method for estimating the amount of contribution of individual tokens in the input to the final prediction of the networks is proposed and shows that the Visual pathway pays selective attention to lexical categories and grammatical functions that carry semantic information, and learns to treat word types differently depending on their grammatical function and their position in the sequential structure of the sentence.
Encoding of phonology in a recurrent neural model of grounded speech
It is found that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer.
A Probabilistic Model of Early Argument Structure Acquisition
- A. Alishahi
- Computer Science, Linguistics
A probabilistic usage-based model of verb argument structure acquisition that can successfully learn abstract knowledge of language from instances of verb usage, and use this knowledge in various language tasks, and shows that the model learns intuitive profiles for both semantic roles and selectional preferences.
Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop
A number of representative studies in each category are reviewed, including systematic manipulation of input to neural networks and the impact on their performance, and testing whether interpretable knowledge can be decoded from intermediate representations acquired by neural networks.
Integrating Syntactic Knowledge into a Model of Cross-situational Word Learning
A probabilistic model of word learning which integrates cross-situational evidence and the knowledge of lexical categories into a single learning mechanism is presented.
Analyzing analytical methods: The case of phonology in neural models of spoken language
It is concluded that reporting analysis results with randomly initialized models is crucial, and that global-scope methods tend to yield more consistent and interpretable results; their use is recommended as a complement to local-scope diagnostic methods.
A Probabilistic Incremental Model of Word Learning in the Presence of Referential Uncertainty
Results of simulations on naturalistic child-directed data show that the probabilistic incremental model of early word learning exhibits behaviours similar to those observed in the early lexical acquisition of children, such as vocabulary spurt and fast mapping.