A Probabilistic Computational Model of Cross-Situational Word Learning
A novel computational model of early word learning is presented to shed light on the mechanisms that might be at work in this process, demonstrating that much about word meanings can be learned from naturally occurring child-directed utterances, without using any special biases or constraints.
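The cross-situational mechanism summarized above can be illustrated with a minimal probabilistic word-meaning alignment loop. This is a hedged sketch, not the paper's actual model: the class, the smoothing constant `beta`, and the toy data are all hypothetical.

```python
from collections import defaultdict

# Minimal sketch of cross-situational word learning: meaning
# probabilities p(m|w) are refined incrementally from paired
# (utterance, scene) observations under referential uncertainty.

class CrossSituationalLearner:
    def __init__(self, beta=100.0):
        self.assoc = defaultdict(lambda: defaultdict(float))  # assoc[word][meaning]
        self.beta = beta  # smoothing mass for unseen meanings (illustrative)

    def meaning_prob(self, word, meaning):
        total = sum(self.assoc[word].values()) + self.beta
        return (self.assoc[word][meaning] + 1.0) / total

    def observe(self, words, meanings):
        # Align each candidate meaning to the words in proportion to
        # current beliefs, then strengthen those associations.
        for m in meanings:
            norm = sum(self.meaning_prob(w, m) for w in words)
            for w in words:
                self.assoc[w][m] += self.meaning_prob(w, m) / norm

learner = CrossSituationalLearner()
# "ball" co-occurs with BALL across situations; "the" does not track it.
learner.observe(["the", "ball"], ["BALL", "DOG"])
learner.observe(["a", "ball"], ["BALL", "CAT"])
learner.observe(["the", "dog"], ["DOG"])
assert learner.meaning_prob("ball", "BALL") > learner.meaning_prob("the", "BALL")
```

After three noisy observations the learner already prefers the consistent word-meaning pairing, which is the core cross-situational effect.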
Representations of language in a model of visually grounded speech signal
An in-depth analysis of the representations used by different components of the trained model shows that encoding of semantic aspects tends to become richer as the authors go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.
Representation of Linguistic Form and Function in Recurrent Neural Networks
A method for estimating the contribution of individual input tokens to the final prediction of the networks is proposed; it shows that the visual pathway pays selective attention to lexical categories and grammatical functions that carry semantic information, and learns to treat word types differently depending on their grammatical function and their position in the sequential structure of the sentence.
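One common way to estimate a token's contribution is by omission: compare the sentence representation with and without that token. The sketch below illustrates the idea with a stand-in encoder; the encoder and all names here are hypothetical, not the paper's trained network.

```python
import numpy as np

# Omission-based contribution estimate: a token's score is how much
# the sentence representation changes when that token is left out.

def encode(tokens):
    # Hypothetical stand-in for a trained sentence encoder:
    # per-token vectors (deterministic within a run) summed over the sequence.
    vecs = [np.random.default_rng(abs(hash(t)) % 2**32).normal(size=16)
            for t in tokens]
    return np.sum(vecs, axis=0)

def omission_scores(tokens):
    full = encode(tokens)
    scores = {}
    for i, tok in enumerate(tokens):
        reduced = encode(tokens[:i] + tokens[i + 1:])
        cos = np.dot(full, reduced) / (np.linalg.norm(full) * np.linalg.norm(reduced))
        scores[tok] = 1.0 - cos  # larger = removing the token changed more
    return scores

scores = omission_scores(["the", "cat", "sat"])
```

With a real trained model in place of `encode`, the same loop yields per-token saliency profiles of the kind analyzed in the paper.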
A Computational Model of Early Argument Structure Acquisition
A computational model for the representation, acquisition, and use of verbs and constructions is presented, founded on a novel view of constructions as a probabilistic association between syntactic and semantic features.
Encoding of phonology in a recurrent neural model of grounded speech
It is found that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer.
A probabilistic model of early argument structure acquisition
A probabilistic usage-based model of verb argument structure acquisition that successfully learns abstract knowledge of language from instances of verb usage and uses this knowledge in various language tasks; the model is shown to learn intuitive profiles for both semantic roles and selectional preferences.
Learning language through pictures
The model consists of two Gated Recurrent Unit networks with shared word embeddings, and uses a multi-task objective by receiving a textual description of a scene and trying to concurrently predict its visual representation and the next word in the sentence.
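The two-pathway architecture described above can be sketched as a schematic forward pass: a shared embedding table feeds two GRUs, one predicting the scene's visual feature vector and the other the next word. All dimensions, initializations, and names below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

# Schematic forward pass of a two-pathway multi-task model:
# shared word embeddings -> (visual GRU, textual GRU) -> two heads.

rng = np.random.default_rng(0)
V, E, H, IMG = 10, 8, 12, 16     # vocab, embedding, hidden, image-feature sizes

emb = rng.normal(0, 0.1, (V, E))  # embedding table shared by both pathways

def gru_params():
    # For each gate: input weights, recurrent weights, bias.
    return {g: (rng.normal(0, 0.1, (H, E)), rng.normal(0, 0.1, (H, H)),
                np.zeros(H)) for g in ("z", "r", "h")}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(p, x, h):
    z = sigmoid(p["z"][0] @ x + p["z"][1] @ h + p["z"][2])
    r = sigmoid(p["r"][0] @ x + p["r"][1] @ h + p["r"][2])
    h_tilde = np.tanh(p["h"][0] @ x + p["h"][1] @ (r * h) + p["h"][2])
    return (1 - z) * h + z * h_tilde

visual_gru, textual_gru = gru_params(), gru_params()
W_img = rng.normal(0, 0.1, (IMG, H))   # hidden -> predicted image vector
W_next = rng.normal(0, 0.1, (V, H))    # hidden -> next-word logits

sentence = [1, 4, 7]                   # toy token ids
hv = ht = np.zeros(H)
for tok in sentence:
    x = emb[tok]                       # same embedding feeds both GRUs
    hv = gru_step(visual_gru, x, hv)
    ht = gru_step(textual_gru, x, ht)

predicted_image = W_img @ hv           # visual pathway output
next_word_logits = W_next @ ht         # textual pathway output
```

Training would jointly minimize a visual loss (e.g. distance to the true image vector) and a language-modeling loss over the next-word logits; the shared embeddings are what couple the two tasks.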
Integrating Syntactic Knowledge into a Model of Cross-situational Word Learning
A probabilistic model of word learning which integrates cross-situational evidence and the knowledge of lexical categories into a single learning mechanism is presented.
A Probabilistic Incremental Model of Word Learning in the Presence of Referential Uncertainty
We present a probabilistic incremental model of early word learning. The model acquires the meaning of words from exposure to word usages in sentences, paired with appropriate semantic …
Cross-situational Learning of Low Frequency Words: The Role of Context Familiarity and Age of Exposure
Afsaneh Fazly, Fatemeh Ahmadi-Fakhr. Computer Sciences and Engineering, Shiraz University, Shiraz, …