Google; formerly Uber Technologies Inc; formerly Tufts University
An Embodied Real-Time Model of Language-Guided Incremental Visual Search
This paper presents an embodied real-time model of interactive incremental vision and natural language processing that can explain previous experimental findings in a novel way by showing that divergent results found in different experimental conditions by Spivey et al. (2001) might not be due to differences in processing configurations.
ParsiNLU: A Suite of Language Understanding Challenges for Persian
- Daniel Khashabi, Arman Cohan, Yadollah Yaghoobzadeh
- Computer Science, Linguistics · Transactions of the Association for Computational…
- 11 December 2020
This work introduces ParsiNLU, the first benchmark in the Persian language that includes a range of language understanding tasks (reading comprehension, textual entailment, and so on), presents the first results of state-of-the-art monolingual and multilingual pre-trained language models on this benchmark, and compares them with human performance.
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Evaluation of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters finds that model performance and calibration both improve with scale, but are poor in absolute terms.
An embodied incremental Bayesian model of cross-situational word learning
- Sepideh Sadeghi, Matthias Scheutz, Evan A. Krause
- Computer Science, Psychology · Joint IEEE International Conference on…
- 1 September 2017
This work presents an incremental Bayesian model of cross-situational word learning with limited access to past situations and demonstrates its superior performance compared to other baseline incremental models, especially under conditions of sensory noise in the speech and visual modalities.
Joint acquisition of word order and word referent in a memory-limited and incremental learner
- Sepideh Sadeghi, Matthias Scheutz
- Linguistics · 8th IEEE International Conference on Cognitive…
- 1 September 2017
This work studies the utility of jointly acquiring simple versions of word order and word meaning in the early stages of acquisition in a memory-limited incremental model; the observed benefits were limited and pronounced only in the presence of high referential ambiguity and delayed syntactic bootstrapping.
Early Syntactic Bootstrapping in an Incremental Memory-Limited Word Learner
A probabilistic framework for early syntactic bootstrapping in the absence of advanced structured representations is presented; joint acquisition of word order and word referents facilitates one-shot learning of new words as well as inferring the speaker's intentions in ambiguous contexts.
Models of Cross-Situational and Crossmodal Word Learning in Task-Oriented Scenarios
- Brigitte Krenn, Sepideh Sadeghi, F. Neubarth, Stephanie Gross, M. Trapp, Matthias Scheutz
- Computer Science · IEEE Transactions on Cognitive and Developmental…
- 1 September 2020
A Bayesian approach is presented for co-learning object-word mappings and referential intention; it allows incremental learning from only a few situations in which the display of referents to the learning system is systematically varied.
Sensitivity to Input Order: Evaluation of an Incremental and Memory-Limited Bayesian Cross-Situational Word Learning Model
A variation of the incremental and memory-limited algorithm for Bayesian cross-situational word learning is presented and it is shown that the functional performance of the sub-optimal model on corpus data is close to that of its optimal counterpart.
A Hubel Wiesel model of early concept generalization based on local correlation of input features
- Sepideh Sadeghi, K. Ramanathan
- Computer Science · The International Joint Conference on Neural…
- 3 October 2011
The input integration framework is proposed: a set of operations performed on the inputs to the learning modules of the Hubel-Wiesel model of conceptual memory, which can be used to explain how humans intuitively fit a hierarchical representation to any kind of data.
Acquisition of Word-Object Associations from Human-Robot and Human-Human Dialogues
- Sepideh Sadeghi, Bradley Oosterveld, Evan A. Krause, Matthias Scheutz
- Computer Science, Biology · International Conference on Robotics and…
- 1 May 2019
The expanded word-learning capabilities of the resulting system are demonstrated, showing how learning from both human-human and human-robot dialogues can be achieved in one integrated system.