Young Children Bet on Their Numerical Skills
Children as young as 5 years old made metacognitive “bets” on their numerical discriminations in a wagering task, and metacognitive ability in the numerical domain alone predicted their school-based mathematics knowledge.
Dissociable signatures of visual salience and behavioral relevance across attentional priority maps in human cortex
- Thomas C. Sprague, Sirawaj Itthipuripat, Vy A. Vo, J. Serences
- Psychology, Biology · bioRxiv
- 2 October 2017
This work tested the hypothesis that visual salience and behavioral relevance independently impact the activation profile across retinotopically organized cortical regions by quantifying attentional priority maps measured in human brains with functional MRI while participants attended one of two differentially salient stimuli.
Spatial Tuning Shifts Increase the Discriminability and Fidelity of Population Codes in Visual Cortex
It is found that voxel receptive field (vRF) position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain, which suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information.
Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech
- Shailee Jain, Vy A. Vo, Shivangi Mahto, Amanda LeBel, Javier Turek, Alexander G. Huth
- Computer Science, Psychology · bioRxiv
- 2 October 2020
This work constructs interpretable multi-timescale representations by forcing individual units in an LSTM LM to integrate information over specific temporal scales, which allows us to explicitly and directly map the timescale of information encoded by each individual fMRI voxel.
Inverted Encoding Models Assay Population-Level Stimulus Representations, Not Single-Unit Neural Tuning
- Thomas C. Sprague, K. Adam, Joshua J. Foster, Masih Rahmati, David W. Sutterer, Vy A. Vo
- 1 May 2018
It is argued that using stimulus reconstructions to infer properties of single neurons, such as neural tuning bandwidth, is an ill-posed problem with no unambiguous solution.
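The inverted encoding model (IEM) pipeline behind this argument can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical example (the voxel count, channel basis, and noise level are all invented for illustration): fit channel-to-voxel weights on training data, then invert the fitted model to reconstruct a population-level channel response from a new activity pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 50 voxels, 8 orientation channels with
# raised-cosine tuning curves tiling 180 degrees of orientation.
n_vox, n_chan, n_train = 50, 8, 200
centers = np.arange(0, 180, 180 / n_chan)          # 0, 22.5, ..., 157.5

def channel_responses(oris):
    # orientation wraps at 180 deg, so map it onto a full cosine cycle
    d = 2 * np.pi * (np.asarray(oris)[:, None] - centers[None, :]) / 180.0
    return np.maximum(np.cos(d), 0) ** 7           # (trials, channels)

# Simulate training data under the forward model B = C @ W + noise
train_oris = rng.uniform(0, 180, n_train)
C_train = channel_responses(train_oris)
W_true = rng.normal(size=(n_chan, n_vox))
B_train = C_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))

# Step 1: estimate channel-to-voxel weights by least squares
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert the estimated model on a held-out voxel pattern to
# reconstruct channel responses -- a population-level stimulus readout
b_test = channel_responses(np.array([90.0])) @ W_true
C_hat, *_ = np.linalg.lstsq(W_hat.T, b_test.T, rcond=None)
peak = centers[np.argmax(C_hat)]                   # reconstruction peaks near 90
```

The reconstruction `C_hat` characterizes the stimulus representation carried by the whole voxel population; as the paper argues, its shape depends on the modeler's choice of basis, so reading single-neuron tuning bandwidth off it is ill-posed.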
Multi-timescale representation learning in LSTM Language Models
- Shivangi Mahto, Vy A. Vo, Javier Turek, Alexander G. Huth
- Computer Science · International Conference on Learning…
- 27 September 2020
This work constructs explicitly multi-timescale language models by manipulating the input and forget gate biases in a long short-term memory (LSTM) network, and empirically analyzes the timescale of information routed through each part of the model using word ablation experiments and forget gate visualizations.
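The bias-to-timescale relationship that makes this manipulation possible can be sketched in a few lines. This is a simplified illustration, not the paper's exact parameterization: it assumes the forget gate is saturated by its bias, so the cell state decays geometrically and a desired timescale can be inverted into a bias value (a chrono-style initialization).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# When the forget gate is dominated by its bias b_f, the cell state
# decays as c_t ~= sigmoid(b_f) * c_{t-1}, i.e. with characteristic
# timescale T ~= -1 / log(sigmoid(b_f)) steps. Inverting gives the
# bias that yields a chosen timescale.
def bias_for_timescale(T):
    p = np.exp(-1.0 / T)              # desired per-step retention
    return np.log(p / (1.0 - p))      # logit(p)

# Simulate the bias-dominated cell-state decay for a 10-step unit
b_f = bias_for_timescale(10.0)
c = 1.0
for _ in range(10):
    c = sigmoid(b_f) * c              # forget gate ~= sigmoid(b_f)
# after T steps the state has decayed to ~1/e of its initial value
```

Assigning different fixed biases to different units in this way pins each unit to a distinct integration timescale, which is the mechanism the word-ablation experiments then probe.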
Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses
- Richard J. Antonello, Javier Turek, Vy A. Vo, Alexander G. Huth
- Computer Science · Neural Information Processing Systems
- 9 June 2021
An encoder-decoder transfer learning method from computer vision is adapted to investigate the structure among 100 different feature spaces extracted from hidden representations of various networks trained on language tasks, revealing a low-dimensional structure in which language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
Multivariate Analysis of BOLD Activation Patterns Recovers Graded Depth Representations in Human Visual and Parietal Cortex
- Margaret M. Henderson, Vy A. Vo, C. Chunharas, Thomas C. Sprague, J. Serences
- 1 July 2019
This work recorded BOLD fMRI while human subjects viewed a stereoscopic random-dot sphere at various positions along the depth and horizontal axes; the stimuli were presented across a wider range of disparities than in previous neuroimaging studies, providing a validated method of recovering depth representations from retinotopic cortex.
Value-driven attentional capture enhances distractor representations in early visual cortex
- Sirawaj Itthipuripat, Vy A. Vo, Thomas C. Sprague, J. Serences
- Psychology, Biology · bioRxiv
- 4 March 2019
It is suggested that value-driven attentional capture begins with sensory modulations of distractor representations in early visual cortex: the fidelity of neural representations of task-irrelevant distractors increased when the distractors had previously been associated with a high reward.
Approximating Stacked and Bidirectional Recurrent Architectures with the Delayed Recurrent Neural Network
- Javier Turek, Shailee Jain, Vy A. Vo, M. Capotă, Alexander G. Huth, Theodore L. Willke
- Computer Science · International Conference on Machine Learning
- 30 August 2019
This work explores the delayed-RNN, a single-layer RNN with a delay between input and output, and proves that a weight-constrained version of the delayed-RNN is equivalent to a stacked RNN and that the delay gives rise to partial acausality, much like bidirectional networks.
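The core idea can be sketched with a toy single-layer RNN. This is a hypothetical illustration with made-up dimensions and random weights, not the paper's trained model: the readout at step t + d is treated as the prediction for input step t, so by readout time the hidden state has also absorbed the d inputs after t, which is the "partial acausality" the delay buys.

```python
import numpy as np

rng = np.random.default_rng(1)

def delayed_rnn(x, W_in, W_rec, W_out, delay):
    # A plain tanh RNN that reads the whole sequence, then aligns its
    # outputs so that the readout emitted at step t + delay is taken as
    # the prediction for input step t.
    T, _ = x.shape
    h = np.zeros(W_rec.shape[0])
    outs = []
    for t in range(T):
        h = np.tanh(W_in @ x[t] + W_rec @ h)
        outs.append(W_out @ h)
    # shift by the delay: drop the first `delay` readouts
    return np.stack(outs)[delay:]

# Toy dimensions, chosen only for illustration
T, d_in, d_h, d_out, delay = 12, 3, 8, 2, 2
x = rng.normal(size=(T, d_in))
W_in = 0.3 * rng.normal(size=(d_h, d_in))
W_rec = 0.3 * rng.normal(size=(d_h, d_h))
W_out = 0.3 * rng.normal(size=(d_out, d_h))
y = delayed_rnn(x, W_in, W_rec, W_out, delay)
# y[t] is the delayed prediction for input step t, informed by
# x[t+1], ..., x[t+delay] as well as the past
```

A delay of d thus plays a role analogous to depth in a stacked RNN or to the backward pass of a bidirectional network, but with a single recurrent layer.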