… despite tremendous diversity among different liquids' material and dynamical characteristics. This proficiency suggests the brain has a sophisticated cognitive mechanism for reasoning about liquids, yet to date there has been little effort to study this mechanism quantitatively or describe it computationally. Here we find evidence that people's reasoning …
Humans demonstrate remarkable abilities to predict physical events in dynamic scenes, and to infer the physical properties of objects from static images. We propose a generative model for solving these problems of physical scene understanding from real-world videos and images. At the core of our generative model is a 3D physics engine, operating on an …
A glance at an object is often sufficient to recognize it and recover fine details of its shape and appearance, even under highly variable viewpoint and lighting conditions. How can vision be so rich, but at the same time fast? The analysis-by-synthesis approach to vision offers an account of the richness of our percepts, but it is typically considered too …
We study people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize these same objects when the objects are perceived through another modality? Can the person categorize novel objects from the same categories when …
How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory features …
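The abstract above mentions a Bayesian nonparametric model that acquires latent multisensory features. As an illustrative sketch only, and not a reconstruction of the paper's actual model, the standard Indian buffet process prior over binary object-by-feature matrices can be sampled as follows; the function name `sample_ibp` and its parameters are invented for this example:

```python
import numpy as np

def sample_ibp(num_objects, alpha, rng=None):
    """Sample a binary object-by-feature matrix from an Indian
    buffet process prior with concentration alpha. Each row is
    one object's set of latent features; the number of feature
    columns is not fixed in advance."""
    rng = np.random.default_rng(rng)
    Z = np.zeros((num_objects, 0), dtype=int)
    for n in range(num_objects):
        if Z.shape[1] > 0:
            # Object n takes an existing feature k with probability
            # m_k / (n + 1), where m_k counts previous owners of k.
            probs = Z[:n].sum(axis=0) / (n + 1)
            Z[n] = rng.random(Z.shape[1]) < probs
        # Object n also introduces Poisson(alpha / (n + 1)) new features.
        k_new = rng.poisson(alpha / (n + 1))
        if k_new > 0:
            new_cols = np.zeros((num_objects, k_new), dtype=int)
            new_cols[n] = 1
            Z = np.hstack([Z, new_cols])
    return Z

Z = sample_ibp(num_objects=10, alpha=2.0, rng=0)
```

In a full latent-feature model, a matrix like `Z` would be combined with sensory-specific likelihoods and inferred from data rather than sampled from the prior.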
Linguistic meaning has long been recognized to be highly context-dependent. Quantifiers like "many" and "some" provide a particularly clear example of context-dependence. For example, the interpretation of quantifiers requires listeners to determine the relevant domain and scale. We focus on another type of context-dependence that quantifiers share with other …
We investigate the hypothesis that multisensory representations mediate the crossmodal transfer of shape knowledge across visual and haptic modalities. In our experiment, participants rated the similarities of pairs of synthetic 3-D objects in visual, haptic, cross-modal, and multisensory settings. Our results offer two contributions. First, we provide …
Probabilistic formulations of inverse graphics have recently been proposed for a variety of 2D and 3D vision problems [15, 12, 14, 9]. These approaches represent visual elements in the form of graphics simulators that produce approximate renderings of the visual scenes. Existing approaches either model pixel data or hand-crafted intermediate representations …
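The inverse-graphics approach above treats perception as probabilistic inference over the inputs to a graphics simulator. As a minimal sketch of that idea, assuming a toy 1-D "renderer" and a Gaussian pixel likelihood (both invented for illustration, not taken from any cited system), Metropolis-Hastings sampling can recover scene parameters from an observed rendering:

```python
import numpy as np

def render(params, width=32):
    """Toy 'graphics simulator': draws a 1-D bump whose
    position and size are the latent scene parameters."""
    pos, size = params
    x = np.arange(width)
    return np.exp(-0.5 * ((x - pos) / max(size, 1e-3)) ** 2)

def infer(observed, n_steps=2000, noise=0.1, rng=None):
    """Metropolis-Hastings over scene parameters: propose a
    perturbed scene, render it, and accept or reject by comparing
    the approximate rendering to the observed image."""
    rng = np.random.default_rng(rng)
    params = np.array([16.0, 3.0])  # arbitrary starting scene

    def log_lik(p):
        diff = observed - render(p)
        return -0.5 * np.sum(diff ** 2) / noise ** 2

    ll = log_lik(params)
    for _ in range(n_steps):
        proposal = params + rng.normal(0.0, 0.5, size=2)
        ll_prop = log_lik(proposal)
        if np.log(rng.random()) < ll_prop - ll:  # MH accept rule
            params, ll = proposal, ll_prop
    return params

true_params = np.array([10.0, 2.0])
observed = render(true_params)
estimate = infer(observed, rng=0)
```

Real inverse-graphics systems replace the toy renderer with a full graphics pipeline and compare renderings to pixel data or intermediate image representations, but the inference loop has the same shape.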
People's representations of most, and arguably all, linguistic and non-linguistic categories are probabilistic. However, in linguistic theory, quantifier meanings have traditionally been defined set-theoretically in terms of categorical evaluation functions. In 4 "adaptation" experiments, we provide evidence for the alternative hypothesis that quantifiers …
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from …