Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that hold only in simple scenarios, such as limited interaction with other people or controlled lab settings. We develop methods to …
A key component of the human visual system is attentional control: the selection of which visual stimuli to attend to at any moment in time. Understanding visual attention in children could yield new insight into how the visual system develops during formative years and how attention and selection play a role in development and …
Egocentric cameras are becoming more popular, introducing increasing volumes of video in which the biases and framing of traditional photography are replaced with those of natural viewing tendencies. This paradigm enables new applications, including novel studies of social interaction and human development. Recent work has focused on identifying the camera wearer's hands …
Understanding visual attention in children could yield insight into how the visual system develops during formative years and how children's overt attention plays a role in development and learning. We are particularly interested in the role of hands and hand activities in children's visual attention. We use head-mounted cameras to collect egocentric video …
Recognizing objects in a world full of cluttered visual information is a complicated task, yet toddlers are remarkably efficient at it. In their everyday lives, toddlers constantly create learning experiences by actively manipulating objects and thus self-selecting object views for visual learning. The work in this paper is based on the hypothesis …
Recent technological advances have made lightweight, head-mounted cameras both practical and affordable, and products like Google Glass represent first attempts to bring egocentric (first-person) video to the mainstream. Interestingly, the computer vision community has only recently started to explore this new domain of egocentric vision, where …
Wearable devices are becoming part of everyday life, from first-person cameras (GoPro, Google Glass) to smart watches (Apple Watch) to activity trackers (FitBit). These devices are often equipped with advanced sensors that gather data about the wearer and the environment. These sensors enable new ways of recognizing and analyzing the wearer's everyday …
Wearable cameras are becoming practical and inexpensive, creating new applications and opportunities, including novel studies of social interaction and human development. Recent work has focused on identifying the camera wearer's hands in egocentric video, as a first step towards more complex analysis. Here we study how to disambiguate and track not only …
Work in cognitive science has shown that infants are remarkably efficient at the complex task of learning to recognize objects in a world full of visual clutter. In fact, many computer vision researchers have drawn analogies between that process and the impressive recent performance of deep learning. This connection raises the exciting potential that better …
During early visual development, the infant's body and actions both create and constrain the experiences on which the visual system grows. Evidence on early motor development suggests a bias for acting on objects with the eyes, head, trunk, hands, and object aligned at midline. Because these sensory-motor bodies structure visual input, they may also play a …