Detecting activities of daily living in first-person camera views

Abstract

We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in first-person camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model, and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”

DOI: 10.1109/CVPR.2012.6248010
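
To make the temporal pyramid idea from the abstract concrete, here is a minimal sketch, not the authors' code: per-frame feature vectors are pooled over the whole clip at level 0, over halves at level 1, over quarters at level 2, and the pooled segments are concatenated, mirroring how a spatial pyramid coarsens spatial correspondence. The mean pooling, number of levels, and the 50-dimensional per-frame features are illustrative assumptions, not details from the paper.

```python
import numpy as np

def temporal_pyramid(frame_features, levels=3):
    """Pool per-frame features over a temporal pyramid (illustrative sketch).

    frame_features: (T, D) array, one D-dim feature vector per frame.
    Returns the concatenation of mean-pooled features over
    1 + 2 + ... + 2^(levels-1) temporal segments, coarse to fine.
    """
    T, _ = frame_features.shape
    pooled = []
    for level in range(levels):
        n_segments = 2 ** level
        # Split the clip into equal-length temporal segments at this level.
        bounds = np.linspace(0, T, n_segments + 1).astype(int)
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            # Guard against empty segments when T < n_segments.
            segment = frame_features[lo:max(hi, lo + 1)]
            pooled.append(segment.mean(axis=0))
    return np.concatenate(pooled)

# Usage: 120 frames with hypothetical 50-dim object-occurrence histograms.
features = np.random.rand(120, 50)
descriptor = temporal_pyramid(features, levels=3)
print(descriptor.shape)  # (350,) = (1 + 2 + 4) segments * 50 dims
```

The coarse level preserves bag-of-features invariance to when things happen, while the finer levels reward activities whose sub-events occur in roughly the right order, which matters for long-scale ADLs such as making tea.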
