Corpus ID: 1904265

PKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human Action Understanding

@article{Liu2017PKUMMDAL,
  title={PKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human Action Understanding},
  author={Chunhui Liu and Yueyu Hu and Yanghao Li and Sijie Song and Jiaying Liu},
  journal={ArXiv},
  year={2017},
  volume={abs/1703.07475}
}
  • Abstract: Although many 3D human activity benchmarks have been proposed, most existing action datasets focus on action recognition in pre-segmented videos. There is a lack of standard large-scale benchmarks, especially for the currently popular data-hungry deep-learning-based methods. In this paper, we introduce a new large-scale benchmark (PKU-MMD) for continuous multi-modality 3D human action understanding; it covers a wide range of complex human activities with well-annotated information…

    Citations

    Publications citing this paper (showing 1–10 of 64 citations, estimated 87% coverage):

    • MMAct: A Large-Scale Dataset for Cross Modal Human Action Understanding (cites background)

    • NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding (cites methods)

    • Video benchmarks of human action datasets: a review (cites background; highly influenced)

    • Modality Compensation Network: Cross-Modal Adaptation for Action Recognition

    • Instance-Aware Detailed Action Labeling in Videos


    CITATION STATISTICS

    • 14 highly influenced citations

    • Averaged 18 citations per year from 2018 through 2020

    References

    Publications referenced by this paper (showing 1–10 of 60 references):

    • NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis (highly influential)

    • Berkeley MHAD: A comprehensive Multimodal Human Action Database

    • ActivityNet: A large-scale video benchmark for human activity understanding (highly influential)

    • Mining actionlet ensemble for action recognition with depth cameras

    • Learning realistic human actions from movies

    • Multimodal Multipart Learning for Action Recognition in Depth Videos

    • Watch-n-patch: Unsupervised understanding of actions and relations