Fathom: reference workloads for modern deep learning methods

@inproceedings{Adolf2016FathomRW,
  title={Fathom: reference workloads for modern deep learning methods},
  author={Robert Adolf and Saketh Rama and Brandon Reagen and Gu-Yeon Wei and David M. Brooks},
  booktitle={2016 IEEE International Symposium on Workload Characterization (IISWC)},
  year={2016},
  pages={1--10}
}
Deep learning has been popularized by its recent successes on challenging artificial intelligence problems. One of the reasons for its dominance is also an ongoing challenge: the need for immense amounts of computational power. Hardware architects have responded by proposing a wide array of promising ideas, but to date, the majority of the work has focused on specific algorithms in somewhat narrow application domains. While their specificity does not diminish these approaches, there is a clear…

    Citations

    Publications citing this paper, showing 1-10 of 94 citations (estimated 98% coverage).

    Deep Learning for Computer Architects

    Characterizing Deep Learning Training Workloads on Alibaba-PAI (9 excerpts; cites background & methods; highly influenced)

    Deep Learning Cookbook: Recipes for your AI Infrastructure and Applications (3 excerpts; cites background & methods)

    TBD: Benchmarking and Analyzing Deep Neural Network Training (1 excerpt; cites background)

    QuTiBench: Benchmarking Neural Networks on Heterogeneous Hardware (10 excerpts; cites background & methods; highly influenced)

    Demystifying the MLPerf Benchmark Suite (2 excerpts; cites background)


    Citation Statistics

    • 12 Highly Influenced Citations

    • Averaged 27 Citations per year from 2018 through 2020

    References

    Publications referenced by this paper, showing 1-10 of 63 references.

    DaDianNao: A Machine-Learning Supercomputer (1 excerpt)

    Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators

    PuDianNao: A Polyvalent Machine Learning Accelerator (1 excerpt)

    Large Scale Distributed Deep Networks (1 excerpt)

    ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars (1 excerpt)

    CortexSuite: A synthetic brain benchmark suite (2 excerpts)

    Caffe: Convolutional Architecture for Fast Feature Embedding (6 excerpts; highly influential)