Corpus ID: 201666952

Mix & Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency

@article{Hoffer2019MixM,
  title={Mix \& Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency},
  author={E. Hoffer and B. Weinstein and Itay Hubara and Tal Ben-Nun and Torsten Hoefler and Daniel Soudry},
  journal={ArXiv},
  year={2019},
  volume={abs/1908.08986}
}
  • E. Hoffer, B. Weinstein, Itay Hubara, Tal Ben-Nun, Torsten Hoefler, Daniel Soudry
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • Convolutional neural networks (CNNs) are commonly trained using a fixed spatial image size predetermined for a given model. [...] Key Result: For example, we are able to reach 79.27% accuracy with a model evaluated at a spatial size of 288, a relative improvement of 14% over the baseline. (A minimal sketch of such mixed-size training is shown below.)
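As a rough illustration of the training scheme described in the abstract, the snippet below samples a different spatial size for each mini-batch and rescales the batch before the forward pass. This is a minimal sketch assuming PyTorch/torchvision; the size set, the sampling rule, and the ResNet-50/SGD setup are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of mixed-image-size training (not the authors' code).
# Assumes PyTorch and torchvision; sizes, model and hyperparameters are illustrative.
import random

import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet50(num_classes=1000)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# Candidate spatial sizes sampled per mini-batch (assumed, not the paper's exact set).
SIZES = [128, 160, 192, 224, 288]

def train_step(images, labels):
    """One SGD step on a batch rescaled to a randomly chosen spatial size."""
    side = random.choice(SIZES)
    # Rescale the whole batch so consecutive batches are seen at mixed resolutions.
    images = F.interpolate(images, size=(side, side), mode='bilinear', align_corners=False)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because ResNet-style models end with global average pooling, the same weights can be evaluated at a spatial size larger than the nominal training size; the reported 79.27% figure refers to evaluation at spatial size 288.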
    2 Citations
    • Feature Lenses: Plug-and-play Neural Modules for Transformation-Invariant Visual Representations
    • Resolution Switchable Networks for Runtime Efficient Image Recognition (3 citations)

    References

    SHOWING 1-10 OF 43 REFERENCES
    • Fixing the train-test resolution discrepancy (85 citations; Highly Influential)
    • Very Deep Convolutional Networks for Large-Scale Image Recognition (43,522 citations)
    • EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (1,336 citations)
    • Improved Regularization of Convolutional Neural Networks with Cutout (809 citations)
    • Rethinking the Inception Architecture for Computer Vision (9,872 citations)
    • Scale-Invariant Convolutional Neural Networks (66 citations)
    • Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour (1,401 citations; Highly Influential)
    • Scale-Invariant Recognition by Weight-Shared CNNs in Parallel (5 citations)
    • AutoAugment: Learning Augmentation Policies from Data (515 citations)