Humans can decipher adversarial images

@article{Zhou2019HumansCD,
  title={Humans can decipher adversarial images},
  author={Zhenglong Zhou and Chaz Firestone},
  journal={Nature Communications},
  year={2019},
  volume={10}
}
  • Zhenglong Zhou, Chaz Firestone
  • Published 2019
  • Computer Science, Biology, Medicine
  • Nature Communications
  • Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; but beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are “fooled” by adversarial examples—nonsense patterns that machines recognize as familiar objects, or seemingly…
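
As a hedged illustration of the kind of adversarial image the abstract describes, the sketch below applies the fast gradient sign method (FGSM), one widely used way of perturbing an input in the direction that most increases a classifier's loss. This is not necessarily the stimulus-generation procedure used in the paper itself; the model choice (torchvision's resnet18), the epsilon value, and the random stand-in image are illustrative assumptions, and the weights API presumes torchvision >= 0.13.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` nudged by epsilon * sign(dLoss/dImage)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss w.r.t. the given label
    loss.backward()                                # gradient flows back to the input pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()    # keep pixel values in a valid range

if __name__ == "__main__":
    # Illustrative setup: a pretrained ImageNet classifier and a random stand-in image.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)        # stand-in for a real photograph
    y = model(x).argmax(dim=1)            # treat the model's own prediction as the label
    x_adv = fgsm_perturb(model, x, y)
    print("clean prediction:    ", model(x).argmax(dim=1).item())
    print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```

With a real photograph and a suitable epsilon, the perturbed prediction often differs from the clean one even though the two images look alike to a human observer, which is the divergence between human and machine judgments the paper examines.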


    Citations

    Publications citing this paper (30 total; selected entries shown below):

    • What do adversarial images tell us about human vision? (cites background and methods; highly influenced)

    • FIERS USING ELEMENTS OF HUMAN VISUAL COGNITION (2019; cites background)

    • A Surprising Density of Illusionable Natural Speech (cites background)

    Citation Statistics

    • 3 Highly Influenced Citations

    • Averaged 15 Citations per year from 2019 through 2020

    References

    Publications referenced by this paper (51 total; selected entries shown below):

    • Deep neural networks are easily fooled: High confidence predictions for unrecognizable images

    • The Limitations of Deep Learning in Adversarial Settings

    • Robust Physical-World Attacks on Deep Learning Visual Classification