Corpus ID: 214692960

Can you hear me now? Sensitive comparisons of human and machine perception

@article{Lepori2020CanYH,
  title={Can you hear me \textit{now}? Sensitive comparisons of human and machine perception},
  author={Michael A. Lepori and Chaz Firestone},
  journal={arXiv: Audio and Speech Processing},
  year={2020}
}
The rise of sophisticated machine-recognition systems has brought with it a rise in comparisons between human and machine perception. But such comparisons face an asymmetry: Whereas machine perception of some stimulus can often be probed through direct and explicit measures, much of human perceptual knowledge is latent, incomplete, or embedded in unconscious mental processes that may not be available for explicit report. Here, we show how this asymmetry can cause such comparisons to…
1 Citation
Performance vs. competence in human–machine comparisons
  • Chaz Firestone
  • Medicine, Psychology
  • Proceedings of the National Academy of Sciences
  • 2020
TLDR
Focusing on the domain of image classification, three factors contributing to the species-fairness of human–machine comparisons are identified, drawing on recent work that equates the superficial constraints on how humans and machines can demonstrate their knowledge.

References

Showing 1–10 of 66 references
Humans can decipher adversarial images
TLDR
It is shown how humans can anticipate which objects CNNs will see in adversarial images, demonstrating that human and machine classification of adversarial images are robustly related.
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
TLDR
This paper develops effectively imperceptible audio adversarial examples by leveraging the psychoacoustic principle of auditory masking, while retaining a 100% targeted success rate on arbitrary full-sentence targets, and makes progress towards physical-world over-the-air audio adversarial examples by constructing perturbations that remain effective even after applying realistic simulated environmental distortions.
Exploring Perceptual Illusions in Deep Neural Networks
TLDR
It is found that deep neural networks trained exclusively for object recognition exhibit the Müller-Lyer illusion, but not other illusions, suggesting that some perceptual computations similar to humans’ may come “for free” in a system with perceptual goals similar to humans’.
Controversial stimuli: Pitting neural networks against each other as models of human cognition
TLDR
This work synthesized controversial stimuli (images for which different models produce distinct responses) and found that deep neural networks that model the distribution of images performed better than purely discriminative DNNs, which learn only to map images to labels.
Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems
TLDR
This paper exploits the fact that many different source audio samples have similar feature vectors when transformed by acoustic feature-extraction algorithms, together with knowledge of the signal-processing pipelines commonly used by voice processing systems (VPSes), to generate attack audio that the underlying machine-learning systems accept.
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
TLDR
This work takes convolutional neural networks trained to perform well on the ImageNet or MNIST datasets and uses evolutionary algorithms or gradient ascent to find images that the DNNs label with high confidence as belonging to each dataset class; these fooling images raise questions about the generality of DNN computer vision (a minimal sketch of the gradient-ascent idea follows the reference list).
Hidden Voice Commands
TLDR
This paper explores how voice interfaces can be attacked with hidden voice commands that are unintelligible to human listeners but are interpreted as commands by devices.
Cognition does not affect perception: Evaluating the evidence for “top-down” effects
TLDR
This work suggests that none of the hundreds of studies claiming “top-down” effects of cognition on perception – either individually or collectively – provides compelling evidence for true top-down effects, or “cognitive penetrability,” and that these studies all fall prey to only a handful of pitfalls.
Hackers easily fool artificial intelligences.
  • M. Hutson
  • Medicine, Computer Science
  • Science
  • 2018
TLDR
At the ICML meeting, adversarial attacks were a hot subject, with researchers reporting novel ways to trick AIs as well as new ways to defend them; one of the conference's two best-paper awards went to a study suggesting that protected AIs aren't as secure as their developers might think.
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
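
To make the fooling-images idea above concrete, here is a minimal, hypothetical sketch of the gradient-ascent variant: an input is optimized from random noise to maximize a pretrained classifier's confidence in an arbitrary target class. The pretrained ResNet-18, the class index, and the hyperparameters are illustrative assumptions, not the setup used in the cited paper.

# Minimal sketch (illustrative, not the cited paper's exact method):
# optimize an image from noise so that a pretrained classifier assigns
# high confidence to an arbitrary target class ("fooling image").
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

target_class = 954  # arbitrary ImageNet class index, chosen for illustration
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    # Gradient ascent on the target-class log-probability (via negative loss).
    loss = -F.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    optimizer.step()
    image.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range

with torch.no_grad():
    confidence = F.softmax(model(image), dim=1)[0, target_class].item()
print(f"Confidence in target class after ascent: {confidence:.3f}")

The result is typically an image that looks like noise or texture to a human observer yet receives high classifier confidence, which is the phenomenon the cited paper uses to question the generality of DNN vision.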