Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors

Cezara Benegui and Radu Tudor Ionescu. In: International Conference on Neural Information Processing.
For the time being, mobile devices employ explicit authentication mechanisms, namely, unlock patterns, PINs or biometric-based systems such as fingerprint or face recognition. While these systems are prone to well-known attacks, the introduction of an implicit and unobtrusive authentication layer can greatly enhance security. In this study, we focus on deep learning methods for implicit authentication based on motion sensor signals. In this scenario, attackers could craft adversarial examples…
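
The adversarial-example threat sketched above can be illustrated with a minimal fast-gradient-sign (FGSM-style) attack. Everything here is a hypothetical stand-in, not the paper's actual model: a linear "genuine-user score" over a short sensor window, toy weights, and an illustrative step size `eps`.

```python
# FGSM-style sketch against a toy linear identification score.
# The model, weights, and eps are illustrative assumptions only.

def sign(v):
    return (v > 0) - (v < 0)

def score(signal, weights):
    """Hypothetical genuine-user score: score(x) = sum(w_i * x_i)."""
    return sum(w * x for w, x in zip(weights, signal))

def fgsm_perturb(signal, weights, eps):
    """For a linear score, the gradient w.r.t. the input is the weight
    vector, so stepping against the sign of each weight lowers the
    genuine-user score by the largest amount per unit of L-inf budget."""
    return [x - eps * sign(w) for x, w in zip(signal, weights)]

# toy one-axis accelerometer window and model weights
x = [0.2, -0.1, 0.4, 0.0]
w = [1.0, -0.5, 0.8, 0.3]
x_adv = fgsm_perturb(x, w, eps=0.05)
# the perturbed window scores lower as the "genuine user"
```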

Experiments on Adversarial Examples for Deep Learning Model Using Multimodal Sensors

This study discusses the possibility of attacking DNN models by hacking only a small number of sensors, and performs experiments with a human activity recognition model using three sensor devices attached to the chest, wrist, and ankle of a user, demonstrating that such attacks are feasible.

Generative adversarial attacks on motion-based continuous authentication schemes

The empirical results demonstrate that generative models cause a higher equal error rate and misclassification error in attack scenarios, proving that data samples crafted by generative models can pose a severe threat to continuous authentication schemes that use motion sensor data.
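
The equal error rate (EER) used as the metric above can be computed from lists of genuine and impostor scores. This is a generic sketch (a threshold sweep over the observed scores), not the evaluation code of any of the cited works; the example score lists are invented.

```python
def equal_error_rate(genuine, impostor):
    """Sweep thresholds over all observed scores; at each threshold,
    the false-accept rate (FAR) counts impostors scoring at or above
    it, and the false-reject rate (FRR) counts genuine users scoring
    below it. Return the average of FAR and FRR where they are closest."""
    best = None
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# invented similarity scores: higher means "more like the genuine user"
eer = equal_error_rate(genuine=[0.9, 0.8, 0.7, 0.4],
                       impostor=[0.5, 0.3, 0.2, 0.1])
```

A successful generative attack shifts impostor scores upward, which raises the EER, the effect the snippet above reports.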

Adversarial attacks and defenses on ML- and hardware-based IoT device fingerprinting and identification

This work proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification, and selects adversarial training and model distillation as defense techniques to improve the model's resilience to evasion attacks, increasing robustness without degrading performance in an impactful manner.

Data-driven behavioural biometrics for continuous and adaptive user verification using Smartphone and Smartwatch

This work blends behavioural biometrics with multi-factor authentication (MFA) by introducing a two-step user verification algorithm that verifies the user's identity using motion-based biometrics and complements MFA, making it more secure and flexible.

An Implicit Identity Authentication Method Based on Deep Connected Attention CNN for Wild Environment

This paper characterizes users' behavioural patterns through the built-in sensors of smartphones and applies them to an authentication system, focusing on a multi-class strategy for continuous implicit authentication (CIA).

Simple Black-Box Adversarial Attacks on Deep Neural Networks

This work focuses on deep convolutional neural networks and demonstrates that adversaries can easily craft adversarial examples even without any internal knowledge of the target network, and proposes schemes that could serve as a litmus test for designing robust networks.
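
As a toy illustration of such query-only attacks, the sketch below greedily perturbs one input coordinate at a time using nothing but the model's output score. The target "model" is a hypothetical stand-in, not the networks or the scheme from the cited work.

```python
def greedy_blackbox(x, query_score, eps=0.1, max_iters=20):
    """Black-box sketch: lower the model's confidence using output
    queries only. Each round, try nudging every coordinate by +/-eps
    and keep the single nudge that lowers the score the most."""
    x = list(x)
    for _ in range(max_iters):
        best_score, best_x = query_score(x), x
        for i in range(len(x)):
            for d in (eps, -eps):
                trial = x[:]
                trial[i] += d
                s = query_score(trial)
                if s < best_score:
                    best_score, best_x = s, trial
        if best_x is x:      # no nudge improved the score: stop early
            break
        x = best_x
    return x

# hypothetical target: confidence grows with the sum of the inputs
confidence = lambda v: v[0] + v[1]
adv = greedy_blackbox([1.0, 1.0], confidence, eps=0.1, max_iters=5)
```

The point matches the snippet above: no gradients or internal knowledge are needed, only repeated queries to the deployed model.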

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

This paper introduces the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while remaining adversarial; it is competitive with the best gradient-based attacks on standard computer vision tasks such as ImageNet.
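
The perturbation-reduction step described above can be sketched in a few lines. The decision oracle and starting point below are toy stand-ins; the real Boundary Attack operates on images and interleaves random orthogonal steps with steps toward the original.

```python
def boundary_attack(x_orig, x_adv, is_adversarial, steps=50):
    """Decision-based sketch: repeatedly move the adversarial point a
    little toward the original input, keeping each move only if the
    oracle still labels the result adversarial. No gradients are used,
    only the model's final decision."""
    for _ in range(steps):
        candidate = [a + 0.1 * (o - a) for a, o in zip(x_adv, x_orig)]
        if is_adversarial(candidate):
            x_adv = candidate
    return x_adv

# toy decision oracle: inputs with mean > 1.0 count as "adversarial"
oracle = lambda v: sum(v) / len(v) > 1.0
original = [0.0, 0.0]     # benign input
start = [4.0, 4.0]        # large initial adversarial perturbation
reduced = boundary_attack(original, start, oracle)
# the result stays adversarial while shrinking toward the boundary
```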

Analysis of Adversarial Attacks against CNN-based Image Forgery Detectors

The vulnerability of CNN-based image forensics methods to adversarial attacks is analyzed, considering several detectors and several types of attack, and testing performance on a wide range of common manipulations, both easy and hard to detect.

Robust Adversarial Perturbation on Deep Proposal-based Models

This paper describes a robust adversarial perturbation (R-AP) method that attacks deep proposal-based object detectors and instance segmentation algorithms, universally degrading their performance in a black-box fashion.

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

This work introduces new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees.

Continuous user identification via touch and movement behavioral biometrics

This work presents SilentSense, a framework to authenticate users silently and transparently by exploiting the dynamics of user touch-behavior biometrics and the micro-movement of the device caused by the user's screen-touch actions.

Draw It As Shown: Behavioral Pattern Lock for Mobile User Authentication

This paper proposes a novel mechanism based on the pattern lock, in which behavioural biometrics address problems of security and usability by turning the lock pattern into public knowledge rather than a secret and leveraging touch dynamics.

Performance Analysis of Motion-Sensor Behavior for User Authentication on Smartphones

The results suggest that sensory data could provide useful authentication information, and this level of performance approaches sufficiency for two-factor authentication on smartphones.

Convolutional Neural Networks for User Identification Based on Motion Sensors Represented as Images

This paper transforms the discrete 3-axis signals from the motion sensors into a gray-scale image representation which is provided as input to a convolutional neural network (CNN) that is pre-trained for multi-class user classification.
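
As a rough illustration of that idea (not the paper's exact recipe, which may repeat, stack, or order the signals differently), each axis of a sensor window can become one row of a grayscale image, with readings min-max scaled to pixel intensities:

```python
def signals_to_grayscale(axes):
    """Map each motion-sensor axis to one image row, min-max scaling
    all readings jointly into 0..255 pixel intensities, so the result
    can be fed to an image-input CNN."""
    lo = min(v for axis in axes for v in axis)
    hi = max(v for axis in axes for v in axis)
    span = (hi - lo) or 1.0   # avoid division by zero on flat signals
    return [[round(255 * (v - lo) / span) for v in axis] for axis in axes]

# invented 3-sample window from a 3-axis accelerometer
acc = [[0.0, 1.0, 2.0],   # x-axis samples
       [2.0, 2.0, 2.0],   # y-axis samples
       [0.0, 0.0, 1.0]]   # z-axis samples
img = signals_to_grayscale(acc)
```

Scaling all axes with a shared min and max preserves relative magnitudes across axes, which a per-axis normalization would discard.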

Beware, Your Hands Reveal Your Secrets!

This paper presents a new breed of side-channel attack on the PIN entry process on a smartphone, which relies entirely on the spatio-temporal dynamics of the hands during typing to decode the typed text, and is very likely to be adopted by adversaries who seek to stealthily steal sensitive private information.