Acoustic Emanation of Haptics as a Side-Channel for Gesture-Typing Attacks

@article{Roscoe2020AcousticEO,
  title={Acoustic Emanation of Haptics as a Side-Channel for Gesture-Typing Attacks},
  author={Jonathan Francis Roscoe and Max Smith-Creasey},
  journal={2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security)},
  year={2020},
  pages={1-4}
}
  • J. Roscoe, M. Smith-Creasey
  • Published 1 June 2020
  • Computer Science
  • 2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security)
In this paper, we show that analysis of acoustic emanations recorded from haptic feedback during gesture-typing sessions is a viable side-channel for carrying out eavesdropping attacks against mobile devices. The proposed approach relies on the acoustic emanation resulting from haptic events, namely the buzz of a small vibration motor as the finger initiates the gesture-typing of a word in a sentence. By analysing the time between haptic feedback events, it is possible to identify the text that a user…
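
A minimal sketch of the timing idea described in the abstract (an illustration only, not the authors' implementation; the buzz timestamps and per-word duration model below are invented) might look like:

```python
# Sketch: infer candidate words from the gaps between haptic buzz events.
# All timestamps and durations are illustrative assumptions.

def inter_event_intervals(timestamps):
    """Gaps (seconds) between consecutive haptic feedback events."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def rank_candidates(interval, duration_model):
    """Rank candidate words by how closely their expected gesture
    duration matches an observed inter-buzz interval."""
    return sorted(duration_model, key=lambda w: abs(duration_model[w] - interval))

# Hypothetical model: mean gesture-typing duration per word (seconds).
duration_model = {"the": 0.30, "password": 0.95, "hello": 0.55, "secret": 0.80}

buzz_times = [0.00, 0.31, 1.28, 1.85]  # detected haptic onsets (made up)
for gap in inter_event_intervals(buzz_times):
    print(round(gap, 2), rank_candidates(gap, duration_model)[:2])
```

In practice the buzz onsets would first have to be detected in the audio recording; here they are simply given.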

Citations

Unconventional Mechanisms for Biometric Data Acquisition via Side-Channels

The proliferation of household smart devices is discussed and the literature is reviewed to explore whether the implementation characteristics of such systems may provide avenues of attack to obtain private biometric data.

Discerning User Activity in Extended Reality Through Side-Channel Accelerometer Observations

This pilot paper explores how malicious actors may be able to eavesdrop on a virtual reality session by tracking the physical movements of a user with a third-party accelerometer attached to the user.

References

SurfingAttack: Interactive Hidden Attack on Voice Assistants Using Ultrasonic Guided Waves

A new attack called SurfingAttack enables multiple rounds of interaction between the voice-controlled device and the attacker over a longer distance and without the need for line-of-sight, enabling new attack scenarios such as hijacking a mobile Short Message Service passcode and making ghost fraud calls without the owner's knowledge.

Keyboard acoustic emanations revisited

An attack is presented that takes as input a 10-minute sound recording of a user typing English text on a keyboard and recovers up to 96% of the typed characters, using the statistical constraints of the underlying content (the English language) to reconstruct text from sound recordings without knowing the corresponding clear text.
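
As a toy sketch of how language statistics constrain noisy acoustic guesses (not the paper's actual method; the per-keystroke letter probabilities and dictionary below are invented), one can score dictionary words by the likelihood of their letters under a classifier's per-keystroke output:

```python
# Toy sketch: pick the dictionary word best explained by noisy
# per-keystroke letter probabilities from an acoustic classifier.
import math

def score(word, keystroke_probs):
    """Log-likelihood of a word given per-keystroke letter distributions."""
    if len(word) != len(keystroke_probs):
        return float("-inf")
    return sum(math.log(p.get(ch, 1e-6)) for ch, p in zip(word, keystroke_probs))

keystroke_probs = [            # made-up acoustic classifier output
    {"c": 0.5, "v": 0.3, "x": 0.2},
    {"a": 0.6, "s": 0.4},
    {"t": 0.7, "r": 0.3},
]
dictionary = ["cat", "var", "vat", "car"]
print(max(dictionary, key=lambda w: score(w, keystroke_probs)))  # -> "cat"
```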

Compromising Reflections-or-How to Read LCD Monitors around the Corner

This work presents a novel eavesdropping technique that exploits reflections of the screen's optical emanations in various objects that one commonly finds in close proximity to the screen and uses those reflections to recover the original screen content.

Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards

This study demonstrates a new way in which system-wide resources can be a threat to user privacy and concludes that real-time interrupt information should be made inaccessible via a tighter SELinux policy in the next Android version.

Hard Drive of Hearing: Disks that Eavesdrop with a Synthesized Microphone

This research demonstrates that the mechanical components in magnetic hard disk drives behave as microphones with sufficient precision to extract and parse human speech.

Keyboard acoustic emanations

We show that PC keyboards, notebook keyboards, telephone and ATM pads are vulnerable to attacks based on differentiating the sound emanated by different keys. Our attack employs a neural network to recognize the key being pressed.

When AES blinks: introducing optical side channel

It turns out that, for an outdated and unprotected 0.8 µm PIC16F84A microcontroller, it is possible to recover the AES secret key directly during the initial AddRoundKey operation, as the optical side channel can distinguish the individual key bits being XORed with the plaintext.

Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home)

  • V. Kepuska, G. Bohouta
  • Computer Science
  • 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC)
  • 2018
Multi-modal dialogue systems, which process two or more combined user input modes such as speech, image, video, touch, manual gestures, gaze, and head and body movement, are used to design the proposed next-generation VPA model.

Recognition of isolated musical patterns using Context Dependent Dynamic Time Warping

This paper presents an efficient method for recognizing isolated musical patterns in a monophonic environment, using a novel extension of Dynamic Time Warping, which we call Context Dependent Dynamic Time Warping.
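
Since several of the works above match observed timing or signal traces against reference templates, a minimal sketch of classic Dynamic Time Warping may help for context; the context-dependent extension introduced in this reference is not reproduced here, and the sequences below are illustrative:

```python
# Minimal classic DTW: aligns two sequences that may be locally
# stretched or compressed in time, returning the alignment cost.

def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

print(dtw([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0: same shape, different tempo
```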