Adversarial Black-Box Attacks on Automatic Speech Recognition Systems Using Multi-Objective Evolutionary Optimization

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › Peer-reviewed

  • Shreya Khare
  • Rahul Aralikatte
  • Senthil Mani
Fooling deep neural networks with adversarial inputs has exposed a significant vulnerability in current state-of-the-art systems across multiple domains. Both black-box and white-box approaches have been used either to replicate the model itself or to craft examples that cause the model to fail. In this work, we propose a framework that uses multi-objective evolutionary optimization to perform both targeted and untargeted black-box attacks on Automatic Speech Recognition (ASR) systems. We apply this framework to two ASR systems, Deepspeech and Kaldi-ASR, and increase their Word Error Rate (WER) by up to 980%, indicating the potency of our approach. During untargeted and targeted attacks, the adversarial samples maintain a high acoustic similarity of 0.98 and 0.97, respectively, with the original audio.
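The abstract describes an evolutionary search with two competing objectives: degrading the victim model's transcription while keeping the perturbed audio acoustically close to the original. The following Python sketch illustrates such an untargeted black-box loop under stated assumptions: the `transcribe` callable stands in for a black-box ASR system, the weighted-sum selection is a simplification of true multi-objective (Pareto-based) selection, and all names and hyperparameters are hypothetical rather than the authors' implementation.

```python
import numpy as np

def wer(ref: str, hyp: str) -> float:
    """Word Error Rate via word-level Levenshtein distance."""
    r, h = ref.split(), hyp.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(r), len(h)] / max(len(r), 1)

def acoustic_similarity(x: np.ndarray, y: np.ndarray) -> float:
    """Normalized cross-correlation between original and perturbed waveforms."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def untargeted_attack(audio, reference, transcribe,
                      pop_size=20, generations=100, eps=0.005, sigma=0.001):
    """Evolve additive perturbations that raise WER while keeping similarity high.

    `transcribe` is an assumed black-box ASR callable: waveform -> text.
    """
    rng = np.random.default_rng(0)
    pop = [rng.uniform(-eps, eps, size=audio.shape) for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for delta in pop:
            adv = np.clip(audio + delta, -1.0, 1.0)
            # Two objectives: maximize WER against the reference transcript,
            # maximize acoustic similarity to the original audio.
            scored.append((wer(reference, transcribe(adv)),
                           acoustic_similarity(audio, adv),
                           delta))
        # Simple scalarized selection standing in for Pareto-based ranking.
        scored.sort(key=lambda t: t[0] + t[1], reverse=True)
        elite = [d for _, _, d in scored[:pop_size // 2]]
        # Refill the population by mutating elites with small Gaussian noise.
        pop = elite + [np.clip(d + rng.normal(0, sigma, size=d.shape), -eps, eps)
                       for d in elite]
    best = scored[0][2]
    return np.clip(audio + best, -1.0, 1.0)
```

A real attack would plug in a black-box ASR (e.g., Deepspeech or Kaldi-ASR) for `transcribe`, and a targeted variant would replace the WER objective with closeness to a chosen target transcription; non-dominated sorting would typically replace the weighted sum shown here.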
Original language: English
Title: Proc. Interspeech 2019
Publisher: International Speech Communication Association (ISCA)
Publication date: 15 Sep 2019
Pages: 3208-3212
DOI
Status: Published - 15 Sep 2019
Event: Interspeech 2019 - 20th Annual Conference of the International Speech Communication Association - Graz, Austria
Duration: 15 Sep 2019 - 19 Sep 2019

Conference

Conference: Interspeech 2019 - 20th Annual Conference of the International Speech Communication Association
Country: Austria
City: Graz
Period: 15/09/2019 - 19/09/2019
