Hard-Label Black-Box Adversarial Attack on Deep Electrocardiogram Classifier [Workshop paper]

NESL Technical Report #: 2020-11-1

Authors:

Abstract: By aiding the diagnosis of cardiovascular diseases (CVDs) such as arrhythmia, electrocardiograms (ECGs) have steadily improved the prospects for automated diagnosis systems in modern healthcare. Recent years have seen promising applications of deep neural networks (DNNs) in analyzing ECG data, even outperforming cardiovascular experts in identifying certain rhythm irregularities. However, DNNs have been shown to be susceptible to adversarial attacks, which intentionally compromise models by adding perturbations to their inputs. This concept also applies to DNN-based ECG classifiers, and prior works generate these adversarial attacks in a white-box setting, where the model details are exposed to the attacker. The black-box condition, in which the classification model's architecture and parameters are unknown to the attacker, remains mostly unexplored. We therefore aim to fool ECG classifiers in the black-box, hard-label setting, where, given an input, only the final predicted category is visible to the attacker. Our attack on the DNN classification model for the PhysioNet Computing in Cardiology Challenge 2017 [12] database produced ECG data sets mostly indistinguishable from those of a white-box adversarial attack on the same database. Our results demonstrate that adversarial ECG inputs can be generated effectively in this black-box setting, which raises significant concerns about deploying DNN-based ECG classifiers in security-critical systems.
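To illustrate the hard-label setting described in the abstract, the sketch below shows the core primitive such attacks rely on: with only top-label query access, an attacker can shrink a large label-flipping perturbation toward the original signal by binary search along the line between the two. This is a minimal toy sketch, not the paper's method; the `hard_label_oracle` stand-in classifier, the synthetic "ECG" arrays, and all names here are hypothetical assumptions for illustration.

```python
import numpy as np

def hard_label_oracle(x):
    # Stand-in classifier: labels a signal "abnormal" (1) when its mean
    # absolute amplitude exceeds a threshold, else "normal" (0). In the
    # paper's setting this would be the deployed DNN, queried for its
    # final predicted category only.
    return int(np.mean(np.abs(x)) > 0.5)

def binary_search_boundary(oracle, x, x_adv, tol=1e-3):
    """Shrink an adversarial example toward the original signal x while
    keeping the oracle's label flipped, using only hard-label queries."""
    target = oracle(x_adv)          # the flipped label we must preserve
    lo, hi = 0.0, 1.0               # interpolation weight toward x_adv
    while hi - lo > tol:
        mid = (lo + hi) / 2
        cand = (1 - mid) * x + mid * x_adv
        if oracle(cand) == target:
            hi = mid                # still adversarial: move closer to x
        else:
            lo = mid                # label reverted: back off toward x_adv
    return (1 - hi) * x + hi * x_adv

rng = np.random.default_rng(0)
x = 0.2 * np.ones(100)                          # toy "normal" ECG segment
x_adv0 = x + rng.uniform(0.5, 1.0, size=100)    # crude perturbation that flips the label
x_adv = binary_search_boundary(hard_label_oracle, x, x_adv0)
```

The refined `x_adv` keeps the flipped label but sits much closer to the original signal than the crude starting point, which is why hard-label attacks can produce perturbations small enough to be visually inconspicuous.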

Page (Start): 6

Page (End): 12

Page (Count): 6

Date: 2020-11-16

Publisher: ACM

Public Document?: Yes

NESL Document?: Yes

Document category: Workshop paper

Primary Research Area: Privacy, Security, and Integrity
