Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning

NESL Technical Report #: 2018-7-1

Authors:

Abstract: Recent advances in Machine Learning (ML) have profoundly changed many detection, classification, recognition and inference tasks. Given the complexity of the battle space, ML has the potential to revolutionise how Coalition Situation Understanding is synthesised and revised. However, many issues must be overcome before its widespread adoption. In this paper we consider two: interpretability and adversarial attacks. Interpretability is needed because military decision-makers must be able to justify their decisions. Adversarial attacks arise because many ML algorithms are highly sensitive to certain kinds of input perturbations. We argue that these two issues are conceptually linked, and that insights into one can provide insights into the other. We illustrate these ideas with relevant examples from the literature and our own experiments.

Publication Forum: 21st International Conference on Information Fusion (FUSION 2018)

Date: 2018-10-02

Public Document?: Yes

NESL Document?: Yes

Primary Research Area: Applications
