Academy & Industry Research Collaboration Center (AIRCC)

Volume 11, Number 10, July 2021

A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models

  Authors

Josh Kalin¹﹐², David Noever², Matthew Ciolino² and Gerry Dozier¹, ¹Auburn University, USA, ²PeopleTec, Inc., USA

  Abstract

Machine learning models present a risk of adversarial attack when deployed in production. Quantifying the contributing factors and their uncertainties with empirical measures could help the industry assess the risk of downloading and deploying common model types. This work proposes modifying the traditional Drake Equation's formalism to estimate the number of potentially successful adversarial attacks on a deployed model. The Drake Equation, originally formulated to estimate the number of radio-capable extra-terrestrial civilizations, is famous for parameterizing uncertainty and has since been applied in many research fields beyond its original intent. While previous work has outlined methods for discovering vulnerabilities in public model architectures, the proposed equation seeks to provide a semi-quantitative benchmark for evaluating and estimating the potential risk factors for adversarial attacks.
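For context, the original Drake Equation expresses an expected count as a product of successively conditioned factors. The sketch below restates that standard form and, purely as an illustration of the analogy described in the abstract, a multiplicative estimate for successful adversarial attacks; the factor symbols in the second expression (number of deployed models, fractions of public, targeted, vulnerable, and successfully attacked models, exposure time) are hypothetical placeholders, not the terms defined in the paper.

% Original Drake Equation (standard form):
% N = expected number of detectable, radio-capable civilizations
\[
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{\ell} \cdot f_{i} \cdot f_{c} \cdot L
\]

% Illustrative adversarial-risk analogue (hypothetical placeholder factors,
% not the paper's definitions): a chain of conditional fractions that
% successively narrows deployed models down to successful attacks.
\[
N_{\mathrm{attacks}} \approx M \cdot f_{\mathrm{public}} \cdot f_{\mathrm{targeted}} \cdot f_{\mathrm{vuln}} \cdot f_{\mathrm{success}} \cdot T
\]

As in the original equation, each factor is an uncertain rate that can be bounded empirically, so the product serves as a semi-quantitative risk estimate rather than a precise prediction.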

  Keywords

Neural Networks, Machine Learning, Image Classification, Adversarial Attacks.