Volume 15, Number 5

Invertible Neural Network for Inference Pipeline Anomaly Detection

  Authors

Malgorzata Schwab and Ashis Biswas, University of Colorado at Denver, USA

  Abstract

This study combines research in machine learning with system engineering practices to conceptualize a paradigm for enhancing the trustworthiness of a machine learning inference pipeline. We explore the topic of reversibility in deep neural networks and apply its anomaly detection capabilities to build a framework of integrity verification checkpoints across the inference pipeline of a deployed model. We leverage previous findings and principles regarding several types of autoencoders, deep generative maximum-likelihood training, and the invertibility of neural networks to propose an improved network architecture for anomaly detection. We hypothesize and experimentally confirm that an Invertible Neural Network (INN) trained as a convolutional autoencoder is a superior alternative naturally suited to this task. The INN's remarkable ability to reconstruct data from its compressed representation and to solve inverse problems is then generalized and applied in the field of Trustworthy AI to achieve integrity verification of an inference pipeline through INN-based Trusted Neural Network (TNN) nodes placed around the mission-critical parts of the system, as well as through end-to-end outcome verification. This work aspires to enhance the robustness and reliability of applications employing artificial intelligence, which play an increasingly noticeable role in highly consequential decision-making processes across many industries and problem domains. INNs are invertible by construction and can be tractably trained in both directions simultaneously. This feature has untapped potential to improve the explainability of machine learning pipelines in support of their trustworthiness and is a topic of our current studies.
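The two properties the abstract relies on, invertibility by construction and anomaly scoring via reconstruction from a compressed representation, can be illustrated with a minimal additive coupling layer (RealNVP-style). This is a hedged sketch, not the authors' architecture: the coupling function `t` here is a fixed random linear map standing in for the trained convolutional subnetwork, and the "compression" step (zeroing part of the latent code) is a simplified stand-in for the paper's convolutional autoencoder bottleneck.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coupling function; in the paper this would be a trained
# convolutional subnetwork. A fixed random map suffices to show invertibility.
W = rng.normal(size=(2, 2))

def t(x_a):
    return np.tanh(x_a @ W)

def forward(x):
    # Additive coupling: the first half passes through unchanged,
    # the second half is shifted by a function of the first half.
    x_a, x_b = x[:, :2], x[:, 2:]
    return np.concatenate([x_a, x_b + t(x_a)], axis=1)

def inverse(y):
    # Exact inverse by construction: subtract the same shift back out.
    y_a, y_b = y[:, :2], y[:, 2:]
    return np.concatenate([y_a, y_b - t(y_a)], axis=1)

x = rng.normal(size=(4, 4))
z = forward(x)
x_rec = inverse(z)          # reconstructs x exactly, no approximation

# Anomaly-score sketch: zero out part of the latent code ("compression")
# and use the reconstruction error as a per-sample anomaly score.
z_compressed = z.copy()
z_compressed[:, 2:] = 0.0
score = np.linalg.norm(x - inverse(z_compressed), axis=1)
```

On in-distribution data a trained coupling network concentrates information in the retained latent dimensions, so low scores indicate normal inputs and high scores flag anomalies at a verification checkpoint.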

  Keywords

Invertible, Autoencoder, Anomaly, Trustworthiness