Authors
Arav Agarwal¹ and Rhea Mahajan², ¹India, ²University of Jammu, India
Abstract
This paper examines the interpretability of generative Artificial Intelligence (AI) models, focusing on transformers such as GPT (Generative Pre-trained Transformer). Despite their remarkable capabilities, these models pose challenges for interpretability and accountability owing to their complex architectures and vast training data. The paper investigates the importance of words within a corpus using sensitivity analysis: attention weights are used to measure the impact of individual words on the model's predictions. A novel approach is proposed to rank word importance by leveraging attention weights and conducting sensitivity analysis across the dataset. To quantify discrepancies between model-generated outputs and the ground truth, the Kullback-Leibler (KL) divergence is employed; this measure evaluates how well the model captures the underlying distribution of words in the corpus. By integrating KL divergence into the sensitivity analysis, the study aims to provide a more comprehensive understanding of word importance.
Keywords
Artificial Intelligence; Kullback-Leibler (KL) divergence; Generative Pre-trained Transformer
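
As a minimal illustration of the approach summarized in the abstract, the sketch below scores tokens by the attention they receive in a pretrained GPT model and measures the KL divergence between the model's next-token distribution and the observed tokens. The choice of the "gpt2" checkpoint, the pooling over layers and heads, and the toy sentence are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: attention-weight word importance + KL divergence.
# Model name, pooling scheme, and input text are illustrative assumptions.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

text = "Generative transformers learn the distribution of words in a corpus."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple of (batch, heads, seq, seq) tensors, one per
# layer. Average over layers and heads, then sum over query positions so each
# token is scored by how much attention it receives (one proxy for importance).
attn = torch.stack(outputs.attentions)                 # (layers, batch, heads, seq, seq)
scores = attn.mean(dim=(0, 2)).squeeze(0).sum(dim=0)   # (seq,)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
ranking = sorted(zip(tokens, scores.tolist()), key=lambda p: -p[1])
print("Top tokens by received attention:", ranking[:5])

# KL divergence between a one-hot "ground truth" (the token actually observed
# next) and the model's predicted next-token distribution at each position.
# With a one-hot true distribution, KL(p_true || p_model) reduces to the
# negative log-probability of the observed token.
log_probs = torch.log_softmax(outputs.logits[0, :-1], dim=-1)  # (seq-1, vocab)
targets = inputs["input_ids"][0, 1:]                           # (seq-1,)
kl = -log_probs[torch.arange(targets.numel()), targets].mean()
print(f"Mean KL divergence to one-hot ground truth: {kl.item():.3f}")
```

Pooling attention uniformly over layers and heads is only one of several plausible aggregation choices; per-layer or rollout-style aggregation would rank tokens differently, and the paper's own sensitivity analysis may use a different scheme.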