Alfrin Saldanha and Hyoil Han, Illinois State University, USA
The rapid advancement of Artificial Intelligence has driven the adoption of machine learning technologies across diverse domains, with recommender systems playing a pivotal role in delivering personalized suggestions. However, as user-centric applications become increasingly sophisticated, providing recommendations without clear explanations is no longer adequate. Explainable recommendation systems bridge this gap by enhancing transparency, user understanding, and trust through interpretable and contextually relevant explanations. These systems strive to balance high recommendation accuracy with the clarity of their explanations. This paper examines state-of-the-art models and methodologies in explainable recommendation systems, focusing on their computational underpinnings, evaluation metrics, and practical outcomes. We analyze the strengths and limitations of existing approaches and discuss opportunities for integrating innovative techniques and emerging technologies. Our study aims to advance the development of more effective, explainable recommendation systems adaptable to diverse application domains, aligning with the interdisciplinary focus of computational science.
Explainable Recommender Systems, Artificial Intelligence, Knowledge Mining, Machine Learning.