Volume 17, Number 2

Enhancing Malware Detection and Analysis using Deep Learning and Explainable AI (XAI)

  Authors

Samah Alajmani 1, Ebtihal Aljuaid 1, Ben Soh 2 and Raneem Y. Alyami 1, 1 Taif University, Saudi Arabia, 2 La Trobe University, Australia

  Abstract

The rising complexity of malware threats has raised significant concerns within the anti-malware community. The rapid evolution of cyber threats, particularly malware, poses one of the most serious dangers to online users because of malware's speed of propagation and capacity for self-replication, and advanced detection and analysis techniques are required to identify it reliably. Deep learning (DL), a powerful tool in the fight against malware, enables accurate classification and automated feature extraction. However, the black-box nature of DL models hinders their adoption in security-critical applications, since their decisions are difficult to understand and trust. Explainable AI (XAI) techniques enhance transparency and clarity in model decision-making, fostering a deeper understanding and building trust among cybersecurity professionals. This work introduces a new approach to identifying the behavior of modern malware by integrating deep learning with heuristic approaches and Explainable AI (XAI), specifically Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). A synthetic dataset obtained from Kaggle served to train several models, including CNN, DNN, Random Forest, and Decision Tree classifiers. The experimental results clearly indicated that the Random Forest model achieved the highest accuracy at 69.3%, whereas the CNN and DNN models delivered similar performances, with accuracy rates of 59.5% and 59.2%, respectively. Further analysis using SHAP and LIME unveiled critical features that influenced the models' decisions, thereby enhancing our understanding of AI-driven security solutions. This study effectively bridges the gap between performance and interpretability in the field of malware detection.

  Keywords

Malware detection, Deep learning, Explainable AI, Cybersecurity, Model interpretability, Artificial intelligence.