Volume 17, Number 2

Strategic Adoption of Trustworthy and Explainable AI in Organizations: Implications for Corporate Governance, Risk Management, and Firm Performance

  Authors

Dennis Farai Mahuni, University of Zimbabwe, Zimbabwe

  Abstract

The accelerated diffusion of Artificial Intelligence (AI) across industries has fundamentally reshaped corporate strategy, governance architectures, and enterprise risk management systems. While AI-driven analytics enhance predictive accuracy, operational efficiency, and strategic decision-making, growing reliance on complex and opaque algorithmic models has raised significant concerns regarding transparency, accountability, ethical compliance, and systemic risk. In response, the concept of Trustworthy AI—advanced prominently by the European Commission—and the development of Explainable AI (XAI) frameworks have emerged as critical governance imperatives for contemporary organizations. Trustworthy AI emphasizes lawfulness, ethical alignment, robustness, and human oversight, while explainability enhances the interpretability and auditability of algorithmic outputs. This paper examines the strategic adoption of trustworthy and explainable AI within organizational contexts and analyzes its implications for corporate governance, risk management, and firm performance. Empirical findings reveal a positive relationship between the adoption of trustworthy and explainable AI and corporate governance, risk management, and firm performance.

  Keywords

Trustworthy AI, Explainable AI (XAI), Corporate Governance, Enterprise Risk Management, Firm Performance