Ewuradjoa Mansa Quansah, Saint Petersburg University, Russian Federation and Teesside University, United Kingdom
As generative AI systems such as ChatGPT gain popularity, empirical analysis is essential to evaluate their capabilities. This study investigates ChatGPT's ability to perform mathematical calculations through controlled experiments. Tests involving counting numbers, computing averages, and demonstrating Excel methods reveal inconsistencies and errors, indicating a lack of true contextual understanding. While ChatGPT can provide solutions, its reasoning shows gaps compared with human cognition. The results provide concrete evidence of these deficiencies, complementing conceptual critiques. The findings caution against over-reliance on generative models for critical tasks and highlight the need to advance machine reasoning and human-AI collaboration. This analysis contributes to the AI literature by urging continued progress so that technologies like ChatGPT can be deployed safely and responsibly.
ChatGPT, artificial intelligence, AI, generative AI, large language models, experiment