Publication Details

Title:

The Impact of AI Chatbots on the Landscape of Professional Accountancy Examination: An Experimental Study

Details:

Authors:
Nathaniel Amoah, Samuel Koranteng Fianko, Sena Dake, Kwesi Agyemang, Isaac Nyame, Osei Adjaye-Gyamfi, Isaac Kwesi Nooni, Edinam Agbemaua, Fredrick Agropah, Franklin Zuma, Light Zagllago, Derrick Delali Atiase, Robert Amponsah, Rayan Lartey.
Abstract
The potential benefits and social consequences of AI systems, especially in decision-making and job replacement, have sparked debate among educators, regulators, policymakers, researchers, and ethicists. Our experiment analyzed the performance of AI chatbots (ChatGPT-3.5, ChatGPT-4, Claude, and Gemini) on professional accounting exams relative to human test-takers, using ANOVA and paired t-tests. The chatbots were evaluated on two fronts: meeting the ICAG certification pass mark of 50% and their performance relative to human scores. Key findings revealed that untrained chatbots achieved mean scores of 79.75 (Claude), 77 (ChatGPT-4), 54.38 (ChatGPT-3.5), and 50.25 (Gemini), with significant differences between groups (p=0.029). Training improved mean scores to 79.88 (Claude), 80.25 (ChatGPT-4), 59.38 (ChatGPT-3.5), and 59 (Gemini). Compared to humans, untrained Claude and ChatGPT-4 outperformed on average (p=0.019), while trained models showed no significant difference (p=0.130). A one-tailed t-test found that trained chatbots performed significantly better than untrained ones (p=0.039). The study concludes that AI chatbots can influence professional exam outcomes, potentially eroding trust in accounting professionals. Collaboration between government, educational regulators, and AI researchers is needed to develop standards protecting exam integrity. Suggested measures include remote proctoring, prohibiting electronic devices near exam centres, and establishing legal frameworks for AI standards.
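The statistical comparisons described above (a one-way ANOVA across the four chatbots and a paired, one-tailed t-test for the training effect) can be sketched as follows. The per-attempt scores below are purely illustrative placeholders, not the study's actual data; only the shape of the analysis is taken from the abstract.

```python
from statistics import mean, stdev
from math import sqrt

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of score lists."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k, n = len(groups), len(all_scores)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def paired_t(after, before):
    """t statistic for a paired t-test (one-tailed: after > before)."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical per-attempt exam scores for the four untrained chatbots
untrained = {
    "Claude":      [78, 82, 80, 79],
    "ChatGPT-4":   [75, 79, 76, 78],
    "ChatGPT-3.5": [52, 57, 55, 53],
    "Gemini":      [48, 53, 50, 51],
}
# Hypothetical scores for the same chatbots after training
trained = {
    "Claude":      [79, 81, 80, 80],
    "ChatGPT-4":   [81, 80, 79, 81],
    "ChatGPT-3.5": [58, 61, 59, 60],
    "Gemini":      [57, 60, 59, 60],
}

# ANOVA: are mean scores equal across the four untrained chatbots?
f_stat = one_way_anova_F(list(untrained.values()))

# Paired one-tailed t-test: did training raise the same models' scores?
before = [s for scores in untrained.values() for s in scores]
after = [s for scores in trained.values() for s in scores]
t_stat = paired_t(after, before)

print(f"ANOVA F = {f_stat:.2f}")
print(f"paired t = {t_stat:.2f}")
```

In practice one would obtain the p-values from the F and t distributions (e.g. via `scipy.stats.f_oneway` and `scipy.stats.ttest_rel`); the stdlib-only sketch stops at the test statistics.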

Keywords: Chatbots, ChatGPT, Claude AI, Gemini, Generative AI, Professional Accountancy Examination, Accounting education