AI-Assisted Learning: A Study on Undergraduate Web Development Proficiency
Abstract
The rapid adoption of Large Language Models (LLMs) in education, particularly AI-powered content generators such as ChatGPT, has introduced significant challenges in accurately assessing student learning outcomes. This study investigates the impact of AI-generated content on student performance and the effectiveness of traditional assessment methods in a web development module. The objectives are to evaluate the influence of AI tools on students' knowledge, cognitive abilities, and creative skills, and to identify the challenges of assessing learning outcomes when students use AI tools. The study involved 450 first-year undergraduates at a private university in Sri Lanka, divided into an experimental group that used ChatGPT and a control group that did not. Both groups were tasked with creating a webpage within a limited timeframe using HTML, CSS, JavaScript, and PHP. Performance was assessed across ten attributes, including code quality, problem-solving skills, logical thinking, debugging skills, time management, and innovation. The assessment combined automated tools, such as SonarQube integrated with Jenkins, with manual evaluation methods to ensure comprehensive results. Findings indicate that the experimental group outperformed the control group, suggesting that AI tools can significantly enhance student performance. However, the study also highlights the difficulty of accurately assessing learning outcomes in the presence of AI-generated content, underscoring the need for new evaluation frameworks to differentiate between human and AI contributions.
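The abstract does not detail how the automated assessment was wired together. As a rough illustration only, SonarQube analysis is commonly triggered from a Jenkins declarative pipeline along the lines of the sketch below, assuming the SonarQube Scanner for Jenkins plugin and the sonar-scanner CLI are installed; the server name "sonarqube-server" and project key "student-webpage" are illustrative placeholders, not values taken from the study.

    pipeline {
        agent any
        stages {
            stage('SonarQube Analysis') {
                steps {
                    // "sonarqube-server" is an assumed server name configured in Jenkins
                    withSonarQubeEnv('sonarqube-server') {
                        // Run static analysis over the student's HTML/CSS/JS/PHP sources
                        sh 'sonar-scanner -Dsonar.projectKey=student-webpage -Dsonar.sources=.'
                    }
                }
            }
            stage('Quality Gate') {
                steps {
                    // Wait for SonarQube to report its quality-gate verdict;
                    // fail the build if the code does not pass
                    timeout(time: 10, unit: 'MINUTES') {
                        waitForQualityGate abortPipeline: true
                    }
                }
            }
        }
    }

Such a pipeline yields per-submission code-quality metrics (issues, duplications, maintainability ratings) that can then be combined with the manual evaluation of the remaining attributes.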