AI-Generated Questions in Context: A Contextualized Investigation Using Platform Data, Student Feedback, and Faculty Observations

Published online: May 13, 2025. DOI: https://doi.org/10.24138/jcomss-2024-0120
Authors:
Rachel Van Campenhout, Benny G. Johnson, Michelle Clark, Melissa Deininger, Shannon Harper, Kelly Odenweller, Erin Wilgenbusch

Abstract

In recent years, artificial intelligence has been leveraged to develop an automatic question generation (AQG) system that places formative practice questions alongside textbook content in an e-reader platform. Engaging with formative practice while reading is a highly effective learning strategy, and AQG made it possible to scale this method to thousands of textbooks and millions of students for free. Previous research used aggregated data from all questions answered by all students to complete the largest evaluation to date of performance metrics for automatically generated questions. However, those studies also indicated that student behavior and question performance metrics would differ when the questions were assigned in a classroom setting. In this study, we evaluate data collected from 19 course sections taught by four faculty members at Iowa State University to gain a broader understanding of how students engage with these AI-generated practice questions when they are part of university courses. Implementation strategies for the courses, student engagement, and question performance metrics are analyzed, and student feedback gathered from surveys and course evaluations is presented. Implications for further use in higher education classrooms are discussed.

Keywords

automatic question generation, performance metrics, question difficulty, persistence, natural learning context, student behavior
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.