
AI-Supported Enhancement of Distractors in Multiple-Choice Tests: Reflections on Assessment Quality in Open Education

Authors: Belgin Boz Yüksekdağ, Nejdet Karadağ, Murat Akyıldız, Mesut Aydemir

Abstract: This study examines the improvement of non-functional distractors in multiple-choice tests (MCTs) through artificial intelligence (AI)-based methods and the effects of these improvements on exam performance. In MCTs widely used in open and distance education systems, some options remain non-functional because of low selection rates (<5%), a situation that negatively affects measurement reliability. In this context, participants of an e-certificate program were selected as the sample, and non-functional distractors were identified in a set of test items across several courses. In the methodology, the ChatGPT-4o model was employed to revise the non-functional distractors. The AI-generated distractors were evaluated by subject-matter experts in terms of content relevance, conceptual accuracy, and functionality, and additional distractors were generated for those receiving low scores. In this way, both the pedagogical validity of AI-generated distractors and the role of human expert approval in assessment processes were examined. The strengthened items were then administered in e-certificate exams, and item analysis was performed. The findings revealed that in some items the selection rates of distractors exceeded 5%, ensuring a more balanced distribution. However, the persistence of non-functional options in several questions indicated that AI-based revisions were not always sufficient; in particular, effective alternative distractors could not be generated for negatively worded items. Consequently, it was concluded that AI-supported distractor generation holds significant potential for improving the quality of assessment tools, but it is not sufficient on its own without pedagogical alignment, the reflection of common conceptual misconceptions, ethical considerations, and continuous human supervision. In this respect, the study contributes to the growing body of research on the use of AI in assessment and evaluation processes and provides insights for future studies.
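
The item-analysis step described in the abstract rests on a simple rule: a distractor chosen by fewer than 5% of examinees is treated as non-functional. The sketch below illustrates that flagging step only; the data format, function name, and threshold constant are illustrative assumptions and not the authors' implementation.

```python
# Minimal sketch (not the authors' code): flag non-functional distractors in a
# multiple-choice item using the <5% selection-rate rule described in the abstract.
from collections import Counter

NON_FUNCTIONAL_THRESHOLD = 0.05  # selection rate below 5% marks a distractor as non-functional


def flag_non_functional(responses, options, correct_option):
    """Return (distractor, selection_rate) pairs that fall below the threshold.

    responses      -- option labels chosen by examinees, e.g. ["A", "C", ...]
    options        -- all option labels for the item, e.g. ["A", "B", "C", "D"]
    correct_option -- label of the keyed answer (excluded from distractor analysis)
    """
    total = len(responses)
    counts = Counter(responses)
    flagged = []
    for option in options:
        if option == correct_option:
            continue  # only distractors are evaluated for functionality
        rate = counts.get(option, 0) / total if total else 0.0
        if rate < NON_FUNCTIONAL_THRESHOLD:
            flagged.append((option, rate))
    return flagged


# Example: option "D" is chosen by 1 of 40 examinees (2.5%), so it is flagged.
answers = ["A"] * 25 + ["B"] * 8 + ["C"] * 6 + ["D"] * 1
print(flag_non_functional(answers, ["A", "B", "C", "D"], correct_option="A"))
```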

Keywords: Open and distance education, assessment, artificial intelligence, multiple-choice tests, distractor.

Conference Name: International Conference on Distance Education Technologies and Applications (ICDETA-25)

Conference Place: Dublin, Ireland

Conference Date: 1st Dec 2025
