AI-Assisted Evaluation of Handwritten Answers with a Smart Feedback Loop
Author : Mirnalini E R
Abstract : Grading handwritten answers by hand is slow, and scores can vary from one evaluator to another; some students are penalised for their handwriting or for minor grammatical slips rather than for what they know. This paper presents an AI-assisted approach to these problems. First, intelligent document processing converts the handwritten response into digital text. A transformer-based language model then measures how closely the meaning of the answer matches the expected answer, while grammar is reviewed in parallel with context awareness rather than rigid rules. Scores are derived from both semantic content and structure, and personalised feedback follows each learner's response, pointing out strengths, highlighting gaps, and guiding improvement. Fairness remains central throughout: handwriting differences matter less, and syntactic mistakes no longer block comprehension, because the system adapts instead of rejecting. Results arrive quickly rather than after weeks. When the AI errs, teachers can review and correct its grades, and their corrections feed back into the system so it improves over time. Beyond raw scores, a live dashboard surfaces patterns such as where students struggle most and how assessments perform overall. Observations from real usage show faster grading without loss of accuracy, and feedback quality improves as well. The result is a practical assistive solution for large classrooms that supports fair evaluation, with each human decision making the process more robust against future errors.
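The abstract's scoring step, where "scores come out based on both sense and structure", can be illustrated with a minimal sketch. A production system would use transformer sentence embeddings for the semantic match; here a toy bag-of-words cosine similarity stands in for that embedding, and the `w_semantic`/`w_grammar` weights and the example answers are purely illustrative assumptions, not values from the paper.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Toy semantic match: cosine similarity of bag-of-words term-frequency
    vectors. A real system would compare transformer sentence embeddings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def combined_score(student: str, reference: str, grammar_score: float,
                   w_semantic: float = 0.8, w_grammar: float = 0.2) -> float:
    """Blend meaning match with writing quality; the weights are hypothetical
    and would be tuned (or learned from teacher corrections) in practice."""
    return w_semantic * cosine_similarity(student, reference) + w_grammar * grammar_score

# Hypothetical reference answer and student response.
reference = "photosynthesis converts light energy into chemical energy"
student = "plants use photosynthesis to turn light energy into chemical energy"
score = combined_score(student, reference, grammar_score=0.9)
print(round(score, 2))
```

Because the semantic term dominates the blend, a grammatically imperfect but meaning-correct answer still scores high, which mirrors the abstract's claim that syntax mistakes should not block comprehension.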
Keywords : Artificial intelligence, handwritten evaluation, semantic similarity, human-in-the-loop, personalised feedback
Conference Name : International Conference on AI in Data Science and Deep Learning (ICIADL-26)
Conference Place : Bhopal, India
Conference Date : 15th Feb 2026