Automated Essay Grading with Explainable AI for Personalized Writing Feedback Generation
Abstract
Automated Essay Grading (AEG) systems efficiently score essays but typically function as black-box models, providing numerical grades without transparent explanations or actionable feedback for student improvement. This research proposes a novel framework integrating Explainable Artificial Intelligence (XAI) techniques with transformer-based language models (BERT, RoBERTa) to generate interpretable, personalized writing feedback. The system employs the SHAP and LIME explainability frameworks to identify the specific linguistic features, argumentative structures, and coherence patterns that influence essay quality across multiple rubric dimensions. A critical innovation is aspect-based feedback generation that highlights concrete strengths, pinpoints improvement opportunities, and provides contextual recommendations tailored to individual student needs. The methodology incorporates fairness-aware protocols to mitigate algorithmic bias and ensure equitable assessment across diverse student populations. Experimental validation assesses scoring accuracy via correlation with human raters and rubric-level prediction precision, alongside qualitative evaluation of feedback quality and pedagogical value. The system addresses a fundamental gap in formative assessment by transforming automated scoring from summative evaluation into an active learning resource that supports skill development. By combining advanced machine learning with explainability and sound pedagogical principles, this research contributes to trustworthy educational technologies that enhance rather than replace human expertise in writing instruction.
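To make the explainability idea concrete, the sketch below implements the core LIME technique the abstract names: perturb word-presence masks of an essay, score each perturbation with a black-box scorer, and fit a locally weighted linear surrogate whose coefficients serve as per-word importance. The scorer here is a deliberately toy heuristic standing in for the transformer model (BERT/RoBERTa) described in the paper; all names and the kernel choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal LIME-style local explanation sketch (illustrative, not the paper's system).
import random
import numpy as np

def toy_scorer(text: str) -> float:
    """Toy stand-in for a black-box essay scorer: rewards connectives and length."""
    words = text.split()
    connectives = {"however", "therefore", "moreover", "consequently"}
    return 0.1 * sum(w.lower() in connectives for w in words) + 0.01 * len(words)

def lime_explain(text: str, scorer, n_samples: int = 500, seed: int = 0):
    """Fit a locally weighted linear surrogate over word-presence masks."""
    rng = random.Random(seed)
    words = text.split()
    X, y, w = [], [], []
    for _ in range(n_samples):
        mask = [rng.random() > 0.5 for _ in words]
        sample = " ".join(wd for wd, keep in zip(words, mask) if keep)
        X.append([float(m) for m in mask] + [1.0])  # presence features + intercept
        y.append(scorer(sample))
        # Exponential kernel: masks closer to the full essay get higher weight.
        w.append(np.exp(-(len(words) - sum(mask)) / max(len(words), 1)))
    X, y, w = np.array(X), np.array(y), np.array(w)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    # Drop the intercept; rank words by absolute contribution to the score.
    return sorted(zip(words, coef[:-1]), key=lambda p: -abs(p[1]))

essay = "The argument is clear however the evidence is weak therefore revise"
for word, weight in lime_explain(essay, toy_scorer)[:3]:
    print(f"{word}: {weight:+.3f}")
```

In the full system, the coefficients would be mapped back to rubric dimensions (e.g., coherence, argumentation) to produce the aspect-based feedback described above; SHAP would play an analogous role with game-theoretic attributions instead of a sampled linear surrogate.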