
Durham Research Online

A Good Classifier is Not Enough: A XAI Approach for Urgent Instructor-Intervention Models in MOOCs

Alrajhi, Laila, Pereira, Filipe Dwan, Cristea, Alexandra I. and Aljohani, Tahani (2022) 'A Good Classifier is Not Enough: A XAI Approach for Urgent Instructor-Intervention Models in MOOCs', in Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners' and Doctoral Consortium, Lecture Notes in Computer Science, 13356, pp. 424-427.


Deciding upon instructor intervention based on learners' comments that need an urgent response in MOOC environments is a known challenge. The best solutions proposed use automatic machine learning (ML) models to predict urgency. These are 'black boxes', whose results are opaque to humans. EXplainable artificial intelligence (XAI) aims to interpret such models, to enhance trust in artificial intelligence (AI)-based decision-making. We propose to apply XAI techniques to interpret a MOOC intervention model, by analysing learner comments. We show how pairing a good predictor with XAI results, and especially colour-coded visualisation, could be used to support instructors making decisions on urgent intervention.
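The idea of a colour-coded, per-word explanation can be illustrated with a minimal occlusion-style sketch: drop each word in turn and measure how the urgency score changes. Everything below is hypothetical for illustration only — the toy keyword scorer, the occlusion method, and the ANSI colouring are assumptions, not the model or XAI technique actually used in the paper.

```python
# Illustrative sketch only: a toy keyword-based "urgency" scorer plus a simple
# occlusion explanation (remove each word, measure the score drop), rendered
# with ANSI colour codes to mimic colour-coded visualisation for instructors.
# This is NOT the paper's model or its XAI method.

URGENT_WORDS = {"deadline": 0.9, "error": 0.7, "help": 0.6, "urgent": 1.0}

def urgency_score(words):
    """Toy scorer: mean urgency weight over the comment's words."""
    if not words:
        return 0.0
    return sum(URGENT_WORDS.get(w.lower(), 0.0) for w in words) / len(words)

def word_contributions(comment):
    """Occlusion: a word's contribution = score drop when it is removed."""
    words = comment.split()
    base = urgency_score(words)
    contribs = []
    for i, w in enumerate(words):
        reduced = words[:i] + words[i + 1:]
        contribs.append((w, base - urgency_score(reduced)))
    return contribs

def colourise(comment):
    """Highlight words that push the urgency score up (red via ANSI codes)."""
    parts = []
    for w, c in word_contributions(comment):
        parts.append(f"\033[91m{w}\033[0m" if c > 0 else w)
    return " ".join(parts)

if __name__ == "__main__":
    text = "please help the deadline is tomorrow"
    for w, c in word_contributions(text):
        print(f"{w:10s} {c:+.3f}")
    print(colourise(text))
```

In this sketch, urgency-bearing words like "help" and "deadline" get positive contributions (removing them lowers the score), while neutral words get negative ones; a real system would replace the toy scorer with the trained classifier and the occlusion loop with an established XAI technique.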

Item Type: Book chapter
Full text: Publisher-imposed embargo until 26 July 2023.
(AM) Accepted Manuscript
File format - PDF
Publisher Web site:
Publisher statement: The final authenticated version is available online at
Date accepted: No date available
Date deposited: 26 September 2022
Date of first online publication: 26 July 2022
Date first made open access: 26 July 2023
