Students seem to prefer teacher feedback over AI feedback

Published

17 September 2024

A new EPFL paper has found that students are cautious towards AI feedback, highlighting the complexity of integrating it into educational feedback systems.

Feedback plays a crucial role in learning, helping individuals understand and improve their performance, yet large and diverse student populations around the world often mean that providing timely, personalized feedback can be a challenge.

Recent advances in Generative Artificial Intelligence offer a potential solution to these challenges, but most existing studies primarily target technological aspects such as model accuracy and often overlook the social-emotional aspects of AI acceptance.

Now, researchers working in the Machine Learning for Education Laboratory (ML4ED), part of EPFL’s School of Computer and Communication Sciences (IC), have investigated how the identity of the feedback provider affects students’ perceptions.

In their paper AI or Human? Evaluating Student Feedback Perceptions in Higher Education, presented this week at the European Conference on Technology Enhanced Learning, the researchers describe how more than 450 EPFL students across diverse academic programs and levels evaluated personalized feedback in authentic educational settings, both before and after learning whether it came from a human or was generated by AI.

“Our research found that before students identify whether a human or AI is giving them feedback, they don’t perceive a difference in quality or in friendliness. After they found out that it was AI giving the feedback, they either lowered the score of the AI or increased the score of the human, which tells us that they do not trust the AI,” explained Professor Tanja Käser, Head of the ML4ED Laboratory.

Participants in the study were also asked to guess who had provided the feedback. In total, 274 of the 457 participants correctly identified which feedback was human and which was generated by AI. The researchers found that neither age nor gender significantly affected the accuracy of these guesses, but the type of course task did: students identified feedback as AI-generated more easily on coding projects than on short logical proof tasks.

The researchers believe that one of the key questions arising from the study is how the perception of trust in AI as a feedback provider can affect the real-world implementation of AI feedback in the classroom.

“This has important implications in learning. Good feedback will tell you what you did well, what you didn’t, and future actions you can take. If you’re less prepared to take heed of the feedback that you get because it’s from AI and you don’t trust it, you’re less likely to improve your learning as classrooms integrate more of these models,” said Tanya Nazaretsky, a Post-Doctoral Researcher in the ML4ED Lab and lead author of the paper.

Increasingly, it’s clear that AI can be very useful in education to support learning, and there is a high readiness to accept it. However, there are perceived obstacles around a lack of transparency and accountability, privacy violations, and the sources of training data.

“An important concern was the capability of AI to understand the real learning context outside its confines. A lot of students made the comment ‘the AI doesn’t know me as a person, the AI just sees what is in the system but there are other factors that are important for the learning process and the AI cannot see it’. Despite the readiness to accept AI, there is a real lack of trust and this hinders its adoption in practice,” continued Nazaretsky.

Käser says that, in hindsight, the strong preference for human over AI feedback was unexpected, but it demonstrates that much more research is needed on the acceptance and integration of AI in learning environments.

“Let’s assume the AI was perfect, we still need to show how it can be adapted and seamlessly integrated into curriculums and teaching. One key finding from this paper is that we should never forget the human element.”

AI or Human? Evaluating Student Feedback Perceptions in Higher Education has recently been nominated for best research paper at ECTEL 2024, the Nineteenth European Conference on Technology Enhanced Learning.

Thanks to Jean-Cédric Chappelier, Sacha Friedli, Olivier Lévêque, Alexander Mathis, Patrick Wang, Robert West, Akhil Arora, Jade Maï Cock, Bahar Radmehr, and Manoel Horta Ribeiro for their support in designing the study and in data collection efforts.

Further thanks to Franck Khayat, Aymeric Bacuet, Félix Rodriguez Moya, Farouk Boukil, Marc Pitteloud, Yacine Chaouch, Ali Ridha Mrad, Antoine Munier, Iris Meditz, Arthur Tabary, Ghalia Bennani, François Dumoncel, Félicien Gâche, Oussama Gabouj, Jean Porchet, Salim Boussofara, Alice Potter, and the teaching teams of the participating courses for providing the human-created feedback.

Author: Tanya Petersen

Source: EPFL