The Impact of Generative AI on Learning: A Study on Student Confidence

Web Editor

January 2, 2026

Introduction

The rise of generative AI is transforming our relationship with technology. An AI system can produce an inadequate result without being broken or misused, simply "mistaken," and we have the option to correct it, much as we would with a human.

Shifting Trust and Usage

This possibility significantly alters how we trust and employ AI compared with other technologies, where errors are more apparent. With AI systems, we must ask whether they are malfunctioning and verify the results they provide.

Impact on Education

This shift directly affects the educational sector, impacting both teachers and students. AI’s potential in education ranges from enhancing learning to being counterproductive, with the level of trust we place in it determining the outcome.

Trust and Technology: A Modified Relationship

Trust is crucial in technology use. Excessive trust can lead to dependency and vulnerability when the technology fails, while insufficient trust results in inefficient use or outright rejection. In education, the latter is less problematic than the former: skepticism encourages students to verify information, which ideally benefits their learning.

Study Findings

Nearly 80% of the 132 participating students use AI frequently or very frequently. None reported never using it for academic purposes.

Discrepancy Between Perception and Reality

More than 75% of students use unreliable methods to verify AI-provided results. About 40% don’t even perform basic actions like requesting sources. This is concerning given that over 75% admit AI tools sometimes or frequently provide inadequate responses.

Interestingly, over 90% of respondents believe they can at least occasionally identify inadequate responses, and none reported being unable to do so. At the same time, they perceive teachers as incapable of detecting such errors.

Irrational Subjectivity

Many universities promote AI use through specific training for teachers and students. However, a significant number of students distrust paid AI tools provided by universities due to privacy concerns. They fear the university could access their queries and know if they’ve misused AI.

Inadequate Use of AI

AI is helpful for completing academic tasks, but this doesn’t ensure it’s beneficial for learning. When students trust AI responses without verifying them, the learning process suffers.

Excessive trust in AI hinders its use as a learning tool, because critical thinking and metacognition, the mental processes needed to solve problems, are absent. Paradoxically, students then spend the time saved on these tasks trying to conceal their use of AI.

Key Questions and Answers

  • Q: How do students verify AI-provided information? A: Most use unreliable methods, often skipping basic verification steps like requesting sources.
  • Q: How do students perceive their ability to identify inadequate AI responses? A: Over 90% believe they can identify such responses occasionally.
  • Q: Why do some students distrust university-provided AI tools? A: Privacy concerns lead many to believe their queries could be accessed and misused.
  • Q: How does excessive trust in AI impact learning? A: It hinders AI’s use as a learning tool, as critical thinking and metacognition are absent.

This study highlights the need for further research and measures to ensure efficient AI integration in education.