Who needs to learn anymore when you can prompt your way through education? Geography assignment? Done. Speech? Polished. Understanding? Optional. As one student put it after giving a perfect AI-generated presentation: “I had no clue what I was saying.” This marks the dawn of an age of effortless output: tasks are completed faster than ever, but whether any thinking still happens along the way is increasingly in doubt. In this blog post, we explore exactly this issue: What impact does Generative AI have on individual learning and critical thinking in academic contexts, and how can we design learning processes that integrate the technology meaningfully without relying on it blindly?
Generative Artificial Intelligence (GenAI) refers to technologies that do not merely analyze or classify content but actively generate new content – and these outputs often rival or even surpass the quality of human work. Tools like OpenAI’s ChatGPT or Google DeepMind’s Gemini are becoming increasingly widespread. According to the UK regulator Ofcom, two in five children aged 7–12 were already using GenAI tools in 2023, and among those aged 13–17, the figure rose to four in five. This adoption is spreading even faster than previous digital revolutions such as social media [1]!
The use of GenAI is also growing rapidly in educational and academic contexts. More and more students use these tools to support homework, presentations, or academic writing – sometimes with, sometimes without direction from educators [2]. The potential is vast: GenAI systems are particularly praised for their creative input, especially when it comes to brainstorming topics or getting started on written assignments. The generated, often well-structured responses serve as valuable inspiration that can be expanded and deepened through independent thought. GenAI not only helps develop ideas but also assists with their linguistic expression – and it is available 24/7. Unlike teachers or lecturers, AI is constantly on hand to answer urgent questions about homework or learning material, both inside and outside the classroom. It also breaks down physical, linguistic, and infrastructural barriers, making education more accessible – especially for learners with disabilities. Various “smart learning assistance tools” are currently being developed to promote inclusive education. These help students with hearing, vision, or speech impairments, for example, to communicate effectively and participate actively in class (e.g. [3]).
In its role as a learning assistant, GenAI can take on various functions: It acts as a source of information, a facilitator of discursive or experiential learning, or a collaborative partner in problem-solving. Studies show that personalized, AI-supported feedback can significantly promote learning: It measurably improves student performance [4], particularly benefiting underperforming students or those learning a new language by assisting with writing and enhancing productivity (e.g. [5]). Such feedback can be not only textual but also visual: Dashboards, for instance, provide real-time feedback on learning activities and outcomes, helping students develop self-assessment skills. The didactic concept of “scaffolding” – targeted support for learning new content – can also be implemented via AI: The difficulty and sequence of tasks are dynamically adapted to the learner’s skill level, with support gradually reduced as confidence grows.
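To make the scaffolding idea concrete, here is a minimal sketch in Python of how an AI tutor might fade its support as a learner’s estimated mastery grows. All names and thresholds (`HINT_LEVELS`, the mastery cut-offs, the update rate) are our own illustrative assumptions, not taken from any real tutoring system:

```python
# Minimal scaffolding sketch: support fades as estimated mastery grows.
# All names and thresholds are illustrative assumptions.

HINT_LEVELS = ["worked example", "step-by-step hints", "nudge only", "no help"]

def update_mastery(mastery: float, correct: bool, rate: float = 0.2) -> float:
    """Track recent performance as an exponential moving average in [0, 1]."""
    return (1 - rate) * mastery + rate * (1.0 if correct else 0.0)

def choose_support(mastery: float) -> str:
    """Gradually withdraw support as estimated mastery increases."""
    if mastery < 0.3:
        return HINT_LEVELS[0]  # full worked example for novices
    if mastery < 0.6:
        return HINT_LEVELS[1]  # structured hints
    if mastery < 0.85:
        return HINT_LEVELS[2]  # light nudge
    return HINT_LEVELS[3]      # learner works independently

# Example: a learner who keeps answering correctly loses the scaffolding.
mastery = 0.2
for correct in [True, True, False, True, True, True]:
    mastery = update_mastery(mastery, correct)
    print(f"mastery={mastery:.2f} -> support: {choose_support(mastery)}")
```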
Many of us know the phenomenon: You read slides or study materials for an exam but retain almost nothing. GenAI can play an active role here too, stimulating constructive learning activities that increase cognitive engagement. For example, it can simulate conversations with historical figures or provide dialogues for language learning, promoting active knowledge acquisition [6]. When learners must engage with the AI by defending a thesis or constructing an argument, this can lead to deeper understanding and strengthen both subject-specific and cross-disciplinary skills [7]. Such tasks can also boost motivation by allowing space for creative and imaginative processes. Furthermore, GenAI can generate personalized, time-saving learning materials such as summaries, flashcards, quiz questions, or podcasts. Studies show that conventional written summaries are often ineffective in practice [8]; effective learning requires information to be viewed from different perspectives and anchored through relevant examples – an area where AI can offer strong support.
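To illustrate the point about personalized learning materials, the following sketch turns study notes into quiz questions using OpenAI’s Python client. The model name and prompt wording are assumptions on our part; any GenAI API would serve equally well:

```python
# Sketch: generating quiz questions from study material with an LLM.
# Model name and prompt wording are assumptions; any GenAI API would do.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def make_quiz(notes: str, n_questions: int = 5) -> str:
    """Ask the model for open-ended questions that force active recall."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute as needed
        messages=[
            {"role": "system",
             "content": "You write exam-style questions. Return questions "
                        "only, without answers, so the learner must retrieve "
                        "the material from memory."},
            {"role": "user",
             "content": f"Write {n_questions} quiz questions that approach "
                        f"these notes from different angles:\n\n{notes}"},
        ],
    )
    return response.choices[0].message.content

print(make_quiz("Photosynthesis converts light energy into chemical energy ..."))
```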
In scientific work as well, GenAI is increasingly taking on tasks once considered the sole domain of humans – for example, developing research questions, designing studies, creating data collection tools, or drafting academic texts [9]. These activities constitute knowledge work, where GenAI doesn’t just automate but also augments human capability. Offloading repetitive, cognitively simple tasks can free researchers and students to focus on conceptual and creative processes. This shift is seen as a form of “upskilling”: AI doesn’t replace humans but reorients them toward higher-value tasks [10].
Overall, GenAI holds great promise for making learning more personalized, accessible, creative, and – at best – more effective. However, realizing this potential depends on pedagogically meaningful integration and critical use. It should not be forgotten that GenAI outputs are based on algorithms and training data, which may contain biases or vary in accuracy, specificity, and pedagogical quality. Only if learners see AI not as a shortcut but as a sparring partner can it realize its full educational potential.
This brings us to the risks – especially when the use of GenAI tools is characterized not by reflection but by uncritical handling: There is growing concern that excessive use of GenAI may lead to the long-term loss of core cognitive and academic skills, with far-reaching consequences for individual learning, academic work, and societal participation.
Various studies show that both students and academics increasingly turn to AI as their primary problem-solving tool. Rather than engaging critically with content or formulating original thoughts, they opt for tools like ChatGPT – fast, efficient, seemingly error-free. But this practice not only enters ethical grey areas concerning responsibility and academic integrity [2]; it can also fundamentally undermine learning. Critical thinking skills may erode if AI is used as a substitute rather than a supplement [11]. Researchers found that while students supported by ChatGPT showed improved short-term performance, they displayed less metacognitive activity and no knowledge gains or transfer – a phenomenon referred to as “metacognitive laziness” [12].
This loss of autonomy through overreliance on GenAI tools is not just functional but also psychological. When young people repeatedly rely on GenAI to solve problems, make decisions, or seek emotional support, it can reduce their sense of self-efficacy over time and lead to “learned helplessness” [13]. Students who consult AI in all academic contexts may eventually doubt their ability to solve tasks independently [14]. This affects not only those with limited prior knowledge but also individuals with low self-confidence, who turn to seemingly “safe” AI-generated solutions to avoid making mistakes [8].
Beyond education, we observe a broader trend toward “deskilling”: Tasks that were once cognitively demanding are delegated to AI, causing skills to atrophy or never develop at all. An experiment with consultants found that while those with less experience performed better with AI support, the gap between their actual competence and their output widened [15]. Similarly, individuals with no programming knowledge could use AI to generate HTML code nearly as effectively as experienced developers [16] – a technological advance that raises the question of whether human expertise in various scientific fields will be devalued in the long term, or, more drastically: Will human experts become redundant?
The loss of skills affects not just practical abilities but also key aspects of thinking and decision-making. There is a shift in cognitive responsibilities: Rather than analyzing, synthesizing, or evaluating content independently, the focus shifts to integrating and managing AI-generated responses. This could fundamentally alter the cognitive profile of academic work. Especially concerning is the potential decline in judgment: Studies indicate that the more reliable AI systems become, the less inclined people are to critically question their outputs [17]. If we delegate thinking processes to AI along with the task of writing, we must ask how this affects the future of academic communication and education. Researchers warn of a loss of originality and reduced diversity in academic outcomes [8].
Skill loss due to increased AI use is not just an individual problem – it could become a systemic societal weakness. If fewer people build expertise independently of AI, we risk being unable to act when the technology fails. The balance between humans and machines may shift: Instead of autonomous partners, we risk becoming mere overseers who lack the understanding necessary to intervene effectively. What’s at stake goes beyond functional abilities! It touches on what makes us critically thinking, responsible, and empathetic beings. If these qualities fade, not only will education and science become poorer, but so will social interaction.
But first, take a deep breath. The potential consequences of AI-driven deskilling may sound bleak, but that’s precisely why forward-thinking solutions are essential.
A key strategy is not only to build AI competencies but also to consciously foster foundational skills that must remain intact, regardless of technological support. Critical thinking, problem-solving, and judgment must be actively practiced and nurtured. This is about more than mere “future skills”; it’s about the very substance of academic education. But first, we need a shared understanding of which competencies are indispensable and why.
We also need to strengthen autonomy in using AI. Instead of trusting AI outputs immediately, one’s own assessment should come first. Only those who know their own expertise can critically evaluate AI suggestions – and this requires not only knowledge but also confidence: Those secure in their own abilities are more likely to think critically, even when it takes more effort [8]. Universities can play a key role here by promoting domain-specific AI literacy and reflective thinking.
On the design level, AI systems could be built to challenge rather than replace critical thinking – through feedback mechanisms, transparent decision processes, or prompts that highlight cognitive biases and blind spots. Learning could be conceived as a co-designed process between humans and machines: a form of hybrid intelligence. Such systems would not only deliver results but encourage users to question, improve, and understand them. In short: They could optimize both outputs and learning processes – and actively counteract deskilling. Draft designs for how GenAI can adequately support researchers in academic writing can be found in an earlier article by researchers in our project, linked below [18].
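As one possible sketch of such a design, a tutor model can be configured to withhold finished answers and respond with counter-questions instead. The system prompt below is our own illustration of the idea, not a tested intervention, and the client call mirrors the quiz example above:

```python
# Sketch of a "sparring partner" configuration: the model is instructed to
# challenge the learner's reasoning rather than hand over a finished answer.
# The prompt wording is an illustrative assumption, not an evaluated design.
SOCRATIC_SYSTEM_PROMPT = """\
You are a study partner, not an answer machine.
- Never give the final solution outright.
- Respond to each claim with one probing counter-question.
- Point out possible cognitive biases or unstated assumptions you notice.
- Briefly state why you are asking, so your reasoning stays transparent.
- Only confirm an answer once the learner has justified it in their own words.
"""

def socratic_reply(client, user_message: str) -> str:
    """Route a learner message through the critical-thinking persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```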
We have highlighted the promising possibilities of GenAI to support learners and researchers, but these are shadowed by numerous risks and the danger of deskilling. Our focus here has been primarily on cognitive learning processes and outcomes, though emotional processes and mental health are also affected by GenAI. What is clear is that we must start thinking ahead now and develop strategies to get the most out of human-AI interaction without falling into blind trust or societal dependence on artificial intelligence. The ultimate goal: an AI-supported educational and academic world in which people are not disempowered, but enabled to remain critical, competent, and capable of action.
[1] Mansfield, K. L., Ghai, S., Hakman, T., Ballou, N., Vuorre, M., & Przybylski, A. K. (2025). From social media to artificial intelligence: Improving research on digital harms in youth. The Lancet Child & Adolescent Health, 9(3), 194–204. https://doi.org/10.1016/S2352-4642(24)00332-8
[2] Tobin, J. (2023). Educational technology: Digital innovation and AI in schools. https://lordslibrary.parliament.uk/educational-technology-digital-innovation-and-ai-in-schools/
[3] Srivastava, S., Varshney, A., Katyal, S., Kaur, R., & Gaur, V. (2021). A smart learning assistance tool for inclusive education. Journal of Intelligent & Fuzzy Systems, 40(6), 11981–11994. https://doi.org/10.3233/JIFS-210075
[4] Kochmar, E., Vu, D. D., Belfer, R., Gupta, V., Serban, I. V., & Pineau, J. (2022). Automated Data-Driven Generation of Personalized Pedagogical Interventions in Intelligent Tutoring Systems. International Journal of Artificial Intelligence in Education, 32(2), 323–349. https://doi.org/10.1007/s40593-021-00267-x
[5] Wambsganss, T., Kueng, T., Soellner, M., & Leimeister, J. M. (2021). ArgueTutor: An Adaptive Dialog-Based Learning System for Argumentation Skills. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3411764.3445781
[6] Shin, D., Kim, H., Lee, J. H., & Yang, H. (2021). Exploring the use of an artificial intelligence chatbot as second language conversation partners. Korean Journal of English Language and Linguistics, 21, 375–391.
[7] Bauer, E., Greiff, S., Graesser, A. C., Scheiter, K., & Sailer, M. (2025). Looking Beyond the Hype: Understanding the Effects of AI on Learning. Educational Psychology Review, 37(2), 45. https://doi.org/10.1007/s10648-025-10020-8
[8] Lee, H.-P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–22. https://doi.org/10.1145/3706598.3713778
[9] Van Noorden, R., & Perkel, J. M. (2023). AI and science: What 1600 researchers think. Nature, 621, 672–675. https://doi.org/10.1038/d41586-023-02980-0
[10] Crowston, K., & Bolici, F. (2025). Deskilling and upskilling with AI systems. Information Research: An International Electronic Journal, 30(iConf). https://doi.org/10.47989/ir30iConf47143
[11] Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 28. https://doi.org/10.1186/s40561-024-00316-7
[12] Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. https://doi.org/10.1111/bjet.13544
[13] Lindebaum, D. (2023). Researchers embracing ChatGPT are like turkeys voting for Christmas. Times Higher Education, May 11. https://www.researchgate.net/publication/371225532_Researchers_embracing_ChatGPT_are_like_turkeys_voting_for_Christmas
[14] Yu, Y., Liu, Y., Zhang, J., Huang, Y., & Wang, Y. (2025). Understanding Generative AI Risks for Youth: A Taxonomy Based on Empirical Data. arXiv. https://doi.org/10.48550/arXiv.2502.16383
[15] Dell’Acqua, F., McFowland III, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. SSRN Scholarly Paper 4573321. https://doi.org/10.2139/ssrn.4573321
[16] Campero, A., Vaccaro, M., Song, J., Wen, H., Almaatouq, A., & Malone, T. W. (2022). A Test for Evaluating Performance in Human-Computer Systems. arXiv. https://doi.org/10.48550/arXiv.2206.12390
[17] Dell’Acqua, F. (2024). Falling asleep at the wheel: Human/AI collaboration in a field experiment on HR recruiters. Laboratory for Innovation Science, Harvard Business School. https://www.almendron.com/tribuna/wp-content/uploads/2023/09/falling-asleep-at-the-whee.pdf
[18] Mangold, A., Gawer, L., Weinhold, S., & Zietz, J. (2025). From Fragmentation to Focus: How AI Can Assist Researchers in Academic Writing. In M. Kurosu & A. Hashizume (Eds.), Human-Computer Interaction. HCII 2025. Lecture Notes in Computer Science, vol. 15767. Springer, Cham. https://doi.org/10.1007/978-3-031-93838-2_4