AI et al. or Human et al.: Academic Integrity and Intelligent Technology
13 Minute Read
25.11.2025
Aline Mangold


Nowadays, AI is spread across various domains, including automotive, finance, healthcare and education [1]. However, AI is not only permeating industry; an increasing number of researchers are also adopting it in their work [2]. While AI solutions promise to enhance productivity and accelerate the research process, they also raise various ethical questions. For example, if ChatGPT writes manuscripts, can researchers still take credit for their work? What if the AI makes mistakes? Most importantly, what will happen to human competence? In this article, we explore these questions in more depth and suggest possible solutions for researchers and AI developers.

AI and Privacy: What Happens to Sensitive Data?

Any researcher working with personal data may well have asked themselves this question: “Can I enter participant data into an AI system? What will happen to it?” The answer, of course, depends on the AI system in question. However, most AI systems powered by Large Language Models (LLMs), such as ChatGPT, are not transparent about how they process and store personal data [3]. That means sensitive data, for example from the field of medicine, could be stored by big tech companies, which is probably not something participants would have agreed to. Furthermore, your own personal data, such as your name or affiliation, might be used to fine-tune the LLM and resurface somewhere on the other end. Does that mean LLMs cannot be trusted with personal data at all? Not necessarily, because there are alternatives. Open-source models such as gpt-oss-120b [4], for example, can be hosted locally given adequate technical infrastructure: if institutions load and host LLMs on their own servers, the data never leaves the organization. When it comes to closed-source models hosted by Big Tech, though, anonymizing any personal data before it enters a prompt is a wise decision.
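To make that anonymization step concrete, here is a minimal sketch in Python. The regular expressions and placeholder labels are illustrative choices of ours, not a complete solution; robust de-identification of research data would also need named-entity recognition and manual review.

```python
import re

# Minimal redaction sketch: strip common identifiers from a prompt before
# it is sent to an externally hosted model. Illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace every match of each pattern with a generic placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Participant P-07 (anna.mueller@example.org, +49 341 1234567) reported..."
print(redact(prompt))  # Participant P-07 ([EMAIL], [PHONE]) reported...
```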

AI Is Rewriting Academia, but Who’s the Real Author?

Debates about authorship are not uncommon in academia. Once, however, these were questions settled between human authors; now AI has entered the field as a new player, and the community seems overwhelmed. What should we do with AI-generated manuscripts? Is the AI the author? Is the human? Is it co-authorship? Most organizations concerned with publication ethics agree that AI tools do not fulfil the criteria for authorship, because authorship implies accountability and the ability to deal with potential conflicts [5], which an AI cannot (yet) provide. Using AI tools to assist with writing manuscripts is usually permitted (e.g. [6, 7]). Nevertheless, this permission comes with conditions, such as disclosing which tools were used and for what purpose, and requiring human authors to retain full responsibility for the content. AI developers can support this: if journals demand more transparent disclosure of AI usage, tools should let users export detailed records of their interaction behavior. What was prompted? Which parts of the AI outputs went into the manuscript? Were changes made at a content level or purely at a linguistic level?
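What could such an interaction record look like? The following Python sketch shows one possible shape: a small helper that appends every prompt and response, together with its declared purpose, to a JSON-lines log that could later back up a disclosure statement. The format and field names are hypothetical assumptions of ours, not an existing standard.

```python
import json
from datetime import datetime, timezone

def log_interaction(logfile: str, prompt: str, response: str, purpose: str) -> None:
    """Append one AI interaction to a JSON-lines disclosure log.

    `purpose` distinguishes, e.g., "linguistic" edits from "content"
    generation, matching the distinctions journals ask authors to disclose.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "purpose": purpose,
        "prompt": prompt,
        "response": response,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical usage after each model call:
# log_interaction("ai_disclosure.jsonl", prompt, response, purpose="linguistic")
```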

Another problem related to the unreflective use of AI tools is plagiarism. LLMs work with the data contained in their training corpora and with human feedback [8], but they can also search the web and incorporate new research articles. What if an AI output is actually a quote from another research paper, reproduced without the reference? The answer is clear: as the author, you are accountable for any plagiarism. So, regardless of how coherent an AI output might appear, you should always double-check it.
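A crude first-pass check can even be automated, though it is no substitute for proper plagiarism screening. The following sketch, a hypothetical helper of our own, flags long word sequences that an AI output shares verbatim with a source text you suspect:

```python
def shared_ngrams(ai_text: str, source: str, n: int = 8) -> set[str]:
    """Return word n-grams appearing verbatim in both texts."""
    def ngrams(text: str) -> set[str]:
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(ai_text) & ngrams(source)

# Any shared run of eight or more words is worth checking for a missing citation.
suspect = "large language models are probabilistic systems trained on vast text corpora"
draft = "As prior work notes, large language models are probabilistic systems trained on vast text corpora."
print(shared_ngrams(draft, suspect))  # prints the overlapping 8-word sequences
```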

Does AI Use Equal a Decrease in Scientific Quality?

A case study investigating the use of AI to generate journal articles found that 48 out of 53 examined articles appeared to be AI-generated [9]; several received AI-detection scores of 100%. Although AI detection is not completely accurate, it is undeniable that AI-generated content is present in current publications. This raises the question of whether that is a positive or a negative development.

On the one hand, AI could enable researchers to publish important findings more quickly by reducing the time spent on tedious tasks like grammar checking and proofreading. On the other hand, AI could inflate the number of publications. Indeed, publication output has grown in recent years [10], but quantity does not necessarily equate to quality. Which characteristics of AI systems could influence paper quality? Firstly, many AI systems, including LLMs, are probabilistic [11]. They work with probabilities, which can limit reproducibility. In some areas, this variability in AI output is not a problem; in others, it is. Take, for instance, a sentence paraphraser. If the new paraphrase is well worded and the meaning remains the same, it could potentially be used in a manuscript, as long as it does not reformulate key terms that are unique to the domain and should remain unchanged. Intelligent search engines are a different case. If you want to conduct a systematic literature review, the results must be reproducible: the same search string should return the same literature. If it does not, the tool should not be used for a systematic review. The underlying issue is that probabilistic AI components are often blindly incorporated into processes that should be deterministic. AI developers need to address this by analyzing the goals and quality standards of their users’ tasks.
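For developers, one concrete lever is to pin down sampling for tasks that are meant to be deterministic. Below is a hedged sketch assuming an OpenAI-style chat completions client; note that a temperature of zero plus a fixed seed is documented only as best-effort reproducibility, not a hard guarantee.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# For a task that should behave deterministically (e.g. reformulating a
# fixed search string), suppress sampling randomness as far as the API allows.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Paraphrase: 'LLMs are probabilistic.'"}],
    temperature=0,  # greedy-like decoding instead of free sampling
    seed=42,        # request reproducible decoding where supported
)
print(response.choices[0].message.content)
```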

Another factor that could undermine the quality of AI-generated publications is hallucination. Hallucinations occur when the model produces content that is not grounded in its training data or the input, resulting in plausible-sounding but false and/or unverifiable answers [12]. The simplest safeguard is to double-check AI-generated answers and correct them manually. But why ask AI to write whole manuscript sections in the first place? Try instead prompting it in a way that improves your thought process, rather than expecting a single prompt to solve the whole problem. This way, you can be sure that your content and ideas are original and come from your own mind and the literature you are using.
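Part of that double-checking can be automated. As a minimal sketch assuming the public CrossRef REST API: extract the DOIs from an AI-generated reference list and look each one up. A failed lookup strongly suggests a fabricated reference, while a successful one still requires verifying that title and authors match what the AI claimed.

```python
import re
import requests

DOI_RE = re.compile(r'10\.\d{4,9}/[^\s"<>]+')

def verify_dois(text: str) -> dict[str, bool]:
    """Check every DOI found in `text` against the CrossRef API."""
    results = {}
    for doi in set(DOI_RE.findall(text)):
        r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        results[doi] = (r.status_code == 200)  # 404 -> DOI does not resolve
    return results

ai_output = "... see Ji et al. (2023), https://doi.org/10.18653/v1/2023.findings-emnlp.123"
print(verify_dois(ai_output))  # {'10.18653/v1/2023.findings-emnlp.123': True}
```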

What Remains after Prompting? AI and Competency Loss

As we highlighted in our article about the erosion of human skills, AI could have deskilling effects [13]. Researchers may become so reliant on AI that they neglect to develop and practice essential skills. The current generation of researchers may have learnt academic writing from scratch. But what about today’s students, who have grown up with AI always within reach? Will they be able to distinguish between good and bad academic texts? Will they be able to reflect critically on their own work? The answer lies in how schools and universities adapt to AI. If students are given tasks that encourage critical thinking and collaborative work with AI, they could reap the benefits. However, if institutions fail to adapt and simply forbid AI, students may use it secretly and without reflection. But even as a competent researcher, you should always ask yourself these questions before using AI:

  1. “Does this enhance or impede my thought process?”
  2. “Will I be able to learn from this experience?”
  3. “If there were no AI, would I be able to solve this on my own?”

Naturally, we are tempted to use AI tools for all kinds of tasks because they are accessible and simple to use, not to mention the productivity boost they promise. However, developers should not exploit this “metacognitive laziness” [14]. Instead, they should build systems that enhance and stimulate researchers’ thought processes, rather than replacing them with one-shot solutions.

Conclusion: How We Should Use AI in Academia

AI is rapidly reshaping academic work, offering powerful support while simultaneously challenging long-held principles of research integrity. As the technology spreads across privacy-sensitive fields, questions about data protection, authorship, manuscript quality, and human competency become impossible to ignore. The key message is clear: AI can be a valuable partner in academia, but only when used critically, transparently, and with full human accountability. Researchers must remain skeptical, verify AI outputs, protect sensitive data, and ensure that their own skills and judgment remain at the center of scholarly work. At the same time, developers have a responsibility to create systems that support rather than replace human thought. In an era of intelligent technology, preserving academic integrity means striking the right balance: letting AI enhance our abilities without allowing it to erode the core of scientific quality.

References

[1] Raza, M., Jahangir, Z., Riaz, M. B., Saeed, M. J. & Sattar, M. A. (2025). Industrial applications of large language models. Scientific Reports, 15(1), 13755. https://doi.org/10.1038/s41598-025-98483-1

[2] Kwon, D. (2025). Is it OK for AI to write science papers? Nature survey shows researchers are split. Nature, 641(8063), 574–578. https://doi.org/10.1038/d41586-025-01463-8

[3] Zampano, G. (2024, December 20). Italy’s privacy watchdog fines OpenAI for ChatGPT’s violations in collecting users’ personal data. AP News. https://apnews.com/article/italy-privacy-authority-openai-chatgpt-fine-6760575ae7a29a1dd22cc666f49e605f

[4] OpenAI. (2025, August 5). Introducing gpt-oss. Retrieved November 24, 2025, from https://openai.com/de-DE/index/introducing-gpt-oss/

[5] COPE Council. (2024). Authorship and AI tools. Retrieved November 24, 2025, from https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

[6] Elsevier. (n.d.). Generative AI policies for journals. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals

[7] SAGE Publications. (n.d.). Artificial intelligence policy. https://www.sagepub.com/journals/publication-ethics-policies/artificial-intelligence-policy

[8] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P. F., Leike, J. & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35. https://proceedings.neurips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html

[9] Spinellis, D. (2025). False authorship: an explorative case study around an AI-generated article published under my name. Research Integrity and Peer Review, 10(1), 8. https://doi.org/10.1186/s41073-025-00165-z

[10] Larivière, V., Haustein, S. & Mongeon, P. (2015). The Oligopoly of Academic Publishers in the Digital Era. PLoS ONE, 10(6), e0127502. https://doi.org/10.1371/journal.pone.0127502

[11] Toney, A. & Wails, R. (2025). Certain but not Probable? Differentiating Certainty from Probability in LLM Token Outputs for Probabilistic Scenarios. Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025), 51–60. https://doi.org/10.18653/v1/2025.uncertainlp-main.6

[12] Ji, Z., Yu, T., Xu, Y., Lee, N., Ishii, E. & Fung, P. (2023). Towards Mitigating LLM Hallucination via Self Reflection. Findings of the Association for Computational Linguistics: EMNLP 2023. https://doi.org/10.18653/v1/2023.findings-emnlp.123

[13] Crowston, K. & Bolici, F. (2025). Deskilling and upskilling with AI systems. Information Research: An International Electronic Journal, 30(iConf), 1009–1023. https://doi.org/10.47989/ir30iconf47143

[14] Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X. & Gašević, D. (2024). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. https://doi.org/10.1111/bjet.13544

