Preserving Integrity in AI-assisted Research

An image displaying the logos of the EU (ESF-Plus) and Saxony (SAB), which financed this project.

TU Dresden X ByteBuzzer GmbH

With the R&D project “AI Paper Maker”, we want to shape the future of research and scientific communication. We examine the current handling and perception of AI in research practice, analyse the requirements for optimal human-AI teaming and elaborate on the technological feasibility of a transparent, ethical human-AI interface that preserves scientific integrity.
Our contribution to research

AI in Research

AI is currently used in research for literature reviews, data analysis, writing and translation. Natural language processing and machine learning are applied across many disciplines, impacting research in ways that are both beneficial and concerning.

AI Potentials in Research

In research, inequalities related to experience and language persist [1]. AI has the potential to overcome these challenges by providing linguistic and argumentative improvements. The availability and accessibility of knowledge can be enhanced by the creation of customized knowledge bases.

AI use in daily research tasks has experienced a significant upswing [2]. It offers support in designing experiments, cleaning data, creating hypotheses, shaping research questions, summarising datasets and suggesting follow-up experiments. Among researchers, however, there is a demand for AI tools that act as sparring partners [3][4] rather than one-click solutions.

Due to the growing number of publications in various academic fields [5], the pressure to read large bodies of research papers has increased. Modern AI technologies can help process, filter and aggregate relevant literature and thereby reduce researchers' workload.
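
As a rough sketch of what such filtering could look like, the example below ranks paper abstracts against a research question by TF-IDF similarity (here using scikit-learn). The abstracts, the query and the relevance threshold are illustrative assumptions, not part of the AI Paper Maker system.

```python
# Minimal sketch: rank paper abstracts by relevance to a research question.
# Abstracts, query and threshold are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "paper_a": "Large language models assist researchers with literature reviews.",
    "paper_b": "A field study of pollinator decline in alpine meadows.",
    "paper_c": "Bias in training data shapes the output of language models.",
}
query = "How do language models support work with academic literature?"

# Embed the abstracts and the query in one TF-IDF space; the last row is the query.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(abstracts.values()) + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Keep only abstracts above an arbitrary relevance threshold.
for (paper_id, _), score in zip(abstracts.items(), scores):
    if score > 0.1:
        print(f"{paper_id}: relevance {score:.2f}")
```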

AI tools are increasingly used to assist with scientific writing [2]. Editing tools can improve grammar in manuscripts, especially for authors writing in a second language. Citation tools can format references and suggest relevant sources. Lastly, summarization features facilitate reading and understanding academic literature.

Biggest AI Challenges in Research

Established large language models are characterised by a significant lack of transparency with regard to their operational mechanisms and the characteristics of their training data. This black-box nature is at odds with the scientific principles of traceability and verifiability. Irreproducible statements, ideas or literature references without traceable sources can be harmful if adopted [6].

The integration of AI into research has the potential to disrupt the conventional concept of authorship. When significant sections of text are generated by an AI system, or hypotheses are proposed by one, the attribution of intellectual property becomes a salient issue [11].

While the utilisation of AI tools has the potential to reduce researchers' workload and support the creation of new ideas, there is a risk of deskilling [7]. This term refers to the potential loss of skills such as critical thinking due to overreliance on AI. Consequently, novel competencies such as the critical evaluation of AI outputs are required, while skills such as editing AI outputs are further expanded (upskilling).

While AI systems are generally regarded as neutral, they are influenced by the biases inherent in their training data. In research, this can result in the systematic favouring or ignoring of certain perspectives. This issue becomes particularly problematic when researchers use AI as a seemingly objective tool without critically considering the underlying bias structures [8][9].

Large language models have a significant drawback: they are trained on massive web datasets that often contain personal data such as names and birth dates, along with intellectual property [10]. In addition, users might enter personal data that could potentially be misused. These issues can be tackled by data minimization (using only the data necessary for a task) as well as by encryption or masking of personal data.
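
As a minimal sketch of the masking step mentioned above, the example below redacts e-mail addresses, dates and titled names from a prompt before it would leave the researcher's machine. The regular expressions and placeholder tokens are illustrative assumptions; reliable recognition of personal data in practice requires considerably more than simple pattern matching.

```python
# Minimal sketch of pre-submission masking: replace personal data with
# placeholders before a prompt is sent to an external model.
# The patterns and placeholders are illustrative assumptions only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "DATE": re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{4}\b"),        # e.g. 24.03.1991
    "NAME": re.compile(r"\b(?:Dr|Prof)\.\s+[A-Z][a-z]+\b"),    # titled names only
}

def mask_personal_data(text: str) -> str:
    """Replace every match of each pattern with a generic placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Dr. Meier (meier@example.org) collected the data on 24.03.1991."
print(mask_personal_data(prompt))
# -> [NAME] ([EMAIL]) collected the data on [DATE].
```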

How to overcome these challenges

Our Mission

... is to accelerate the use of AI-based information systems by researchers, while protecting their academic integrity and giving them greater control over their co-created work.
Our Vision
The 3 Project Goals

References

  1. Amano, T., Ramírez-Castañeda, V., Berdejo-Espinola, V., Borokini, I., Chowdhury, S., Golivets, M., González-Trujillo, J. D., Montaño-Centellas, F., Paudel, K., White, R. L., & Veríssimo, D. (2023). The manifold costs of being a non-native English speaker in science. PLOS Biology, 21(7), e3002184. https://doi.org/10.1371/journal.pbio.3002184
  2. Van Noorden, R., & Perkel, J. M. (2023). AI and science: What 1,600 researchers think. Nature, 621(7980), 672–675. https://doi.org/10.1038/d41586-023-02980-0
  3. Morris, M. R. (2023). Scientists’ Perspectives on the Potential for Generative AI in their Fields (arXiv:2304.01420). arXiv. https://doi.org/10.48550/arXiv.2304.01420
  4. Mangold, A., Gawer, L., Weinhold, S., Zietz, J., & Gawer, L. (2025). From Fragmentation to Focus: How AI Can Assist Researchers in Academic Writing. In Lecture Notes in Computer Science (pp. 55–71). https://doi.org/10.1007/978-3-031-93838-2_4
  5. Bauman, A., Lee, K. C., & Pratt, M. (2024). Understanding the Increases in Physical Activity Publications From 1985 to 2022: A Global Perspective. https://doi.org/10.1123/jpah.2024-0050
  6. Kamath, U., & Liu, J. (2021). Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning. Springer International Publishing. https://doi.org/10.1007/978-3-030-83356-5
  7. Crowston, K., & Bolici, F. (2025). Deskilling and upskilling with AI systems. Information Research: An International Electronic Journal, 30(iConf), Article iConf. https://doi.org/10.47989/ir30iConf47143
  8. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  9. Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns, 2(10). https://doi.org/10.1016/j.patter.2021.100347
  10. Kibriya, H., Khan, W. Z., Siddiqa, A. & Khan, M. K. (2024). Privacy issues in Large Language Models: A survey. Computers & Electrical Engineering, 120, 109698. https://doi.org/10.1016/j.compeleceng.2024.109698
  11. Watson, S., Brezovec, E., & Romic, J. (2025). The role of generative AI in academic and scientific authorship: An autopoietic perspective. AI & SOCIETY, 40(5), 3225–3235. https://doi.org/10.1007/s00146-024-02174-w
  12. Kobak, D., González-Márquez, R., Horvát, E.-Á., & Lause, J. (2025). Delving into ChatGPT usage in academic writing through excess vocabulary (arXiv:2406.07016). arXiv. https://doi.org/10.48550/arXiv.2406.07016

Get involved in our project!

Stay up to date, join our initiative and be one of the first to test our product, taking your AI-powered research to new levels.