AI in Academic Writing: 5 Things to Keep in Mind
4 Minute Read
Juliane Zietz
August 20, 2024
AI can help in academic writing

Generative AI and explainable AI (XAI) are having a huge impact on numerous fields by enabling fast, well-supported text generation. For researchers and students, AI tools are particularly valuable for academic writing, whether it is a paper, a thesis, or other academic work. Saving time and effort is an attractive benefit. However, currently available generative AI models are not perfect and come with potential pitfalls and challenges. To use AI effectively and produce accurate scientific content, it is necessary to remain aware of these issues.

To identify the most crucial problems with currently available AI tools, we tested several of them, such as ChatGPT, Humata, and Typeset. These tools can read PDF files and analyze them to answer questions, aid in literature reviews, and clarify complex research questions. Each tool differs slightly in its functionality, e.g., whether it explains its output (XAI), whether you can upload PDFs yourself, and whether the focus is primarily on academic writing. Overall, however, we found five significant challenges when using these tools for academic writing:

  1. Errors and Irrelevance
  2. Hallucination of Proofs
  3. Metadata Handling
  4. Image and Table Information Extraction
  5. Meaning Distortion

Let’s dive right in.

1. AI Output: Risk of Errors and Irrelevance

Generative AI models like ChatGPT are built on Large Language Models (LLMs) designed to generate text using statistical calculations. These models predict the next word in a sentence based on patterns learned from diverse datasets such as books, articles, and academic papers. As a result, LLMs show great promise for aiding various fields, including academic writing. However, generative AI primarily produces plausible text, which may not always be accurate or fact-based. This can result in outputs that are misleading, contain partial truths, or lack relevant information, posing significant risks in academic contexts where precision and reliability are crucial.
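
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of next-word prediction using a toy bigram model (all words and counts are made up). Real LLMs use neural networks trained on vast corpora, but the underlying idea is the same: the next word is chosen by learned statistics, not by checking facts.

```python
import random

# Toy bigram model: for each word, the observed counts of words that
# followed it. Real LLMs learn far richer distributions from huge corpora.
bigram_counts = {
    "the": {"results": 5, "method": 3, "authors": 2},
    "results": {"show": 6, "suggest": 4},
    "show": {"that": 9, "a": 1},
}

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = bigram_counts.get(word)
    if candidates is None:
        return "<unknown>"
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate a short continuation: it sounds plausible, but nothing in this
# process verifies whether the resulting statement is true.
word, sentence = "the", ["the"]
for _ in range(3):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the results show that"
```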

For instance, if you have found a research gap and use AI tools for your literature overview, this might yield misleading or unrelated information, or only partially correct answers. This is where XAI can be of great help: by highlighting the sources and basis of AI-generated content, such as marked passages in PDF documents, citations, or outbound links, you can more easily double-check and verify the AI-generated answers. However, don’t forget to also check the XAI output itself! Sometimes it can lead to more misinformation or overtrust (see the next problem).

2. XAI: Hallucination of Proofs

XAI methods aim to make AI’s decision-making process more transparent (learn more about XAI with our learning resources). However, in academic writing, generative AI can sometimes fabricate citations or references that do not exist. This is particularly problematic in literature reviews, where citations are crucial for supporting arguments and providing evidence. Additionally, methods that highlight paragraphs within PDFs often produce suboptimal markings, such as highlighting an abstract or an entire page rather than the specific relevant sections. So be aware that even if the AI output seems to be supported by markings, external links, or citations, it must still be verified. Use trusted databases like PubMed, Google Scholar, and institutional access to journals to confirm the existence and accuracy of references. If in doubt, exclude any suspicious citations!
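
One practical safeguard that complements manual database searches: if a reference comes with a DOI, you can check programmatically whether that DOI is registered at all. Below is a minimal sketch using Python’s requests library and the public Crossref REST API; the DOI shown is a placeholder. Keep in mind that a registered DOI only proves the work exists, not that it actually supports the AI’s claim.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered with Crossref (HTTP 200)."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Placeholder DOI; substitute one taken from an AI-generated reference.
suspicious_doi = "10.1000/example.doi"
if not doi_exists(suspicious_doi):
    print(f"DOI {suspicious_doi} not found -- verify or drop this citation.")
```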

3. AI Inefficiencies in Metadata Handling

One big generative AI problem we discovered during our testing sessions is the handling of metadata. When multiple papers are uploaded to AI tools for analysis or summary, metadata such as author names, titles, and journals is often handled inefficiently. Many AI systems struggle to extract this information accurately, leading to incomplete or incorrect outputs. XAI shows the same problem: when we asked about the author of a paper, nothing was highlighted within the PDF.

Poor metadata handling can cause problems in academic writing, especially in literature research. Also be aware of this when you need to verify specific information from a paper: always check that the AI-generated output is based on the correct source, as it may inadvertently cite unrelated or incorrect information.
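
Before blaming the tool alone, it is worth checking what metadata a PDF actually contains, since these fields are optional and often empty or wrong in practice. Here is a minimal sketch using the pypdf package (the file name is a placeholder):

```python
from pypdf import PdfReader

# Placeholder file name; point this at one of your uploaded papers.
reader = PdfReader("paper.pdf")
meta = reader.metadata  # may be None if the PDF carries no metadata

# Optional fields like title and author are frequently missing or wrong,
# which partly explains why AI tools mis-attribute papers.
print("Title: ", meta.title if meta else None)
print("Author:", meta.author if meta else None)
```

If these fields are empty, an AI tool has to guess the author and title from the page text, which is exactly where the errors we observed crept in.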

4. Limitations in Image and Table Information Extraction

In academic papers, crucial information is often found in tables, graphs, and images. However, generative AI models and PDF analysis tools predominantly focus on text, so data presented in these visual elements is frequently overlooked. In our tests, when an AI summarized a paper containing significant graphical data, it often referred only to the captions and missed the critical context provided by the figures themselves. This becomes apparent when using XAI that highlights paragraphs in the text: often, only the captions are marked.

The ability of AI tools to accurately extract and interpret figures and tables varies greatly, so it’s essential to ensure that the tools you use are capable of handling more than just text. However, even if the tool claims to extract information from visuals, manually reviewing the visual content in your paper is always a good practice.
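
One way to review this yourself is to inspect what a text-centric parser actually recovers from your paper’s tables. Here is a minimal sketch using the pdfplumber package (the file name is a placeholder); rows that come out empty or scrambled here will very likely be invisible to a text-based AI tool as well:

```python
import pdfplumber

# Placeholder file name; use a paper whose tables matter for your review.
with pdfplumber.open("paper.pdf") as pdf:
    for page_number, page in enumerate(pdf.pages, start=1):
        for table in page.extract_tables():
            print(f"Table on page {page_number}:")
            for row in table:
                print(row)  # empty or garbled rows signal extraction trouble
```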

Additionally, if your focus is on academic writing, don’t forget to include appropriate graphics and tables when your topic allows it. While generative AI excels at producing text, it does not yet create reliable scientific graphs and figures. Including well-crafted visuals can enhance the clarity and impact of your academic work.

5. Distortion of Original Meaning

As mentioned above, generative AI models, including LLMs like ChatGPT, operate on statistical calculations. This approach can sometimes distort the original meaning of sentences, especially in complex or nuanced academic discussions. But even straightforward associations between variables can be misrepresented, for example when the AI mixes up predicted words and reverses the direction of an effect. To ensure that AI-generated summaries or explanations preserve the original intent, it is crucial to review and edit them carefully. When in doubt, refer back to the original text for clarification; XAI can help you find the relevant passage in a paper quickly.
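
When you need to trace a suspicious AI claim back to the source without XAI support, even a rough lexical search can speed things up. The sketch below uses Python’s standard difflib module to find the source sentence most similar to a claim; the claim and sentences are invented for illustration, and this is a surface-level string match, not a semantic check, so the final judgment stays with you.

```python
import difflib

# Invented example: an AI-generated claim and candidate source sentences.
claim = "Caffeine intake reduces reaction time in all age groups."
source_sentences = [
    "Caffeine intake reduced reaction time in young adults only.",
    "No effect was observed in participants over 60.",
]

# Pick the source sentence with the highest character-level similarity.
best = max(source_sentences,
           key=lambda s: difflib.SequenceMatcher(None, claim, s).ratio())
print("Closest source sentence:", best)
# Comparing the two reveals the distortion: "all age groups" vs. "young adults only".
```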

Current Generative AI Tools: Conclusion

So, what have we learned so far? Generative AI and XAI can be powerful aids for academic writing, especially for non-native English speakers. AI tools can save a lot of time and effort and reduce uncertainty about language correctness. However, it is crucial to be aware of the potential pitfalls and problems with AI described above. While we have listed five major challenges, there may well be more.

Overall, we recommend the following:

  • always double-check outputs
  • do not overtrust the explanations of XAI
  • be aware of metadata handling issues
  • keep in mind limitations in graphical data extraction
  • ensure the original meaning is preserved

By taking these steps, you can avoid mistakes in your academic writing and misinterpretations of complex scientific topics in literature reviews. If you use AI tools responsibly and mindfully, your academic writing can greatly benefit from currently available AI and XAI tools.