AI in Data Analysis
Ethics
AI tools, and especially LLMs (large language models), have been widely available for only a few years. In that short time, however, they have managed to gain a die-hard fan base, as well as a group of people who see them almost as a harbinger of the end of the world, or at least of the world as we know it (Kokotajlo et al., 2025). So, everything we are trying to point out here may soon become completely irrelevant…
Many people use these tools regularly, including the authors of this leaflet. We most commonly use them for coding tasks (mainly script debugging), literature search, summarising long texts, explaining concepts, and text editing. AI tools can give better search results than traditional search engines, can help with learning new analytical methods and programming languages, and can also make interdisciplinary research easier (Mammides & Papadopoulos, 2024; RStudioDataLab, 2023). We want to encourage you to play and experiment with AI tools, and to try to understand how they work and where the limits of their usage lie. But keep in mind that the content of your conversations with AI tools (i.e. data, information about yourself) may be used for further improvement of the model, especially in free versions. So, even though some tools (e.g. ChatGPT) allow you to turn off chat history or delete your conversations, think twice before you share anything, and do not send sensitive data to LLMs.
One of the important (and still unresolved) issues of extensive LLM usage relates to copyright. If you let an LLM write a homework assignment, an essay, or even a whole article, who is actually the author? Is this a case of plagiarism? Isn’t it fair to acknowledge the authorship of the AI tool, too? How? Should ChatGPT, for example, be credited as a co-author? Who is responsible for the correctness and accuracy of such a text (The group for AI in teaching at Masaryk University, 2023; Wu et al., 2024)? It is also useful to realize that the quality of an AI-generated text originates from thousands of well-written texts by professional writers (Wu et al., 2024), whose work is usually not credited in the AI output. As a researcher, you will definitely encounter questions such as the true authorship of ideas and text, and the credibility and authenticity of your outcomes (Johnson et al., 2024; The group for AI in teaching at Masaryk University, 2023; Wu et al., 2024). It might (or might not, who knows) also happen that extensive (uncredited) usage of LLMs in writing research papers will be regarded as unacceptable in the future, and as such, it might even discredit your whole research work (Johnson et al., 2024). Many journals and institutions (including MUNI) have prepared guidelines regarding AI usage and its reporting. Usually, usage of AI must be reported when it is used to write longer parts of text (e.g., an abstract), or to analyse or visualise data. However, a broader consensus on AI use reporting is still missing, and it is therefore necessary to check the specific rules and conditions set by each institution, journal, or lecturer before using AI tools.
Other ethical issues arise directly from the generated content. LLMs can suffer from issues such as training data poisoning or improper data sanitization, which may lead to significant biases in the outcomes (Johnson et al., 2024; Wu et al., 2024). LLMs are also known to hallucinate non-existent content, e.g. literature references or names of software packages. Especially in the case of software packages, this can pose a considerable security risk: attackers can publish packages under these hallucinated names, loaded with harmful code, which you then unsuspectingly run on your own computer (Briatte, 2025; Mammides & Papadopoulos, 2024; Wu et al., 2024).
Heavy reliance on AI tools may also prevent you from learning crucial research skills, such as formulating your own ideas, deeply understanding your topic, mastering various analytical approaches, critical reasoning, etc. (Johnson et al., 2024; Millard et al., 2024). The usage of AI tools also raises equity questions, especially with regard to paid versions. Naturally, LLMs will be more important for non-native English speakers, but access to higher-level (and often paid) tools is not available to all, especially to people from low-income countries (Campbell et al., 2024; Wu et al., 2024).
The flip side of AI tools also includes their environmental impact. Training and tuning a single large language model can produce nearly five times the lifetime CO2 emissions of an average American car (Strubell et al., 2019). Using LLM chatbots for one year generates 25 times more CO2 emissions than training the GPT-3 model itself. And this does not even consider the infrastructure and equipment needed, which requires a lot of water and mining of rare elements, causes contamination, and more (Chien et al., 2023; Cooper et al., 2024).
Principles of LLMs
Many people often talk about LLMs as if they were real people. We all know, or perhaps even use, phrases like “ChatGPT told me…” or “He thinks that…”. Some people are used to having long conversations with them, and some even use them as psychotherapists (e.g., Lau et al., 2025). However, the principles of how LLMs work are far from what we would consider “thinking”.
The basic principle of LLMs is predicting the next word, or more precisely, the next “token”, based on the sequence of the previous ones. Each token (typically a word or part of a word) is first transformed into an array of numbers (an embedding). The model then assigns a probability to each candidate next token, based on how often that token occurred in similar contexts. Based on these probabilities, a token is selected and appended to the sequence, and the process starts again. Interestingly, the best-fitting token is not always selected. Instead, the model often samples from a set of still highly probable options, to introduce some variability. This way, the model produces what looks like a “reasonable continuation” of a text (Stanford CS324, 2022; Wolfram, 2023).
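As a toy illustration (not how any real LLM is implemented), the sampling step might look like this in R, with made-up probabilities and a “temperature” parameter that controls how much variability is introduced:

```r
# Made-up probabilities a model might assign to candidate next tokens
next_token_probs <- c(cat = 0.40, dog = 0.30, sofa = 0.15,
                      idea = 0.10, banana = 0.05)

temperature <- 0.8   # < 1 favours the most likely tokens; > 1 adds variability
adjusted <- next_token_probs^(1 / temperature)
adjusted <- adjusted / sum(adjusted)   # re-normalise so probabilities sum to 1

# Sample one token instead of always taking the single best fit
sample(names(adjusted), size = 1, prob = adjusted)
```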
And this is where it gets complicated. A “reasonable continuation” means “what a human would expect the text to look like”. It is derived from billions of websites, articles, and books, which were at some point downloaded from the internet and broken into tokens. The model learns co-occurrence patterns, context, and sequences of tokens, which are then turned into probabilities (Stanford CS324, 2022; Wolfram, 2023). To put things in perspective, English has roughly 40,000 commonly used words. Yet even billions of web pages are not enough to estimate the probabilities of all possible token sequences, especially the rare ones. This is where large neural networks, machine learning, and optimization algorithms step in (Wolfram, 2023).
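To see where such probabilities come from, here is a minimal sketch that estimates next-word probabilities from co-occurrence counts in a tiny, made-up corpus. Real models learn from billions of pages and use neural networks instead of a simple lookup table:

```r
corpus <- c("the cat sat on the mat",
            "the dog sat on the rug",
            "the cat ate the fish")

# Break each sentence into tokens and collect all word pairs (bigrams)
toks    <- strsplit(corpus, " ")
bigrams <- unlist(lapply(toks, function(t) paste(head(t, -1), tail(t, -1))))

# Estimated P(next word | "the"), based purely on observed counts
after_the <- sub("^the ", "", grep("^the ", bigrams, value = TRUE))
prop.table(table(after_the))
```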
You might have heard about general linear models, which are often used in ecology. Such models typically include a few parameters, and adding new terms (and parameters) can soon make the model quite demanding in terms of computational capacity and dataset size. The most advanced version of the GPT-3 model (not used anymore, but the largest model for which the number of parameters and other details are publicly available) has 175 billion parameters, its size is about 350 GB, and it requires roughly 300-700 GB of RAM (Brown et al., 2020), which is far beyond what a standard personal computer can currently handle.
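For comparison, you can count the parameters of a typical ecological model yourself; this example fits a Poisson GLM to a built-in R dataset:

```r
# A typical GLM in ecology has only a handful of parameters...
m <- glm(breaks ~ wool + tension, family = poisson, data = warpbreaks)
length(coef(m))   # 4 parameters (intercept + 3 terms)

# ...whereas GPT-3 has about 175,000,000,000 of them (Brown et al., 2020)
```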
Tips for using LLMs
Here, we have put together a list of (hopefully) useful tips for using LLMs (especially LLM-based chatbots, such as ChatGPT or Gemini):
Different tools are useful for different tasks
ChatGPT: Summaries, explanations, research, presentations, coding.
Google Gemini: Helping with code, writing and research.
Microsoft Copilot: Writing, researching, coding, brainstorming, summarizing.
Le Chat Mistral: Quick research, writing, brainstorming, summarizing, learning.
Perplexity: Information retrieval, text generation, problem-solving, language understanding, multitask support, creativity.
Claude.ai: Writing, coding, analysis, research, problem-solving.
Other AI tools without a chat-like interface:
DeepL: translator, AI-powered text editing
QuillBot: translator, grammar-checker, paraphraser
Grammarly: text editing
Elicit: literature search
Gamma: presentations
In general
Describe your question as you would in a conversation with a coding/learning buddy. An LLM can give you not only the direct answer, but it can also explain how the script/library/workflow/topic works (Campbell et al., 2024; Ellen, 2025). LLMs can usually explain things in simpler language than online forums (Campbell et al., 2024)
Don’t ask everything in one prompt; split complex tasks into smaller steps (Mammides & Papadopoulos, 2024; Vieira & Raymond, 2025; Willison, 2025b)
The first answer is usually not the best (Çetinkaya-Rundel, 2025)
Be specific in what you want it to do (Willison, 2025b)
Context is king (Willison, 2025b)
Don’t ask for its opinion; rather, ask about the consensus on the topic you are working on
Ask for references
Ask for options (Willison, 2025b)
Give examples (Willison, 2025a)
Try different models to cross-check the outputs. Also, different models are good for different goals (Vieira & Raymond, 2025)
“Just because code looks good and runs without errors doesn’t mean it’s doing the right thing.” (Willison, 2025a)
Don’t trust everything, AI tools are overconfident “yes machines” and sometimes hallucinate (Mammides & Papadopoulos, 2024; Vieira & Raymond, 2025; Willison, 2025a)
Don’t become too dependent on AI tools: services may shut down or change their pricing, and you should still know how to do things manually (Lubiana et al., 2023)
Practical tips for data analysis and coding
Prior knowledge of coding and statistics is necessary to write a sufficiently detailed prompt and to use and interpret AI-generated code correctly (Campbell et al., 2024)
Specify the coding language and the packages you want to use (e.g., tidyverse, vegan) (Cooper et al., 2024; Vieira & Raymond, 2025)
If you analyse your data with an LLM, always ask the LLM to generate an R script and test its functionality. Do not just pick up the results and export the figures. This is essential for reproducibility and further adjustments (see the script sketch after this list).
Use full sentences, with as much context as possible (Cooper et al., 2024)
If you are trying to resolve an error, copy not only the error message, but also the whole script the error emerged from. This gives the LLM the context of the functions and libraries you used (Ellen, 2025); see the debugging example after this list
“Effective prompts include context, specify the topic, outline the desired output, concise, focused question” (Lubiana et al., 2023)
When the conversation leads nowhere, start a new chat. Try to give the LLM a different context (Willison, 2025b)
An LLM does not always change only the part of the script you ask it to; it may also change other parts without warning (Çetinkaya-Rundel, 2025)
Always check the script you want to apply (Willison, 2025a). Clean the suggested script from unnecessary parts, try to run it line-by-line to understand what is happening (Çetinkaya-Rundel, 2025)
From time to time, also consult traditional search engines or StackOverflow, because LLMs can lack information about the latest updates. You can also find better (or different) approaches to certain tasks there (Ellen, 2025; Willison, 2025b)
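As promised above, here is a minimal sketch of what an LLM-generated, reproducible analysis script should look like; the file name and column names are hypothetical placeholders:

```r
set.seed(42)                        # make any random steps reproducible
library(ggplot2)

dat <- read.csv("my_data.csv")      # hypothetical input file

# ... analysis steps suggested by the LLM go here ...

p <- ggplot(dat, aes(x = treatment, y = biomass)) +  # hypothetical columns
  geom_boxplot()
ggsave("figure1.png", plot = p)

sessionInfo()                       # record R and package versions used
```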
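And here is the kind of snippet-plus-error-message you might paste when asking for debugging help (the exact error wording depends on your package versions):

```r
library(dplyr)

dat <- data.frame(species = c("A", "B"), count = c(3, 5))
dat %>% summarise(total = sum(counts))   # typo: the column is called 'count'
#> Error in `summarise()`:
#> ℹ In argument: `total = sum(counts)`.
#> Caused by error:
#> ! object 'counts' not found
```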
Questions to ask before you start (Davjekar, 2024):
“I’m starting a new [type of project] using [programming language/framework]. Can you suggest a basic file structure and essential dependencies I should consider?”
“I want to build [brief project description]. Can you help me break this down into smaller tasks and suggest an order of implementation?”
Questions to ask about the code (RStudioDataLab, 2023):
Why did you generate this code?
What does this code do?
How can I fix this error?
What are the alternatives to this code?
How can I improve this code?
Handy prompts (Lubiana et al., 2023):
“Add explanatory comments to this code:”
“Rename the variables for clarity:”
“Write me a standard GitHub README file for the above code.”
“Extract functions for increased clarity:”
“Re-write and optimize this for-loop:”
“Write me regex for R/Python/Excel with a pattern that will extract {} from {}”
“Create a ggplot2 violin plot with a log10 Y axis”
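To give an idea of what such prompts return, the last one might yield something like this runnable example (shown here with a built-in dataset; your variable names will differ):

```r
library(ggplot2)

ggplot(iris, aes(x = Species, y = Sepal.Length)) +
  geom_violin() +
  scale_y_log10()   # log10-scaled Y axis, as requested in the prompt
```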