
Scholarly Communications: ChatGPT and other Artificial Intelligence tools

This guide provides useful information on scholarly publishing, such as finding the best journal for your work, managing author identity, and promoting your publications.

What are Large Language Models such as ChatGPT?

Large Language Models (LLMs), such as ChatGPT, are designed to mimic human language: they use mathematical models to predict which word is most likely to come next, given the prompt you have entered.
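To make "predicting the next word" concrete, here is a deliberately tiny, hypothetical sketch in Python: a bigram model that counts which word most often follows each word in a small sample text. Real LLMs like ChatGPT use neural networks trained on vast amounts of text, not simple counts, but the core idea of choosing a statistically likely next word is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only (not how ChatGPT works internally): count, for each
# word in a small corpus, which word most often follows it, then "predict"
# the most likely next word.
corpus = "the cat sat on the mat the cat sat on the rug".split()

# Map each word to a frequency count of the words that follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word`, or None if unseen."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("the" is followed by "cat" most often)
print(predict_next("sat"))  # "on"
```

An LLM does something analogous at enormous scale, scoring every possible next word against billions of learned patterns rather than raw counts from a dozen words.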

ChatGPT is a large language model developed by OpenAI. Used responsibly and appropriately, it can serve as a research tool for students, researchers, academics, professionals and anyone seeking information on a wide variety of topics. It is crucial to recognise that ChatGPT is not a search engine: it is closer to a chatbot, in that it can engage in conversational interactions and generate "new content" in its responses.

As such, it is critical to know that LLMs do not necessarily choose, understand, read or give you the "best" or "most trustworthy" information. The easy-to-use, familiar search-bar interface may sometimes make it look or feel that way, but that is not how LLM technology works. LLMs can also be unreliable at citing their source material or identifying where the information they draw on was found, which can affect the quality or bias of the responses they present. Many LLMs are unregulated and are influenced by how people interact with them: each interaction feeds into their future interactions. The information we put into our initial prompt therefore matters a great deal.

 

How does ChatGPT work?

The main way users interact with ChatGPT is by asking it a question or giving it a prompt, to which ChatGPT responds with a near-instant answer. ChatGPT is currently free to access and use while it is in a research preview, but this may not always be the case: OpenAI could at any point restrict access by putting it behind a paywall, and has already launched paid subscription plans for ChatGPT Plus.

 

*It is important to note that, according to OpenAI's Privacy Policy, they "may share your Personal Information with third parties without further notice to you."

 

Large Language Models such as ChatGPT, Google Bard and others present a potential opportunity for enhancing academic research. However, with great power comes great responsibility: researchers must use these tools responsibly and appropriately, keeping research integrity and ethical standards at the forefront of their research process. By using AI tools responsibly and weighing the ethical considerations, researchers can use LLMs while adhering to the principles of academic integrity and ethical conduct.

Ways to use AI tools to benefit your research

  • Framing Research Questions - You can clearly define your research questions and objectives before using an AI tool to ensure you stay focused and succinct. You can use LLMs as a tool to explore and refine your ideas and research questions. However, it is always critical to evaluate the content generated.

  • Literature Review Assistance - LLMs can assist in conducting literature reviews by summarizing research articles, generating annotated bibliographies, or suggesting relevant keywords and sources.

  • Idea Generation and Brainstorming - You can use Chat AI tools to spark creativity and brainstorm research ideas, hypotheses, or potential research questions in various fields of study.

  • Drafting and Editing - AI tools can be very useful in helping to summarise and edit research papers, reports, or grant proposals, enhancing clarity and coherence in writing.

  • Multilingual Research - LLMs can assist in translating and interpreting texts in multiple languages, enabling researchers to access a wider range of academic resources.

  • Data Analysis and Interpretation - You can use AI tools for data analysis tasks, such as summarizing survey results, creating visualizations, or generating explanations for statistical findings.

  • Collaboration and Communication - You can use LLMs to collaborate with peers and colleagues: they can facilitate communication, generate meeting summaries, compile notes, draw out themes or draft collaborative documents.

EU Guidelines on Responsible use of Generative AI in Research

You are accountable for the integrity of the scientific output generated by or with the support of AI tools.

You should maintain a critical approach to using the output produced by generative AI. Be aware of the tools’ limitations, such as bias, hallucinations and inaccuracies. AI can produce highly inaccurate information with beautiful language.

In your work, never cite AI systems as authors or co-authors. Authorship implies agency and responsibility, so it lies with human researchers.

You should never falsify, alter or manipulate original research data with fabricated material created by generative AI.

To be transparent, you should detail which generative AI tools have been used substantially in your research processes. References to a tool could include its name, version, date, etc. and how it was used and affected the research process. If relevant, researchers should make the input (prompts) and output available, in line with open science principles. Check with publishers for guidance on citing AI tools.

You should take into account the random nature of generative AI tools: the same input can produce different output each time. Aim for reproducibility and robustness in your results and conclusions. Always disclose or discuss the limitations of the generative AI tools used, including possible biases in the generated content, as well as possible mitigation measures.

 

Pay particular attention to issues related to privacy, confidentiality and intellectual property rights when sharing sensitive or protected information with AI tools.

Researchers should be mindful that generated or uploaded input (text, data, prompts, images, etc.) could be used for other purposes, such as the training of AI models. They should therefore protect unpublished or sensitive work (such as their own or others' unpublished work) by taking care not to upload it to an online AI system unless there are assurances that the data will not be re-used, e.g. to train future language models, or reused in untraceable and unverifiable ways.

You should take care not to provide third parties' personal data to online generative AI systems unless the data subject (the individual) has given their consent and you have a clear purpose for which the personal data are to be used, so that compliance with EU data protection rules is ensured.

You must understand the technical and ethical implications regarding privacy, confidentiality and intellectual property rights. Check, for example, the privacy options of the tools, who is managing the tool (public or private institutions, companies, etc.), where the tool is running and implications for any information uploaded. This could range from closed environments, hosting on a third-party infrastructure with guaranteed privacy, to open internet-accessible platforms.

 

Respect EU and international legislation.

You should pay attention to the potential for plagiarism (of text, code, images, etc.) when using outputs from generative AI. It is vital to respect others' authorship and cite their work where appropriate. The output of a generative AI (such as a large language model) may be based on someone else's results and require proper recognition and citation.

The output produced by generative AI can contain personal data. If this becomes apparent, researchers are responsible for handling any personal data output responsibly and appropriately, and EU data protection rules are to be followed.

 

Generative AI tools are evolving quickly, and new ways to use them are regularly discovered. You should stay up to date on the best practices and share them with colleagues and other stakeholders.

 

Refrain from using generative AI tools for sensitive content

Make sure you avoid the use of generative AI tools to eliminate the potential risks of unfair treatment or assessment that may arise from these tools’ limitations (such as hallucinations and bias).

Moreover, this will safeguard the original unpublished work of fellow researchers from potential exposure or inclusion in an AI model (under the conditions detailed above in the recommendation for researchers).

These EU guidelines on the Responsible use of Generative AI in Research are available for full consultation here.