4 Limitations and considerations

Generative AI can be an invaluable tool in education, the workplace and everyday life, offering numerous benefits. However, it is important to be aware of its limitations and the considerations involved when utilising generative AI.

Protecting your and others’ information and rights

Before using a generative AI tool, read its terms of use and privacy policy to understand how your data and prompts may be used by others. If you decide to use the tool, comply with its terms. When interacting with generative AI, avoid providing the tools with certain information, in order to protect your own and others’ information and rights.

Avoid sharing protected or highly protected data with publicly available AI tools. This includes:

  • personally identifiable information,
  • names and addresses,
  • unpublished research data and results,
  • biometric data,
  • health and medical information,
  • geolocation data,
  • government-issued personal identifiers,
  • confidential or commercially sensitive material,
  • university teaching and assessment materials,
  • information about the nature and location of a place or an item of Aboriginal and/or Torres Strait Islander cultural significance,
  • security credentials.

You must obtain express permission from the copyright holder prior to using all or part of third-party copyright material as an input for generative AI tools. At RMIT, you must not provide these AI tools (including Val) with material from the RMIT Library’s eResources, such as e-books, journal articles, video, audio or images.

Depending on how you are using the AI tool, you may choose to opt out of data collection. For example, OpenAI, the makers of ChatGPT, allow you to turn off chat history. This may mean that your data and prompts aren’t stored, but it is important to check the terms of use and the privacy policy to confirm this. If in doubt, assume that your data will be shared. Turning off chat history also means you won’t have access to prior conversations, and you won’t be able to generate a shareable URL to the output (if the tool you are using has this function).

If you are using a publicly available AI tool, we recommend turning off chat history before uploading any information that you don’t want to be used or shared, including your own work. You can upload your assignment drafts and other work into RMIT’s Val, because the information that you input into the tool remains secure within the RMIT system.

Consider the ethical implications

Time reported in January 2023 (read the full report) that OpenAI outsourced the labelling of textual descriptions of sexual abuse, hate speech and violence to a firm in Kenya, where workers faced disturbing and traumatic content while performing their duties. The Time article highlights the reliance of AI systems on hidden human labour, which can be exploitative and damaging.

Make sure you approach AI with ethical principles in mind, such as well-being, human-centred values, fairness, privacy, reliability, transparency, contestability and accountability. By following these principles, individuals and organisations can build trust, drive loyalty, influence outcomes and ensure societal benefits from AI.

Be aware of generative AI’s limitations

When interacting with generative AI, if you aren’t specific enough, the AI will make assumptions. If you are unsure, you can ask the AI whether it made any assumptions in responding to your prompt. See the video below for an example of how different ways of prompting generative AI may lead to the AI making different assumptions.


“Incorrect assumptions in AI models” by University of Sydney Educational Innovation Media Team is licensed under CC BY-NC 4.0

Try to use a neutral tone and framing in the way you present information or ask questions in your prompt, to avoid introducing bias or leading the AI tool towards a specific response.

Generative AI is prone to hallucination, where the tools make up facts. Remember that just because a response sounds authoritative does not mean it is correct. Consider the impact if the generated text contains factual inaccuracies: inaccurate ideas during brainstorming may matter far less than inaccuracies in a user manual for a piece of medical technology. A real-world example involves a lawyer who encountered challenges when relying on ChatGPT for legal research (see an ABC article reporting on this incident).

Generative AI tools are trained on vast and diverse datasets which may lean towards certain viewpoints. Be cautious of potential biases, including possible alignment with commercial objectives or political prejudices. Apply your critical thinking skills to analyse and contextualise the outputs you receive from generative AI. This should include cross-verifying the information in the outputs and forming your own informed perspective.

AI in academic settings

The use of generative AI in academia poses challenges such as plagiarism, ethics, IP infringement, authorship misrepresentation, and evaluation issues. Familiarise yourself with RMIT’s policies to address these challenges effectively and maintain academic integrity.

You have a responsibility to critically evaluate and verify the responses you receive from generative AI tools. Not all responses are accurate or reliable, so independently assess the accuracy, relevance and appropriateness of the information the AI provides by cross-referencing it with reliable sources. By actively evaluating AI-generated responses, you can ensure the reliability and integrity of your academic work. Failure to do this may lead to findings of academic misconduct and loss of credibility.

Several referencing issues can arise when using AI. Some generative AI tools may create references that do not actually exist. This is a function of how these tools work: they assemble content based on patterns and algorithms rather than treating information sources as separate items. For this reason, it is important not to rely fully on references generated by generative AI tools. Instead, check that the reference exists and that the content cited by the AI tool is actually present in the information source.

Licence


Generative AI at RMIT Copyright © by RMIT University Library is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
