Generative AI tools are rapidly growing in popularity. You may have experimented with tools like ChatGPT, Midjourney, or DALL-E 2 to create content or improve your productivity. Typically, these tools rely on prompt engineering: they generate content in response to a prompt entered by the user. Increasingly, we are seeing AI functionality integrated into everyday tools and platforms like Microsoft 365, Microsoft Bing, Google Bard and Padlet.
When using Generative AI in your studies, it's important to be aware of University policies on using Generative AI. It's important to use these tools in a way that acknowledges their strengths and limitations so that you can benefit from their assistance without compromising the quality of your work and deep learning.
Generative AI applications and tools use machine learning models, trained on large datasets of information, to produce content in response to a prompt entered by the user, and some are able to integrate new information over time. The tools work similarly to predictive text: they generate content based on the user's inputs and statistical models of what the user is likely to want to see next.
The initial training datasets contain a large volume of information (some models are trained only on data available up to a specific point in time). Some generative AI tools are internet-connected (e.g. Bing) and are able to integrate new information from online sources. Much of this information is drawn from what is freely available across the public internet, but it's important to remember that such data can contain a lot of inaccuracy and bias. Generative AI tools can reflect the biases and prejudices of the people who create them and the people who provide the data. If the dataset the AI was trained on focuses more on certain cultures and perspectives, the outputs will reflect a similar bias.
Generative AI tools have also been known to 'hallucinate', or generate false and misleading information. An example of this might be generating a reference to a book that does not exist. This happens because the tools have no evaluative capacity and are built simply to generate outputs: they draw on the data in their model and produce the 'most likely' response. It's important to use your own judgement about the quality and accuracy of information produced by generative AI tools.
In a creative context (e.g. brainstorming or concept development), the generative nature of these tools can help shape new ideas.
It is important to consider whether your use of generative AI tools is ethical. There are examples of users creating content that is misleading or harmful, including the creation of fake image or video content of a person (known as deepfakes), generating content based on sensitive or personal data, or generating information or applications/scripts that can be used to harm others. The technology is not sentient and therefore the onus is on the individual using the tool to ensure that they are using it in an ethically appropriate way.
When using these tools, ask yourself:
By considering these questions, you can help ensure that you are using generative AI tools ethically.
When you add data into a generative AI tool, it is integrated into the broader knowledge base of the AI tool and could potentially be used by someone else using that same tool. For this reason it's important not to put sensitive personal information, business information, or other confidential information into a generative AI tool. It's also worth noting that some tools like ChatGPT now allow users to opt out of having their chat history saved, which prevents their inputs from being added to the broader dataset. As with all online tools, it is still best practice not to enter sensitive, personal or commercial data into generative AI tools, even if chat history is turned off.
When adding information to a generative AI tool, ask yourself:
By following these tips, you can help protect the privacy and security of yourself and others.
Developing a good prompt makes a huge difference to the quality of the content that a generative AI tool will produce. Dave Birss, in the LinkedIn Learning course 'How to Research and Write Using Generative AI Tools', suggests following the C.R.E.A.T.E. formula.
C - Character. Tell the AI what role you want it to play. e.g. "You are a copywriter with 20 years' experience"
R - Request. Ask the AI for something very specific. Include as much detail as you can to ensure the AI knows what you want it to do.
E - Examples. Provide examples for the AI to learn from. This further clarifies your request.
A - Adjust. Make as many tweaks and adjustments to the prompt as you need to get a response closer to the outcome you seek.
T - Type. Tell the AI what type of output you are looking for. e.g. "Please write a 500-word blog post with up to five dot points, finishing with a two-line summary".
E - Extras. Use extra prompts such as "Ask me questions before you answer" to really begin using the generative AI tool as a collaborator.
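The steps above can be sketched as a simple prompt-assembly routine. This is a minimal illustration only: `build_create_prompt` and its argument names are hypothetical, not part of any real library or of the C.R.E.A.T.E. formula itself, and the 'Adjust' step corresponds to rerunning the function with tweaked inputs.

```python
def build_create_prompt(character, request, examples, output_type, extras):
    """Combine the C.R.E.A.T.E. elements into a single prompt string.

    'Adjust' is the iterative step: rerun this with edited inputs
    until the output is closer to what you want.
    """
    parts = [
        character,                            # C - the role the AI should play
        request,                              # R - the specific, detailed task
        "Examples:\n" + "\n".join(examples),  # E - examples to learn from
        output_type,                          # T - the output format you want
        extras,                               # E - extra collaborative prompts
    ]
    return "\n\n".join(parts)

prompt = build_create_prompt(
    character="You are a copywriter with 20 years' experience.",
    request="Write a blog post introducing the library's new study-skills workshops.",
    examples=["Headline style: 'Five ways to study smarter, not harder'"],
    output_type=(
        "Please write a 500-word blog post with up to five dot points, "
        "finishing with a two-line summary."
    ),
    extras="Ask me questions before you answer.",
)
print(prompt)
```

The resulting string can then be pasted into (or sent to) whichever generative AI tool you are using, and edited between attempts as the 'Adjust' step suggests.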
It is important to critically evaluate any information that you find online. The confident style of text generated by ChatGPT and similar text-based generative AI can make the information seem authoritative, especially when it includes references, but you have an important role to play in filtering and critically evaluating the information generated by ChatGPT before you use it.
Evaluate results from an AI tool by checking:
For more information on evaluating information, see our Finding Information guide
Generative AI does have the ability to 'hallucinate' information, including references to academic works. It's important to independently verify the details of any references, and to consult the original sources when doing your research. Because tools like ChatGPT have not been designed to build their arguments on scholarly references, more often than not the references they generate are not genuine. Rather, the tool works more like predictive text: it recognises patterns in how pieces of text commonly follow one another and attempts to replicate them.
To establish whether sources generated by ChatGPT are genuine, try searching Library SEARCH or search broadly online using your preferred search engine (such as Google or DuckDuckGo).
For tips and techniques on finding books, journals and other sources of scholarly information, see our Finding Information guide.
Think of the content produced by generative AI as a jumping-off point: it's only part of the process of forming an argument. Your own original thoughts, and your ability to synthesise information gleaned from a variety of scholarly and relevant sources, are an essential part of the process.
Important: UOW policy states that "You must not use any AI tool, including ChatGPT, to produce your assessable work for you. Using AI tools to derive and submit responses to assignment questions in place of your own work is a form of plagiarism." (Source: AI & Chat GPT)
Academic skills and study support has some fantastic resources on Organising your ideas and analysing sources that can help you organise your ideas and synthesise references and arguments.
It can be appropriate to use generative AI as part of the process of developing your understanding of a topic. It's important in the context of academic work that you are guided by your lecturer/tutor as to what is considered appropriate in your subject.
When you do use generative AI in an academic context, it's good practice to be transparent about the fact that you have used it, the way you have used it, and to cite it appropriately when needed. For more guidance on this, see our FAQ "Can I use ChatGPT and other AI?"
Generative AI is developing rapidly, and you should check University resources frequently for the most updated information. The following staff-focused UOW resources have more information about Generative AI:
Keen to dive deeper into learning about Generative AI?
Some key resources for students include LinkedIn Learning playlists and interactive tutorials.