Generative AI automates the creation of digital content, from text to images. At its core, it uses machine learning algorithms to generate content from a user-entered prompt; the practice of crafting effective prompts is known as prompt engineering. You can experiment with AI using browser- or app-based tools like ChatGPT, Claude, or Midjourney, and AI functionality has been integrated into everyday tools and platforms like Microsoft 365, Microsoft Copilot, Google Gemini, and Padlet.
For university students, these tools can help overcome writer's block, inspire design projects, or provide a starting point for research. However, it's essential to use them responsibly. When using Generative AI in your studies, be aware of University policies on its use, and work with these tools in a way that acknowledges their strengths and limitations, so that you can benefit from their assistance without compromising the quality of your work or your deep learning.
Generative AI applications and tools use machine learning technology to connect information from a large dataset and produce content in response to a prompt entered by the user. They are trained on a large dataset of information and can integrate new information over time. The tools work similarly to predictive text, generating content based on the user's inputs and statistical models of what is most likely to come next.
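The predictive-text idea above can be illustrated with a deliberately tiny sketch. The toy model below only counts which word most often follows each word in a short sample sentence and always picks the most frequent successor; real generative AI uses neural networks trained on vast datasets, but the underlying principle of predicting the "most likely next" element is similar.

```python
from collections import defaultdict

# Toy "predictive text" model: count which word follows each word
# in a tiny training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length=5):
    """Generate text by repeatedly choosing the most frequent next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in successors:
            break
        # pick the statistically most likely next word
        word = max(successors[word], key=successors[word].get)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Note that this toy model can only echo patterns from its training data; it has no understanding of meaning, which is also why larger models can confidently produce plausible-sounding but false output.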
The initial datasets/models contain a large volume of information (some are trained only on data available up to a specific point in time). Many generative AI tools are now internet connected and can integrate new information from online sources. To learn new information, the technology frequently draws on information freely available across the public internet, but it's important to remember that such data can contain a lot of inaccuracy and bias. Generative AI tools can reflect the biases and prejudices of the people who create them and the people who provide the data. The dataset that the AI was trained on may focus more on certain cultures and perspectives, which creates a similar bias in the outputs.
Generative AI tools have also been known to 'hallucinate' or generate false and misleading information. An example of this might be generating a reference to a book that does not exist. This is because they do not have an evaluative capacity and are built to generate outputs - they are simply drawing upon the data from their model and generating the 'most likely' output. It's important to use your own judgement about the quality and accuracy of information produced by generative AI tools.
In a creative context (e.g. brainstorming or concept development), the generative nature of these tools can help shape new ideas.
It is important to consider whether your use of generative AI tools is ethical. There are examples of users creating content that is misleading or harmful, including the creation of fake image or video content of a person (known as deepfakes), generating content based on sensitive or personal data, or generating information or applications/scripts that can be used to harm others. The technology is not sentient and therefore the onus is on the individual using the tool to ensure that they are using it in an ethically appropriate way.
When using these tools, ask yourself:
By considering these questions, you can help ensure that you are using generative AI tools ethically.
When you add data into a generative AI tool, it is integrated into the broader knowledge base of the AI tool and could potentially be used by someone else using that same tool. For this reason it's important not to put sensitive personal information, business information, or other confidential information into a generative AI tool.
It's also important to note that generative AI tools increasingly allow users to opt out of having their chat history saved, which prevents their personal information from being incorporated into the broader dataset. UOW staff and students have access to Microsoft Copilot (previously known as Microsoft Bing Chat). When you log in to this tool with your UOW user account, you can be confident that your data is protected by enterprise security.
As with all online tools, it is still best practice not to enter sensitive, personal or commercial data into generative AI tools, even if the chat history is turned off.
When adding information to a generative AI tool, ask yourself:
By following these tips, you can help protect the privacy and security of yourself and others.
Developing a good prompt makes a huge difference to the quality of the content a generative AI tool will produce. Dave Birss, in the LinkedIn Learning course 'How to Research and Write Using Generative AI Tools', suggests following the C.R.E.A.T.E formula.
C - Character. Tell the AI what role you want it to play, e.g. "You are a copywriter with 20 years' experience".
R - Request. Ask the AI for something very specific. Include as much detail as you can to ensure the AI knows what you want it to do.
E - Examples. Provide examples for the AI to learn from. This further clarifies your request.
A - Adjust. Make as many tweaks and adjustments to the prompt as you need to, to get a response closer to the outcome you seek.
T - Type. Tell the AI what type of output you are looking for, e.g. "Please write a 500-word blog post with up to five dot points, finishing with a two-line summary".
E - Extras. Use extra prompts such as "Ask me questions before you answer" to really begin using the generative AI tool as a collaborator.
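The C.R.E.A.T.E steps above can be sketched as a simple prompt-assembly helper. This is an illustrative example only (the function name and structure are invented here, not part of the formula or any tool's API): it joins the Character, Request, Examples, Type, and Extras parts into one prompt string, while the Adjust step remains something you do iteratively between attempts.

```python
def build_create_prompt(character, request, examples, output_type, extras=""):
    """Assemble a prompt following the C.R.E.A.T.E formula.

    The 'A - Adjust' step is not a line of text: it is the iterative
    process of tweaking these parts between attempts.
    """
    parts = [
        character,     # C - the role you want the AI to play
        request,       # R - the specific task, with as much detail as possible
        "Examples:\n" + "\n".join(examples),  # E - examples to learn from
        output_type,   # T - the type and format of output you want
    ]
    if extras:
        parts.append(extras)  # E - extra instructions, e.g. asking questions first
    return "\n\n".join(parts)

prompt = build_create_prompt(
    character="You are a copywriter with 20 years' experience.",
    request="Write a product description for a reusable coffee cup.",
    examples=["Sleek, sustainable, and built to last."],
    output_type="Please write a 500-word blog post with up to five dot points.",
    extras="Ask me questions before you answer.",
)
print(prompt)
```

You would then paste the assembled prompt into your preferred tool, review the response, and adjust the parts until the output is closer to what you need.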
It is important to critically evaluate any information you find online. The confident style of text generated by ChatGPT and similar text-based generative AI can make the information seem authoritative, especially when it includes references, but you have an important role to play in filtering and critically evaluating that information before you use it.
Evaluate results from an AI tool by checking:
For more information on evaluating information, see our Finding Information guide.
Generative AI does have the ability to 'hallucinate' information, including references to academic works. It's important to independently verify the details of any references, and to consult the original sources when doing your research. Because tools like ChatGPT have not been designed to build their arguments on scholarly references, more often than not the references they generate are not genuine. Rather, the tool works more like predictive text in that it recognises patterns in text that commonly follow each other and attempts to replicate them.
To establish whether sources generated by ChatGPT are genuine, try searching Library SEARCH or search broadly online using your preferred search engine (such as Google or DuckDuckGo).
For tips and techniques on finding books, journals and other sources of scholarly information, see our Finding Information guide.
The UOW Library's How to Identify and Verify Fake References video is also a useful resource to help you verify AI-generated references.
Think of the content produced by generative AI as a jumping off point - it's only part of the process of forming an argument. Your own original thoughts and ability to synthesise information gleaned from a variety of scholarly and relevant sources are an essential part of the process.
Important: UOW policy states that "You must not use any AI tool, including ChatGPT, to produce your assessable work for you. Using AI tools to derive and submit responses to assignment questions in place of your own work is a form of plagiarism." (Source: AI & Chat GPT)
Academic skills and study support has some fantastic resources on Organising your ideas and analysing sources that can help you organise your ideas and synthesise references and arguments into your work.
It can be appropriate to use generative AI as part of the process of developing your understanding of a topic. It's important in the context of academic work that you are guided by your lecturer/tutor as to what is considered appropriate in your subject.
When you do use generative AI in an academic context, it's good practice to be transparent about the fact that you have used it, the way you have used it, and to cite it appropriately when needed. For more guidance on this, see our FAQ "Can I use ChatGPT and other AI?"
Microsoft Copilot is available to UOW staff and student account holders and can be accessed via any web browser, such as Edge, Chrome, Safari, or Firefox. Copilot is also available as a mobile app for Android and iOS. As part of the University's Microsoft account, institutional (or commercial) data protection means that Microsoft and other parties do not retain your prompts or responses beyond the browser session, or once you start a new chat topic.
IMTS has detailed information about using Microsoft Copilot safely and without cost via your UOW user account. The Library has also created a learning path to help you learn to use Microsoft Copilot effectively.
Generative AI is developing rapidly, and you should check University resources frequently for the most updated information. Some key resources for students include LinkedIn Learning playlists and interactive tutorials.
The following staff-focused UOW resources have more information about Generative AI:
How confident are you with your skills in using Generative AI? Would you like to evaluate your skills and identify areas for development?
Take the JISC Generative AI self-assessment.
To log in, use your UOW credentials. When you complete the self-assessment, you will receive a personalised report of your Generative AI capabilities. This report is private to you; other UOW students and staff will not have access to your results.