Generative AI study skills guide

Advice and guidance on how to make the most of generative artificial intelligence (AI).

What is generative AI?

Generative AI (artificial intelligence) is a rapidly evolving area that offers many possibilities for enhancing your learning and research. However, there are also concerns around possible misuse and other negative impacts.

This guidance will support your use of AI at the University and has been developed from UWE Bristol’s principles for using generative artificial intelligence within learning, teaching and assessment.

Generative AI refers to digital technology that can generate new content, such as text, images, audio and video. Unlike typical AI systems, which are trained to classify data or make predictions, generative AI can create completely new outputs based on patterns learned from the data it was trained on.

An example of the difference between typical AI and generative AI is a spelling checker compared with a digital writing assistant:

  • A typical AI tool checks grammar and spelling, highlighting suggested amendments and possible errors based on a set of rules around the use of written language.
  • A generative AI writing assistant creates new written content based on your writing style, intention, and ‘voice’. The content it produces is unique and did not exist before.

How generative AI works

Most generative AI is built using machine learning techniques such as neural networks, which are trained on huge amounts of data to recognise patterns in text, images and audio.

Please note that the University does not currently support the use of any of the platforms mentioned on this page. If you want to use them, be aware of their key limitations. Please also look at their terms and conditions before signing up for any platform.


Text-based generative AI models, such as ChatGPT, are trained on billions of webpages and books to learn the statistical patterns of human language. When you give them instructions (typically called ‘prompts’), they predict the most likely next words based on patterns they recognise. This allows them to generate remarkably human-sounding text.
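The next-word prediction idea can be sketched with a toy example. This is a minimal illustration only, assuming a tiny made-up corpus: real text-generation models use neural networks trained on billions of documents, not simple word counts.

```python
from collections import Counter, defaultdict

# Toy corpus: real models learn from billions of documents, not three sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a simple "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # 'on' — "sat on" is the only pattern seen
print(predict_next("on"))   # 'the'
```

Repeatedly feeding each predicted word back in as the new input is, in very rough outline, how such systems produce whole passages of text.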

Image-based platforms, such as DALL-E, are trained on millions of images to learn visual concepts. The models then use the learned associations between images and text to generate new images pixel by pixel.

Examples of image-based generative AI platforms are:

Adobe Firefly

Students and staff can access Adobe’s Firefly generative AI platform by:

  • using your UWE Bristol email
  • selecting Company or School Account
  • entering your usual login details at the UWE Bristol login page.


Ethical considerations

These models provide exciting new possibilities for automation and creativity. However, as with any powerful technology, generative AI raises several ethical concerns:

  • Originality and copyright. Generative models are trained on vast amounts of copyrighted data which is legally protected. This raises questions around the originality of AI-generated outputs, whether they infringe copyright, and whether we can legally use them in our own work. You should therefore not submit other people’s data or work without their consent. This includes work by your colleagues and teaching staff, teaching materials (such as transcripts, slides and exam papers), and content from UWE Bristol Library resources.
  • Bias. Models can produce biased results based on gender, race or religious stereotyping.
  • Misinformation. Text and media generated by AI could potentially be used to spread false information.
  • Authenticity. Over-reliance on AI can produce results that lack authenticity and do not reflect the human experience. Does art created by technology have the same merit as human-centred works? Can you produce more meaningful academic or creative work than ChatGPT?
  • Job disruption. Generative AI could impact some creative occupations like graphic design and writing.
  • Accountability. Who is responsible if an AI system creates illegal, unethical or harmful content? How can we ensure proper oversight?

There are active debates around setting policies and standards to address these concerns. In the meantime, we all must consider the ethical impacts of using generative AI in our own work and studies.

Strengths of generative AI

Some key strengths of text-based generative AI include:

  • Productivity. It can rapidly generate written content, freeing us to focus on more creative tasks. It can analyse large amounts of text very quickly or produce imagery that might not otherwise be readily available.
  • Flexibility. It can generate text in many different styles, formats and voices based on your prompt instructions.
  • Interactivity. Many platforms communicate with a conversational style that allows you to work with the AI in an intuitive way.
  • Knowledge. It has vast world knowledge trained on lots of different datasets, enabling insightful text generation. However, this information is often inaccurate and should always be checked.
  • Personalisation. It can adapt its tone, structure and keywords to specific audiences. It can explain and give feedback.
  • Accessibility. Many platforms allow speech input and can produce content for those unable to physically type.

Key limitations

Human creativity, critical thinking and oversight are still essential when producing new high-quality text and imagery. To help mitigate the following limitations, you should always check and edit AI-generated content.

Our critical thinking and writing guidance provides more support for questioning and evaluating information you may encounter.

  • Factual accuracy. AI will often generate plausible-sounding but incorrect information if not carefully monitored. These generated inaccuracies are often called ‘hallucinations’ and can include fake references to authors or works that do not exist. Information may also be out of date.
  • Lack of understanding. AI does not actually comprehend the text it generates. It only produces statistically likely word patterns or images.
  • Bias. It can reproduce harmful societal biases that exist in the data it uses.
  • Plagiarism. Copied or rephrased content could pose plagiarism risks without proper attribution. See our guidance on checking your work for plagiarism.
  • Formulaic outputs. AI-generated text and imagery can be repetitive or generic.
  • Ethical risks. Irresponsible use could spread misinformation, infringe copyright, or negatively impact creative industries.
  • Oversimplification. You risk missing out on important details about a topic if you rely too heavily on simplified AI explanations.
  • Output quality. Do not over-rely on AI-generated content. You will often find that your own wording, ideas or arguments are preferable.

Using generative AI

As well as thinking about the ethical considerations, strengths and limitations of generative AI listed above, we have produced a guide with more information on how to use AI tools.

See our guide to using generative AI
