Learning with Generative AI

The Structure of a Good Prompt

A good prompt has four key elements: Role, Task, Requirements and Instructions.

Let’s take a look at each one in depth.

Role

Prompts starting with “act as…” or “pretend to be…” produce responses in the voice and perspective of the role you provide. Setting a specific role for a given prompt, when done appropriately, increases the likelihood of accurate, relevant information.

  • E.g. “Act as an expert in the field of computer science.”
    • The role determines the type of information provided and the way it is communicated to you.
    • It also shapes how interactive the conversation is.

Task

The task is a summary of what you want the GenAI tool to do. There is a lot of creativity involved in writing a great task: it can range from generating birthday gift ideas to turning the content of your last lecture into game-show questions.

  • Outline what you want the GenAI tool to do.
  • Be specific about the task’s objective, as unclear objectives lead to worse outputs.

Requirements

Writing clear requirements means giving the tool enough information that its response does not rely on incorrect assumptions. GenAI tools fill in any information missing from the prompt with assumptions of their own.

  • Define what the output should look like (for example its format, length and audience) and any conditions that affect it.
  • Spelling out these conditions limits the assumptions the model has to make, as the example below illustrates.
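
For instance, an illustrative requirements line for a lecture-summary task might look something like this (the word limit and audience are placeholders to adapt to your own context, not part of the original guide):

  “Summarise the attached lecture notes in no more than 200 words for a first-year student with no prior background in the topic. Use plain language, avoid jargon, and present the three most important points as a short list.”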

Instructions

Instructions tell the GenAI tool how to complete the task. They can include examples of how the output should look, steps the tool can follow, or any other information that guides its approach.

  • Explain how the AI should go about completing its task.
  • Give examples of how it could approach the task.
  • Providing feedback on the steps it has taken will improve the output even further.
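
Putting the four elements together, an illustrative prompt might read as follows (the subject, question format and numbers are placeholders you would replace with your own; the labels in brackets are shown only to make the structure visible):

  “Act as an experienced computer science tutor [Role]. Create five practice questions based on the lecture notes I paste below [Task]. The questions should be multiple choice, increase in difficulty, and be suitable for a first-year student revising for an exam [Requirements]. Work through the notes one topic at a time, write one question per topic, show the correct answer with a one-sentence explanation after each question, and ask me whether I want harder questions before continuing [Instructions].”

In a real prompt you would not include the labels; they appear here only to show where each element sits.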

Take some time to review the CLEAR Framework for Prompt Engineering (Lo, 2023).[1]

Review a range of exemplar prompts created by students for students (AI in Education, University of Sydney, July 2023):

  • Prompts to help you learn
  • Prompts to help you create

Assumptions that can lead to incorrect output

Large Language Models (LLMs) generate responses based on the wording and context of prompts, making assumptions that can affect the accuracy and bias of their output. It is crucial to critically evaluate these assumptions and consider multiple perspectives when interpreting AI-generated content.
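
As a simple illustration (not taken from the video below), the prompt “Explain how to create a table” leaves the tool to guess whether you mean a table in a word processor, a database table or a piece of furniture; adding a single requirement such as “in Microsoft Word” removes that assumption.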

Watch this short video demonstrating how the model makes an incorrect assumption from an entered prompt.


Attribution: Adapted from University of Sydney AI in Education, licensed under a Creative Commons BY-NC 4.0 licence.


  1. Lo, L. S. (2023). The Art and Science of Prompt Engineering: A New Literacy in the Information Age. Internet Reference Services Quarterly, (ahead-of-print), 1–8. https://doi.org/10.1080/10875301.2023.2227621

License


Are You AI Ready? Investigating AI Tools in Higher Education - Student Guide Copyright © 2024 by SATLE 'Are You AI Ready?' Project Team, University College Dublin is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.