Prompt Engineering | How to Get Generative AI to Produce Useful Outputs
AI models have become prominent tools in many people’s working days, the most dominant being ChatGPT. These models can be incredibly useful; however, they often don’t produce the output we actually want, forcing us to go back and forth to coax out an ideal result.
Prompt engineering is a growing discipline that optimises the prompts given to large language models (LLMs) to get the most out of them. There are many LLMs, each with its own nuances when it comes to prompting. Prominent text models include the GPT series (such as GPT-3 and GPT-4), AI21’s Jurassic models, GPT-J, and BERT, while image generators such as DALL-E and Midjourney are prompted in similar ways.
It is important to remember that generative AI works with probabilities, essentially predicting the next words in a sequence. We give the model a prompt, and it returns a completion (the words it predicts should follow). Therefore, we want to guide the model towards predicting words that align with our desired output.
Additionally, understanding the underlying architecture of the LLM you are working with can help your prompt engineering, as each model responds differently to the same prompt. That said, you do not need this understanding to improve your prompts; it is simply a way to enhance them further. This article uses GPT-4 as its example; however, all of the strategies can be applied across LLMs to help you improve your prompts.
What is a Prompt?
Firstly, it is essential to understand what prompts are and how they can be made more effective. Whether a prompt is good or bad depends on whether it results in you getting your desired outcome.
Prompts can encompass many components, including:
- Task instruction/question
- Context
- Role for the AI model to embody
- Formatting instructions for the output
- Examples
Here is an example of a prompt only including the task instruction:
“Write me a guide on how to make a cheese toastie.”
Here is an example of a prompt to show what all of the components look like together in practice:
“Write a simple, step-by-step guide on preparing a cheese toastie, including key tips for success (this is the task instruction). The reader has basic cooking tools (skillet, spatula, stove) and ingredients (sliced bread, your choice of cheese, butter) (this is the context). As a culinary expert, provide practical advice with a friendly tone, as if you’re guiding a friend through their first cooking experience (this is the role for the AI to embody). Present your guide with numbered steps (this is the format instruction).
(Examples)
Step 1: Begin by buttering one side of each bread slice. Use room-temperature butter for easy spreading to ensure an even, golden crust.
Step 2: Place a slice of your favourite cheese between the unbuttered sides of the bread. Cheddar or mozzarella are great choices for their melting qualities and flavour.”
Not all of these components are required in a prompt; however, effectively utilising them can help you get the best output.
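To make these components concrete, here is a minimal sketch in Python of how the pieces can be assembled into a single prompt string. The `build_prompt` helper and its parameter names are illustrative, not part of any library; only the task is mandatory, mirroring the point above that the other components are optional.

```python
def build_prompt(task, context=None, role=None, format_instructions=None, examples=None):
    """Assemble a prompt from optional components; only the task is required."""
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if role:
        parts.append(f"Role: {role}")
    if format_instructions:
        parts.append(f"Format: {format_instructions}")
    if examples:
        parts.append("Examples:\n" + "\n".join(examples))
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Write a simple, step-by-step guide on preparing a cheese toastie.",
    context="The reader has a skillet, spatula, stove, sliced bread, cheese, and butter.",
    role="A culinary expert guiding a friend through their first cooking experience.",
    format_instructions="Present the guide as numbered steps.",
    examples=["Step 1: Begin by buttering one side of each bread slice."],
)
print(prompt)
```

Keeping each component as a separate argument makes it easy to experiment with adding or dropping one and comparing the outputs.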
How to Improve Your Prompts
Many strategies can be used in prompt engineering to improve outputs, some more technical than others. Below, we cover eight strategies that you can implement quickly to enhance your prompts.
Strategy 1 – Asking for Citations and Resources
Hallucinations are a prominent issue in generative AI models. One way to mitigate this is to ask the model to use reliable sources and cite them in its answer.
An example of this is – “Is exercise good for health? Respond using only reliable sources and cite those sources in your response.”
Nevertheless, this only partially mitigates hallucinations. The model can still invent sources, so it is always important to verify information before using it.
Strategy 2 – Make the Model Fact Check Itself
This involves asking the model to fact-check its previous response. For instance, if you ask the model to write a paragraph on the benefits of running for heart health, your next prompt could say, “Review the correctness of this paragraph as if you were a qualified cardiologist.”
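This two-step pattern can be sketched as a small helper that wraps a previous response in a review request. The `fact_check_prompt` function and its default reviewer role are illustrative assumptions, not an API; the returned string would be sent to the model as a follow-up prompt.

```python
def fact_check_prompt(previous_response, reviewer_role="a qualified cardiologist"):
    """Build a follow-up prompt asking the model to review its own output."""
    return (
        f"Review the correctness of the following text as if you were {reviewer_role}. "
        "List any claims that are inaccurate or unsupported.\n\n"
        f"{previous_response}"
    )

followup = fact_check_prompt(
    "Running strengthens the heart muscle and lowers resting heart rate."
)
print(followup)
```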
Strategy 3 – Set Boundaries for the Output
Constraints in your prompts will help the model provide the desired output. For instance, include “Explain this in 2 sentences” at the end of your prompt if you want a concise answer.
Strategy 4 – Few-Shot Prompting
This strategy involves including examples in your prompt to help guide its response. Good examples will show the model what you are looking for and help it produce the desired outcome. This was demonstrated in the above example about the cheese toastie.
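A few-shot prompt can be built programmatically by prepending worked input/output pairs before the real query. This sketch uses a hypothetical `few_shot_prompt` helper and a made-up sentiment task purely for illustration; the key idea is that the examples establish the pattern the model should follow.

```python
def few_shot_prompt(instruction, examples, query):
    """Prepend worked input/output examples so the model mimics their pattern."""
    blocks = [instruction]
    for inp, out in examples:
        blocks.append(f"Input: {inp}\nOutput: {out}")
    # End with the real query and an empty Output: for the model to complete.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("The toastie was perfectly crisp.", "positive"),
     ("The cheese never melted.", "negative")],
    "Golden crust and gooey cheddar - delicious.",
)
print(prompt)
```

Two or three well-chosen examples are usually enough to fix the format and style of the completion.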
Strategy 5 – Chain-of-Thought Prompting
This is a method where you ask the model to follow a chain of reasoning steps that guides it to the correct answer. It was introduced in the paper “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” by Google researchers (Wei et al., 2022), which argues that the complex reasoning ability of LLMs can be improved by prompting with a series of intermediate reasoning steps.
They present the following example to illustrate Chain-of-Thought prompting: the prompt includes reasoning steps within its worked example, which results in the output also including the reasoning steps used to reach the answer.
The image below, from the paper, shows two prompts. Both include a question, an example answer, and then a second question. The example answer in each indicates to the LLM how we want the answer to be structured.
In the Chain-of-Thought prompt, the example answer led the LLM to produce an output that included its reasoning, enabling it to give the correct answer. The standard prompt did not encourage this reasoning and resulted in the wrong answer being returned.
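The paper’s widely cited arithmetic example can be reproduced as a plain prompt string. Because the worked answer spells out its intermediate steps, the model is nudged to reason step by step before giving its final answer (a sketch in Python; the string is the prompt you would send to the model):

```python
# Chain-of-Thought prompt: the example answer includes its reasoning,
# so the completion for the second question tends to include reasoning too.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. \
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. \
5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, \
how many apples do they have?
A:"""
print(cot_prompt)
```

A standard prompt would differ only in the example answer, which would jump straight to “The answer is 11.” without the intermediate steps.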
Strategy 6 – Set a Role for the Model to Take
Giving the model a role to embody, or designating a tone of voice for the output, can improve the quality of the responses you receive. This was demonstrated in the cheese toastie example above.
Strategy 7 – Split Complex Tasks into Smaller Subtasks
This is helpful because smaller sub-tasks have lower error rates than one complex task. Consider splitting a long request into bite-sized chunks for the model to respond to individually.
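The pattern of sending sub-tasks one at a time can be sketched as follows. Both `split_task` and the `ask` callback are illustrative: `ask` stands in for whatever function sends a prompt to your chosen model and returns its reply (for example, an API call), and the demo replaces it with a simple echo so the sketch runs on its own.

```python
def split_task(subtasks, ask):
    """Send each sub-task as its own prompt and collect the answers.

    `ask` is a placeholder for whatever sends a prompt to your model
    and returns its reply.
    """
    answers = []
    for i, subtask in enumerate(subtasks, start=1):
        prompt = f"Sub-task {i}: {subtask}"
        answers.append(ask(prompt))
    return answers

# Demo with a stand-in `ask` that just echoes the prompt it received.
results = split_task(
    ["Summarise the attached report.",
     "List three risks it raises.",
     "Draft an email sharing the summary."],
    ask=lambda p: f"[model reply to: {p}]",
)
print(results)
```

Each sub-task can also feed the previous answer into the next prompt when the steps depend on one another.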
Strategy 8 – Requesting a Specific Format for the Output
Adding your desired format can make the output more useful; for instance, include “format your response in a table”. This strategy can also help you work around limitations. For example, if the model responds that it cannot export a CSV or other file type, ask it instead to format its answer so that you can paste it into a CSV file yourself.
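The CSV workaround above amounts to appending an explicit formatting instruction to the task. This sketch uses a hypothetical `csv_workaround_prompt` helper and an invented example task; the returned string is what you would send to the model.

```python
def csv_workaround_prompt(task):
    """Ask for CSV-formatted text instead of a file export."""
    return (
        f"{task}\n"
        "Format your response as comma-separated values with a header row, "
        "so I can paste it directly into a CSV file. Return only the CSV text."
    )

prompt = csv_workaround_prompt("List five common cheeses with their melting points.")
print(prompt)
```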
Are Specific Prompts Always Good?
Creating specific prompts can enhance the precision of your responses. Nevertheless, it’s essential to bear in mind that LLMs draw on an extensive amount of information. A prompt that leaves room for interpretation may therefore yield unexpectedly insightful results, unveiling perspectives or solutions you had not considered. By occasionally relaxing the constraints on your prompts, you tap into the full breadth of an LLM’s knowledge.
Wrap-up
Prompt engineering can be very useful in guiding LLMs to produce your desired output, and many strategies are easy and quick to implement. At Varn, we use AI models such as ChatGPT as a tool in some of our tasks, and prompt engineering helps us get the outputs we want more efficiently.
Additional Resource
OpenAI has published a prompt engineering guide for its GPT-4 model, which covers strategies and tactics you can use, along with examples of effective prompts.