
A few thoughts on prompt engineering

2023-06-30

With transformer-based models such as ChatGPT being all the rage right now, the art of crafting clear, concise and effective prompts - also known as prompt engineering - has become increasingly important to utilize these tools to their full potential. Without further ado, here are my top 5 strategies to get these models to behave the way you want...

1. Dos and don'ts

As always in life, specifying your requirements as clearly as possible will improve the outcome of any project or task, and prompt engineering is no exception. So try to provide explicit positive (dos) and negative (don'ts) requirements. Here are some examples:
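- Do keep the answer under 300 words
- Do use simple, beginner-friendly language
- Do structure the answer as a bulleted list
- Don't assume any prior knowledge
- Don't use filler phrases or repeat the question

Naturally, the exact requirements depend on the task at hand - the ones above are just illustrative.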

2. Role plays

Surprisingly, telling the model to act as a certain person can yield vastly better results with very little extra work. So keep this trick in mind and ideally place it right at the beginning of the conversation (or even better in a system prompt). Here is some inspiration:
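- Act as an experienced Linux system administrator and answer my questions about server hardening.
- Act as a patient math tutor and explain every step of your reasoning to a high school student.
- Act as a senior technical editor and point out unclear wording in the texts I send you.

As with the dos and don'ts, pick whatever persona fits your task best.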

3. Templates

Especially for larger tasks such as a presentation in Markdown format, creating a template that the model can fill out and adapt makes sense to ensure consistent output. Such a prompt can look like this:

Fill the following template and adapt it if necessary with more pages. Template:

"""
# Title

<br><br><br><br><br><br><br><br>

#### Author

---

# Agenda

- Topic 1
- Topic 2
- Topic 3

---

# Topic 1

- Point
- Point
- Point

<br><br>

**→ Conclusion of the topic**

---

# Topic 2

- Point
- Point
- Point

<br><br>

**→ Conclusion of the topic**

---

# Thanks for your attention!

Sources:

- Source
- Source
- Source
"""

4. Examples

When we just provide a plain prompt, we are technically doing so-called zero-shot prompting, which means that we don't give the model any further context or examples of how we want it to behave. Adding a few example exchanges turns this into few-shot prompting, which is quite advisable for universal systems such as customer service bots, but is sadly only really supported through the API. If you want to give it a go, take a look at the official GPT best practices.
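As a rough sketch, few-shot prompting through the openai Python package could look something like this (model name, API key handling and the example exchanges are purely illustrative):

import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# A couple of example exchanges teach the model the desired tone and format
# before the actual request is made (few-shot prompting)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a friendly customer service bot for a web hosting company."},
        # Example exchange 1
        {"role": "user", "content": "My website is down, what should I do?"},
        {"role": "assistant", "content": "Sorry to hear that! Please check our status page first and, if everything looks fine there, restart your instance from the control panel."},
        # Example exchange 2
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "No worries! Just use the 'Forgot password' link on the login page and follow the instructions in the email."},
        # Actual request
        {"role": "user", "content": "Can I upgrade my storage plan?"},
    ],
)
print(response["choices"][0]["message"]["content"])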

5. Split into smaller tasks

Last but certainly not least, I want to emphasize the possibility of splitting larger tasks into smaller ones, which can be fulfilled sequentially, in order to lower the difficulty for the model or even enable it to perform the desired task at all. Here is an example of splitting a text analysis into smaller chunks (with a small sketch of the sequential calls after the list):

  1. Trivia: Write an introduction, giving details of the author, title, type of text, year of publication, topic and addressee.
  2. Content: Summarize the central statements of the text, maintaining linguistic distance through the use of the subjunctive mood.
  3. Language: Analyze the language used in the text, looking at common stylistic devices, particular word choices, positive/negative connotation, etc.
  4. Effect: Based on your findings on the introduction, content and language, create an interpretation of the text's effect.
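If you want to automate such a chain through the API, a rough sketch (again using the openai Python package, with a placeholder file name and model) could look like this:

import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# The individual analysis steps from above, executed one after another
steps = [
    "Trivia: Write an introduction, giving details of the author, title, type of text, year of publication, topic and addressee.",
    "Content: Summarize the central statements of the text, maintaining linguistic distance through the use of the subjunctive mood.",
    "Language: Analyze the language used in the text, looking at common stylistic devices, particular word choices, positive/negative connotation, etc.",
    "Effect: Based on your findings on the introduction, content and language, create an interpretation of the text's effect.",
]

# Placeholder file containing the text to analyze
text = open("text.txt").read()
messages = [{"role": "system", "content": "You are a careful text analyst. Text to analyze:\n" + text}]

# Every step sees the previous answers, so later steps can build on earlier findings
for step in steps:
    messages.append({"role": "user", "content": step})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    print(answer + "\n")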

With that said, I hope you find these strategies just as useful as I do. Of course, feel free to share your findings in the comments down below and have a lovely day...
