A Practical Guide to Prompt Engineering

Image: birthday table
My first day of holiday is also my 30th birthday! Julia gave me this warm place setting, which I’m still sitting at. Today I’m taking the time to write. My gift to myself.

You can jump straight to the strategies for better results in prompt engineering and skip my thoughts on the status quo of AI.

In 2020, a colleague at work showed me a video on what was then called Twitter. It showed a screenshot of a developer typing a task into a text box on a rudimentary website. Something like: “Create a React component that does X and Y.” Shortly afterward, an answer popped up in the form of code. Even though the quality of the result seemed mediocre, this machine-driven process of task → result stuck in my head. The project was something called GPT-2.

My memory is a bit fuzzy. What I do remember are two feelings: disbelief and fear. Disbelief, because I didn’t find the code very useful and couldn’t yet imagine the technical progress to come. Fear, because at some intangible point, better results could affect me and replace me.

A few years later, OpenAI released GPT-3 and GPT-4. These systems have not only become a trend, but are increasingly used in the mainstream. I also use GPT on a daily basis – the output of the models has become usable. It’s probably just now that schools and universities are getting to grips with the tool.

Training large language models is no longer the preserve of start-ups. Tech giants are investing hundreds of millions of dollars to build their own models to compete with, and most likely surpass, OpenAI. Just a few weeks ago, the French AI start-up Mistral AI released several models that are not only free to use (Apache license), but also reach the quality level of GPT-3.5.

I Am Afraid of Being Replaced

AI processes structured patterns. Code is essentially a structured pattern. Programming is therefore a fundamental skill of AIs.
… That’s what I think.

AI cannot think in human terms, but it can solve repetitive tasks by attaching the most likely next word B to a word A (in the context of a task). This is more like programming than computing. Simply put, programming produces a tasty teabag of text somewhere that can be infused or executed by a machine. This code should first and foremost fulfill its inherent purpose – solve a problem. The art of development, however, lies in writing clean code that is sustainable and maintainable.
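
To make that idea tangible, here is a toy sketch in Python. The word table is invented and vastly simplified – real models operate on tokens, not whole words, and learn their probabilities from enormous amounts of text – but the principle of picking the most likely continuation is the same:

    # Toy illustration only: the probabilities are made up, and real
    # language models work on tokens, not whole words.
    next_word_probs = {
        "clean": {"code": 0.7, "room": 0.2, "energy": 0.1},
        "code": {"review": 0.5, "quality": 0.3, "smell": 0.2},
    }

    def most_likely_next(word: str) -> str:
        # Attach the most likely next word B to a word A.
        candidates = next_word_probs.get(word)
        return max(candidates, key=candidates.get) if candidates else "<end>"

    print(most_likely_next("clean"))  # -> code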

Until now, it has been up to people to decide what to program and to implement the development. It is possible that this relationship will change in the future. I imagine AIs increasingly working as worker bees, executing tailored packages of tasks. It may become harder for junior developers to break into the industry, as their work will be easier to replace, while senior developers will have to grow alongside maturing AIs.

Even if AIs can program perfectly, one crucial role remains: that of the decision-maker. A position that determines, among other things, which programming language and framework to use and how maintainability is implemented in the project. Because there is no single way to achieve clean code.

Learn How the Gutenberg Press Works

Over the past few months, I have been thinking about my future every day, looking for clues to help me situate myself in current developments. I came across an analogy that stuck with me:

With the invention of the printing press by Johannes Gutenberg, fewer Bibles were handwritten, but many more books were published. Perhaps by minimizing the barriers to getting things done, project development as a whole will flourish, motivating new authors to realize their ideas.

I feel confident learning how the printing press works and how it is operated and maintained. That’s why I’m planning to book a course on the technical basics of language models next year. Probably through Brilliant.org. Know of a better one? Drop me a line.

I Love Programming and AI Helps Me Do It

To this day, changing careers remains one of the best decisions of my life. And I don’t mean that in a melodramatic way. Most of the time, it takes more time than knowledge to turn an idea from your head into code. That’s where AI comes in. It works effectively for me when I put parts of tasks into defined frameworks. It doesn’t (yet) help me shorten the learning process – to skip the actual educational struggle – but it saves me time-consuming googling and lets me do more of what I enjoy.

I try to transform my fear into optimism. If, as a thought experiment, I set my fear aside, AI has so far helped me enormously to increase my productivity. Less in a capitalist sense, more in the sense of the personal fulfillment I get from developing and implementing ideas.

Cover image with text: “Strategies”

Strategies for Better Results

Over the past few months, through constant use and trial and error, I have learned how to put together task packages for AI in a way that delivers meaningful results. In the context of artificial intelligence, writing clearly defined instructions, i.e. describing the task to be performed by the AI, is called prompt engineering.

OpenAI has now published a guide to prompt engineering, replacing the countless blog articles on the internet that draw conclusions about certain ideals of task writing based on their experiences with prompting.

In the following, I summarize the key points in simple language so that you can get better results when prompting GPT-4 (and probably the models that follow). This summary may be of little use to you right now. But perhaps in a few years, when some variant of AI is more widely used in your work, you’ll remember it and think: “There was something about that.”

1. Write Clear Instructions

AI models cannot read minds. If the output is too long, ask for a shorter answer. Likewise, if the results are too simple, ask for an expert-level text. If you want a specific format (e.g. a table), mention it. The less guessing the model has to do, the better. A minimal example follows the list below.

Approaches:

  • Include details or examples to get more relevant answers.
  • Ask the model to assume a persona, e.g. “Act as an academic.”
  • Use separators to clearly identify different parts of the input.
  • State the steps required to complete a task.
  • State the desired length of the output.
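
As a sketch of what this can look like in code, here is a call via the official OpenAI Python library (openai >= 1.0); the model name, persona, delimiters and task text are my own examples, not prescribed by the guide:

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            # Persona via the system message.
            {"role": "system",
             "content": "Act as an academic who writes precisely and concisely."},
            # Delimiters (triple quotes), explicit steps, format and length.
            {"role": "user",
             "content": (
                 'Summarize the text between the """ delimiters as a table '
                 "with the columns Term and Definition.\n"
                 "Step 1: Extract the key terms.\n"
                 "Step 2: Define each term in at most 15 words.\n"
                 '"""Prompt engineering means writing clearly defined '
                 'instructions for an AI model."""'
             )},
        ],
    )
    print(response.choices[0].message.content)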

2. Provide Reference Texts

Language models tend to hallucinate when asked about esoteric topics or, for example, for quotes and URLs. The number of incorrect answers can be reduced by providing reference texts – i.e. context for the question. Ask the model to use the references as the basis for answering the task.
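
A minimal sketch, again assuming the OpenAI Python library; the reference here is a single hard-coded sentence from this article – in practice it might come from your own documents or a search step:

    from openai import OpenAI

    client = OpenAI()

    # Hard-coded for illustration; normally this would be retrieved.
    reference = (
        "Mistral AI released several models that are free to use "
        "(Apache license) and reach the quality level of GPT-3.5."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer only on the basis of the provided reference "
                        "text. If the answer is not contained in it, say so."},
            {"role": "user",
             "content": f'Reference: """{reference}"""\n\n'
                        "Question: Under which license are the models available?"},
        ],
    )
    print(response.choices[0].message.content)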

3. Break Complex Tasks Into Simpler Subtasks

Complex tasks tend to have higher error rates than simpler ones. In software development, it is therefore good practice to break a complex system down into a series of modular components. The situation is similar for tasks involving a language model. In addition, the response to an earlier subtask can be used to construct the input for a later one. A sketch of the second approach follows the list below.

Approaches:

  • Summarize or filter previous turns in long dialogues.
  • Summarize long documents piece by piece and recursively build a complete summary.
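
Here is what the recursive summary could look like, again assuming the OpenAI Python library; the character-based chunking is deliberately naive – real code would rather split on tokens, paragraphs or sections:

    from openai import OpenAI

    client = OpenAI()

    def summarize(text: str) -> str:
        # One simple subtask: summarize a single chunk of text.
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"Summarize in two sentences:\n{text}"}],
        )
        return response.choices[0].message.content

    def summarize_document(document: str, chunk_size: int = 4000) -> str:
        # Summarize piece by piece, then recursively summarize the
        # partial summaries until a single complete summary remains.
        chunks = [document[i:i + chunk_size]
                  for i in range(0, len(document), chunk_size)]
        partial = [summarize(chunk) for chunk in chunks]
        if len(partial) <= 1:
            return partial[0] if partial else ""
        return summarize_document("\n".join(partial), chunk_size)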

4. Give the Model Time to “Think”

When you add up the prices of items in a supermarket, you may not know the total immediately, but you can work it out given time. Similarly, models make reasoning errors when they try to answer immediately rather than taking time to work out an answer. Asking for a chain of thought before the final answer helps the model arrive at correct answers more reliably. A sketch of the first approach follows the list below.

Approaches:

  • Ask the model to work out its own solution before coming to a conclusion.
  • Ask the model whether it has missed anything in previous answers.
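
A sketch of the first approach, sticking with the supermarket example; the instruction wording and the prices are my own, and the proposed answer is deliberately wrong (the correct total is 7.37):

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "First work out your own solution step by step. "
                        "Then compare it with the proposed answer, and only "
                        "afterwards say whether the answer is correct."},
            {"role": "user",
             "content": "Items: 2.49 + 3.99 + 0.89. "
                        "Proposed answer: the total is 7.17."},
        ],
    )
    print(response.choices[0].message.content)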

End of article. If you spot a typo or have thoughts about this article, feel free to write me. 🙆‍♂️
