
Prompt Engineering And Why It Matters To The AI Revolution

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. Prompting is the equivalent of telling the Genie in the magic lamp what to do. In this case, the magic lamp is DALL-E, ready to generate any image you wish for.

In-context learning via prompting

In biology, emergence is an incredible property: parts that come together show, as a result of their interactions, new behaviors (called emergent behaviors) that you can't see on a smaller scale.

The even more incredible thing is that even if the smaller-scale version seems similar to the larger one, the larger scale, because it comprises more parts and interactions, eventually shows a completely different set of behaviors.

And there is no way of predicting what this behavior might be.

That’s the beauty (for better or worse) of scale!

In the current AI revolution, the most exciting aspect is the rise of emergent properties of machine learning models working at scale.

And it all started with the ability to train those AI models in an unsupervised manner. Indeed, unsupervised learning has been one of the key tenets of this AI revolution, and it is what got AI progress unstuck over the last few years.

Before 2017, most AI worked by leveraging supervised learning on small, structured datasets, which could only train machine learning models on very narrow tasks.

After 2017, with a new architecture called a transformer, things started to change.

This new architecture could be used with an unsupervised learning approach. The machine learning model could be pre-trained on a very large, unstructured dataset with a very simple objective function: text-to-text prediction.

The exciting aspect is that the machine learning model, in order to learn how to properly perform text-to-text prediction (what might seem like a very simple task), started to learn a wide range of patterns and heuristics from the data it was trained on.

This enabled the machine learning model to learn a wide variety of tasks.

Rather than trying to perform a single task, the large language model started to infer patterns from the data and re-used those when performing new tasks.

This has been a core revolution. In addition, the other turning point, which came out with the GPT-3 paper, was the ability to prompt these models.

In short, it enabled these models to further learn a user's context through natural language instructions, which could dramatically change the output of the model.

This other aspect was also emergent, as nobody expressly asked for it. Thus, this is how we got in-context learning, via prompting, as a core, emergent property of current machine learning models.
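To make in-context learning concrete, here is an illustrative few-shot prompt (a made-up example in the style popularized by the GPT-3 paper, not one taken from this article). The examples inside the prompt are not additional training data; they are just text, yet they are enough for the model to infer the task and complete the last line (typically with “fromage”):

```
Translate English to French.

sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>
```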

Understanding prompt engineering

Prompt Engineering is a key, emergent property of the current AI paradigm.


One of the most interesting aspects of Prompt Engineering is the fact that it came out as an emergent property of scaling up the transformer architecture to train large language models.

Just like the wishes you express can turn against you, when you prompt the machine, the way you express what it needs to do can dramatically change the output. 

And the most interesting part?
Prompting was not a feature developed by AI experts. It was an emergent feature. In short, as these huge machine learning models were developed, prompting became the way to get the machine to execute instructions.

Nobody asked for it; it just happened!

In a 2021 paper, researchers from Stanford highlighted how transformer-based models had become foundation models.


As explained in the same paper:

The story of AI has been one of increasing emergence and homogenization. With the introduction of machine learning, how a task is performed emerges (is inferred automatically) from examples; with deep learning, the high-level features used for prediction emerge; and with foundation models, even advanced functionalities such as in-context learning emerge. At the same time, machine learning homogenizes learning algorithms (e.g., logistic regression), deep learning homogenizes model architectures (e.g., Convolutional Neural Networks), and foundation models homogenizes the model itself (e.g., GPT-3).

Prompt engineering is a process used in AI where one or several tasks are converted to a prompt-based dataset that a language model is then trained to learn.

The motivation behind prompt engineering can be difficult to understand at face value, so let’s describe the idea with an example.

Imagine that you are establishing an online food delivery platform and you possess thousands of images of different vegetables to include on the site.

The only problem is that none of the image metadata describes which vegetables are in which photos.

At this point, you could tediously sort through the images and place potato photos in the potato folder, broccoli photos in the broccoli folder, and so forth.

You could also run all the images through a classifier to sort them more easily but, as you discover, training the classifier model still requires labeled data. 

Using prompt engineering, you can write a text-based prompt that you feel will produce the best image classification results.

For example, you could tell the model to show “an image containing potatoes”. The structure of this prompt – or the statement that defines how the model recognizes images – is fundamental to prompt engineering. 

Writing the best prompt is often a matter of trial and error. Indeed, the prompt “an image containing potatoes” is quite different from “a photo of potatoes” or “a collection of potatoes.”

Prompt engineering best practices

Like most processes, the quality of the inputs determines the quality of the outputs. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual.

Writing good prompts is a matter of understanding what the model “knows” about the world and then applying that information accordingly.

Some believe it is akin to the game of charades where the actor provides just enough information for their partner to figure out the word or phrase using their intellect.

Think of the model as representing the partner in charades. Just enough information is provided via the training prompt for the model to work out the patterns and accomplish the task at hand.

There is no point in overloading the model with all the information at once; the prompt should give just enough context for the model to infer the task.

Prompt engineering and the CLIP model

The CLIP (Contrastive Language-Image Pre-training) model was developed by the AI research laboratory OpenAI in 2021.

According to researchers, CLIP is “a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3.”

CLIP was trained on over 400 million image-text pairs, each consisting of an image matched with a caption.

Using this training, one can input an image into the model along with a set of candidate captions, and it will pick the caption it believes is the most relevant.

The above quote also touches on the zero-shot capabilities of CLIP, which makes it somewhat special among machine learning models.

Most classifiers trained to recognize apples and oranges, for example, are expected to perform well on classifying apples and oranges but generally won’t detect bananas.

Some models, including CLIP, GPT-2, and GPT-3, can handle such unseen categories. In other words, they can execute tasks that they weren't explicitly trained to perform. This ability is known as zero-shot learning.
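To make this concrete, here is a minimal sketch of zero-shot image classification with CLIP, assuming the Hugging Face transformers library and the publicly released openai/clip-vit-base-patch32 checkpoint; the image path, labels, and prompt wording are illustrative, not taken from this article.

```python
# Minimal sketch: zero-shot image classification with CLIP via Hugging Face transformers.
# Assumes: pip install transformers torch pillow; "vegetable.jpg" is a hypothetical local image.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("vegetable.jpg")

# The prompt wording matters: "a photo of potatoes" may score differently
# than "an image containing potatoes".
prompts = ["a photo of potatoes", "a photo of broccoli", "a photo of bananas"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{prompt}: {p:.2%}")
```

None of these classes comes from a supervised training objective; they are defined entirely by the text prompts, which is what makes the behavior zero-shot, and swapping “a photo of” for “an image containing” can shift the scores.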

Examples of prompt engineering

As of 2022, the evolution of AI models is accelerating. And this is making prompt engineering more and more important.

We first got text-to-text with language models like GPT-3, BERT, and others.

Then we got text-to-image with Dall-E, Imagen, MidJourney, and StableDiffusion.

At this stage, we’re moving to text-to-video with Meta’s Make-A-Video, and now Google’s developing its own Imagen Video.

Effective AI models today focus on getting more with much, much less!

One example is DreamFusion: Text-to-3D using 2D Diffusion, built by the Google Research lab.

In short, AI diffusion models are generative models, meaning they produce outputs similar to the data on which they were trained.

And, by definition, diffusion models work by gradually adding noise to the training data and then generating an output by reversing that noising process.
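As a sketch of what “adding noise and reversing it” means formally, here is the standard forward (noising) process from denoising diffusion probabilistic models (the generic DDPM formulation, stated as background rather than as a detail from the DreamFusion paper):

```latex
% Forward (noising) process: each step adds a little Gaussian noise.
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right)

% With \alpha_t = 1 - \beta_t and \bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s,
% any noised sample x_t can be drawn directly from the clean data x_0:
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t) I\right)

% Generation runs the process in reverse: a learned model starts from
% pure noise x_T and denoises step by step.
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
```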

DreamFusion, by Google Research, is able to translate text into 3D shapes without needing a large-scale dataset of labeled 3D data (which doesn't exist today).

And that’s the thing!

As explained by the research group:

“Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D data and efficient architectures for denoising 3D data, neither of which currently exist. In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis.”

Why is this relevant?

After a web that has been primarily text-based or 2D-image-based for over two decades, the time has come to enable richer formats, like 3D, which can work well in AR environments.

In short, imagine wearing Google's AR glasses while these AI models enhance the real world with 3D objects on the fly, making AR experiences far more compelling.

Meta AI's Make-A-Video system was given a prompt, and the machine gave back a short-form video.

At the same time, OpenAI announced speech-to-text with Whisper.

Combined, these AI models would create a multi-modal environment where a single person or small team can leverage all these tools for content generation, filmmaking, medicine, and more!

This means that a few industries that could not easily be entered before become more scalable, as barriers to entry are broken down.

It’s possible to test/launch/iterate much faster, thus enabling markets to evolve more quickly.

If, after almost 30 years of the Internet, many industries (from healthcare to education) are still locked into old paradigms, a decade of AI might completely reshuffle them.

All of these AI models are driven by prompts, yet prompting a machine involves such subtleties that small variations in the prompt can lead to very different outputs.

Just in October 2022:

  • Stability AI announced $101 million in funding for open-source artificial intelligence.
  • Jasper AI, a startup developing what it describes as an “AI content” platform, raised $125 million at a $1.5 billion valuation. Jasper is in the process of acquiring AI startup Outwrite, a grammar and style checker with more than a million users.
  • OpenAI, valued at nearly $20 billion, entered advanced talks with Microsoft for more funding.

Today, with prompting, you can generate a growing number of outputs.

Many of OpenAI's use cases can be addressed via prompting, from Q&A to classifiers and code generators, and the number of use cases that prompting enables keeps growing rapidly.

Another cool application? You can design your own shoes with prompting:

Here, DreamStudio AI was prompted to generate a pair of custom sneakers.

Prompting like coding?

On November 30, OpenAI released ChatGPT.

A conversational AI interface with incredible capabilities.

As I tested ChatGPT, it was mind-blowing!

I used it to generate job descriptions.

With a simple prompt, it gave me a pretty accurate output in a matter of a few seconds!

That made me realize this was another turning point for AI…

And that's nothing: indeed, the current generation of AI models can code incredibly well!

What’s ChatGPT?

ChatGPT is a tool built on the GPT-3 family of models and refined with the InstructGPT approach: fine-tuning through reinforcement learning from human feedback to make it more grounded than the base GPT model.

With ChatGPT, you can get an answer on almost any topic (though in its beta release it was restricted in various areas).

There is much more to it.

With ChatGPT, you can turn yourself into a coder.

All you need is prompting!

Here I prompted ChatGPT to generate the code for a stock trading web app!
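ChatGPT itself is a web interface, but the same kind of prompt-to-code workflow can be reproduced programmatically. Below is a minimal sketch using the OpenAI Python client's Chat Completions endpoint; the model name and the prompt are illustrative assumptions, not the exact prompt used here.

```python
# Minimal sketch: prompting a chat model to generate code.
# Assumes the OpenAI Python client (openai>=1.0) is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a minimal Flask web app that shows the latest price of a stock "
    "ticker entered by the user. Return the full code in a single file."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The generated code comes back as plain text in the assistant message.
print(response.choices[0].message.content)
```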

ChatGPT leverages a freemium model, with a free version with limited capabilities and a premium version (starting at $20/month) that includes access during peak times, faster response times, and early access to new features and improvements.

How much does a prompt engineer make?

In the midst of the AI buzz and revolution, a prompt engineer can make anywhere between $150,000 and $300,000 per year.

As an interesting example, a job posting for a combined prompt engineer and librarian role looks like this:

[Image: prompt engineer and librarian job posting]


Prompt engineering examples and case study

Here is a prompt engineering example with some best practices included in the process.

Customer refund for a television

Imagine that a customer contacts an electronics company requesting a refund on a television they recently purchased. The company wants to use a model that would assist the customer service department by generating a plausible response.

In a trial run, a hypothetical or “test” customer contacts the company with the following query: Hello, I’d like to get a refund for the television I purchased. Is this possible?

To design the prompt (and, by extension, useful ways in which the agent can interact with the customer), the company starts by informing the model of the general setting and what the rest of the prompt will contain.

The prompt may read something like this: This is a conversation between a customer and a customer care agent who is helpful and polite. The customer's question: I'd like to get a refund for the television I purchased. Is this possible?

Now that the model knows what to expect, it is shown the start of the response it should provide to the customer: Response by the customer care agent: Hello, we appreciate you reaching out to us. Yes,

Combining the first and second parts, the prompt clarifies that the response to the customer query comes from a customer care agent and that the answer should be positive.

Composition of the customer care language model

The above scenario can be summarized by defining the components of the model itself:

  • Task description – This is a conversation between a customer and a customer care agent who is helpful and polite.
  • Input indicator – The customer's question:
  • Current input – I'd like to get a refund for the television I purchased. Is this possible?
  • Output indicator – Response by the customer care agent: Hello, we appreciate you reaching out to us. Yes,

Note that input and output indicators are an effective way to describe desired tasks to the model – especially when multiple examples are included in the prompt. Based on this, the model may produce three text outputs (known as completions) to complete the sentence after the comma:

  1. Yes, we can accept returns if the television is unused, unopened, and not damaged.
  2. Yes, we are happy to process a refund for your television purchase. However, please note that we require the television to be returned to your nearest store.
  3. Yes, this is possible. Please reply with your name, address, phone number, and receipt number at your earliest convenience. One of our customer care staff will be in touch with you as soon as possible.

While this is a somewhat simplified approach, it is clear that in this example the model produces several plausible completions from only a small amount of customer service context.
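As a rough sketch of how such a prompt could be assembled and sampled in practice, here is one way to concatenate the task description, input indicator, current input, and output indicator and then request three completions. The article does not specify an implementation, so the OpenAI client, model name, and sampling parameters below are illustrative assumptions.

```python
# Rough sketch: assembling the customer-care prompt and sampling three completions.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

task_description = (
    "This is a conversation between a customer and a customer care agent "
    "who is helpful and polite."
)
input_indicator = "The customer's question:"
current_input = "I'd like to get a refund for the television I purchased. Is this possible?"
output_indicator = (
    "Response by the customer care agent: Hello, we appreciate you reaching out to us. Yes,"
)

# The prompt is simply the four components concatenated in order.
prompt = f"{task_description}\n{input_indicator} {current_input}\n{output_indicator}"

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # illustrative completion-style model
    prompt=prompt,
    n=3,           # ask for three alternative completions
    max_tokens=60,
    temperature=0.7,
)

for i, choice in enumerate(response.choices, start=1):
    print(f"Completion {i}: Yes,{choice.text}")
```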

In theory, the electronics company could fine-tune the model with examples of how it should respond to specific questions, requests, and comments.

ChatGPT Prompt Examples

ChatGPT prompts span a wide range of use cases, including:

  • Code generation
  • Content creation
  • Data analysis
  • Education and training
  • Decision-making and problem-solving

Key takeaways:

  • Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results.
  • Like most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual.
  • Developed by OpenAI, the CLIP (Contrastive Language-Image Pre-training) model is an example of a model that uses natural language prompts to classify images; it was trained on over 400 million image-caption pairs.

Key Highlights

  • Prompt engineering is a concept in natural language processing (NLP) where inputs are designed to yield desired outputs from AI models.
  • Prompting is the equivalent of instructing a model like DALL-E to generate a specific image based on given instructions.
  • Emergence is a property in biology and AI, where complex behaviors and patterns arise from interactions between components at a larger scale.
  • Unsupervised learning and the transformer architecture have led to the emergence of powerful language models that can perform a wide variety of tasks.
  • Prompt engineering emerged as a way to instruct language models to perform specific tasks through natural language instructions.
  • AI models like CLIP and ChatGPT are examples of prompt engineering where models can recognize images or generate code based on prompts.
  • Prompt engineering can be used in various industries for content generation, code generation, decision-making, and more.
  • AI models can be prompted to perform a growing number of tasks, and the number of use cases is expanding rapidly.
  • Prompt engineers can make around $150-300k per year, and the field is witnessing substantial growth and investment.
  • ChatGPT, an AI conversational interface, can be used as a coding tool with prompting, making it a valuable resource for developers and creators.

Read Next: AI Chips, AI Business Models, Enterprise AI, How Much Is The AI Industry Worth?, AI Economy.

Connected Business Frameworks

Artificial Intelligence vs. Machine Learning

Generalized AI consists of devices or systems that can handle all sorts of tasks on their own. The extension of generalized AI eventually led to the development of machine learning. As an extension of AI, Machine Learning (ML) uses computer algorithms to create programs that automate actions. Without being explicitly programmed, systems can learn and improve from experience. ML explores large sets of data to find common patterns and formulate analytical models through learning.

AIOps

AIOps is the application of artificial intelligence to IT operations. It has become particularly useful for modern IT management in hybridized, distributed, and dynamic environments. AIOps has become a key operational component of modern digital-based organizations, built around software and algorithms.

Machine Learning Ops (MLOps)

Machine Learning Ops (MLOps) describes a suite of best practices that successfully help a business run artificial intelligence. It consists of the skills, workflows, and processes to create, run, and maintain machine learning models to help various operational processes within organizations.

Continuous Intelligence

Business intelligence models have transitioned to continuous intelligence, where a dynamic technology infrastructure is coupled with continuous deployment and delivery to provide insights continuously. In short, software offered in the cloud integrates with the company's data, leveraging AI/ML to provide answers in real time to current issues the organization might be experiencing.

Continuous Innovation

Continuous innovation is a process that requires a continuous feedback loop to develop a valuable product and build a viable business model. It is a mindset where products and services are designed and delivered to tune them around the customers' problems rather than the technical solutions of their founders.

Technological Modeling

Technological modeling is a discipline that provides the basis for companies to sustain innovation, developing incremental products while also looking at breakthrough products that can pave the way for long-term success. In a sort of Barbell Strategy, technological modeling suggests a two-sided approach: on the one hand, keep sustaining continuous innovation as a core part of the business model; on the other hand, place bets on future developments that have the potential to break through and take a leap forward.


Tech Business Model Template

A tech business model is made of four main components: value model (value propositions, mission, vision), technological model (R&D management), distribution model (sales and marketing organizational structure), and financial model (revenue modeling, cost structure, profitability, and cash generation/management). Those elements coming together can serve as the basis to build a solid tech business model.

