Few-shot prompting / Few-shot learning

  • Glossary

This glossary explains keywords that will help you develop the mindset needed for data utilization and successful DX (digital transformation).
This entry introduces "Few-shot prompting / Few-shot learning," a technique for getting better results from conversational AI built on large language models such as ChatGPT.

What is Few-shot Prompting/Few-shot Learning?

Few-shot prompting is a technique used with conversational AI such as ChatGPT: by including several examples of input and output in your input (the prompt), you can steer the model toward the output you want. It is also called few-shot learning.
This is an unprecedented capability of large language models: their behavior changes according to what is supplied at the time of use, with no retraining required. By providing examples in the prompt, users can change the model's behavior as if it had received additional training.

What kind of technique is it? (Example of use)

First, here is an example that does not use few-shot prompting. When introducing few-shot prompting, the examples most often used are sentiment classification (judging whether a sentence is positive or negative) and translation, so we will use sentiment classification here as well.

Please rate the following as positive, negative, or neutral:

It's raining today and it's a bit tricky

It is neutral.

The model judges whether the input text is positive or negative. This can be used for tasks such as determining whether a survey response is favorable or critical, and ChatGPT handles it as well.
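The zero-shot request above is just a plain prompt string sent to the model. Here is a minimal sketch in Python of how such a prompt might be assembled; the `ask_model` call mentioned in the comment is a hypothetical stand-in for whatever chat-AI client you actually use:

```python
def build_zero_shot_prompt(text: str) -> str:
    """Build a zero-shot sentiment prompt: the instruction plus the text
    to rate, with no example answers included."""
    return (
        "Please rate the following as positive, negative, or neutral:\n\n"
        f"{text}"
    )

prompt = build_zero_shot_prompt("It's raining today and it's a bit tricky")
# `prompt` would then be sent to the model as-is, e.g. ask_model(prompt).
```

The point is that nothing beyond the instruction and the input text is supplied; the model must rely entirely on its pretrained judgment.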

However, that does not mean ChatGPT's judgment is reliable or its answers precise. ChatGPT can do a surprising number of things, but it often gives questionable answers. In this example, you would probably want it to answer "negative": saying "today's a bit tricky" is often a roundabout way of saying "it's not a good day."

On the other hand, the expression "a bit tricky" does not unambiguously mean something positive or negative, so answering "neutral" may also be reasonable. In other words, rather than there being one absolutely correct answer, it is often desirable for the user to be able to steer the judgment to suit their own purposes.

Using examples to get ChatGPT to adjust its answers

In other words, to make practical use of ChatGPT, you need to tailor its answers to your needs. This is where few-shot prompting comes in.

Please rate the following as positive, negative, or neutral:

It's sunny and beautiful today // Positive

It's raining today and it's a bit tricky // Negative

Today is a day of no particular feelings // Neutral

This dish is tricky //

Negative.

When making the request, the user provided three "example answers." As a result, the model's behavior changed, as if it had learned that the expression "a bit tricky" should be judged negative.
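The few-shot prompt above can be assembled programmatically. A minimal sketch in Python; the `text // label` line format follows the example in this article and is a convention, not a requirement:

```python
def build_few_shot_prompt(examples, query):
    """Build a few-shot sentiment prompt: the instruction, labeled
    examples in "text // label" form, then the unlabeled query."""
    lines = ["Please rate the following as positive, negative, or neutral:", ""]
    for text, label in examples:
        lines.append(f"{text} // {label}")
        lines.append("")
    # The query ends with a bare "//" so the model completes the label.
    lines.append(f"{query} //")
    return "\n".join(lines)

examples = [
    ("It's sunny and beautiful today", "Positive"),
    ("It's raining today and it's a bit tricky", "Negative"),
    ("Today is a day of no particular feelings", "Neutral"),
]
prompt = build_few_shot_prompt(examples, "This dish is tricky")
```

The model sees three labeled examples followed by an unlabeled line, and is nudged to continue the pattern.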

Zero-shot / Few-shot / One-shot

The usual way of prompting, where you ask a question without providing any examples, is sometimes called "zero-shot prompting." As mentioned above, prompting by showing several example answers is sometimes called "few-shot prompting" or "few-shot learning." To draw a further distinction, prompting with exactly one example is sometimes called "one-shot prompting."

How to use few-shot prompting

There are several possible uses. First, for inputs the model misjudges, you can supply input/output examples in the prompt to correct and compensate for those mistakes.

Another use is to shift the model's response tendencies and judgment criteria to suit your purpose. The example above can be seen as changing the criterion for when the same text should be judged negative.

Also, you do not have to follow the example format used above. Various styles of writing will be recognized as examples; better prompt-writing styles are proposed constantly, and the preferred style will likely shift as ChatGPT is updated. The style used here is one commonly seen in English-language explanations.
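As one alternative format: with chat-style APIs, few-shot examples are often supplied as alternating user/assistant messages rather than inline "//" labels. A sketch, assuming the common role/content message convention used by chat APIs such as OpenAI's:

```python
def examples_as_messages(instruction, examples, query):
    """Represent few-shot examples as alternating user/assistant chat
    messages, ending with the unlabeled query for the model to answer."""
    messages = [{"role": "system", "content": instruction}]
    for text, label in examples:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": query})
    return messages

msgs = examples_as_messages(
    "Rate each sentence as Positive, Negative, or Neutral.",
    [("It's sunny and beautiful today", "Positive"),
     ("It's raining today and it's a bit tricky", "Negative")],
    "This dish is tricky",
)
```

Here each example answer is presented as if the assistant had already given it, which many chat models treat as an especially strong pattern to continue.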

Why Few-shot Prompting Behavior is Revolutionary

Some people may see this as merely an interesting trick, but measured against how traditional machine learning systems behave, being able to do this is a genuine breakthrough.

Conventionally, machine learning meant preparing data in advance, training a model, and then simply using it. If you wanted to change the model's behavior even slightly, you had to go back to the training step: add data, change the training criteria, and rebuild the model.

However, with ChatGPT and other large language models, you can change the behavior simply by adding examples as needed at the time of use.

Differences from fine-tuning and transfer learning

In the past, when you wanted to adapt a model to a specific application, you used methods called fine-tuning or transfer learning.

In these approaches, a pretrained model is first created from general-purpose data. Then additional data tailored to the target purpose is prepared, and the pretrained model is further trained on it, producing a model specialized for that purpose.

Fine-tuning also lets you change behavior by providing additional examples, but it requires the time-consuming work of copying and retraining a trained model, and a separate tuned model must be prepared for each additional use case.

With few-shot prompting, the model itself is not changed: the same trained model can be applied to new problems. There is no additional training cost, and no need to hire highly skilled engineers to run additional training. Everyone can share the same trained model and adjust its behavior at any time to suit their needs, as if additional training had been performed. This is a remarkable gain in convenience.

However, if few-shot prompting cannot adjust the behavior to your needs, you should also consider fine-tuning (or retraining from scratch). Each approach has its own strengths, so combine them to suit your situation and needs.
