What is prompt-based machine learning -- and how is it used?
Prompt-based learning is a strategy that machine learning engineers can use to adapt large language models (LLMs) so the same model can perform different tasks without retraining.
Traditional strategies for training large language models such as GPT-3 and BERT require the model to be pre-trained with unlabeled data and then fine-tuned for specific tasks with labeled data. In contrast, prompt-based learning models can autonomously tune themselves for different tasks by transferring domain knowledge introduced through prompts.
A prompt is a snippet of natural language text that is added to unlabeled data during the pre-training phase. The art of writing useful prompts is called prompt engineering.
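As a minimal sketch of the idea, the snippet below wraps an unlabeled input in a cloze-style prompt and maps a masked language model's predicted word back to a task label. The template and the label-word mapping (sometimes called a verbalizer) are illustrative examples, not taken from any particular model or paper.

```python
# Hypothetical cloze-style prompt template for sentiment classification.

def build_prompt(text: str) -> str:
    """Wrap an unlabeled input in a fill-in-the-blank prompt."""
    return f"{text} Overall, it was [MASK]."

# A "verbalizer" maps the model's predicted label words back to task labels.
# These particular words are illustrative assumptions.
VERBALIZER = {"great": "positive", "terrible": "negative"}

prompt = build_prompt("The food arrived cold and two hours late.")
# A masked language model would be asked to fill in [MASK];
# a prediction of "terrible" would map to the "negative" class.
```

In practice, a pre-trained masked language model scores candidate words for the `[MASK]` slot, so the classification task is recast as the language-modeling task the model was already trained on.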
Prompt-based learning makes it more convenient for artificial intelligence (AI) engineers to use foundation models for different types of downstream uses.
This approach to large language model optimization is still considered to be emerging and has led to increased interest in other types of zero-shot learning. Zero-shot learning algorithms can transfer knowledge from one task to another without additional labeled training examples.
Advantages and Challenges
Prompt-based training methods are expected to benefit businesses that don't have access to large quantities of labeled data, as well as use cases where there simply isn't much data to begin with.
The main challenge of prompt-based learning is creating useful prompts that ensure the same model can be used successfully for more than one task.
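One way to picture this challenge: the model stays frozen, and only the prompt template changes from task to task. The sketch below is a hypothetical illustration; the task names and templates are assumptions, not a real library's API.

```python
# Hypothetical sketch: one shared model, several tasks,
# with only the prompt template changing per task.

TEMPLATES = {
    "sentiment": lambda text: f"{text} The sentiment of this review is [MASK].",
    "topic": lambda text: f"This article is about [MASK]. {text}",
}

def to_prompt(task: str, text: str) -> str:
    """Build the prompt a shared model would receive for a given task."""
    return TEMPLATES[task](text)

sentiment_prompt = to_prompt("sentiment", "Battery life is excellent.")
topic_prompt = to_prompt("topic", "Stocks rallied after the rate decision.")
```

Writing templates that reliably steer the same frozen model toward each task's intended behavior is exactly the trial-and-error work that prompt engineering involves.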
Prompt engineering is often compared to the art of querying a search engine in the early days of the internet. It requires a fundamental understanding of structure and syntax -- as well as a lot of trial and error.
Editor's Note: If you drive your family and co-workers crazy changing the syntax of your chatbot queries until Alexa finally gives you an acceptable response, consider becoming a prompt engineer!