Few-shot prompting is a technique in natural language processing (NLP) in which a large language model (LLM) is given a task description plus a handful of worked examples of that task in the prompt. The model conditions on those examples (in-context learning, with no weight updates) to infer the expected pattern and output format, then generates responses for new inputs in the same style. For example, an LLM trained on a massive dataset of text and code, prompted to write a poem and shown a few example poems, will produce new poems that resemble the examples. Rule of thumb: provide successful executions of the task before asking the model to perform it. See: [[example of few-shot prompting use]].

[[zero-shot prompting]] < [[Hands-on LLMs]]/[[5 Prompting]] > [[use numbered steps]]
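A minimal sketch of assembling a few-shot prompt, assuming a toy sentiment-classification task; the helper name, instruction, and examples are all illustrative, not a specific library's API:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, worked input/output examples,
    and a new query into a single few-shot prompt string."""
    parts = [instruction, ""]
    for inp, out in examples:
        # Each example shows one successful execution of the task.
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    # The new query follows the same format, with the output left blank
    # for the model to complete.
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

examples = [
    ("The movie was a delight.", "positive"),
    ("I want my money back.", "negative"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    examples,
    "An unforgettable performance.",
)
print(prompt)
```

The resulting string would be sent to the LLM as-is; because the examples establish the input/output pattern, the model tends to complete the final `Output:` line in the same format.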