Zero-shot prompting is a technique in [[natural language processing]] that allows a [[large language model]] to perform a task without having been explicitly trained on that task. Instead, the LLM is given a prompt that describes the task, and it draws on knowledge acquired during pretraining to generate a response. In other words, zero-shot prompting lets a model handle task formulations it has never seen before.
In the context of writing a [[learnius/llms/2 LLMs and Transformers/prompt|prompt]] for an [[large language model|LLM]], zero-shot prompting means asking for a task to be performed without providing any worked examples of the desired input-output behaviour, in contrast with [[few-shot prompting]], where the prompt includes a handful of demonstrations.
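The contrast can be made concrete with a small sketch. The two helper functions and the sentiment task below are illustrative, not a standard API: a zero-shot prompt contains only the task description and the query, while a few-shot prompt prepends worked examples.

```python
# Illustrative sketch: building zero-shot vs. few-shot prompt strings.
# Function names and the example task are hypothetical.

def zero_shot_prompt(task: str, query: str) -> str:
    """Zero-shot: only the task description and the query, no examples."""
    return f"{task}\n\nInput: {query}\nOutput:"

def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Few-shot: the same task description plus worked examples."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n\n{shots}\n\nInput: {query}\nOutput:"

task = "Classify the sentiment of the review as positive or negative."
print(zero_shot_prompt(task, "The battery died after a week."))
print(few_shot_prompt(task,
                      [("Loved it!", "positive"), ("Total waste of money.", "negative")],
                      "The battery died after a week."))
```

Either string would then be sent to the model; the zero-shot variant relies entirely on the task description, while the few-shot variant additionally shows the model the expected answer format.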
[[check conditions]] < [[Hands-on LLMs]]/[[5 Prompting]] > [[few-shot prompting]]