Chain-of-thought (CoT) prompting is a technique where you have the language model generate intermediate reasoning steps that are then incorporated into the final prompt. This added context helps the model arrive at better final results.
There are two main approaches:
1. Zero-shot CoT: Ask the model to think step-by-step about a task and include its reasoning in the prompt. For example, ask it to explain its reasoning for sentiment classification of a conversation.
2. Few-shot CoT: Prompt the model to first generate sample reasoning from a few examples. Then include this reasoning to enhance the few-shot examples used in the final prompt.
In both cases, the model's intermediate reasoning is appended to the prompt to give more context. This helps the model stay on track instead of having to make large inferential leaps. CoT prompting essentially walks the model through a chain of thought, leading to better final outputs that stick closer to the intended reasoning process. The increased context acts like training data tailored to specific examples.
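A minimal sketch of how both variants assemble their prompts. The `call_model` function is a hypothetical placeholder for whatever LLM client you use (the stub here just returns a canned string); the prompt wording is illustrative, not a fixed recipe:

```python
# Sketch of zero-shot and few-shot CoT prompt construction.
# `call_model` is a hypothetical stand-in for an actual LLM API call.

def call_model(prompt: str) -> str:
    """Placeholder LLM call; swap in your real client here."""
    return "The apologetic tone suggests negative sentiment."

def zero_shot_cot(task: str) -> str:
    """Zero-shot CoT: elicit step-by-step reasoning, then append it to the prompt."""
    reasoning = call_model(f"{task}\nLet's think step by step.")
    # The intermediate reasoning becomes extra context in the final prompt.
    return f"{task}\nReasoning: {reasoning}\nFinal answer:"

def few_shot_cot(task: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot CoT: generate reasoning for each example, then include it
    alongside the example's question and answer in the final prompt."""
    enriched = []
    for question, answer in examples:
        reasoning = call_model(
            f"Q: {question}\nA: {answer}\nExplain step by step why this answer is correct."
        )
        enriched.append(f"Q: {question}\nReasoning: {reasoning}\nA: {answer}")
    return "\n\n".join(enriched) + f"\n\nQ: {task}\nReasoning:"
```

With a real model behind `call_model`, the reasoning strings would be model-generated explanations rather than the canned stub text, but the prompt assembly is the same.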
For an example, see [[use chain of thought to check if a solution is correct]].
[[use numbered steps]] < [[Hands-on LLMs]]/[[5 Prompting]] > [[prefer positive instructions]]