A base LLM is a [[large language model]] trained to predict the next word, based on a large amount of text training data from the internet and other sources. For example, if you prompt it with "Once upon a time, there was a unicorn", it may complete this with "that lived in a magical forest with all her unicorn friends". But if you prompt "What is the capital of France?", it is quite possible that the base LLM will complete this with "What is France's largest city?", "What is France's population?", and so on, because articles on the internet could plausibly be lists of quiz questions about France. Such questions are better handled by an [[instruction-tuned LLM]]. [[ChatGPT Prompt Engineering for Developers|Fulford&Ng-2023]]

[[hallucination]] < [[Hands-on LLMs]]/[[2 LLMs and Transformers]] > [[instruction-tuned LLM]]
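
The idea of completion-by-prediction can be sketched with a toy model. This is not a real LLM, just a minimal bigram predictor over a made-up training string (all names and the training text here are invented for illustration): it counts which word most often follows each word, then greedily extends a prompt one word at a time, the same way a base LLM continues whatever text it is given.

```python
from collections import Counter, defaultdict

# Toy illustration only (not a real LLM): a bigram "language model" that
# predicts the next word as the most frequent follower seen in training.
training_text = (
    "once upon a time there was a unicorn . "
    "the unicorn lived in a magical forest . "
    "once upon a time there was a princess ."
)

# Count how often each word follows each other word.
followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return followers[word].most_common(1)[0][0]

def complete(prompt, n_words=5):
    """Greedily extend the prompt one predicted word at a time."""
    out = prompt.split()
    for _ in range(n_words):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(complete("once upon a", n_words=4))
# → "once upon a time there was a"
```

Because the training text overwhelmingly continues "once upon a" with "time there was a", the model reproduces that pattern, which mirrors how a base LLM continues a quiz-style prompt with more quiz questions rather than an answer.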