To use the [[OpenAI API]] in Python, load the `openai` package and set an [[OpenAI API key]]. The `openai.ChatCompletion.create()` endpoint can then be used to generate a response to a [[prompt (prompt)|prompt]] using a specific [[machine learning model|model]].
To simplify the interface we will use a helper function named `get_completion()` that receives the prompt and, optionally, the model, and returns the response.
```python {pre}
import openai

openai.api_key = "sk-****"

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # degree of randomness of the model's output
    )
    return response.choices[0].message["content"]
```
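The `messages` argument follows the chat format expected by the API: a list of dictionaries, each with a `role` (`"system"`, `"user"`, or `"assistant"`) and a `"content"` string. As a sketch, a multi-turn conversation could be assembled with a helper like the hypothetical `build_messages()` below (the conversation content is just an illustration):

```run-python
# Build a chat message list: each entry has a "role" and a "content".
# The "system" message sets the assistant's behaviour; "user" and
# "assistant" messages record the turns of the conversation.
def build_messages(system_text, turns):
    """turns: list of (user_text, assistant_text_or_None) pairs."""
    messages = [{"role": "system", "content": system_text}]
    for user_text, assistant_text in turns:
        messages.append({"role": "user", "content": user_text})
        if assistant_text is not None:
            messages.append({"role": "assistant", "content": assistant_text})
    return messages

msgs = build_messages(
    "You translate text into European Portuguese.",
    [("Good morning", "Bom dia"), ("Thank you", None)],
)
print(len(msgs))         # system + user + assistant + user
print(msgs[-1]["role"])  # the pending user turn
```

In `get_completion()` this list holds a single user message, but the same structure supports system instructions and conversation history.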
A simple code test:
```run-python
prompt = """
Translate this text into European Portuguese:
<Good Morning>
"""
response = get_completion(prompt)
print(response)
```
[[prompt injections]] < [[Hands-on LLMs]]/[[6 OpenAI API]] > [[code header]]