From an example API call to gpt-3.5-turbo:

# Note: you need to be using OpenAI Python v0.27.0 for the code below to work
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
# The model's reply lives in the first choice's message content
print(response["choices"][0]["message"]["content"])

Notice that the example user/assistant turns in the messages list can serve as the ‘few-shot’ part of few-shot learning: they demonstrate the task to the model with a few worked examples before the real question is asked. This whole input is called the prompt.
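To make the pattern concrete, here is a minimal sketch of building such a few-shot prompt. The helper name `build_few_shot_messages` is hypothetical (not part of the OpenAI library); the resulting list is what you would pass as the `messages` argument above.

```python
# Hypothetical helper (not part of the OpenAI library): assemble a few-shot
# prompt from example (question, answer) pairs plus the real query.
def build_few_shot_messages(examples, query, system="You are a helpful assistant."):
    # The system message sets the assistant's overall behavior.
    messages = [{"role": "system", "content": system}]
    for question, answer in examples:
        # Each demonstration is a user turn followed by the assistant's ideal reply.
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    # The real question comes last.
    messages.append({"role": "user", "content": query})
    return messages

messages = build_few_shot_messages(
    examples=[("Who won the world series in 2020?",
               "The Los Angeles Dodgers won the World Series in 2020.")],
    query="Where was it played?",
)
# `messages` can now be passed to openai.ChatCompletion.create as shown above.
```

Adding more example pairs generally makes the demonstrated format clearer to the model, at the cost of a longer prompt.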

Reference: Jason Wei, Denny Zhou et al. (2022), “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”

Besides ‘few-shot’ demonstrations, there are other tricks for getting an ideal response:

Reference: OpenAI Cookbook - Techniques to improve reliability
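One such trick, from the chain-of-thought paper cited above, is to ask the model to reason step by step before answering. A minimal sketch, assuming a plain-string prompt (the helper name `with_step_by_step` is hypothetical):

```python
# Hypothetical helper: append a zero-shot chain-of-thought instruction,
# nudging the model to spell out intermediate reasoning before answering.
def with_step_by_step(question):
    return question.rstrip() + "\n\nLet's think step by step."

prompt = with_step_by_step(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)
# `prompt` would be sent as the content of a user message.
```

On multi-step reasoning problems like this one, the step-by-step instruction tends to produce more reliable answers than asking for the answer directly.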

How does fine-tuning work?