This notebook provides a step-by-step guide for fine-tuning gpt-3.5-turbo. We'll perform entity extraction using the RecipeNLG dataset, which provides various recipes and a list of extracted generic ingredients for each. This is a common dataset for named entity recognition (NER) tasks.
We will go through the following steps:
- Setup: Loading our dataset and filtering down to one domain to fine-tune on.
- Data preparation: Preparing your data for fine-tuning by creating training and validation examples, and uploading them to the Files endpoint.
- Fine-tuning: Creating your fine-tuned model.
- Inference: Using your fine-tuned model for inference on new inputs.
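As a preview of the data-preparation step, here is a minimal sketch of the chat-format JSONL file that fine-tuning expects. This is not the notebook's exact code: the system message, recipe text, and ingredient list below are made-up placeholders for illustration.

```python
import json

# Hypothetical system prompt describing the extraction task.
system_message = (
    "You are a helpful recipe assistant. You are to extract the generic "
    "ingredients from each of the recipes provided."
)

def create_training_example(recipe_text, extracted_ingredients):
    """Build one training example in the messages format used for chat fine-tuning."""
    return {
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": recipe_text},
            {"role": "assistant", "content": extracted_ingredients},
        ]
    }

# Placeholder recipe and target output (not from RecipeNLG).
example = create_training_example(
    'Title: Pancakes\n\nIngredients: ["1 cup flour", "2 eggs", "1 cup milk"]',
    '["flour", "eggs", "milk"]',
)

# Fine-tuning files are JSONL: one JSON object per line.
with open("recipe_train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```

A file of such lines is what gets uploaded to the Files endpoint (with purpose `fine-tune`) before the fine-tuning job is created.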
By the end of this notebook, you should be able to train, evaluate, and deploy a fine-tuned gpt-3.5-turbo model.
For more information on fine-tuning, you can refer to our documentation guide, API reference, or blog post.