Engineering

Create a fine-tuning dataset for gpt-4o-mini

OpenAI announced fine-tuning for gpt-4o-mini this week, free through September. You can use logs from Velvet to select and export a training set. Read on for an overview of fine-tuning, when to try it, and how it works.

Fine-tuning and when to invest

Once you've experimented with a large pre-trained model, you may want to fine-tune your own. Fine-tuning is a time investment; it's worthwhile if you've already iterated on prompt engineering and still aren't getting the results you need. Done effectively, you'll end up with better, cheaper, and faster outputs.

OpenAI released docs for fine-tuning gpt-4o-mini this week, and it's free through September. You can read the full overview from OpenAI here.

Fine-tuning process

To fine-tune a model, you provide training data with more examples than you could fit in a prompt. The model itself learns from those examples, so you no longer need to pass them along with every API call. Follow the steps below to fine-tune an OpenAI model like gpt-4o-mini.

(1) Prepare a dataset: Select example messages from past API usage to train the model. You'll need a minimum of 10 examples; 50-100 examples may produce further improvements. You can layer on additional datasets as you experiment.
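Each training example uses the same chat message format as the Chat Completions API, serialized as one JSON object per line. A minimal sketch of building the file (the example conversation and the train.jsonl filename are placeholders, not values from OpenAI's docs):

```python
import json

# Each line of the JSONL file is one training example: a JSON object
# with a "messages" list in the Chat Completions format.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
        ]
    },
    # ...repeat for at least 10 examples drawn from real API usage.
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

In practice you'd pull these examples from logged production requests rather than writing them by hand.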

(2) Upload a training file: Upload the data file using OpenAI's Files API. The maximum file size is 1 GB, and processing may take some time.
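A sketch of the upload step using the official openai Python SDK; the train.jsonl path is a placeholder, and the validation helper is our own addition (not part of the SDK), there to catch malformed lines before you upload:

```python
import json
import os

def validate_jsonl(path):
    """Check that every line parses as JSON and contains a 'messages' list."""
    with open(path) as f:
        for i, line in enumerate(f, 1):
            record = json.loads(line)
            assert isinstance(record.get("messages"), list), f"line {i}: missing 'messages'"

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    validate_jsonl("train.jsonl")
    # Upload with purpose="fine-tune"; the response includes the file ID
    # you'll reference when creating the fine-tuning job.
    uploaded = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
    print(uploaded.id)
```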

(3) Create a fine-tuning job: Use OpenAI's fine-tuning UI or create the job programmatically. Specify the model you want to fine-tune and the training file. The job may take minutes or hours to run, and you'll receive an email once it's complete.
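The programmatic path looks roughly like this; the file ID is a placeholder from the upload step, and the small status helper is our own convenience, not an SDK function:

```python
import os

# Terminal states in OpenAI's fine-tuning job lifecycle.
TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def is_finished(status):
    """True once a fine-tuning job has reached a terminal state."""
    return status in TERMINAL_STATES

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    job = client.fine_tuning.jobs.create(
        training_file="file-abc123",        # placeholder: ID returned by the upload step
        model="gpt-4o-mini-2024-07-18",     # the fine-tunable gpt-4o-mini snapshot
    )
    # Poll for completion (or just wait for OpenAI's email), then read
    # the resulting model name from the job object.
    job = client.fine_tuning.jobs.retrieve(job.id)
    if is_finished(job.status):
        print(job.status, job.fine_tuned_model)
```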

(4) Use your fine-tuned model: Once the job completes, specify your new model as the model parameter in the Chat Completions API and start making requests.
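Calling the fine-tuned model is the same as any other chat completion request; only the model name changes. The ft: name below is a placeholder, since yours comes from the completed job's fine_tuned_model field:

```python
import os

# Fine-tuned model names follow the pattern "ft:<base-model>:<org>::<suffix>".
FINE_TUNED_MODEL = "ft:gpt-4o-mini-2024-07-18:my-org::abc123"  # placeholder

def build_request(user_message):
    """Assemble a Chat Completions payload targeting the fine-tuned model."""
    return {
        "model": FINE_TUNED_MODEL,
        "messages": [{"role": "user", "content": user_message}],
    }

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(**build_request("How do I reset my password?"))
    print(response.choices[0].message.content)
```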

(5) Analyze and iterate: As with any model in your system, evaluate performance relative to other models and keep iterating on quality. OpenAI provides some training metrics in its dashboard; we recommend also running your own analysis and evaluations.

Read OpenAI's full documentation on fine-tuning here.

Prepare and analyze your fine-tuning dataset with Velvet

Once you set up the Velvet LLM gateway, every OpenAI log is warehoused to your own database. Use our AI SQL editor to query requests, select examples, and export a JSONL file ready for fine-tuning.

Read Velvet docs on exporting data for fine-tuning.

After your fine-tuned model is in production, analyze and evaluate its performance. Decide which models are best for your use case, and iterate on training data quality and quantity.

Read Velvet docs on using the AI SQL editor.

Want to learn more? Read our docs, schedule a call, or email us.
