Engineering

Analyze OpenAI's Batch and Files APIs

With OpenAI’s Batch API, you can process large volumes of data asynchronously. In return, you get lower costs, higher rate limits, and guaranteed completion within 24 hours. Velvet's proxy unlocks additional data so you can see exactly what's happening inside your batch jobs.

When to use the OpenAI Batch API

Batching is useful when you’re automating asynchronous tasks, i.e. features that don’t need an immediate response from the API. OpenAI’s Batch API lets you package many requests into a single file, then retrieve the collected results once the batch job completes.
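
As a quick illustration, here’s a minimal sketch of what that input file looks like, written in Python with one chat-completion request per line (the product IDs and prompt are placeholders):

    import json

    # One JSON object per line: a custom_id you choose, the HTTP method,
    # the target endpoint, and the request body you'd normally send synchronously.
    requests = [
        {
            "custom_id": f"review-summary-{product_id}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-4o-mini",
                "messages": [
                    {
                        "role": "user",
                        "content": f"Summarize the reviews for product {product_id}.",
                    }
                ],
            },
        }
        for product_id in ["prod-123", "prod-456", "prod-789"]
    ]

    # Write the batch input file, one request per line (JSONL).
    with open("batch_input.jsonl", "w") as f:
        for request in requests:
            f.write(json.dumps(request) + "\n")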

Benefits of OpenAI’s Batch API:

  • 50% lower costs compared to synchronous use of their APIs
  • Higher rate limits to process large volumes of data
  • 24-hour completion time or faster

Read OpenAI’s Batch API docs to learn more about how it works and how to implement it. Velvet logs every request inside the file, so you'll have full observability.
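
For reference, here’s a minimal sketch of the end-to-end flow with the OpenAI Python SDK, assuming the batch_input.jsonl file from above: upload the file, create the batch, poll until it finishes, then download the results.

    import time
    from openai import OpenAI

    client = OpenAI()

    # 1. Upload the JSONL input file with the "batch" purpose.
    batch_file = client.files.create(
        file=open("batch_input.jsonl", "rb"),
        purpose="batch",
    )

    # 2. Create the batch job against the chat completions endpoint.
    batch = client.batches.create(
        input_file_id=batch_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )

    # 3. Poll until the job reaches a terminal state (in production,
    #    check on a schedule rather than sleeping in a loop).
    while True:
        batch = client.batches.retrieve(batch.id)
        if batch.status in ("completed", "failed", "expired", "cancelled"):
            break
        time.sleep(60)

    # 4. Download the results file; each output line carries the custom_id
    #    from the matching input line.
    if batch.status == "completed":
        results = client.files.content(batch.output_file_id)
        print(results.text)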

Example use cases for batch processing

The Batch API works much like OpenAI’s synchronous APIs, except you submit a file of many requests at once instead of sending one request at a time. It’s useful for tasks that can run in the background, and you’ll reduce costs in the process.

Use batching to automate non-immediate tasks, like these.

  • Text generation: Write review summaries for every product page
  • Classification: Add categories to a large dataset of cancellation requests
  • Embeddings: Add an embedding vector to each article so you can measure relatedness (see the sketch below)
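
The embeddings case, for example, uses the same file format shown earlier; each line just targets the /v1/embeddings endpoint instead. A minimal sketch, with made-up article data:

    import json

    # Hypothetical articles to embed; in practice you'd pull these from your database.
    articles = [
        {"id": "article-1", "text": "How to set up the OpenAI Batch API"},
        {"id": "article-2", "text": "Observability for LLM features"},
    ]

    # Write one embeddings request per article to the batch input file.
    with open("embeddings_batch.jsonl", "w") as f:
        for article in articles:
            request = {
                "custom_id": article["id"],
                "method": "POST",
                "url": "/v1/embeddings",
                "body": {
                    "model": "text-embedding-3-small",
                    "input": article["text"],
                },
            }
            f.write(json.dumps(request) + "\n")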

One of our customers, Find AI, uses the Batch API to automate many tasks outside of immediate customer queries.

“OpenAI batches save lots of money, but observability is a challenge. Their APIs say that results come back within 24 hours, but we had no idea until using Velvet that our average response time was more like 3 hours. Batch prompts are difficult to debug, because it's a completely different flow than Chat. With Velvet, we can pull exact prompts that we used for batches, which makes it easier to debug and tweak.” - Find AI CTO

Analyze your usage of Batch and Files APIs

Velvet supports logging for the OpenAI Batch API. When you send a batch request through our proxy, we deconstruct the input and output files into individual requests and responses. As a result, you’ll have queryable data on costs, errors, and performance.
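
Setup is a small client-side change: point your OpenAI client at the Velvet gateway and pass your Velvet key. The snippet below is a hypothetical sketch; the gateway URL and header name are placeholders, so use the real values from your Velvet dashboard.

    from openai import OpenAI

    # Hypothetical configuration: route OpenAI traffic through Velvet's gateway so
    # every batch file, request, and response is logged. The base URL and header
    # name below are placeholders, not the real values.
    client = OpenAI(
        base_url="https://<your-velvet-gateway>/openai/v1",     # placeholder gateway URL
        default_headers={"x-velvet-api-key": "<VELVET_KEY>"},   # placeholder auth header
    )

    # Batch calls made through this client (files.create, batches.create, ...) are
    # proxied, so the individual requests inside the JSONL file become queryable logs.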

Read our batch jobs documentation to learn more.

