AI gateway for engineers

Develop & deploy AI with confidence

Analyze, evaluate, & monitor AI features in production. Just two lines of code to get started.

[Hero image: data platform features]
Trusted by AI engineering teams

How it works

Observe: log requests
Query: analyze usage
Evaluate: experiment & test

[Diagram: data pipeline from OpenAI to Postgres]
Ship quickly

Just 2 lines of code to get started

Warehouse LLM requests to a database you control. Use logs to analyze, evaluate, and monitor your AI features in production.

[Code snippet: setting the base URL]
[Code snippet: logged JSON request object]
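As an illustration of what that setup looks like with the OpenAI Python SDK, here is a minimal sketch; the gateway URL and header name below are placeholders, not Velvet's documented values:

```python
# Minimal sketch: route OpenAI traffic through a logging gateway by swapping
# the base URL. The URL and header below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com/v1",          # hypothetical gateway URL
    default_headers={"x-gateway-api-key": "YOUR_KEY"},  # hypothetical auth header
)

# Requests now pass through the gateway, which can warehouse each
# request/response pair to your own database.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
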
Analyze

Analyze & evaluate models

Query logs with SQL to analyze performance and generate datasets. Evaluate models against metrics to decide which models to use for each feature.
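For example, once requests land in Postgres you might summarize usage per model with a query like the one below; the table and column names are assumptions about how logs could be stored, not a documented schema:

```python
# Sketch: summarize a week of logged requests per model. The `ai_requests`
# table and its JSONB `response` column are hypothetical.
import psycopg2

conn = psycopg2.connect("postgresql://user:password@localhost:5432/logs")
with conn.cursor() as cur:
    cur.execute("""
        SELECT response->>'model'                             AS model,
               count(*)                                       AS requests,
               avg((response->'usage'->>'total_tokens')::int) AS avg_tokens
        FROM ai_requests
        WHERE created_at > now() - interval '7 days'
        GROUP BY 1
        ORDER BY requests DESC
    """)
    for model, requests, avg_tokens in cur.fetchall():
        print(model, requests, round(avg_tokens or 0))
```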

Monitor

Continuous testing & experiments

Run experiments and monitor ongoing usage against metrics. Version models in production, and get alerts when your tests fail.
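As a rough illustration of that monitoring pattern (not Velvet's API), a scheduled check over the same hypothetical logs table might look like this:

```python
# Sketch: a periodic check that alerts when the error rate over the last hour
# exceeds a budget. Table, columns, and the alert hook are hypothetical.
import psycopg2

ERROR_RATE_BUDGET = 0.02  # example threshold: 2% failed requests

conn = psycopg2.connect("postgresql://user:password@localhost:5432/logs")
with conn.cursor() as cur:
    cur.execute("""
        SELECT (count(*) FILTER (WHERE status_code >= 400))::float
               / greatest(count(*), 1)
        FROM ai_requests
        WHERE created_at > now() - interval '1 hour'
    """)
    (error_rate,) = cur.fetchone()

if error_rate > ERROR_RATE_BUDGET:
    print(f"ALERT: error rate {error_rate:.1%} exceeds {ERROR_RATE_BUDGET:.0%}")
```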

Backend workflows designed for engineers

Customer testimonial: Blaze AI

"We experiment with LLM models, settings, and optimizations. Velvet made it easy to implement logging, caching, and evals. And we're preparing training sets to eventually fine-tune our own models.

Chirag Mahapatra
Customer testimonial: Revo AI

"Velvet gives us a source of truth for what's happening between the Revo copilot, and the LLMs it orchestrates. We have the data we need to run evaluations, calculate costs, and quickly resolve issues."

Mehdi Djabri
CEO, Revo.pm
Customer testimonial: Find AI

"Our engineers use Velvet daily. It monitors AI features in production, even opaque APIs like batch. The caching feature reduces costs significantly. And, we use the logs to observe, test, and fine-tune."

Philip Thomas
CTO, Find AI
Use Velvet

Flexible infrastructure for scale

Data interoperability

Log queryable requests to a database you control.

Granular analysis

Query data with SQL to analyze usage, cost, and metrics.

Intelligent caching

Optimize costs and latency with request caching.

Run experiments

Run one-time evaluations to test models against metrics.

Continuous monitoring

Run ongoing evaluations of your AI features in production.

Dataset generation

Create datasets for evaluations, fine-tuning, and batch workflows.
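As one example of the dataset-generation idea, logged requests could be exported as a chat-format JSONL file for fine-tuning roughly like this; again, the log schema is assumed, not documented:

```python
# Sketch: export logged prompt/response pairs as chat-format JSONL for
# fine-tuning. The `ai_requests` table and JSONB columns are hypothetical.
import json
import psycopg2

conn = psycopg2.connect("postgresql://user:password@localhost:5432/logs")
with conn.cursor() as cur:
    cur.execute("""
        SELECT request->'messages', response->'choices'->0->'message'
        FROM ai_requests
        WHERE response->>'model' LIKE 'gpt-4o%'
        LIMIT 1000
    """)
    with open("train.jsonl", "w") as f:
        for messages, assistant_message in cur.fetchall():
            # psycopg2 decodes JSONB columns into Python lists/dicts
            f.write(json.dumps({"messages": messages + [assistant_message]}) + "\n")
```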

AI gateway

Analyze, evaluate, and monitor your AI

Free up to 10k requests per month.

2 lines of code to get started.

Try Velvet for free

Q & A

Who is Velvet made for?
How do I get started?
Which models and databases do you support?
What are common use cases?
How much does it cost?

Articles from Velvet

Product
Query logs with Velvet's text-to-SQL editor

Use our data copilot to query your AI request logs with SQL.

Product
Run model experiments

Test models, settings, and metrics against historical request logs.

Engineering
Monitor AI features in production

Continuously test AI features in production and set alerts to take action.