Analyze, evaluate, & monitor AI features in production. Just two lines of code to get started.
Warehouse LLM requests to a database you control. Use logs to analyze, evaluate, and monitor your AI features in production.
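For illustration, here's a minimal sketch of what the two-line setup can look like with the OpenAI Python SDK. The gateway URL and header name below are assumptions for the example, not documented values; check the Velvet docs for the exact ones.

```python
from openai import OpenAI

# Line 1: point the client at the Velvet gateway (URL assumed for illustration).
# Line 2: authenticate the gateway with your Velvet key (header name assumed).
client = OpenAI(
    base_url="https://gateway.usevelvet.com/v1",
    default_headers={"x-velvet-api-key": "YOUR_VELVET_KEY"},
)

# Requests now flow through the gateway and are warehoused automatically.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```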
Query logs with SQL to analyze performance and generate datasets. Evaluate models against metrics to decide which model to use for each feature.
Run experiments and monitor ongoing usage against metrics. Version models in production, and get alerts when your tests fail.
"We experiment with LLM models, settings, and optimizations. Velvet made it easy to implement logging, caching, and evals. And we're preparing training sets to eventually fine-tune our own models.
"Velvet gives us a source of truth for what's happening between the Revo copilot, and the LLMs it orchestrates. We have the data we need to run evaluations, calculate costs, and quickly resolve issues."
"Our engineers use Velvet daily. It monitors AI features in production, even opaque APIs like batch. The caching feature reduces costs significantly. And, we use the logs to observe, test, and fine-tune."
Log requests to a queryable database you control.
Query data with SQL to analyze usage, cost, and metrics.
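As a sketch, querying the warehoused logs directly from a Postgres database you control might look like this. The table and column names are assumptions, not Velvet's actual schema; adapt them to what lands in your warehouse.

```python
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost:5432/velvet_logs")
with conn, conn.cursor() as cur:
    # Weekly usage, latency, and cost per model (schema assumed for illustration).
    cur.execute(
        """
        SELECT model,
               COUNT(*)        AS requests,
               AVG(latency_ms) AS avg_latency_ms,
               SUM(cost_usd)   AS total_cost_usd
        FROM llm_requests
        WHERE created_at > now() - interval '7 days'
        GROUP BY model
        ORDER BY total_cost_usd DESC;
        """
    )
    for row in cur.fetchall():
        print(row)
```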
Optimize costs and latency with request caching.
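A sketch of opting a single request into the cache, assuming a per-request header; the header name, gateway URL, and auth header here are illustrative, not the documented interface.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.usevelvet.com/v1",              # assumed gateway URL
    default_headers={"x-velvet-api-key": "YOUR_VELVET_KEY"},  # assumed auth header
)

# Identical requests can then be served from cache, skipping the upstream
# model call along with its cost and latency.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
    extra_headers={"velvet-cache-enabled": "true"},  # assumed cache flag
)
```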
One-time evaluations to test models against metrics.
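For example, a one-time evaluation can be as simple as replaying logged prompts against candidate models and scoring the outputs. The dataset shape and the contains-answer metric below are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # or route through the gateway as shown above

# Rows exported from your request logs (shape assumed for illustration).
dataset = [
    {"prompt": "What is 2 + 2?", "expected": "4"},
    {"prompt": "What is the capital of France?", "expected": "Paris"},
]

for model in ["gpt-4o-mini", "gpt-4o"]:
    hits = 0
    for row in dataset:
        answer = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": row["prompt"]}],
        ).choices[0].message.content
        hits += row["expected"].lower() in answer.lower()
    print(f"{model}: {hits / len(dataset):.0%} contains-answer accuracy")
```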
Run ongoing evaluations of your AI features in production.
Create datasets for evaluations, fine-tuning, and batch workflows.
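As a sketch, logged request/response pairs can be exported into OpenAI's chat fine-tuning JSONL format. The source table, columns, and filters are assumptions; the `{"messages": [...]}` shape is OpenAI's documented format.

```python
import json
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost:5432/velvet_logs")
with conn, conn.cursor() as cur:
    # Pull well-rated pairs for one feature (schema and filters assumed).
    cur.execute(
        "SELECT prompt, completion FROM llm_requests "
        "WHERE feature = 'copilot' AND rating = 'good' LIMIT 1000;"
    )
    with open("train.jsonl", "w") as f:
        for prompt, completion in cur.fetchall():
            f.write(json.dumps({
                "messages": [
                    {"role": "user", "content": prompt},
                    {"role": "assistant", "content": completion},
                ]
            }) + "\n")
```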
Use our data copilot to query your AI request logs with SQL.
Test models, settings, and metrics against historical request logs.
Continuously test AI features in production, and set alerts to take action.
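A minimal sketch of such a monitor: a scheduled job that checks the last hour of logs and posts an alert when the error rate crosses a threshold. The log schema and webhook URL are hypothetical.

```python
import psycopg2
import requests

THRESHOLD = 0.05  # alert above a 5% error rate

conn = psycopg2.connect("postgresql://user:pass@localhost:5432/velvet_logs")
with conn, conn.cursor() as cur:
    # Share of failed requests in the last hour (schema assumed).
    cur.execute(
        "SELECT COALESCE(AVG((status >= 400)::int), 0) FROM llm_requests "
        "WHERE created_at > now() - interval '1 hour';"
    )
    error_rate = float(cur.fetchone()[0])

if error_rate > THRESHOLD:
    requests.post(
        "https://hooks.example.com/alerts",  # hypothetical webhook
        json={"text": f"LLM error rate at {error_rate:.1%} over the last hour"},
    )
```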