ABV is designed to be open, extensible, and flexible. The platform provides programmatic access to all your LLM observability data, enabling custom workflows, integrations, and analytics beyond the built-in dashboard.
Your LLM observability data belongs to you. ABV never locks you into proprietary formats or restricts access to your own data. Every trace, score, and metric is accessible through APIs, SDKs, and export mechanisms.

Build exactly the workflows your organization needs: custom billing systems, data warehouse integration, specialized analytics, or compliance reporting. Your data, your way.

How the Data Platform Works

Understanding the architecture helps you choose the right access pattern:

Data capture and storage

As your LLM application runs, ABV captures complete observability data: inputs, outputs, model parameters, costs, latency, user information, tags, and custom metadata. Evaluation scores and human annotations get associated with traces. All this data is stored in ABV’s managed infrastructure with retention policies you configure.

Structured data models

ABV organizes data into clear, documented models:
  • Traces: Individual LLM requests with full context
  • Observations: Steps within traces (LLM calls, tool uses, retrieval)
  • Scores: Quality evaluations attached to traces
  • Datasets: Curated collections for evaluation and fine-tuning
  • Prompts: Versioned prompt templates with metadata
These models provide consistent structure across all access methods.
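The models above can be sketched as simple types. This is an illustrative Python sketch only; the field names are simplified assumptions and do not mirror ABV’s actual API schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the core data models.
# Field names are assumptions, not ABV's actual schema.

@dataclass
class Observation:
    id: str
    type: str          # e.g. "llm-call", "tool-use", "retrieval"
    input: str
    output: str
    latency_ms: float

@dataclass
class Score:
    name: str          # e.g. "helpfulness"
    value: float
    source: str        # "user-feedback", "model-based", "annotation"

@dataclass
class Trace:
    id: str
    user_id: str
    input: str
    output: str
    cost_usd: float
    metadata: dict = field(default_factory=dict)
    observations: list = field(default_factory=list)
    scores: list = field(default_factory=list)

# A trace aggregates its observations and scores:
trace = Trace(id="t1", user_id="u1", input="hi", output="hello", cost_usd=0.01)
trace.scores.append(Score(name="helpfulness", value=0.9, source="user-feedback"))
```

The key structural point is the nesting: observations and scores hang off a trace, which is why every access method can return them together.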

Multiple access patterns

Access the same data through different interfaces depending on your needs:
  • UI export: Download traces, datasets, or fine-tuning data directly from the dashboard
  • Public API: RESTful API for querying, creating, and managing all resources
  • SDKs: Python and TypeScript/JavaScript SDKs with native language support
  • Blob storage export: Continuous export to S3, GCS, or Azure Blob Storage for data warehouse integration
Mix and match these approaches based on your specific workflow.

Integration and workflows

Once you have data access, build custom workflows:
  • Billing systems: Extract cost data for customer invoicing or cost allocation
  • External dashboards: Feed LLM metrics into your existing BI tools
  • Data warehouse: Combine LLM observability with other business data
  • Fine-tuning pipelines: Export high-quality traces to improve model performance
  • Compliance reporting: Generate audit trails and compliance documentation
  • Custom alerting: Build specialized monitoring beyond standard metrics
The platform adapts to your infrastructure and processes.

Common Use Cases

Organizations use the data platform for diverse workflows:

Usage-based billing

Track LLM costs by customer or tenant to implement accurate usage-based billing. Query the API to extract per-customer costs, broken down by time period or feature. Generate invoices, implement quota enforcement, or create cost allocation reports.

Pattern: Scheduled API queries extract cost metrics for billing periods, feeding into your invoicing system. Custom dashboards show customers their usage trends.
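Once cost data has been fetched, the billing pattern above amounts to a per-customer aggregation over a billing period. A minimal sketch, assuming each trace record carries hypothetical `customer_id`, `timestamp`, and `cost_usd` fields:

```python
from collections import defaultdict
from datetime import datetime

def aggregate_costs(traces, period_start, period_end):
    """Sum trace costs per customer within a billing period.

    Trace keys ('customer_id', 'timestamp', 'cost_usd') are
    illustrative assumptions, not ABV's actual schema.
    """
    totals = defaultdict(float)
    for trace in traces:
        if period_start <= trace["timestamp"] < period_end:
            totals[trace["customer_id"]] += trace["cost_usd"]
    return dict(totals)

traces = [
    {"customer_id": "acme",   "timestamp": datetime(2024, 5, 3),  "cost_usd": 0.25},
    {"customer_id": "acme",   "timestamp": datetime(2024, 5, 20), "cost_usd": 0.5},
    {"customer_id": "globex", "timestamp": datetime(2024, 5, 7),  "cost_usd": 0.125},
    {"customer_id": "acme",   "timestamp": datetime(2024, 6, 1),  "cost_usd": 0.5},  # next period
]
may = aggregate_costs(traces, datetime(2024, 5, 1), datetime(2024, 6, 1))
# → {'acme': 0.75, 'globex': 0.125}
```

In practice the trace list would come from paginated API queries rather than an in-memory list, but the aggregation logic is the same.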

Data warehouse analytics

Combine LLM observability data with broader business metrics in your data warehouse. Export traces to S3/GCS/Azure, then ingest into Snowflake, BigQuery, or Databricks. Correlate LLM performance with user behavior, business outcomes, and product metrics.

Pattern: Continuous blob storage export feeds your ETL pipeline. Join trace data with customer data, product events, and business metrics for comprehensive analytics.

Fine-tuning data export

Extract high-quality LLM interactions for fine-tuning. Filter traces by score thresholds, export in OpenAI fine-tuning format, and use them to improve model performance on your specific use cases.

Pattern: Query the API for highly-rated traces, transform to fine-tuning format, and upload to your model training pipeline. Iterate based on fine-tuned model performance.
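The transform step of this pattern can be sketched as follows. The trace keys (`input`, `output`, `score`) are illustrative assumptions; the output follows OpenAI's chat fine-tuning JSONL shape, one `{"messages": [...]}` record per line:

```python
import json

def to_finetune_jsonl(traces, min_score=0.8):
    """Filter traces by a quality-score threshold and emit
    OpenAI-style chat fine-tuning JSONL lines.

    Trace keys are illustrative, not ABV's actual schema.
    """
    lines = []
    for t in traces:
        if t["score"] >= min_score:
            record = {
                "messages": [
                    {"role": "user", "content": t["input"]},
                    {"role": "assistant", "content": t["output"]},
                ]
            }
            lines.append(json.dumps(record))
    return "\n".join(lines)

traces = [
    {"input": "Summarize the report", "output": "The report shows...", "score": 0.95},
    {"input": "Translate this", "output": "Bonjour", "score": 0.4},  # filtered out
]
jsonl = to_finetune_jsonl(traces)  # one JSONL line: only the 0.95-scored trace
```

Multi-turn traces would map each observation to a message rather than a single user/assistant pair, but the filter-then-transform structure stays the same.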

Custom dashboards and reporting

Build specialized dashboards in tools your team already uses (Looker, Tableau, Grafana). Query metrics via API, combine with other data sources, and create executive reports tailored to your organization.

Pattern: Scheduled API queries populate external dashboards. Custom views show LLM metrics alongside business KPIs in unified reports.

Compliance and audit reporting

Generate compliance documentation showing LLM usage, content filtering, and quality controls. Export audit logs, trace histories, and evaluation results to demonstrate regulatory compliance.

Pattern: Periodic exports to secure storage create immutable audit trails. API queries generate reports for auditors showing content safety measures and quality controls.

Product analytics

Analyze how LLM behavior correlates with downstream outcomes. Export trace data, then join it with application logs, user behavior events, and business metrics to understand the full impact of your LLM features.

Pattern: Trace IDs flow through your entire system. Join trace data with your application database to analyze conversion rates, user satisfaction, and business impact of LLM interactions.
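The join in this pattern hinges on trace IDs being present in both systems. A minimal sketch, with illustrative keys (`id`, `trace_id`, `type`) standing in for whatever your application events actually carry:

```python
def conversion_rate(traces, app_events):
    """Join traces to application events by trace ID and compute
    the share of LLM interactions followed by a conversion.

    Event and trace keys here are illustrative assumptions.
    """
    converted = {e["trace_id"] for e in app_events if e["type"] == "conversion"}
    if not traces:
        return 0.0
    hits = sum(1 for t in traces if t["id"] in converted)
    return hits / len(traces)

traces = [{"id": "t1"}, {"id": "t2"}, {"id": "t3"}, {"id": "t4"}]
events = [
    {"trace_id": "t1", "type": "conversion"},
    {"trace_id": "t2", "type": "page_view"},
    {"trace_id": "t3", "type": "conversion"},
]
rate = conversion_rate(traces, events)  # → 0.5
```

At warehouse scale this would be a SQL join on the trace ID column, but the logic is identical: propagate one ID end to end, then group and compare.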

Access Methods

ABV provides multiple ways to access your data, each suited to different needs:

Public API

RESTful API providing programmatic access to all ABV resources. Query traces, scores, datasets, and prompts. Create annotations, trigger evaluations, and manage configurations.

Best for: Custom integrations, automated workflows, real-time queries, building custom tools
Authentication: API keys scoped to specific projects with role-based access control

Learn more about the Public API →

SDKs

Python and TypeScript/JavaScript SDKs provide native language support for querying ABV data. Type-safe interfaces, async support, and automatic pagination make integration straightforward.

Best for: Application integration, Jupyter notebooks for analysis, programmatic data exploration
Capabilities: Query traces with filtering, fetch datasets, retrieve scores, access prompts

Learn more about querying via SDKs →

UI export

Export data directly from the ABV dashboard for quick analysis or one-time exports. Select traces, datasets, or fine-tuning data, choose a format (JSON, CSV, JSONL), and download.

Best for: Ad-hoc analysis, sharing specific traces, manual data review, one-time exports
Formats: JSON for full fidelity, CSV for spreadsheet analysis, JSONL for line-based processing

Learn more about UI export →

Fine-tuning export

Specialized export format optimized for LLM fine-tuning. Filter traces by quality scores, then export in formats compatible with OpenAI, Anthropic, or custom training pipelines.

Best for: Model improvement through fine-tuning on production interactions, creating training datasets
Formats: OpenAI fine-tuning format (JSONL), custom formats for other providers

Learn more about fine-tuning export →

Blob storage export

Continuous export to S3, Google Cloud Storage, or Azure Blob Storage for data warehouse integration. ABV automatically exports new traces, scores, and metadata as they’re created.

Best for: Data warehouse integration, long-term archival, large-scale analytics, compliance storage
Supported providers: AWS S3, Google Cloud Storage, Azure Blob Storage

Learn more about blob storage export →

Example Workflows

Usage-Based Billing Implementation

Configure cost tracking

Add userId or customerId to all traces to track usage per customer. Optionally tag traces by feature or service tier for granular billing.

Query cost metrics

Use the Metrics API to extract per-customer costs for billing periods. Group by user, filter by time range, aggregate total costs.

Generate invoices

Feed cost data into your billing system. Generate line items for LLM usage, apply pricing tiers, and create customer invoices.
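Turning aggregated costs into invoice line items might look like the sketch below. The markup and minimum-charge scheme is purely illustrative; substitute your actual pricing tiers:

```python
def invoice_line_items(costs_by_customer, markup=1.2, minimum_usd=1.0):
    """Turn aggregated per-customer LLM costs into invoice line items.

    The flat markup and per-customer minimum charge are illustrative
    assumptions, not a prescribed pricing model.
    """
    items = []
    for customer, cost in sorted(costs_by_customer.items()):
        billed = max(round(cost * markup, 2), minimum_usd)
        items.append({
            "customer": customer,
            "description": "LLM usage",
            "amount_usd": billed,
        })
    return items

items = invoice_line_items({"acme": 12.50, "globex": 0.40})
# acme is billed 12.50 * 1.2 = 15.00; globex falls below the $1 minimum
```

Real billing systems typically add tier boundaries and proration, but the shape is the same: map aggregated cost to a billed amount per customer.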

Provide customer dashboards

Build customer-facing dashboards showing their LLM usage, costs, and trends using the Public API or SDK queries.

Data Warehouse Integration

Configure blob storage export

Set up continuous export to your cloud storage (S3/GCS/Azure). ABV exports traces, scores, and metadata as newline-delimited JSON.

ETL pipeline ingestion

Your ETL pipeline (Fivetran, Airbyte, custom) monitors the storage location and ingests new files into your data warehouse.
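For a custom pipeline, the ingestion step reduces to parsing the newline-delimited JSON files. A minimal sketch, with illustrative record fields:

```python
import json

def parse_export(ndjson_text):
    """Parse one newline-delimited JSON export file into records,
    skipping blank lines. Record fields are illustrative assumptions."""
    records = []
    for line in ndjson_text.splitlines():
        if line.strip():
            records.append(json.loads(line))
    return records

# Simulated contents of one exported file:
export_file = "\n".join([
    json.dumps({"id": "t1", "cost_usd": 0.25}),
    json.dumps({"id": "t2", "cost_usd": 0.5}),
])
records = parse_export(export_file)  # two records ready to load into the warehouse
```

Managed tools like Fivetran or Airbyte handle this parsing for you; a custom pipeline would read each new object from the bucket and batch-insert the parsed records.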

Schema definition

Define schemas in your warehouse for traces, observations, and scores. Join with existing customer, product, and business tables.

Analysis and reporting

Run SQL queries joining LLM data with business metrics. Build BI dashboards correlating LLM performance with user behavior and business outcomes.

Data Models and Schema

All access methods return data in consistent formats based on ABV’s core data models:
  • Traces: Complete LLM interactions with inputs, outputs, costs, latency, and metadata
  • Observations: Individual steps within traces (LLM calls, retrievals, tool uses)
  • Scores: Quality evaluations attached to traces (user feedback, model-based scores, human annotations)
  • Datasets: Curated collections of traces for evaluation or fine-tuning
  • Prompts: Versioned prompt templates with metadata and change history
For detailed schema documentation, see the API reference for each resource type.

Security and Access Control

API keys

All API access requires API keys scoped to specific projects. Keys inherit the access level of the project they’re associated with. Rotate keys regularly, use different keys for different systems, and monitor key usage in audit logs.

Role-based access control

API access respects the same role-based permissions as the dashboard. Viewer roles have read-only access, Members can create resources, and Admins can manage configurations. Control what different teams and systems can access.

Learn more about RBAC →

Data retention

Configure retention policies controlling how long ABV stores your data. Automatic deletion after retention periods helps with compliance requirements (GDPR, data minimization).

Learn more about data retention →

Audit logs

All API access, exports, and data modifications are logged. Review who accessed what data, and when, for security auditing and compliance verification.

Learn more about audit logs →

Next Steps