Get your API key
Create an ABV account and generate API credentials:
- Sign up for ABV (free trial available)
- Navigate to Project Settings → API Keys
- Click Create API Key and copy your key (starts with `sk-abv-...`)
Install the ABV SDK
Install the required packages for ABV tracing:
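The install command itself is not reproduced above; assuming both packages are published on PyPI under the names listed below, a standard pip install covers it:

```bash
pip install abvdev python-dotenv
```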
This installs:
- `abvdev`: ABV’s Python SDK for tracing and observability
- `python-dotenv`: Environment variable management from `.env` files
Configure environment variables
Create a `.env` file in your project root with your ABV credentials:
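The contents of the `.env` file are not shown here, and the exact variable names the SDK reads are an assumption. As an illustration, `ABV_API_KEY` below is a hypothetical name holding the `sk-abv-...` key from step 1, loaded with python-dotenv:

```python
# Load credentials from the .env file created above.
# ABV_API_KEY is a hypothetical variable name used for illustration;
# check your ABV project settings for the exact names the SDK expects.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
api_key = os.getenv("ABV_API_KEY")
if not api_key or not api_key.startswith("sk-abv-"):
    raise RuntimeError("ABV API key not found in .env")
```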
Choose Your Instrumentation Method
ABV’s Python SDK offers multiple ways to create traces. Choose based on your use case:
- Gateway Auto-Tracing
- @observe Decorator
- Context Managers
Best for: LLM applications that need automatic tracing with zero manual instrumentation
The ABV Gateway is the fastest way to get complete observability for LLM calls. It automatically captures all metrics without any manual tracing code.
What gets captured automatically:
- Full conversation context (user query and LLM response)
- Model and provider information
- Token usage (input/output counts)
- Cost tracking (deducted from gateway credits)
- Latency metrics (total duration and API timing)
The gateway requires no additional setup beyond installing `abvdev`. It automatically handles tracing, cost tracking, and provider switching.
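As a rough sketch of what a call through the gateway might look like in code (the client class, method name, and arguments below are illustrative assumptions, not the documented abvdev API):

```python
# Illustrative sketch only: ABV, abv.gateway.chat, and the argument names
# below are hypothetical placeholders, not the documented abvdev API.
from abvdev import ABV  # hypothetical client class

abv = ABV()  # assumed to read credentials from the environment (see the .env step)

# One gateway call: the conversation, model/provider info, token usage,
# cost, and latency listed above are captured automatically.
response = abv.gateway.chat(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, ABV!"}],
)
print(response)
```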
Run Your First Trace
Execute your Python script to send your first trace to ABV. The resulting trace includes:
- The function/span name
- Input and output data
- Nested observation hierarchy (if using context managers or gateway)
- Timing information and metrics
When to Use flush()
The `abv.flush()` call is only needed for short-lived scripts that exit immediately (see the sketch after the lists below). Long-running applications (web servers, background workers) don’t need it.
Use `flush()` for:
- CLI scripts
- Batch processing jobs
- One-off tasks
No `flush()` needed for:
- FastAPI/Flask web applications
- Django applications
- Long-running services
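A minimal sketch of the pattern for a short-lived script follows. Only `abv.flush()` is taken from the text above; the import path for the `abv` client is a hypothetical placeholder.

```python
# Sketch of the flush() pattern for a CLI script or batch job.
# The import below is a hypothetical placeholder for however your
# application obtains the ABV client object.
from abvdev import abv  # hypothetical import path

def main() -> None:
    ...  # traced application code goes here

if __name__ == "__main__":
    main()
    abv.flush()  # send any buffered trace data before the process exits
```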
Next Steps
Enhance Your Traces
- Add metadata and tags: Attach business context to traces for filtering and analysis
- Track users and sessions: Group related traces by user journey or conversation
- Monitor costs: Track LLM spending by user, feature, or model
- Store and version datasets: Manage test datasets and examples for evaluations
Build Production-Grade AI
- Set up guardrails: Add safety checks, PII detection, and content moderation
- Run evaluations: Measure quality with automated LLM-as-judge evaluations
- Manage prompts: Version and deploy prompts without code changes
- Access via API: Query traces, datasets, and metrics programmatically