If you’re creating a new ABV account, the onboarding flow will guide you through these steps automatically. This guide is for manual setup or reference.

Get your API key

Create an ABV account and generate API credentials:
  1. Sign up for ABV (free trial available)
  2. Navigate to Project Settings → API Keys
  3. Click Create API Key and copy your key (starts with sk-abv-...)

Install the ABV SDK

Install the required packages for ABV tracing:
pip install abvdev python-dotenv
This installs:
  • abvdev - ABV’s Python SDK for tracing and observability
  • python-dotenv - Environment variable management from .env files
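To confirm both packages installed correctly, you can run a quick import check from your shell (the ABV SDK imports as abvdev, and python-dotenv imports as dotenv):
python -c "import abvdev, dotenv; print('ABV SDK and python-dotenv import OK')"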

Configure environment variables

Create a .env file in your project root with your ABV credentials:
.env
ABV_API_KEY="sk-abv-..."
ABV_HOST="https://app.abv.dev"  # US region
# ABV_HOST="https://eu.app.abv.dev"  # EU region (uncomment if needed)
Make sure .env is in your .gitignore to avoid committing secrets.
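Before wiring the credentials into your application, you can verify that python-dotenv picks up your .env file. A minimal sketch that checks the variables without printing the secret itself:
import os
from dotenv import load_dotenv

# Load variables from the .env file in the current working directory
load_dotenv()

# Confirm the key is set without exposing its value
print("ABV_API_KEY set:", bool(os.getenv("ABV_API_KEY")))
print("ABV_HOST:", os.getenv("ABV_HOST"))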

Choose Your Instrumentation Method

ABV’s Python SDK offers multiple ways to create traces. Choose based on your use case:
Best for: LLM applications that need automatic tracing with zero manual instrumentation.
The ABV Gateway is the fastest way to get complete observability for LLM calls. It automatically captures all metrics without any manual tracing code.
New users get $1 in free credits to test the gateway.
from dotenv import load_dotenv
from abvdev import ABV

# Load environment variables
load_dotenv()

# Initialize the ABV client
abv = ABV()  # Uses ABV_API_KEY from environment

# Make a gateway request - automatically creates a complete trace
response = abv.gateway.chat.completions.create(
    provider='openai',
    model='gpt-4o-mini',
    messages=[
        {'role': 'user', 'content': 'What is the capital of France?'}
    ]
)

# Access the response
output = response['choices'][0]['message']['content']
print(f"Response: {output}")
What gets captured automatically:
  • Full conversation context (user query and LLM response)
  • Model and provider information
  • Token usage (input/output counts)
  • Cost tracking (deducted from gateway credits)
  • Latency metrics (total duration and API timing)
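These metrics are recorded on the trace and are easiest to inspect in the ABV UI. If you also want token counts in code, the following is a hedged sketch that assumes the gateway response mirrors the OpenAI chat-completions schema with a 'usage' key (not confirmed above, so verify against an actual response first):
# Assumption: token counts live under an OpenAI-style 'usage' key
usage = response.get('usage', {})
print("Input tokens:", usage.get('prompt_tokens'))
print("Output tokens:", usage.get('completion_tokens'))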
Switch providers easily:
# Try Anthropic's Claude
response = abv.gateway.chat.completions.create(
    provider='anthropic',
    model='claude-sonnet-4-5',
    messages=[{'role': 'user', 'content': 'What is the capital of France?'}]
)

# Or try Google's Gemini
response = abv.gateway.chat.completions.create(
    provider='gemini',
    model='gemini-2.0-flash-exp',
    messages=[{'role': 'user', 'content': 'What is the capital of France?'}]
)
The gateway requires no additional setup beyond installing abvdev. It automatically handles tracing, cost tracking, and provider switching.

Run Your First Trace

Execute your Python script to send your first trace to ABV:
python your_script.py
Navigate to app.abv.dev and click Traces in the sidebar. You should see your trace with:
  • The function/span name
  • Input and output data
  • Nested observation hierarchy (if using context managers or gateway)
  • Timing information and metrics
If you don’t see traces immediately, wait a few seconds and refresh. For short-lived scripts, make sure you call abv.flush() before the script exits (not needed for gateway requests); see When to Use flush() below.

When to Use flush()

The abv.flush() call is only needed for short-lived scripts that exit immediately. Long-running applications (web servers, background workers) don’t need it.
Use flush() for:
  • CLI scripts
  • Batch processing jobs
  • One-off tasks
Don’t use flush() for:
  • FastAPI/Flask web applications
  • Django applications
  • Long-running services
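For a CLI or batch script, call flush() right before the process exits so any buffered trace data is delivered. A minimal sketch of the pattern (the traced work inside main() is a placeholder):
from dotenv import load_dotenv
from abvdev import ABV

load_dotenv()
abv = ABV()

def main():
    # ... your traced work here ...
    pass

if __name__ == "__main__":
    try:
        main()
    finally:
        # Ensure buffered trace data is sent before the script exits
        abv.flush()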

Next Steps

  • Enhance Your Traces
  • Build Production-Grade AI
  • Advanced Platform Features