Get your API key
Create an ABV account and generate API credentials:
- Sign up for ABV (free trial available)
- Navigate to Project Settings → API Keys
- Click Create API Key and copy your key (it starts with `sk-abv-...`)
Install the ABV SDK
Install the required packages for ABV tracing:
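For example, with npm (swap in your package manager of choice):

```bash
npm install @abvdev/tracing @abvdev/otel @opentelemetry/sdk-node dotenv
```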
This installs:
- `@abvdev/tracing`: ABV's tracing SDK
- `@abvdev/otel`: ABV's OpenTelemetry integration
- `@opentelemetry/sdk-node`: the OpenTelemetry Node.js SDK
- `dotenv`: environment variable management
Configure environment variables
Create a `.env` file in your project root with your ABV credentials:
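A minimal sketch; the variable name below is an assumption, so copy the exact keys shown in your ABV project settings:

```
# .env — illustrative variable name; use the keys ABV shows you
ABV_API_KEY=sk-abv-...
```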
Choose your instrumentation method
ABV offers two ways to instrument your JavaScript/TypeScript application. Choose based on your use case:
- Gateway Auto-Tracing
- Manual Instrumentation
Gateway Auto-Tracing
Best for: LLM applications that need automatic tracing with zero manual instrumentation.

The ABV Gateway is the fastest way to get complete observability for LLM calls. It automatically captures all metrics without any manual tracing code.

Install the ABV client:
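For example, with npm:

```bash
npm install @abvdev/client
```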
Create a traced LLM application in `server.ts`:
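A minimal sketch of what this could look like. The `ABVClient` export name, its constructor options, and the OpenAI-style request surface are assumptions for illustration, not the confirmed `@abvdev/client` API; check the ABV reference for the exact signatures.

```ts
// server.ts — illustrative sketch; the ABVClient export, its options, and the
// OpenAI-style request surface are assumed here, not confirmed ABV API.
import 'dotenv/config';
import { ABVClient } from '@abvdev/client';

// The gateway client routes LLM calls through the ABV Gateway,
// which traces every request automatically.
const abv = new ABVClient({ apiKey: process.env.ABV_API_KEY });

async function main() {
  // No spans, decorators, or OpenTelemetry setup needed: the gateway
  // records the conversation, tokens, cost, and latency for this call.
  const completion = await abv.chat.completions.create({
    model: 'gpt-4o-mini', // example model id
    messages: [{ role: 'user', content: 'What does LLM observability mean?' }],
  });

  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```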
What gets captured automatically:
- Full conversation context (user query and LLM response)
- Model and provider information
- Token usage (input/output counts)
- Cost tracking (deducted from gateway credits)
- Latency metrics (total duration and API timing)
- Complete observability with zero manual instrumentation
The gateway requires no OpenTelemetry setup. Just install `@abvdev/client` and start making requests.

Run Your First Trace
Execute your application to send your first trace to ABV:
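For example, assuming the tsx runner (any way you normally run TypeScript works):

```bash
npx tsx server.ts
```

Once the run completes, open the trace in the ABV UI. It includes: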
- The input query and output
- Model and provider information (if using gateway)
- Nested observation hierarchy
- Timing information and metrics
Next Steps
Enhance Your Traces
- Add metadata and tags: attach business context to traces for filtering and analysis
- Track users and sessions: group related traces by user journey or conversation
- Monitor costs: track LLM spending by user, feature, or model
- Store and version datasets: manage test datasets and examples for evaluations
Build Production-Grade AI
- Set up guardrails: add safety checks, PII detection, and content moderation
- Run evaluations: measure quality with automated LLM-as-judge evaluations
- Manage prompts: version and deploy prompts without code changes
- Access via API: query traces, datasets, and metrics programmatically