Before You Begin
You need an ABV API key to use the gateway. Sign up at app.abv.dev to get your key (it will look like sk_...).

This guide shows examples for both supported languages:

- TypeScript/JavaScript
- Python
Installation
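Assuming the client library is published under the name `abv` (the package name is an assumption; check your dashboard for the exact command), installation looks like:

```shell
# Package name assumed -- verify against your ABV dashboard.
npm install abv    # TypeScript/JavaScript
pip install abv    # Python
```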
Install the ABV client library in your project. This single package includes everything you need for the gateway.

Your First Request
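A minimal first request might look like the sketch below. The `abv` import path and the `ABV` client constructor are assumptions; the `gateway.chat.completions.create()` call and its `provider`, `model`, and `messages` parameters follow the API described later in this guide.

```typescript
import { ABV } from "abv"; // assumed import path -- check the package docs

// The client authenticates every request with your ABV API key.
const abv = new ABV({ apiKey: "sk_..." });

async function main() {
  // Route a chat completion through the gateway to OpenAI.
  const response = await abv.gateway.chat.completions.create({
    provider: "openai",
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "What is an API gateway?" }],
  });

  // The model's reply is in the first choice's message content.
  console.log(response.choices[0].message.content);
}

main();
```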
Create a file called first-request.ts (or .js if you're not using TypeScript) containing a simple chat completion request, then run it with node first-request.js (or ts-node first-request.ts for TypeScript). You should see an explanation of what an API gateway is appear in your terminal.

View Your Trace
Now go to your ABV dashboard at app.abv.dev and click on the Traces section. You'll see a trace for the request you just made. Click into it to see details like token usage, latency, and the full conversation.

This trace appeared automatically, without you writing any logging code. The gateway captures every request with complete observability built in.
Switch Providers
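Sketched with the same assumed client as above, the switch is a two-field change:

```typescript
// Inside an async function, as before. Only provider and model change.
const response = await abv.gateway.chat.completions.create({
  provider: "anthropic",      // was "openai"
  model: "claude-sonnet-4-5", // was "gpt-4o-mini"
  messages: [{ role: "user", content: "What is an API gateway?" }],
});
```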
Now try switching providers to see how easy it is. Change your provider parameter from openai to anthropic and the model to claude-sonnet-4-5.

Use Environment Variables
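One way to read the key from the environment, as a sketch (the `ABV_API_KEY` variable name is this guide's choice, not necessarily an SDK default):

```typescript
import { ABV } from "abv"; // assumed import path

// Fail fast with a clear message if the key is missing.
const apiKey = process.env.ABV_API_KEY;
if (!apiKey) {
  throw new Error("Missing ABV_API_KEY environment variable");
}

const abv = new ABV({ apiKey });
```

Set the variable in your shell (for example, export ABV_API_KEY=sk_...) before running your script.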
Hardcoding your API key in your code is not a good practice, especially if you're committing code to version control. The client library supports reading your API key from an environment variable instead.

Add a System Message
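A system message is just another entry in the messages array. Sketched with the same assumed client:

```typescript
// Inside an async function, as before.
const response = await abv.gateway.chat.completions.create({
  provider: "openai",
  model: "gpt-4o-mini",
  messages: [
    // The system message configures the model's behavior for the conversation.
    {
      role: "system",
      content: "You are a concise technical assistant. Answer in two sentences or fewer.",
    },
    { role: "user", content: "What is an API gateway?" },
  ],
});
```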
Most real applications don't just send a single user message. They set up a system message that configures the model's behavior.

Understanding What Happened
Authentication
When you created the ABV client, you provided your API key. This key authenticates your requests to ABV's gateway service. The client handles all the authentication details automatically; you never need to think about headers, tokens, or auth flows.
Building the Request
When you called abv.gateway.chat.completions.create(), you provided three key pieces of information:

- Provider: Which AI service to route the request to (OpenAI, Anthropic, or Gemini)
- Model: Which specific model to use (like gpt-4o-mini or claude-sonnet-4-5)
- Messages: The conversation to send to the model
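Because the call shape is identical for every provider, the three pieces can be wrapped in a small helper. This is a hypothetical convenience function (the `ask` name is this guide's invention), sketched against the assumed client:

```typescript
// Hypothetical helper: same three parameters, any provider.
async function ask(provider: string, model: string, prompt: string): Promise<string> {
  const response = await abv.gateway.chat.completions.create({
    provider,
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0].message.content;
}

// Identical call shape across providers:
// await ask("openai", "gpt-4o-mini", "What is an API gateway?");
// await ask("anthropic", "claude-sonnet-4-5", "What is an API gateway?");
```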
Gateway Processing
The gateway received your request, created a trace, translated it into the provider's expected format, sent it to the provider, received the response, translated it back to standard format, completed the trace, and returned the response to your code.
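Conceptually, that pipeline can be sketched as a self-contained function. Everything here is illustrative: the type names are invented, and the provider call is a stub standing in for a real API; the real translation logic is the gateway's internal concern.

```typescript
// Conceptual sketch of the gateway pipeline with a stubbed provider.
type Message = { role: string; content: string };
type GatewayRequest = { provider: string; model: string; messages: Message[] };
type GatewayResponse = { choices: { message: Message }[] };

// Stub standing in for a real provider's API.
function callProvider(providerReq: { model: string; input: Message[] }) {
  return { output: { role: "assistant", content: `echo from ${providerReq.model}` } };
}

function handleRequest(req: GatewayRequest): GatewayResponse {
  const trace = { provider: req.provider, startedAt: Date.now(), completedAt: 0 }; // 1. create a trace
  const providerReq = { model: req.model, input: req.messages };                   // 2. translate to the provider's format
  const providerRes = callProvider(providerReq);                                   // 3. send to the provider
  const response = { choices: [{ message: providerRes.output }] };                 // 4. translate back to standard format
  trace.completedAt = Date.now();                                                  // 5. complete the trace
  return response;                                                                 // 6. return to your code
}

console.log(
  handleRequest({
    provider: "openai",
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "hi" }],
  }).choices[0].message.content
);
```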
Reading the Response
The response contains a choices array because the API supports generating multiple completions for a single request (controlled by the n parameter, which defaults to 1). Each choice has a message object with the content that the model generated.

Automatic Observability
Meanwhile, the trace captured everything about this interaction: what you asked, what the model responded with, how many tokens were used (which affects cost), and how long it took. All of this is available in your dashboard automatically.
Related Topics
TypeScript Guide
Deep dive into TypeScript/JavaScript implementation with streaming, error handling, and framework integration
Python Guide
Complete Python implementation guide with async/await, type hints, and framework patterns
Available Models
See all supported providers and models with pricing information
LLM Gateway Overview
Learn more about how the gateway works and when to use it