Before You Begin

You need an ABV API key to use the gateway. Sign up at app.abv.dev to get your key; it will look like sk_...
New users receive $1 in free credits to test the gateway with real LLM models from OpenAI, Anthropic, and Google. No provider API keys are required to get started.

Installation

Install the ABV client library in your project. This single package includes everything you need for the gateway.
npm install @abvdev/client

Your First Request

Create a file called first-request.ts (or .js if you’re not using TypeScript) and add this code:
import { ABVClient } from '@abvdev/client';

// Initialize the client with your API key
const abv = new ABVClient({
  apiKey: 'sk_...'  // Replace with your actual API key
});

// Make your first request
const response = await abv.gateway.chat.completions.create({
  provider: 'openai',
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'Explain what an API gateway is in one sentence.' }
  ]
});

// Print the response
console.log(response.choices[0].message.content);
Run your code with node first-request.js (or ts-node first-request.ts for TypeScript). Because the snippet uses top-level await, the file must be an ES module (for example, set "type": "module" in your package.json or use a .mjs extension). You should see a one-sentence explanation of what an API gateway is in your terminal.

View Your Trace

Now go to your ABV dashboard at app.abv.dev and click on the Traces section. You’ll see a trace for the request you just made. Click into it to see details like token usage, latency, and the full conversation.
This trace appeared automatically without you writing any logging code. The gateway captures every request with complete observability built in.

Switch Providers

Now try switching providers to see how easy it is. Change your provider parameter from openai to anthropic and the model to claude-sonnet-4-5:
const response = await abv.gateway.chat.completions.create({
  provider: 'anthropic',  // Changed from 'openai'
  model: 'claude-sonnet-4-5',  // Changed from 'gpt-4o-mini'
  messages: [
    { role: 'user', content: 'Explain what an API gateway is in one sentence.' }
  ]
});
Run your code again. Notice that you didn’t need to change anything else. The same code structure works with both providers. This is the power of the gateway’s unified interface.

Use Environment Variables

Hardcoding your API key in your source is bad practice, especially if you commit code to version control. The client library can instead read your key from an environment variable:
export ABV_API_KEY=sk_...
Then initialize the client without explicitly passing the key:
const abv = new ABVClient();  // Automatically uses ABV_API_KEY
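If you want the fallback behavior to be explicit in your own code, you can resolve the key yourself before constructing the client. This is an illustrative sketch: resolveApiKey is our own helper, not part of @abvdev/client, which already performs the ABV_API_KEY lookup for you.

```typescript
// Illustrative only: an explicit fallback if you prefer not to rely on
// the client's automatic ABV_API_KEY lookup. `resolveApiKey` is our own
// helper, not part of @abvdev/client.
function resolveApiKey(explicit?: string): string {
  const key = explicit ?? process.env.ABV_API_KEY;
  if (!key) {
    throw new Error('Set ABV_API_KEY or pass apiKey to the client');
  }
  return key;
}

// Usage (hypothetical): new ABVClient({ apiKey: resolveApiKey() })
```

Failing fast with a clear error at startup is usually easier to debug than an authentication failure on the first request.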

Add a System Message

Most real applications don’t just send a single user message. They set up a system message that configures the model’s behavior:
const response = await abv.gateway.chat.completions.create({
  provider: 'openai',
  model: 'gpt-4o-mini',
  messages: [
    {
      role: 'system',
      content: 'You are a helpful assistant who explains technical concepts clearly and concisely.'
    },
    {
      role: 'user',
      content: 'What is an API gateway?'
    }
  ]
});
The system message doesn’t change the structure of your code, but it does influence how the model responds. You’ll typically use system messages to set the tone, persona, or constraints for your AI application.

Understanding What Happened

Authentication

When you created the ABV client, you provided your API key. This key authenticates your requests to ABV’s gateway service. The client handles all the authentication details automatically—you never need to think about headers, tokens, or auth flows.

Building the Request

When you called abv.gateway.chat.completions.create(), you provided three key pieces of information:
  • Provider: Which AI service to route the request to (OpenAI, Anthropic, or Google)
  • Model: Which specific model to use (like gpt-4o-mini or claude-sonnet-4-5)
  • Messages: The conversation to send to the model
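The three pieces above can be sketched as a TypeScript shape. These types are inferred from the examples in this guide, not copied from @abvdev/client, so treat them as illustrative rather than the library's actual definitions:

```typescript
// Illustrative request shape, inferred from the examples above.
// The real types ship with @abvdev/client.
type Provider = 'openai' | 'anthropic' | 'google';

interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface ChatCompletionRequest {
  provider: Provider;        // which AI service to route to
  model: string;             // which specific model to use
  messages: ChatMessage[];   // the conversation to send
  n?: number;                // completions to generate (defaults to 1)
}

const request: ChatCompletionRequest = {
  provider: 'openai',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'What is an API gateway?' }],
};
```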

Gateway Processing

The gateway handled your request in a series of steps:
  • received your request and created a trace
  • translated the request into the provider’s expected format
  • sent it to the provider and received the response
  • translated the response back to the standard format
  • completed the trace and returned the response to your code
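To make the flow concrete, here is a mock of those steps as a single function. The real processing happens server-side inside the gateway; the function and payload shapes below are our own illustration, with the provider call stubbed out:

```typescript
// Illustrative mock of the gateway's processing steps. The real gateway
// runs server-side; these names and shapes are our own.
interface UnifiedRequest {
  provider: string;
  model: string;
  messages: { role: string; content: string }[];
}
interface UnifiedResponse {
  choices: { message: { role: string; content: string } }[];
}

function processThroughGateway(
  req: UnifiedRequest,
  callProvider: (payload: unknown) => { text: string } // stubbed provider call
): UnifiedResponse {
  const trace: string[] = ['trace created'];
  // 1. Translate the unified request into the provider's expected format.
  const providerPayload = { model: req.model, input: req.messages };
  trace.push(`translated for ${req.provider}`);
  // 2. Send it to the provider and receive the raw response.
  const raw = callProvider(providerPayload);
  // 3. Translate the provider response back to the unified format.
  const response: UnifiedResponse = {
    choices: [{ message: { role: 'assistant', content: raw.text } }],
  };
  trace.push('trace completed');
  return response;
}
```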

Reading the Response

The response contains a choices array because the API supports generating multiple completions for a single request (controlled by the n parameter, which defaults to 1). Each choice has a message object with the content that the model generated.
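For example, a request with n set to 2 would come back with two entries in choices. The object below is a mock of that shape (not real model output) to show how you would read multiple completions:

```typescript
// Mock response illustrating the `choices` array; a real response comes
// from abv.gateway.chat.completions.create with the `n` parameter set.
const mockResponse = {
  choices: [
    { message: { role: 'assistant', content: 'First completion.' } },
    { message: { role: 'assistant', content: 'Second completion.' } },
  ],
};

// Read each generated completion in turn.
for (const choice of mockResponse.choices) {
  console.log(choice.message.content);
}
```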

Automatic Observability

Meanwhile, the trace captured everything about this interaction: what you asked, what the model responded with, how many tokens were used (which affects cost), and how long it took. All of this is available in your dashboard automatically.

Provider Flexibility

When you switched providers, you only changed one parameter. The gateway handled all the differences between OpenAI’s API format and Anthropic’s API format automatically. This is the core value of the unified interface: provider flexibility without code rewrites.
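Because only one parameter changes, the provider choice can even become a runtime value. This is a hypothetical wrapper of our own (buildParams is not part of the client library) showing one way to do that:

```typescript
// Hypothetical helper showing how the provider can become a runtime
// parameter; `buildParams` is our own, not part of @abvdev/client.
function buildParams(provider: 'openai' | 'anthropic', prompt: string) {
  // Map each provider to the model used earlier in this guide.
  const model = provider === 'openai' ? 'gpt-4o-mini' : 'claude-sonnet-4-5';
  return {
    provider,
    model,
    messages: [{ role: 'user' as const, content: prompt }],
  };
}

// The same call site then works for either provider, e.g.:
// await abv.gateway.chat.completions.create(buildParams('anthropic', '...'));
```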
