This cookbook provides practical examples for using the ABV JS/TS SDK to trace JS/TS applications.
In this notebook, we will walk you through a simple end-to-end example that:
- Uses the core features of the ABV JS/TS SDK
- Shows how to log any LLM call via the low-level SDK
For this guide, we assume that you are already familiar with the ABV data model (traces, spans, generations, etc.). If not, please read the conceptual introduction to tracing.
Set Up Environment
Get your ABV API key by signing up for ABV. You’ll also need your OpenAI API key.
Note: This cookbook uses Deno.js for execution, which requires different syntax for importing packages and setting environment variables. For Node.js applications, the setup process is similar but uses standard npm packages and process.env.
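For example, in Node.js the imports used throughout this cookbook would use plain package specifiers instead of Deno's npm: prefix. A minimal sketch, assuming the @abvdev packages are installed from npm:
// Node.js variant of the Deno imports used in this cookbook
import 'dotenv/config';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { ABVSpanProcessor } from '@abvdev/otel';
import { ABVClient } from '@abvdev/client';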
Add your ABV credentials to your environment variables. Make sure that you have a .env file in your project root and a package like dotenv to load the variables.
ABV_API_KEY="sk-abv-..."
ABV_BASE_URL="https://app.abv.dev" # US region
# ABV_BASE_URL="https://eu.app.abv.dev" # EU region
OPENAI_API_KEY="sk-proj-..." # required for the OpenAI LLM calls in this cookbook
With the environment variables set, we can now initialize the ABVSpanProcessor, which is passed to the main OpenTelemetry NodeSDK that orchestrates tracing.
// Import required dependencies
import 'npm:dotenv/config';
import { NodeSDK } from "npm:@opentelemetry/sdk-node";
import { ABVSpanProcessor } from "npm:@abvdev/otel";
// Export the processor to be able to flush it later
// This is important for ensuring all spans are sent to ABV
export const abvSpanProcessor = new ABVSpanProcessor({
apiKey: process.env.ABV_API_KEY!,
baseUrl: process.env.ABV_BASE_URL ?? 'https://app.abv.dev', // Defaults to the US region if not specified
environment: process.env.NODE_ENV ?? 'development', // Default to development if not specified
});
// Initialize the OpenTelemetry SDK with our ABV processor
const sdk = new NodeSDK({
spanProcessors: [abvSpanProcessor],
});
// Start the SDK to begin collecting telemetry
// A warning about the crypto module is expected in Deno and doesn't affect basic tracing.
// Media upload features will be disabled, but all core tracing works normally.
sdk.start();
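In long-running applications, you should also flush and shut down cleanly before the process exits. A minimal sketch, using the exported processor together with the NodeSDK's shutdown method:
// Flush buffered spans and stop the SDK before the process exits
async function shutdownTracing() {
  await abvSpanProcessor.forceFlush(); // export any spans still in the buffer
  await sdk.shutdown(); // gracefully stops the OpenTelemetry SDK
}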
The ABVClient provides additional functionality beyond OpenTelemetry tracing, such as scoring, prompt management, and data retrieval. It automatically uses the same environment variables we set earlier.
import { ABVClient } from "npm:@abvdev/client";
const abv = new ABVClient();
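For illustration, these non-tracing features can be used roughly as follows. This is a sketch with assumed method names (abv.prompt.get, abv.score.create); check the SDK reference for the exact API.
// NOTE: the method names below are assumptions for illustration only.
// Fetch a managed prompt by name
const qaPrompt = await abv.prompt.get('qa-prompt'); // 'qa-prompt' is a hypothetical prompt name
// Attach a score to an existing trace
await abv.score.create({
  traceId: 'your-trace-id', // hypothetical placeholder
  name: 'accuracy',
  value: 1,
});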
Log LLM Calls
You can use the SDK to log any LLM call, or use one of the integrations that are interoperable with it (such as LangChain, Vercel AI SDK, and OpenAI).
In the following, we demonstrate three ways to log LLM calls with the low-level SDK.
Option 1: Context Manager
To simplify nesting and context management, you can use startActiveObservation. This function takes a callback and automatically manages the observation's lifecycle and the OpenTelemetry context: any observation created inside the callback is automatically nested under the active observation, and the observation is ended when the callback finishes.
This is the recommended approach for most use cases as it prevents context leakage and ensures observations are properly ended.
// Import necessary functions from the tracing package
import { startActiveObservation, startObservation, updateActiveTrace, updateActiveObservation } from "npm:@abvdev/tracing";
// Start a new span with automatic context management
await startActiveObservation("context-manager", async (span) => {
// Log the initial user query
span.update({
input: { query: "What is the capital of France?" }
});
// Create a new generation span that will automatically be a child of "context-manager"
const generation = startObservation(
"llm-call",
{
model: "gpt-4",
input: [{ role: "user", content: "What is the capital of France?" }],
},
{ asType: "generation" },
);
// ... LLM call logic would go here ...
// Update the generation with token usage statistics
generation.update({
usageDetails: {
input: 10, // Number of input tokens
output: 5, // Number of output tokens
cache_read_input_tokens: 2, // Tokens read from cache
some_other_token_count: 10, // Custom token metric
total: 17, // Optional: automatically calculated if not provided
},
});
// End the generation with the LLM response
generation.update({
output: { content: "The capital of France is Paris." },
}).end();
// Example user information
const user = { id: "user-5678", name: "Jane Doe", sessionId: "123" };
// Add an optional log level and status message to the active observation
updateActiveObservation(
{ level: "WARNING", statusMessage: "This is a warning" },
);
// Update the trace with user context
updateActiveTrace({
userId: user.id,
sessionId: user.sessionId,
metadata: { userName: user.name },
});
// Mark the span as complete with final output
span.update({ output: "Successfully answered." });
});
// Ensure all spans are sent to ABV
await abvSpanProcessor.forceFlush();
Public trace in the ABV UI
Option 2: observe Decorator
The observe wrapper is a powerful tool for tracing existing functions without modifying their internal logic. It acts as a decorator that automatically creates a span or generation around the function call. You can use the updateActiveObservation function to add attributes to the observation from within the wrapped function.
import { observe, updateActiveObservation } from "npm:@abvdev/tracing";
// An existing function
async function fetchData(source: string) {
updateActiveObservation(
  {
    usageDetails: {
      input: 10,
      output: 5,
    },
  },
  { asType: 'generation' }, // identifies the active observation as a generation
);
// ... logic to fetch data
return { data: `some data from ${source}` };
}
// Wrap the function to trace it
const tracedFetchData = observe(fetchData, {
name: "observe-wrapper",
asType: "generation",
});
// Now, every time you call tracedFetchData, a generation is created.
// Its input and output are automatically populated with the
// function's arguments and return value.
const result = await tracedFetchData("API");
await abvSpanProcessor.forceFlush();
Public trace in the ABV UI
Option 3: Manual Spans
This part shows how to log any LLM call by passing the model, inputs, and outputs to the ABV SDK manually.
Steps:
- Create a span to contain this section within the trace
- Create a generation and log the input and model name, which are already known
- Call the LLM SDK and log the output
- End the generation and the span
Teams typically wrap their LLM SDK calls in a helper function that manages tracing internally. The wrapper is implemented once and then reused for all LLM calls; a sketch of such a helper follows the example below.
// Import the startObservation function for manual span creation
import { startObservation } from 'npm:@abvdev/tracing';
// Create the root span for this operation
const span = startObservation('manual-observation', {
input: { query: 'What is the capital of France?' },
});
// Create a child span for a tool call (e.g., weather API)
const toolCall = span.startObservation(
'fetch-weather',
{ input: { city: 'Paris' } },
{ asType: "tool" },
);
// Simulate API call with timeout
await new Promise((r) => setTimeout(r, 100));
// End the tool call with its output
toolCall.update({ output: { temperature: '15°C' } }).end();
// Create a generation span for the LLM call
const generation = span.startObservation(
'llm-call',
{
model: 'gpt-4',
input: [{ role: 'user', content: 'What is the capital of France?' }],
},
{ asType: "generation" },
);
// Update the generation with token usage details
generation.update({
usageDetails: {
input: 10, // Input token count
output: 5, // Output token count
cache_read_input_tokens: 2, // Cached tokens used
some_other_token_count: 10, // Custom metric
total: 17, // Total tokens (optional)
},
});
// End the generation with final output
generation.update({
output: { content: 'The capital of France is Paris.' },
}).end();
// End the root span with final status and session ID
span.update({
output: 'Successfully answered user request.',
sessionId: '123'
}).end();
// Ensure all spans are flushed to ABV
await abvSpanProcessor.forceFlush();
Public trace in the ABV UI
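Putting the steps above together, a reusable tracing helper might look like the following. This is a minimal sketch: callLLM is a hypothetical stand-in for your actual LLM SDK call, not part of the ABV SDK.
import { startObservation } from 'npm:@abvdev/tracing';
// Message shape used by the hypothetical callLLM function
type ChatMessage = { role: string; content: string };
// Wrap any LLM call in a generation observation; implemented once, reused everywhere
async function tracedLLMCall(
  model: string,
  messages: ChatMessage[],
  callLLM: (model: string, messages: ChatMessage[]) => Promise<string>,
): Promise<string> {
  // Create the generation and log the model and input up front
  const generation = startObservation(
    'llm-call',
    { model, input: messages },
    { asType: 'generation' },
  );
  try {
    // Call the underlying LLM SDK and log its output
    const content = await callLLM(model, messages);
    generation.update({ output: { content } });
    return content;
  } finally {
    // Always end the generation, even if the call throws
    generation.end();
  }
}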
View the Trace in ABV
After ingesting your spans, you can view them in your ABV dashboard.
Example trace in the ABV UI
Learn More