
FAQ

In short-lived serverless environments, you must explicitly flush traces before the process exits or the runtime environment is frozen.

For JS/TS: Export the processor from your instrumentation.ts file:
instrumentation.ts
import { NodeSDK } from "@opentelemetry/sdk-node";
import { ABVSpanProcessor } from "@abvdev/otel";

// Export the processor to be able to flush it
export const abvSpanProcessor = new ABVSpanProcessor();

const sdk = new NodeSDK({
  spanProcessors: [abvSpanProcessor],
});

sdk.start();
Then call forceFlush() before the function exits:
handler.ts
import { abvSpanProcessor } from "./instrumentation";

export async function handler(event, context) {
  // ... your application logic ...

  // Flush before exiting
  await abvSpanProcessor.forceFlush();
}
For Vercel Functions with Next.js, use the after utility:
import { after } from "next/server";
import { abvSpanProcessor } from "./instrumentation";

export async function POST() {
  // ... existing request logic ...

  // Schedule flush after request has completed
  after(async () => {
    await abvSpanProcessor.forceFlush();
  });

  // ... send response ...
}
For Python:
from abvdev import get_client

abv = get_client()

# Your application logic here

# Flush all pending observations before function exits
abv.flush()
For complete shutdown (no more events will be sent):
abv.shutdown()
ABV tracks usage and costs of your LLM generations with breakdowns by usage type (input, output, cached tokens, audio tokens, etc.).

Option 1: Ingest usage and cost (most accurate)

Many ABV integrations automatically capture usage from LLM responses. You can also manually ingest them:

Python SDK:
from abvdev import ABV

abv = ABV(api_key="sk-abv-...", host="https://app.abv.dev")

with abv.start_as_current_observation(
    as_type='generation',
    name="llm-call",
    model="gpt-4o"
) as generation:
    response = openai_client.chat.completions.create(...)

    generation.update(
        output=response.choices[0].message.content,
        usage_details={
            "input": response.usage.prompt_tokens,
            "output": response.usage.completion_tokens,
        },
        cost_details={
            "input": 0.01,  # USD cost
            "output": 0.03,
        }
    )
JS/TS SDK:
generation.update({
  usageDetails: {
    prompt_tokens: response.usage.prompt_tokens,
    completion_tokens: response.usage.completion_tokens,
    total_tokens: response.usage.total_tokens,
  },
  output: { content: llmOutput },
});
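The hard-coded cost_details values above can be computed from token counts and per-token prices. A minimal sketch, where the price table and the compute_cost helper are illustrative (not part of the SDK), and the listed prices are placeholders, not real provider rates:

```python
# Illustrative per-1M-token prices in USD; real prices vary by model and provider.
PRICES_PER_1M = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def compute_cost(model: str, input_tokens: int, output_tokens: int) -> dict:
    """Return a cost_details-style dict from token counts and a price table."""
    prices = PRICES_PER_1M[model]
    return {
        "input": input_tokens * prices["input"] / 1_000_000,
        "output": output_tokens * prices["output"] / 1_000_000,
    }

costs = compute_cost("gpt-4o", 1_000, 500)
# costs["input"] == 0.0025, costs["output"] == 0.005
```

The resulting dict can be passed directly as cost_details in generation.update().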
Option 2: Infer usage and cost automatically

If you don’t ingest usage/cost, ABV will automatically infer them based on the model parameter. ABV includes predefined models and tokenizers for OpenAI, Anthropic, and Google models. You can also add custom model definitions via the ABV UI or API for your own models.
Empty inputs and outputs typically occur when:
  1. You didn’t set them: Make sure to call .update() with input and output parameters:
# Python
with abv.start_as_current_span(name="my-operation") as span:
    span.update(input={"query": "user question"})
    # ... your logic ...
    span.update(output="response")
// JS/TS
await startActiveObservation("my-operation", async (span) => {
  span.update({ input: { query: "user question" } });
  // ... your logic ...
  span.update({ output: "response" });
});
  2. Timing issues in serverless: If the function exits before data is flushed, use abv.flush() (Python) or await abvSpanProcessor.forceFlush() (JS/TS).
  3. Data was masked: Check if you have masking rules that might be removing sensitive data.
To disable tracing entirely:

Python SDK: Simply don’t initialize the ABV client or don’t use the @observe decorator.

JS/TS SDK: Don’t import the instrumentation.ts file, or conditionally initialize it:
if (process.env.ABV_ENABLED === "true") {
  await import("./instrumentation");
}
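The same conditional pattern works on the Python side. A sketch that reuses the ABV_ENABLED variable from the JS/TS example; init_tracing is a hypothetical helper, not an SDK function:

```python
import os

def init_tracing():
    """Return an ABV client only when tracing is enabled via environment variable."""
    if os.environ.get("ABV_ENABLED") != "true":
        return None  # tracing disabled: nothing is initialized or exported
    from abvdev import get_client  # deferred import so disabled runs never load the SDK
    return get_client()

client = init_tracing()  # None unless ABV_ENABLED="true"
```

The deferred import keeps the SDK entirely out of the process when tracing is off.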
To use sampling (partial tracing):

Configure the sampling rate via environment variable or in code:

Python:
ABV_SAMPLE_RATE=0.1  # Sample 10% of traces
JS/TS:
import { TraceIdRatioBasedSampler } from "@opentelemetry/sdk-trace-base";

const sdk = new NodeSDK({
  sampler: new TraceIdRatioBasedSampler(0.1), // Sample 10% of traces
  spanProcessors: [new ABVSpanProcessor()],
});
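TraceIdRatioBasedSampler makes a deterministic decision from the trace ID itself, so every span of a trace gets the same verdict even across services. A simplified sketch of the idea (the real OpenTelemetry algorithm compares only part of the ID, but the principle is the same):

```python
def should_sample(trace_id_hex: str, rate: float) -> bool:
    """Keep a trace when its 128-bit trace ID falls below rate * 2**128."""
    return int(trace_id_hex, 16) < rate * (1 << 128)

# The decision depends only on the trace ID, so it is stable across services.
keep_all = should_sample("f" * 32, 1.0)  # rate 1.0 keeps (almost) everything
```

Because the decision is a pure function of the trace ID, no coordination between processes is needed.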
Environments help you organize traces from different contexts (production, staging, development).

Set via environment variable (recommended):
ABV_TRACING_ENVIRONMENT="production"
Python SDK:
from abvdev import ABV, get_client

# Will use environment variable
abv = get_client()

# Or set explicitly
abv = ABV(
    api_key="sk-abv-...",
    host="https://app.abv.dev",
    environment="staging"
)
JS/TS SDK:
.env
ABV_TRACING_ENVIRONMENT="production"
The environment is automatically attached to all traces, observations, scores, and sessions. You can filter by environment in the ABV UI.

Environment naming rules:
  • Cannot start with “abv”
  • Only lowercase letters, numbers, hyphens, and underscores
  • Maximum 40 characters
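The naming rules above can be checked with a single regular expression. A sketch of a client-side check (hypothetical; not the server’s actual validator):

```python
import re

# Not starting with "abv", only lowercase letters, digits, hyphens,
# and underscores, at most 40 characters.
ENV_NAME_RE = re.compile(r"(?!abv)[a-z0-9_-]{1,40}")

def is_valid_environment(name: str) -> bool:
    return ENV_NAME_RE.fullmatch(name) is not None
```

Validating locally before sending avoids traces being rejected for a malformed environment name.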
Common causes and solutions:
  1. Missing flush in serverless/short-lived applications:
    • Python: Call abv.flush() before exit
    • JS/TS: Call await abvSpanProcessor.forceFlush() before exit
  2. Incorrect API credentials:
    • Verify your API key is correct
    • Check if you’re using the right region (US: https://app.abv.dev, EU: https://eu.app.abv.dev)
    • Python: Use abv.auth_check() to verify credentials (don’t use in production)
  3. Instrumentation not loaded:
    • JS/TS: Ensure import "./instrumentation" is the FIRST import in your application
    • Python: Ensure you’ve initialized the client with get_client() or ABV()
  4. Network/firewall issues:
    • Check if your application can reach the ABV API
    • Verify no proxy/firewall is blocking requests
  5. Sampling is too aggressive:
    • Check if you have sampling enabled that might be filtering out traces
    • Temporarily set sample rate to 1.0 (100%) to test
  6. Wrong project:
    • Verify you’re looking at the correct project in the ABV UI
    • Check if the API key belongs to the project you’re viewing
  7. For JS/TS with @vercel/otel:
    • Use manual OpenTelemetry setup via NodeTracerProvider instead of registerOTel from @vercel/otel
    • The @vercel/otel package doesn’t support OpenTelemetry JS SDK v2 yet
  8. Check the logs:
    • Enable debug logging to see what’s happening
    • Python: Set log level in code
    • JS/TS: Set ABV_LOG_LEVEL="DEBUG" in environment variables
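For the Python side, the log level can be raised with the standard logging module. A sketch assuming the SDK logs under the "abvdev" logger name (check your SDK version’s docs for the exact name):

```python
import logging

# Send log records to stderr and raise the SDK logger to DEBUG.
logging.basicConfig(level=logging.INFO)
logging.getLogger("abvdev").setLevel(logging.DEBUG)
```

Debug output shows each batch export attempt, which quickly reveals flush and credential problems.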
  1. Sign in to your ABV account at https://app.abv.dev
  2. Navigate to Project Settings
  3. Go to the API Keys section
  4. Click Create new API credentials
API keys are project-specific and should be stored securely as environment variables:
.env
ABV_API_KEY="sk-abv-..."
ABV_BASE_URL="https://app.abv.dev"  # or https://eu.app.abv.dev for EU
ABV uses OpenTelemetry concepts with LLM-specific enhancements:

Spans (Observations):
  • Generic units of work in your application
  • Can be nested to form a tree structure
  • Examples: API calls, database queries, function executions
  • Created with start_as_current_span() or startActiveObservation()
Generations:
  • Special type of span specifically for LLM calls
  • Include additional fields: model, usage, cost
  • Automatically tracked for metrics and costs
  • Created with as_type="generation" parameter
  • Examples: OpenAI completion, Anthropic message, embeddings
Events:
  • Point-in-time occurrences with no duration
  • Lightweight, don’t have start/end times
  • Examples: logging, status updates, warnings
  • Created with add_event() method
Traces:
  • Collection of related spans/observations
  • Represent a complete workflow or request
  • All spans in a trace share the same trace_id
Example hierarchy:
Trace: "User request"
  Span: "Process query"
    Generation: "LLM call to GPT-4"
    Span: "Database lookup"
    Event: "Cache miss"
  Span: "Format response"
Use generations for LLM calls, spans for other operations, and events for point-in-time logs.
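The defining property of the hierarchy above is that every observation in a trace carries the same trace_id. A toy model of that invariant using plain dataclasses (not the SDK’s actual types):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Observation:
    name: str
    trace_id: str
    kind: str = "span"  # "span", "generation", or "event"
    children: list = field(default_factory=list)

trace_id = uuid.uuid4().hex
root = Observation("Process query", trace_id)
root.children.append(Observation("LLM call to GPT-4", trace_id, kind="generation"))
root.children.append(Observation("Database lookup", trace_id))
root.children.append(Observation("Cache miss", trace_id, kind="event"))

# Every observation, regardless of kind, shares the trace's ID.
all_same = all(obs.trace_id == trace_id for obs in root.children)
```

This shared ID is what lets the backend reassemble spans, generations, and events arriving separately into one trace.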
Metadata and tags help you categorize, filter, and analyze traces.

Metadata (arbitrary JSON object):

Python SDK:
from abvdev import ABV, observe

abv = ABV(api_key="sk-abv-...", host="https://app.abv.dev")

# With decorator
@observe()
def my_function():
    abv.update_current_trace(
        metadata={"user_id": "123", "version": "1.2.3"}
    )
    abv.update_current_span(
        metadata={"stage": "processing"}
    )

# With context manager
with abv.start_as_current_span(name="operation") as span:
    span.update_trace(metadata={"request_id": "req_12345"})
    span.update(metadata={"stage": "parsing"})
JS/TS SDK:
import { startActiveObservation, updateActiveTrace } from "@abvdev/tracing";

await startActiveObservation("operation", async (span) => {
  // Update trace metadata
  updateActiveTrace({
    metadata: { user_id: "123", version: "1.2.3" }
  });

  // Update span metadata
  span.update({
    metadata: { stage: "processing" }
  });
});
Tags (list of strings):

Python SDK:
# With decorator
@observe()
def my_function():
    abv.update_current_trace(tags=["production", "v2", "feature-x"])

# With context manager
with abv.start_as_current_span(name="operation") as span:
    span.update_trace(tags=["experiment-a", "beta"])
JS/TS SDK:
await startActiveObservation("operation", async (span) => {
  updateActiveTrace({
    tags: ["production", "v2", "feature-x"]
  });
});