
Basic Tracing

ABV provides flexible ways to create and manage traces and their constituent observations (spans and generations).

Install package

pip install abvdev

@observe Decorator

The @observe() decorator provides a convenient way to automatically trace function executions, including capturing their inputs, outputs, execution time, and any errors. It supports both synchronous and asynchronous functions.
from abvdev import ABV, observe
import asyncio

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

@observe()
def my_data_processing_function(data, parameter):
    # ... processing logic ...
    return {"processed_data": data, "status": "ok"}

@observe(name="llm-call", as_type="generation")
async def my_async_llm_call(prompt_text):
    # ... async LLM call ...
    await asyncio.sleep(0.1)
    return "LLM response"

my_data_processing_function("test_data", "test_parameter")
asyncio.run(my_async_llm_call("test_prompt"))

Parameters:

  • name: Optional[str]: Custom name for the created span or generation observation. Defaults to the function name.
  • as_type: Optional[Literal["generation"]]: If set to "generation", an ABV generation object is created, suitable for LLM calls. Otherwise, a regular span is created.
  • capture_input: bool: Whether to capture function arguments as input. Defaults to env var ABV_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED or True if not set.
  • capture_output: bool: Whether to capture function return value as output. Defaults to env var ABV_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED or True if not set.
  • transform_to_string: Optional[Callable[[Iterable], str]]: For functions that return generators (sync or async), this callable can be provided to transform the collected chunks into a single string for the output field. If not provided, and all chunks are strings, they will be concatenated. Otherwise, the list of chunks is stored.
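The chunk-handling rules for generator outputs can be sketched in plain Python. The helper below is an illustrative stand-in for the behavior described above, not the SDK's internal code:

```python
from typing import Callable, Iterable, List, Optional


def resolve_output(chunks: List, transform_to_string: Optional[Callable[[Iterable], str]] = None):
    """Illustrative stand-in: how collected generator chunks become the output field."""
    if transform_to_string is not None:
        # A custom transformer wins: the caller decides how chunks become one string.
        return transform_to_string(chunks)
    if all(isinstance(c, str) for c in chunks):
        # Default: if every chunk is a string, they are concatenated.
        return "".join(chunks)
    # Otherwise the list of chunks is stored as-is.
    return chunks


print(resolve_output(["Hel", "lo"]))  # concatenated: "Hello"
print(resolve_output([{"delta": "Hi"}]))  # stored as a list
print(resolve_output([{"delta": "Hi"}], lambda cs: "".join(c["delta"] for c in cs)))  # "Hi"
```

This mirrors why transform_to_string is useful for streaming LLM responses, where each chunk is often a structured delta object rather than a plain string.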

Trace Context and Special Keyword Arguments:

The @observe decorator automatically propagates the OTEL trace context. If a decorated function is called from within an active ABV span (or another OTEL span), the new observation will be nested correctly. You can also pass special keyword arguments to a decorated function to control its tracing behavior:
  • abv_trace_id: str: Explicitly set the trace ID for this function call. Must be a valid W3C Trace Context trace ID (32-char hex). If you have a trace ID from an external system, you can use ABV.create_trace_id(seed=external_trace_id) to generate a valid deterministic ID.
  • abv_parent_observation_id: str: Explicitly set the parent observation ID. Must be a valid W3C Trace Context span ID (16-char hex).
from abvdev import ABV, observe

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

@observe()
def my_function(a, b):
    return a + b

# Call with a specific trace context
my_function(1, 2, abv_trace_id="1234567890abcdef1234567890abcdef")
By default, the @observe decorator captures the args, kwargs, and return value of decorated functions. If those objects are large or deeply nested, this can cause performance issues in your application. To avoid this, explicitly disable function IO capture on the decorated function by passing capture_input=False and/or capture_output=False.
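To make the capture flags concrete, here is a simplified stand-in decorator (not the ABV implementation) showing how capture_input/capture_output gate what gets recorded:

```python
import functools


def simple_observe(capture_input=True, capture_output=True):
    """Toy decorator mimicking the IO-capture behavior described above."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"name": fn.__name__}
            if capture_input:
                record["input"] = {"args": args, "kwargs": kwargs}
            result = fn(*args, **kwargs)
            if capture_output:
                record["output"] = result
            wrapper.last_record = record  # stand-in for sending to a backend
            return result
        return wrapper
    return decorator


@simple_observe(capture_input=False)  # skip recording a large input payload
def process(big_blob):
    return len(big_blob)


process("x" * 1000)
print(process.last_record)  # {'name': 'process', 'output': 1000}
```

With capture_input=False, the (potentially huge) argument never enters the recorded payload, which is exactly the cost the note above warns about.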

Context Managers

You can create spans or generations anywhere in your application. If you need more control than the @observe decorator provides, the primary way to do this is with context managers (Python with statements), which ensure that observations are properly started and ended.
  • abv.start_as_current_span(): Creates a new span and sets it as the currently active observation in the OTEL context for its duration. Any new observations created within this block will be its children.
  • abv.start_as_current_generation(): Similar to the above, but creates a specialized "generation" observation for LLM calls.
from abvdev import ABV, observe

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

with abv.start_as_current_span(
    name="user-request-pipeline",
    input={"user_query": "Tell me a joke about OpenTelemetry"},
) as root_span:
    # This span is now active in the context.

    # Add trace attributes
    root_span.update_trace(
        user_id="user_123",
        session_id="session_abc",
        tags=["experimental", "comedy"]
    )

    # Create a nested generation
    with abv.start_as_current_generation(
        name="joke-generation",
        model="gpt-5-2025-08-07",
        input=[{"role": "user", "content": "Tell me a joke about OpenTelemetry"}],
        model_parameters={"temperature": 0.7}
    ) as generation:
        # Simulate an LLM call
        joke_response = "Why did the OpenTelemetry collector break up with the span? Because it needed more space... for its attributes!"
        token_usage = {"input_tokens": 10, "output_tokens": 25}

        generation.update(
            output=joke_response,
            usage_details=token_usage
        )
        # Generation ends automatically here

    root_span.update(output={"final_joke": joke_response})
    # Root span ends automatically here

Manual Observations

For scenarios where you need to create an observation (a span or generation) without altering the currently active OpenTelemetry context, you can use abv.start_span() or abv.start_generation().
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

span = abv.start_span(name="my-span")
# ... do work ...
span.end() # Important: Manually end the span
If you use manual span management with start_span() or start_generation(), you must remember to call .end() on each observation to ensure data is properly sent to ABV. Failing to end observations can lead to incomplete traces.

Key Characteristics:

  • No Context Shift: Unlike their start_as_current_... counterparts, these methods do not set the new observation as the active one in the OpenTelemetry context. The previously active span (if any) remains the current context for subsequent operations in the main execution flow.
  • Parenting: The observation created by start_span() or start_generation() will still be a child of the span that was active in the context at the moment of its creation.
  • Manual Lifecycle: These observations are not managed by a with block and therefore must be explicitly ended by calling their .end() method.
  • Nesting Children:
    • Subsequent observations created using the global abv.start_as_current_span() (or similar global methods) will not be children of these "manual" observations. Instead, they will be parented by the original active span.
    • To create children directly under a "manual" observation, you would use methods on that specific observation object (e.g., manual_span.start_as_current_span(...)).
When to Use: This approach is useful when you need to:
  • Record work that is self-contained or happens in parallel to the main execution flow but should still be part of the same overall trace (e.g., a background task initiated by a request).
  • Manage the observation's lifecycle explicitly, perhaps because its start and end are determined by non-contiguous events.
  • Obtain an observation object reference before it's tied to a specific context block.
Example with more complex nesting:
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

# This outer span establishes an active context.
with abv.start_as_current_span(name="main-operation") as main_operation_span:
    # 'main_operation_span' is the current active context.

    # 1. Create a "manual" span using abv.start_span().
    #    - It becomes a child of 'main_operation_span'.
    #    - Crucially, 'main_operation_span' REMAINS the active context.
    #    - 'manual_side_task' does NOT become the active context.
    manual_side_task = abv.start_span(name="manual-side-task")
    manual_side_task.update(input="Data for side task")

    # 2. Start another operation that DOES become the active context.
    #    This will be a child of 'main_operation_span', NOT 'manual_side_task',
    #    because 'manual_side_task' did not alter the active context.
    with abv.start_as_current_span(name="core-step-within-main") as core_step_span:
        # 'core_step_span' is now the active context.
        # 'manual_side_task' is still open but not active in the global context.
        core_step_span.update(input="Data for core step")
        # ... perform core step logic ...
        core_step_span.update(output="Core step finished")
    # 'core_step_span' ends. 'main_operation_span' is the active context again.

    # 3. Complete and end the manual side task.
    # This could happen at any point after its creation, even after 'core_step_span'.
    manual_side_task.update(output="Side task completed")
    manual_side_task.end() # Manual end is crucial for 'manual_side_task'

    main_operation_span.update(output="Main operation finished")
# 'main_operation_span' ends automatically here.

# Expected trace structure in ABV:
# - main-operation
#   |- manual-side-task
#   |- core-step-within-main
#     (Note: 'core-step-within-main' is a sibling to 'manual-side-task', both children of 'main-operation')

Nesting Observations

Observe Decorator

The function call hierarchy is automatically captured by the @observe decorator and reflected in the trace.
from abvdev import ABV, observe

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

@observe
def my_data_processing_function(data, parameter):
    # ... processing logic ...
    return {"processed_data": data, "status": "ok"}


@observe
def main_function(data, parameter):
    return my_data_processing_function(data, parameter)

# call function
main_function("test_data", "test_parameter")

Context Managers

Nesting is handled automatically by OpenTelemetry's context propagation. When you create a new observation (span or generation) using start_as_current_span or start_as_current_generation, it becomes a child of the observation that was active in the context when it was created.
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

with abv.start_as_current_span(name="outer-process") as outer_span:
    # outer_span is active

    with abv.start_as_current_observation(as_type='generation', name="llm-step-1") as gen1:
        # gen1 is active, child of outer_span
        gen1.update(output="LLM 1 output")

    with outer_span.start_as_current_observation(as_type='span', name="intermediate-step") as mid_span:
        # mid_span is active, also a child of outer_span
        # This demonstrates using the yielded span object to create children

        with mid_span.start_as_current_observation(as_type='generation', name="llm-step-2") as gen2:
            # gen2 is active, child of mid_span
            gen2.update(output="LLM 2 output")

        mid_span.update(output="Intermediate processing done")

    outer_span.update(output="Outer process finished")

Manual

If you are creating observations manually (not _as_current_), you can use the methods on the parent ABVSpan or ABVGeneration object to create children. These children will not become the current context unless their _as_current_ variants are used.
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

parent = abv.start_span(name="manual-parent")

child_span = parent.start_span(name="manual-child-span")
# ... work ...
child_span.end()

child_gen = parent.start_observation(as_type='generation', name="manual-child-generation")
# ... work ...
child_gen.end()

parent.end()

Updating Observations

You can update observations with new information as your code executes.
  • For spans/generations created via context managers or assigned to variables: use the .update() method on the object.
  • To update the currently active observation in the context (without needing a direct reference to it): use abv.update_current_span() or abv.update_current_generation().
ABVSpan.update() / ABVGeneration.update() parameters:
  • input: Optional[Any]: Input data for the operation. (Both)
  • output: Optional[Any]: Output data from the operation. (Both)
  • metadata: Optional[Any]: Additional metadata (JSON-serializable). (Both)
  • version: Optional[str]: Version identifier for the code/component. (Both)
  • level: Optional[SpanLevel]: Severity: "DEBUG", "DEFAULT", "WARNING", "ERROR". (Both)
  • status_message: Optional[str]: A message describing the status, especially for errors. (Both)
  • completion_start_time: Optional[datetime]: Timestamp when the LLM started generating the completion (streaming). (Generation only)
  • model: Optional[str]: Name/identifier of the AI model used. (Generation only)
  • model_parameters: Optional[Dict[str, MapValue]]: Parameters used for the model call (e.g., temperature). (Generation only)
  • usage_details: Optional[Dict[str, int]]: Token usage (e.g., {"input_tokens": 10, "output_tokens": 20}). (Generation only)
  • cost_details: Optional[Dict[str, float]]: Cost information (e.g., {"total_cost": 0.0023}). (Generation only)
  • prompt: Optional[PromptClient]: Associated PromptClient object from ABV prompt management. (Generation only)
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

with abv.start_as_current_observation(as_type='generation', name="llm-call", model="gpt-5-2025-08-07") as gen:
    gen.update(input={"prompt": "Why is the sky blue?"})
    # ... make LLM call ...
    response_text = "Rayleigh scattering..."
    gen.update(
        output=response_text,
        usage_details={"input_tokens": 5, "output_tokens": 50},
        metadata={"confidence": 0.9}
    )

# Alternatively, update the current observation in context:
with abv.start_as_current_span(name="data-processing"):
    # ... some processing ...
    abv.update_current_span(metadata={"step1_complete": True})
    # ... more processing ...
    abv.update_current_span(output={"result": "final_data"})

Setting Trace Attributes

Trace-level attributes apply to the entire trace, not just a single observation. You can set or update these using:
  • The .update_trace() method on any ABVSpan or ABVGeneration object within that trace.
  • abv.update_current_trace() to update the trace associated with the currently active observation.
Trace attribute parameters:
  • name: Optional[str]: Name for the trace.
  • user_id: Optional[str]: ID of the user associated with this trace.
  • session_id: Optional[str]: Session identifier for grouping related traces.
  • version: Optional[str]: Version of your application/service for this trace.
  • input: Optional[Any]: Overall input for the entire trace.
  • output: Optional[Any]: Overall output for the entire trace.
  • metadata: Optional[Any]: Additional metadata for the trace.
  • tags: Optional[List[str]]: List of tags to categorize the trace.
  • public: Optional[bool]: Whether the trace should be publicly accessible (if configured).
Example: Setting Multiple Trace Attributes
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

with abv.start_as_current_span(name="initial-operation") as span:
    # Set trace attributes early
    span.update_trace(
        user_id="user_xyz",
        session_id="session_789",
        tags=["beta-feature", "llm-chain"]
    )
    # ...
    # Later, from another span in the same trace:
    with span.start_as_current_observation(as_type='generation', name="final-generation") as gen:
        # ...
        abv.update_current_trace(output={"final_status": "success"}, public=True)

Trace Input/Output Behavior

Trace input and output are automatically set from the root observation (first span/generation) by default.

Default Behavior

from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

with abv.start_as_current_span(
    name="user-request",
    input={"query": "What is the capital of France?"}  # This becomes the trace input
) as root_span:

    with abv.start_as_current_observation(
        as_type='generation',
        name="llm-call",
        model="gpt-4o",
        input={"messages": [{"role": "user", "content": "What is the capital of France?"}]}
    ) as gen:
        response = "Paris is the capital of France."
        gen.update(output=response)
        # LLM generation input/output are separate from trace input/output

    root_span.update(output={"answer": "Paris"})  # This becomes the trace output

Override Default Behavior

If you need different trace inputs/outputs than the root observation, explicitly set them:
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

with abv.start_as_current_span(name="complex-pipeline") as root_span:
    # Root span has its own input/output
    root_span.update(input="Step 1 data", output="Step 1 result")

    # But trace should have different input/output (e.g., for LLM-as-a-judge)
    root_span.update_trace(
        input={"original_query": "User's actual question"},
        output={"final_answer": "Complete response", "confidence": 0.95}
    )

    # Now trace input/output are independent of root span input/output

Critical for LLM-as-a-Judge Features

LLM-as-a-judge and evaluation features typically rely on trace-level inputs and outputs. Make sure to set these appropriately:
from abvdev import ABV, observe
abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

@observe()
def process_user_query(user_question: str):
    # LLM processing simulation...
    answer = f"call_llm: {user_question}"

    # Explicitly set trace input/output for evaluation features
    abv.update_current_trace(
        input={"question": user_question},
        output={"answer": answer}
    )

    return answer

# call function
process_user_query("how are you?")

Trace and Observation IDs

ABV uses W3C Trace Context compliant IDs:
  • Trace IDs: 32-character lowercase hexadecimal string (16 bytes).
  • Observation IDs (Span IDs): 16-character lowercase hexadecimal string (8 bytes).
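The two formats above are easy to check with a regular expression. This is a generic sketch, independent of the SDK (note that the W3C spec additionally disallows the all-zero ID, which is omitted here for brevity):

```python
import re

TRACE_ID_RE = re.compile(r"[0-9a-f]{32}")  # 16 bytes -> 32 lowercase hex chars
SPAN_ID_RE = re.compile(r"[0-9a-f]{16}")   # 8 bytes  -> 16 lowercase hex chars


def is_valid_trace_id(value: str) -> bool:
    return bool(TRACE_ID_RE.fullmatch(value))


def is_valid_span_id(value: str) -> bool:
    return bool(SPAN_ID_RE.fullmatch(value))


print(is_valid_trace_id("1234567890abcdef1234567890abcdef"))  # True
print(is_valid_span_id("fedcba0987654321"))                   # True
print(is_valid_trace_id("not-a-valid-id"))                    # False
```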
You can retrieve these IDs:
  • abv.get_current_trace_id(): Gets the trace ID of the currently active observation.
  • abv.get_current_observation_id(): Gets the ID of the currently active observation.
  • span_obj.trace_id and span_obj.id: Access IDs directly from an ABVSpan or ABVGeneration object.
For scenarios where you need to generate IDs outside of an active trace (e.g., to link scores to traces/observations that will be created later, or to correlate with external systems), use:
  • ABV.create_trace_id(seed: Optional[str] = None) (static method): Generates a new trace ID. If a seed is provided, the ID is deterministic. Use the same seed to get the same ID. This is useful for correlating external IDs with ABV traces.
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

# Get current IDs
with abv.start_as_current_span(name="my-op") as current_op:
    trace_id = abv.get_current_trace_id()
    observation_id = abv.get_current_observation_id()
    print(f"Current Trace ID: {trace_id}, Current Observation ID: {observation_id}")
    print(f"From object: Trace ID: {current_op.trace_id}, Observation ID: {current_op.id}")

# Generate IDs deterministically
external_request_id = "req_12345"
deterministic_trace_id = ABV.create_trace_id(seed=external_request_id)
print(f"Deterministic Trace ID for {external_request_id}: {deterministic_trace_id}")
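For intuition, a deterministic 32-character hex ID can be derived from a seed by hashing. The sketch below only illustrates the idea; it is not necessarily how ABV.create_trace_id is implemented, so always use the SDK method for real traces:

```python
import hashlib


def illustrative_trace_id(seed: str) -> str:
    # Hash the seed and keep the first 16 bytes (32 hex chars) -> W3C trace ID shape.
    return hashlib.sha256(seed.encode("utf-8")).hexdigest()[:32]


a = illustrative_trace_id("req_12345")
b = illustrative_trace_id("req_12345")
print(a == b)   # True: same seed, same ID
print(len(a))   # 32
```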
Linking to Existing Traces (Trace Context)

If you have a trace_id (and optionally a parent_span_id) from an external source (e.g., another service, a batch job), you can link new observations to it using the trace_context parameter. Note that OpenTelemetry offers native cross-service context propagation, so this is not necessarily required for calls between services that are instrumented with OTEL.
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

existing_trace_id = "abcdef1234567890abcdef1234567890" # From an upstream service
existing_parent_span_id = "fedcba0987654321" # Optional parent span in that trace

with abv.start_as_current_span(
    name="process-downstream-task",
    trace_context={
        "trace_id": existing_trace_id,
        "parent_span_id": existing_parent_span_id # If None, this becomes a root span in the existing trace
    }
) as span:
    # This span is now part of the trace `existing_trace_id`
    # and a child of `existing_parent_span_id` if provided.
    print(f"This span's trace_id: {span.trace_id}") # Will be existing_trace_id
    pass

Client Management

flush()

Manually triggers the sending of all buffered observations (spans, generations, scores, media metadata) to the ABV API. This is useful in short-lived scripts or before exiting an application to ensure all data is persisted.
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

# ... create traces and observations ...
abv.flush() # Ensures all pending data is sent
The flush() method blocks until the queued data is processed by the respective background threads.

shutdown()

Gracefully shuts down the ABV client. This includes:
  1. Flushing all buffered data (similar to flush()).
  2. Waiting for background threads (for data ingestion and media uploads) to finish their current tasks and terminate.
It's crucial to call shutdown() before your application exits to prevent data loss and ensure clean resource release. The SDK automatically registers an atexit hook to call shutdown() on normal program termination, but manual invocation is recommended in scenarios like:
  • Long-running daemons or services when they receive a shutdown signal.
  • Applications where atexit might not reliably trigger (e.g., certain serverless environments or forceful terminations).
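For a long-running service, one common pattern is to call shutdown() from a signal handler. The sketch below stubs the client with a placeholder class so only the wiring is shown; in a real service you would use your ABV instance instead:

```python
import signal
import sys


class StubClient:
    """Placeholder standing in for an ABV client in this sketch."""
    def shutdown(self):
        print("flushing buffered data and shutting down...")


abv = StubClient()


def handle_sigterm(signum, frame):
    # Flush buffered observations before the process exits.
    abv.shutdown()
    sys.exit(0)


# Invoke the handler when the service receives a shutdown signal.
signal.signal(signal.SIGTERM, handle_sigterm)
```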
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

# ... application logic ...

# Before exiting:
abv.shutdown()

Integrations

Third-party integrations

The ABV SDK seamlessly integrates with any third-party library that uses OpenTelemetry instrumentation. When these libraries emit spans, they are automatically captured and properly nested within your trace hierarchy. This enables unified tracing across your entire application stack without requiring any additional configuration. For example, if you're using OpenTelemetry-instrumented databases, HTTP clients, or other services alongside your LLM operations, all these spans will be correctly organized within your traces in ABV.

Example Anthropic

You can use any third-party, OTEL-based instrumentation library for Anthropic to automatically trace all your Anthropic API calls in ABV. In this example, we are using opentelemetry-instrumentation-anthropic.

Install packages:

pip install opentelemetry-instrumentation-anthropic anthropic
from anthropic import Anthropic
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor

from abvdev import ABV

# ABV client initialization
abv = ABV(
    api_key="sk-abv-...", # your api key here
    host="https://app.abv.dev", # host="https://eu.app.abv.dev", for EU region
)

# This will automatically emit OTEL-spans for all Anthropic API calls
AnthropicInstrumentor().instrument()

anthropic_client = Anthropic(api_key="sk-ant-...")

with abv.start_as_current_span(name="myspan"):
    # This will be traced as an ABV generation nested under the current span
    message = anthropic_client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello, Claude"}],
    )

    print(message.content)

# Flush events to ABV in short-lived applications
abv.flush()

Example LlamaIndex

You can use the third-party, OTEL-based instrumentation library for LlamaIndex to automatically trace your LlamaIndex calls in ABV. In this example, we are using openinference-instrumentation-llama-index.

Install packages:

pip install openinference-instrumentation-llama-index llama-index-llms-openai llama-index -U
from abvdev import get_client
from llama_index.llms.openai import OpenAI
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
import os
 
os.environ["ABV_API_KEY"] = "sk-abv-..." 
os.environ["ABV_HOST"] = "https://app.abv.dev" # "https://eu.app.abv.dev", for EU region
 
# Your OpenAI key
os.environ["OPENAI_API_KEY"] = "sk-proj-..." 


LlamaIndexInstrumentor().instrument()

abv = get_client()
llm = OpenAI(model="gpt-4o")

with abv.start_as_current_span(name="myspan"):
    response = llm.complete("Hello, world!")
    print(response)

abv.flush()