# Python SDK - Instrumentation
## Basic tracing

abv provides flexible ways to create and manage traces and their constituent observations (spans and generations).

### `@observe` decorator

The `@observe()` decorator provides a convenient way to automatically trace function executions, including capturing their inputs, outputs, execution time, and any errors. It supports both synchronous and asynchronous functions.

```python
from abvdev import observe

@observe()
def my_data_processing_function(data, parameter):
    # ... processing logic ...
    return {"processed_data": data, "status": "ok"}

@observe(name="llm-call", as_type="generation")
async def my_async_llm_call(prompt_text):
    # ... async LLM call ...
    return "llm response"
```

**Parameters:**

- `name: Optional[str]`: Custom name for the created span/generation. Defaults to the function name.
- `as_type: Optional[Literal["generation"]]`: If set to `"generation"`, an abv generation object is created, suitable for LLM calls; otherwise, a regular span is created.
- `capture_input: bool`: Whether to capture function arguments as input. Defaults to the env var `ABV_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED`, or `True` if not set.
- `capture_output: bool`: Whether to capture the function's return value as output. Defaults to the env var `ABV_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED`, or `True` if not set.
- `transform_to_string: Optional[Callable[[Iterable], str]]`: For functions that return generators (sync or async), this callable can be provided to transform the collected chunks into a single string for the `output` field. If not provided and all chunks are strings, they are concatenated; otherwise, the list of chunks is stored.

**Trace context and special keyword arguments**

The `@observe` decorator automatically propagates the OTel trace context: if a decorated function is called from within an active abv span (or another OTel span), the new observation is nested correctly. You can also pass special keyword arguments to a decorated function to control its tracing behavior:

- `abv_trace_id: str`: Explicitly set the trace ID for this function call. Must be a valid W3C Trace Context trace ID (32-char hex). If you have a trace ID from an external system, you can use `abv.create_trace_id(seed=external_trace_id)` to generate a valid deterministic ID.
- `abv_parent_observation_id: str`: Explicitly set the parent observation ID. Must be a valid W3C Trace Context span ID (16-char hex).

```python
@observe()
def my_function(a, b):
    return a + b

# Call with a specific trace context
my_function(1, 2, abv_trace_id="1234567890abcdef1234567890abcdef")
```

By default, the `@observe` decorator captures the args, kwargs, and return value of decorated functions. This may lead to performance issues in your application if you pass large or deeply nested objects. To avoid this, explicitly disable function I/O capture on the decorated function by passing `capture_input` / `capture_output` with value `False`, or disable it globally by setting the environment variable `ABV_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED=false`.

### Context managers

You can create spans or generations anywhere in your application if you need more control than the `@observe` decorator. The primary way to do this is using context managers (`with` statements), which ensure that observations are properly started and ended:

- `abv.start_as_current_span()`: Creates a new span and sets it as the currently active observation in the OTel context for its duration. Any new observations created within this block will be its children.
- `abv.start_as_current_generation()`: Similar to the above, but creates a specialized "generation" observation for LLM calls.

```python
from abvdev import get_client

abv = get_client()

with abv.start_as_current_span(
    name="user-request-pipeline",
    input={"user_query": "Tell me a joke about OpenTelemetry"},
) as root_span:
    # This span is now active in the context.

    # Add trace attributes
    root_span.update_trace(
        user_id="user_123",
        session_id="session_abc",
        tags=["experimental", "comedy"],
    )

    # Create a nested generation
    with abv.start_as_current_generation(
        name="joke-generation",
        model="gpt-5-2025-08-07",
        input=[{"role": "user", "content": "Tell me a joke about OpenTelemetry"}],
        model_parameters={"temperature": 0.7},
    ) as generation:
        # Simulate an LLM call
        joke_response = "Why did the OpenTelemetry collector break up with the span? Because it needed more space for its attributes!"
        token_usage = {"input_tokens": 10, "output_tokens": 25}

        generation.update(
            output=joke_response,
            usage_details=token_usage,
        )
        # Generation ends automatically here

    root_span.update(output={"final_joke": joke_response})
    # Root span ends automatically here
```

### Manual observations

For scenarios where you need to create an observation (a span or generation) without altering the currently active OpenTelemetry context, you can use `abv.start_span()` or `abv.start_generation()`.

```python
from abvdev import get_client

abv = get_client()

span = abv.start_span(name="my-span")
# ...
span.end()  # Important: manually end the span
```

If you use `abv.start_span()` or `abv.start_generation()`, you are responsible for calling `end()` on the returned observation object. Failure to do so will result in incomplete or missing observations in abv. Their `start_as_current_` counterparts, used with a `with` statement, handle this automatically.

**Key characteristics:**

- **No context shift**: Unlike their `start_as_current_` counterparts, these methods do not set the new observation as the active one in the OpenTelemetry context. The previously active span (if any) remains the current context for subsequent operations in the main execution flow.
- **Parenting**: The observation created by `start_span()` or `start_generation()` will still be a child of the span that was active in the context at the moment of its creation.
- **Manual lifecycle**: These observations are not managed by a `with` block and therefore must be explicitly ended by calling their `end()` method.
- **Nesting children**: Subsequent observations created using the global `abv.start_as_current_span()` (or similar global methods) will *not* be children of these "manual" observations. Instead, they will be parented by the original active span. To create children directly under a "manual" observation, use methods on that specific observation object (e.g., `manual_span.start_as_current_span(...)`).

**When to use**: This approach is useful when you need to:

- Record work that is self-contained or happens in parallel to the main execution flow but should still be part of the same overall trace (e.g., a background task initiated by a request).
- Manage the observation's lifecycle explicitly, perhaps because its start and end are determined by non-contiguous events.
- Obtain an observation object reference before it's tied to a specific context block.

**Example with more complex nesting:**

```python
from abvdev import get_client

abv = get_client()

# This outer span establishes an active context
with abv.start_as_current_span(name="main-operation") as main_operation_span:
    # 'main_operation_span' is the current active context

    # 1. Create a "manual" span using abv.start_span().
    #    It becomes a child of 'main_operation_span'.
    #    Crucially, 'main_operation_span' remains the active context;
    #    'manual_side_task' does not become the active context.
    manual_side_task = abv.start_span(name="manual-side-task")
    manual_side_task.update(input="data for side task")

    # 2. Start another operation that DOES become the active context.
    #    This will be a child of 'main_operation_span', not 'manual_side_task',
    #    because 'manual_side_task' did not alter the active context.
    with abv.start_as_current_span(name="core-step-within-main") as core_step_span:
        # 'core_step_span' is now the active context.
        # 'manual_side_task' is still open but not active in the global context.
        core_step_span.update(input="data for core step")
        # ... perform core step logic ...
        core_step_span.update(output="core step finished")
    # 'core_step_span' ends; 'main_operation_span' is the active context again.

    # 3. Complete and end the manual side task. This could happen at any
    #    point after its creation, even after 'core_step_span' has ended.
    manual_side_task.update(output="side task completed")
    manual_side_task.end()  # Manual end is crucial for 'manual_side_task'

    main_operation_span.update(output="main operation finished")
# 'main_operation_span' ends automatically here

# Expected trace structure in abv:
# main-operation
# |-- manual-side-task
# |-- core-step-within-main
# (Note: 'core-step-within-main' is a sibling of 'manual-side-task';
#  both are children of 'main-operation'.)
```

## Nesting observations

### `@observe` decorator

The function call hierarchy is automatically captured by the `@observe` decorator and reflected in the trace:

```python
from abvdev import observe

@observe
def my_data_processing_function(data, parameter):
    # ... processing logic ...
    return {"processed_data": data, "status": "ok"}

@observe
def main_function(data, parameter):
    return my_data_processing_function(data, parameter)
```

### Context managers

Nesting is handled automatically by OpenTelemetry's context propagation. When you create a new observation (span or generation) using `start_as_current_span` or `start_as_current_generation`, it becomes a child of the observation that was active in the context when it was created.

```python
from abvdev import get_client

abv = get_client()

with abv.start_as_current_span(name="outer-process") as outer_span:
    # outer_span is active
    with abv.start_as_current_generation(name="llm-step-1") as gen1:
        # gen1 is active, child of outer_span
        gen1.update(output="LLM 1 output")

    with outer_span.start_as_current_span(name="intermediate-step") as mid_span:
        # mid_span is active, also a child of outer_span.
        # This demonstrates using the yielded span object to create children.
        with mid_span.start_as_current_generation(name="llm-step-2") as gen2:
            # gen2 is active, child of mid_span
            gen2.update(output="LLM 2 output")
        mid_span.update(output="intermediate processing done")

    outer_span.update(output="outer process finished")
```
### Manual

If you are creating observations manually (not `_as_current_`), you can use the methods on the parent `AbvSpan` or `AbvGeneration` object to create children. These children will not become the current context unless their `_as_current_` variants are used.

```python
from abvdev import get_client

abv = get_client()

parent = abv.start_span(name="manual-parent")

child_span = parent.start_span(name="manual-child-span")
# ... work ...
child_span.end()

child_gen = parent.start_generation(name="manual-child-generation")
# ... work ...
child_gen.end()

parent.end()
```

## Updating observations

You can update observations with new information as your code executes:

- For spans/generations created via context managers or assigned to variables, use the `update()` method on the object.
- To update the currently active observation in the context (without needing a direct reference to it), use `abv.update_current_span()` or `abv.update_current_generation()`.

**`AbvSpan.update()` / `AbvGeneration.update()` parameters:**

| Parameter | Type | Description | Applies to |
|---|---|---|---|
| `input` | `Optional[Any]` | Input data for the operation | Both |
| `output` | `Optional[Any]` | Output data from the operation | Both |
| `metadata` | `Optional[Any]` | Additional metadata (JSON-serializable) | Both |
| `version` | `Optional[str]` | Version identifier for the code/component | Both |
| `level` | `Optional[SpanLevel]` | Severity: `"DEBUG"`, `"DEFAULT"`, `"WARNING"`, `"ERROR"` | Both |
| `status_message` | `Optional[str]` | A message describing the status, especially for errors | Both |
| `completion_start_time` | `Optional[datetime]` | Timestamp when the LLM started generating the completion (streaming) | Generation |
| `model` | `Optional[str]` | Name/identifier of the AI model used | Generation |
| `model_parameters` | `Optional[Dict[str, MapValue]]` | Parameters used for the model call (e.g., temperature) | Generation |
| `usage_details` | `Optional[Dict[str, int]]` | Token usage (e.g., `{"input_tokens": 10, "output_tokens": 20}`) | Generation |
| `cost_details` | `Optional[Dict[str, float]]` | Cost information (e.g., `{"total_cost": 0.0023}`) | Generation |
| `prompt` | `Optional[PromptClient]` | Associated `PromptClient` object from abv prompt management | Generation |

```python
from abvdev import get_client

abv = get_client()

with abv.start_as_current_generation(name="llm-call", model="gpt-5-2025-08-07") as gen:
    gen.update(input={"prompt": "Why is the sky blue?"})

    # ... make LLM call ...
    response_text = "Rayleigh scattering..."

    gen.update(
        output=response_text,
        usage_details={"input_tokens": 5, "output_tokens": 50},
        metadata={"confidence": 0.9},
    )

# Alternatively, update the current observation in context:
with abv.start_as_current_span(name="data-processing"):
    # ... some processing ...
    abv.update_current_span(metadata={"step1_complete": True})
    # ... more processing ...
    abv.update_current_span(output={"result": "final data"})
```

## Setting trace attributes

Trace-level attributes apply to the entire trace, not just a single observation. You can set or update them using:

- the `update_trace()` method on any `AbvSpan` or `AbvGeneration` object within that trace, or
- `abv.update_current_trace()` to update the trace associated with the currently active observation.

**Trace attribute parameters:**

| Parameter | Type | Description |
|---|---|---|
| `name` | `Optional[str]` | Name for the trace |
| `user_id` | `Optional[str]` | ID of the user associated with this trace |
| `session_id` | `Optional[str]` | Session identifier for grouping related traces |
| `version` | `Optional[str]` | Version of your application/service for this trace |
| `input` | `Optional[Any]` | Overall input for the entire trace |
| `output` | `Optional[Any]` | Overall output for the entire trace |
| `metadata` | `Optional[Any]` | Additional metadata for the trace |
| `tags` | `Optional[List[str]]` | List of tags to categorize the trace |
| `public` | `Optional[bool]` | Whether the trace should be publicly accessible (if configured) |

**Example: setting multiple trace attributes**

```python
from abvdev import get_client

abv = get_client()

with abv.start_as_current_span(name="initial-operation") as span:
    # Set trace attributes early
    span.update_trace(
        user_id="user_xyz",
        session_id="session_789",
        tags=["beta-feature", "llm-chain"],
    )

    # ...

    # Later, from another span in the same trace:
    with span.start_as_current_generation(name="final-generation") as gen:
        # ...
        abv.update_current_trace(output={"final_status": "success"}, public=True)
```

## Trace input/output behavior

Trace input and output are automatically set from the root observation (the first span/generation) by default.

**Default behavior:**

```python
from abvdev import get_client

abv = get_client()

with abv.start_as_current_span(
    name="user-request",
    input={"query": "What is the capital of France?"},  # This becomes the trace input
) as root_span:
    with abv.start_as_current_generation(
        name="llm-call",
        model="gpt-4o",
        input={"messages": [{"role": "user", "content": "What is the capital of France?"}]},
    ) as gen:
        response = "Paris is the capital of France."
        gen.update(output=response)
        # The LLM generation's input/output are separate from the trace input/output

    root_span.update(output={"answer": "Paris"})  # This becomes the trace output
```

**Override default behavior:**

If you need different trace inputs/outputs than the root observation, explicitly set them:

```python
from abvdev import get_client

abv = get_client()

with abv.start_as_current_span(name="complex-pipeline") as root_span:
    # The root span has its own input/output
    root_span.update(input="step 1 data", output="step 1 result")

    # But the trace should have different input/output (e.g., for LLM-as-a-judge)
    root_span.update_trace(
        input={"original_query": "user's actual question"},
        output={"final_answer": "complete response", "confidence": 0.95},
    )
    # Now trace input/output are independent of the root span's input/output
```

**Critical for LLM-as-a-judge features:**

LLM-as-a-judge and evaluation features typically rely on trace-level inputs and outputs, so make sure to set these appropriately:

```python
from abvdev import observe, get_client

abv = get_client()

@observe()
def process_user_query(user_question: str):
    # ... LLM processing ...
    answer = call_llm(user_question)

    # Explicitly set trace input/output for evaluation features
    abv.update_current_trace(
        input={"question": user_question},
        output={"answer": answer},
    )
    return answer
```
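The default-vs-override behavior described above can be summarized in a tiny plain-Python model: the trace's input/output fall back to the root observation's values unless `update_trace()`-style explicit values are set. This is an illustrative model only, with invented names; it is not how the abv SDK stores these fields.

```python
# Illustrative model of trace input/output resolution (NOT the abv SDK).
class TraceModel:
    def __init__(self):
        self.root_input = None       # from the root observation
        self.root_output = None
        self.explicit_input = None   # from an explicit update_trace()-style call
        self.explicit_output = None

    @property
    def input(self):
        # Explicit value wins; otherwise fall back to the root observation.
        return self.explicit_input if self.explicit_input is not None else self.root_input

    @property
    def output(self):
        return self.explicit_output if self.explicit_output is not None else self.root_output

trace = TraceModel()
trace.root_input = {"query": "What is the capital of France?"}
trace.root_output = {"answer": "Paris"}
assert trace.input == {"query": "What is the capital of France?"}

# An explicit override takes precedence over the root observation:
trace.explicit_output = {"final_answer": "complete response", "confidence": 0.95}
assert trace.output["confidence"] == 0.95
```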
## Trace and observation IDs

abv uses W3C Trace Context-compliant IDs:

- **Trace IDs**: 32-character lowercase hexadecimal string (16 bytes)
- **Observation IDs (span IDs)**: 16-character lowercase hexadecimal string (8 bytes)

You can retrieve these IDs:

- `abv.get_current_trace_id()`: Gets the trace ID of the currently active observation.
- `abv.get_current_observation_id()`: Gets the ID of the currently active observation.
- `span_obj.trace_id` and `span_obj.id`: Access IDs directly from an `AbvSpan` or `AbvGeneration` object.

For scenarios where you need to generate IDs outside of an active trace (e.g., to link scores to traces/observations that will be created later, or to correlate with external systems), use:

- `abv.create_trace_id(seed: Optional[str] = None)` (static method): Generates a new trace ID. If a seed is provided, the ID is deterministic: the same seed yields the same ID. This is useful for correlating external IDs with abv traces.

```python
from abvdev import get_client

abv = get_client()

# Get current IDs
with abv.start_as_current_span(name="my-op") as current_op:
    trace_id = abv.get_current_trace_id()
    observation_id = abv.get_current_observation_id()
    print(f"Current trace ID: {trace_id}, current observation ID: {observation_id}")
    print(f"From object: trace ID {current_op.trace_id}, observation ID {current_op.id}")

# Generate IDs deterministically
external_request_id = "req_12345"
deterministic_trace_id = abv.create_trace_id(seed=external_request_id)
print(f"Deterministic trace ID for {external_request_id}: {deterministic_trace_id}")
```

## Linking to existing traces (trace context)

If you have a `trace_id` (and optionally a `parent_span_id`) from an external source (e.g., another service or a batch job), you can link new observations to it using the `trace_context` parameter. Note that OpenTelemetry offers native cross-service context propagation, so this is not necessarily required for calls between services that are instrumented with OTel.

```python
from abvdev import get_client

abv = get_client()

existing_trace_id = "abcdef1234567890abcdef1234567890"  # From an upstream service
existing_parent_span_id = "fedcba0987654321"            # Optional parent span in that trace

with abv.start_as_current_span(
    name="process-downstream-task",
    trace_context={
        "trace_id": existing_trace_id,
        "parent_span_id": existing_parent_span_id,  # If None, this becomes a root span in the existing trace
    },
) as span:
    # This span is now part of the trace `existing_trace_id`
    # and a child of `existing_parent_span_id`, if provided.
    print(f"This span's trace ID: {span.trace_id}")  # Will be existing_trace_id
```

## Client management

### `flush()`

Manually triggers the sending of all buffered observations (spans, generations, scores, media metadata) to the abv API. This is useful in short-lived scripts or before exiting an application to ensure all data is persisted.

```python
from abvdev import get_client

abv = get_client()

# ... create traces and observations ...

abv.flush()  # Ensures all pending data is sent
```

The `flush()` method blocks until the queued data has been processed by the respective background threads.

### `shutdown()`

Gracefully shuts down the abv client. This includes:

- Flushing all buffered data (similar to `flush()`).
- Waiting for background threads (for data ingestion and media uploads) to finish their current tasks and terminate.

It's crucial to call `shutdown()` before your application exits to prevent data loss and ensure a clean release of resources. The SDK automatically registers an `atexit` hook to call `shutdown()` on normal program termination, but manual invocation is recommended in scenarios like:

- Long-running daemons or services, when they receive a shutdown signal.
- Applications where `atexit` might not reliably trigger (e.g., certain serverless environments or forceful terminations).

```python
from abvdev import get_client

abv = get_client()

# ... application logic ...

# Before exiting:
abv.shutdown()
```

## Integrations

### Third-party integrations

The abv SDK seamlessly integrates with any third-party library that uses OpenTelemetry instrumentation. When these libraries emit spans, they are automatically captured and properly nested within your trace hierarchy. This enables unified tracing across your entire application stack without requiring any additional configuration.

For example, if you're using OpenTelemetry-instrumented databases, HTTP clients, or other services alongside your LLM operations, all of these spans will be correctly organized within your traces in abv.

### Example: Anthropic

You can use any third-party, OTel-based instrumentation library for Anthropic to automatically trace all your Anthropic API calls in abv. In this example, we are using [opentelemetry-instrumentation-anthropic](https://pypi.org/project/opentelemetry-instrumentation-anthropic/).

```python
from anthropic import Anthropic
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
from abvdev import get_client

# This will automatically emit OTel spans for all Anthropic API calls
AnthropicInstrumentor().instrument()

abv = get_client()
anthropic_client = Anthropic()

with abv.start_as_current_span(name="myspan"):
    # This will be traced as an abv generation nested under the current span
    message = anthropic_client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello, Claude"}],
    )
    print(message.content)

# Flush events to abv in short-lived applications
abv.flush()
```

### Example: LlamaIndex

You can use the third-party, OTel-based instrumentation library for LlamaIndex to automatically trace your LlamaIndex calls in abv. In this example, we are using [openinference-instrumentation-llama-index](https://pypi.org/project/openinference-instrumentation-llama-index/).

```bash
pip install abvdev openinference-instrumentation-llama-index llama-index-llms-openai llama-index -U
```

```python
import os

from abvdev import get_client
from llama_index.llms.openai import OpenAI
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

os.environ["ABV_API_KEY"] = "sk-abv-..."
os.environ["ABV_HOST"] = "https://app.abv.dev"

# Your OpenAI key
os.environ["OPENAI_API_KEY"] = "sk-proj-..."

LlamaIndexInstrumentor().instrument()

abv = get_client()

llm = OpenAI(model="gpt-4o")

with abv.start_as_current_span(name="myspan"):
    response = llm.complete("Hello, world!")
    print(response)

abv.flush()
```
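Returning to the seeded trace IDs discussed in the IDs section: a deterministic, W3C-format trace ID can be derived by hashing the seed and keeping 32 lowercase hex characters. The sketch below is a standalone illustration of one possible approach and an assumption on our part — not necessarily what `abv.create_trace_id` does internally.

```python
# One possible way to derive a deterministic 32-char hex trace ID from a seed
# (an assumption for illustration -- NOT necessarily the abv SDK's algorithm).
import hashlib

def deterministic_trace_id(seed: str) -> str:
    # SHA-256 yields 64 hex chars; a W3C trace ID needs 32 (16 bytes).
    return hashlib.sha256(seed.encode("utf-8")).hexdigest()[:32]

tid = deterministic_trace_id("req_12345")
assert len(tid) == 32
assert all(c in "0123456789abcdef" for c in tid)
assert tid == deterministic_trace_id("req_12345")  # same seed, same ID
```

The useful property, whatever the hash, is that the mapping from external ID to trace ID is stable, so scores or later observations keyed by the external ID land on the same trace.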