The Python SDK provides advanced usage options for your application. This includes data masking, logging, sampling, filtering, and more.
Masking Sensitive Data
If your trace data (inputs, outputs, metadata) might contain sensitive information (PII, secrets), you can provide a mask function during client initialization. This function will be applied to all relevant data before it’s sent to ABV.
The mask function should accept data as a keyword argument and return the masked data. The returned data must be JSON-serializable.
from abvdev import ABV
import re
from typing import Any

def pii_masker(data: Any, **kwargs) -> Any:
    # Example: simple email masking. Implement your more robust logic here.
    if isinstance(data, str):
        return re.sub(
            r"[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+",
            "[EMAIL_REDACTED]",
            data,
        )
    elif isinstance(data, dict):
        return {k: pii_masker(data=v) for k, v in data.items()}
    elif isinstance(data, list):
        return [pii_masker(data=item) for item in data]
    return data

abv = ABV(
    mask=pii_masker,
    api_key="sk-abv-...",  # your api key here
    host="https://app.abv.dev",  # host="https://eu.app.abv.dev" for EU region
)

# Now, any input/output/metadata will be passed through pii_masker
with abv.start_as_current_span(
    name="user-query",
    input={"email": "[email protected]", "query": "..."},
) as span:
    # The 'email' field in the input will be masked.
    pass
Logging
The ABV SDK uses Python’s standard logging module. The main logger is named "abv".
To enable detailed debug logging, you can do one of the following:
- Set the debug=True parameter when initializing the ABV client.
- Set the ABV_DEBUG="True" environment variable.
- Configure the "abv" logger manually:
import logging
abv_logger = logging.getLogger("abv")
abv_logger.setLevel(logging.DEBUG)
The default log level for the "abv" logger is logging.WARNING.
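For example, enabling debug logging at client initialization (equivalent to setting the ABV_DEBUG environment variable):

from abvdev import ABV

# Enable detailed debug logging for this client
abv = ABV(
    debug=True,
    api_key="sk-abv-...",  # your api key here
    host="https://app.abv.dev",
)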
Sampling
You can configure the SDK to sample traces by setting the sample_rate parameter during client initialization (or via the ABV_SAMPLE_RATE environment variable). This value should be a float between 0.0 (sample 0% of traces) and 1.0 (sample 100% of traces).
If a trace is not sampled, none of its observations (spans, generations) or associated scores will be sent to ABV.
from abvdev import ABV

# Sample approximately 20% of traces
abv_sampled = ABV(
    sample_rate=0.2,
    api_key="sk-abv-...",  # your api key here
    host="https://app.abv.dev",  # host="https://eu.app.abv.dev" for EU region
)
Filtering by Instrumentation Scope
You can configure the SDK to filter out spans from specific instrumentation libraries by using the blocked_instrumentation_scopes parameter. This is useful when you want to exclude infrastructure spans while keeping your LLM and application spans.
from abvdev import ABV

# Filter out database spans
abv = ABV(
    blocked_instrumentation_scopes=["sqlalchemy", "psycopg"],
    api_key="sk-abv-...",  # your api key here
    host="https://app.abv.dev",  # host="https://eu.app.abv.dev" for EU region
)
How it works:
When third-party libraries create OpenTelemetry spans (through their instrumentation packages), each span has an associated “instrumentation scope” that identifies which library created it. The ABV SDK filters spans at the export level based on these scope names.
You can see the instrumentation scope name for any span in the ABV UI under the span’s metadata (metadata.scope.name). Use this to identify which scopes you want to filter.
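If you want to identify scope names locally during development, a minimal sketch (using standard OpenTelemetry SDK components, not an ABV-specific API) is to print spans to the console, since each exported span carries its instrumentation scope:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# For local debugging only: print every span to stdout. The JSON output
# includes the instrumentation scope ("instrumentation_scope" in recent
# SDK versions) identifying the library that created the span.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)  # set before instrumenting libraries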
Cross-Library Span Relationships
When filtering instrumentation scopes, be aware that blocking certain libraries may break trace tree relationships if spans from blocked and non-blocked libraries are nested together.
For example, if you block parent spans but keep child spans from a separate library, you may see "orphaned" LLM spans whose parent spans were filtered out. This can make traces harder to interpret.
Consider the impact on trace structure when choosing which scopes to filter.
Isolated TracerProvider
You can configure a separate OpenTelemetry TracerProvider for use with ABV. This creates isolation between ABV tracing and your other observability systems.
Benefits of isolation:
- ABV spans won’t be sent to your other observability backends (e.g., Datadog, Jaeger, Zipkin)
- Third-party library spans won’t be sent to ABV
- Independent configuration and sampling rates
While TracerProviders are isolated, they share the same OpenTelemetry context for tracking active spans. This can cause span relationship issues where:
- A parent span from one TracerProvider might have children from another TracerProvider
- Some spans may appear “orphaned” if their parent spans belong to a different TracerProvider
- Trace hierarchies may be incomplete or confusing
Plan your instrumentation carefully to avoid confusing trace structures.
from opentelemetry.sdk.trace import TracerProvider
from abvdev import ABV

# Do not set as the global tracer provider, to keep isolation
abv_tracer_provider = TracerProvider()

abv = ABV(
    tracer_provider=abv_tracer_provider,
    api_key="sk-abv-...",  # your api key here
    host="https://app.abv.dev",  # host="https://eu.app.abv.dev" for EU region
)

# Span will be isolated from the remaining OTel instrumentation
abv.start_span(name="myspan").end()
Using ThreadPoolExecutors or ProcessPoolExecutors
The observe decorator uses Python's contextvars to store the current trace context and to ensure that observations are correctly associated with the current execution context. However, when you use Python's ThreadPoolExecutor or ProcessPoolExecutor and spawn threads or processes from inside a trace (i.e., the executor runs inside a decorated function), the decorator will not work correctly because the contextvars are not copied to the new threads or processes. There is an open issue in Python's standard library and a good explanation in the FastAPI repository that discusses this limitation.
The recommended workaround is to pass the parent observation id and the trace ID as a keyword argument to each multithreaded execution, thus re-establishing the link to the parent span or trace:
from concurrent.futures import ThreadPoolExecutor, as_completed
from abvdev import ABV, observe

abv = ABV(
    api_key="sk-abv-...",  # your api key here
    host="https://app.abv.dev",  # host="https://eu.app.abv.dev" for EU region
)

@observe
def execute_task(*args):
    return args

@observe
def execute_groups(task_args):
    trace_id = abv.get_current_trace_id()
    observation_id = abv.get_current_observation_id()

    with ThreadPoolExecutor(3) as executor:
        futures = [
            executor.submit(
                execute_task,
                *task_arg,
                abv_trace_id=trace_id,
                abv_parent_observation_id=observation_id,
            )
            for task_arg in task_args
        ]

        for future in as_completed(futures):
            future.result()

    return [f.result() for f in futures]

@observe
def main():
    task_args = [["a", "b"], ["c", "d"]]
    execute_groups(task_args)

main()

abv.flush()
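The same pattern should apply to ProcessPoolExecutor: the trace ID and parent observation ID are plain strings, so they can be pickled and passed as keyword arguments to worker processes to re-establish the link there as well.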
Distributed tracing
To maintain the trace context across service or process boundaries, rely on OpenTelemetry's native context propagation as much as possible.
Using the trace_context argument to 'force' a parent-child relationship may lead to unexpected trace updates, as the resulting span will be treated as a root span server-side.
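As a minimal sketch of native propagation (using OpenTelemetry's standard propagation API, not an ABV-specific one), the calling service injects the active context into outgoing request headers and the receiving service extracts it before starting its spans; the function names here are illustrative:

from opentelemetry import trace
from opentelemetry.propagate import extract, inject

tracer = trace.get_tracer("my-service")

# Service A: inject the active trace context into outgoing request headers
def call_downstream(make_request):
    headers = {}
    inject(headers)  # writes W3C traceparent/tracestate headers
    return make_request(headers=headers)

# Service B: extract the incoming context and continue the trace under it
def handle_request(incoming_headers):
    ctx = extract(incoming_headers)
    with tracer.start_as_current_span("handle-request", context=ctx):
        ...  # spans created here join the caller's trace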
Multi-Project Setup (Experimental)
Multi-project setups are experimental
The ABV Python SDK supports routing traces to different projects within the same application by using multiple api keys. This works because the ABV SDK adds a specific span attribute containing the api key to all spans it generates.
How it works:
- Span Attributes: The ABV SDK adds a specific span attribute containing the api key to spans it creates
- Multiple Processors: Multiple span processors are registered onto the global tracer provider, each with their respective exporters bound to a specific api key
- Filtering: Within each span processor, spans are filtered based on the presence and value of the api key attribute
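Conceptually, such a processor could look like the following sketch (not the actual ABV implementation; the class name and attribute key are hypothetical):

from opentelemetry.sdk.trace import SpanProcessor

class KeyScopedSpanProcessor(SpanProcessor):  # hypothetical name
    def __init__(self, inner_processor, api_key, attribute_key="abv.api_key"):
        self._inner = inner_processor        # processor bound to one exporter
        self._api_key = api_key
        self._attribute_key = attribute_key  # attribute name is an assumption

    def on_start(self, span, parent_context=None):
        self._inner.on_start(span, parent_context)

    def on_end(self, span):
        # Spans without the attribute (e.g., third-party spans) pass through
        # every processor, which is why they appear in all projects.
        key = (span.attributes or {}).get(self._attribute_key)
        if key is None or key == self._api_key:
            self._inner.on_end(span)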
Important Limitation with Third-Party Libraries:
Third-party libraries that emit OpenTelemetry spans automatically (e.g., HTTP clients, databases, other instrumentation libraries) do not have the ABV api key span attribute. As a result:
- These spans cannot be routed to a specific project
- They are processed by all span processors and sent to all projects
- All projects will receive these third-party spans
Why is this experimental?
This approach requires that the api_key parameter be passed to all ABV SDK executions across all integrations to ensure proper routing, and third-party spans will appear in all projects.
Initialization
To set up multiple projects, initialize separate ABV clients for each project:
from abvdev import ABV

# Initialize clients for different projects
project_a_client = ABV(
    api_key="sk-abv-project-a-...",
    host="https://app.abv.dev",  # or https://eu.app.abv.dev
)

project_b_client = ABV(
    api_key="sk-abv-project-b-...",
    host="https://app.abv.dev",  # or https://eu.app.abv.dev
)
Integration Usage
For all integrations in multi-project setups, you must specify the api_key parameter to ensure traces are routed to the correct project.
Observe Decorator:
Pass abv_api_key as a keyword argument to the top-most observed function (not to the decorator). Nested decorated functions automatically pick up the api key from the execution context they run in. Calls to get_client are also aware of the current abv_api_key in the decorated function's execution context, so passing abv_api_key again is not necessary.
from abvdev import observe, get_client

@observe
def nested():
    # get_client is context-aware: if it runs inside another decorated
    # function that received abv_api_key, it does not need it passed again
    get_client().update_current_trace(user_id='myuser')

@observe
def process_data_for_project_a(data):
    # Passing `abv_api_key` here again is not necessary,
    # as it is stored in the execution context
    nested()
    return {"processed": data}

@observe
def process_data_for_project_b(data):
    # Passing `abv_api_key` here again is not necessary,
    # as it is stored in the execution context
    nested()
    return {"enhanced": data}

# Route to Project A
# Top-most decorated function needs the `abv_api_key` kwarg
result_a = process_data_for_project_a(
    data="input data",
    abv_api_key="sk-abv-project-a-...",
)

# Route to Project B
# Top-most decorated function needs the `abv_api_key` kwarg
result_b = process_data_for_project_b(
    data="input data",
    abv_api_key="sk-abv-project-b-...",
)
Important Considerations:
- Every ABV SDK execution across all integrations must include the appropriate api key parameter
- Missing api key parameters may result in traces being routed to the default project or lost
- Third-party OpenTelemetry spans (from HTTP clients, databases, etc.) will appear in all projects since they lack the ABV api key attribute
Passing completion_start_time for TTFT tracking
If you are using the Python SDK to manually create generations, you can pass the completion_start_time parameter. This allows ABV to calculate the time to first token (TTFT) for you.
from abvdev import ABV
import datetime
import time

abv = ABV(
    api_key="sk-abv-...",  # your api key here
    host="https://app.abv.dev",  # host="https://eu.app.abv.dev" for EU region
)

# Start observation with specific type
with abv.start_as_current_observation(
    as_type="generation",
    name="TTFT-Generation",
) as generation:
    # Simulate LLM time to first token
    time.sleep(3)

    # Update the generation with the time the model started to generate
    generation.update(
        completion_start_time=datetime.datetime.now(),
        output="some response",
    )

# Flush events in short-lived applications
abv.flush()
Observation Types
ABV supports multiple observation types to provide context for different components of LLM applications. The full list of observation types is documented here: Observation types.
Setting observation types with the @observe decorator
By setting the as_type parameter in the @observe decorator, you can specify the observation type for a method:
from abvdev import observe

# Tool calls to external services
@observe(as_type="tool")
def retrieve_context(query):
    # `vector_store` is assumed to be defined elsewhere in your application
    results = vector_store.get(query)
    return results
The context manager approach provides automatic resource cleanup:
from abvdev import get_client

abv = get_client()

def process_with_context_managers():
    with abv.start_as_current_observation(
        as_type="chain",
        name="retrieval-pipeline",
    ) as chain:
        # Retrieval step
        with abv.start_as_current_observation(
            as_type="retriever",
            name="vector-search",
        ) as retriever:
            # perform_vector_search is a placeholder for your retrieval logic
            search_results = perform_vector_search("user question")
            retriever.update(output={"results": search_results})