How Tags Work
1. Add tags during execution
Use the ABV SDK to attach one or more string tags to a trace. Tags can be added when creating a trace or updated later during execution (see the sketch after these steps).
2. Tags appear in the dashboard
All tags attached to a trace are visible in the ABV UI. You'll see them as clickable labels on each trace, making it easy to identify categories at a glance.
3. Filter traces by tags
Click any tag in the UI to filter your trace list. The filter shows only traces that include that specific tag, reducing noise and focusing your analysis.
4. Combine tags for precise filtering
Use multiple tag filters simultaneously to narrow down exactly what you need. For example, filter by both `production` and `error` to see only production errors.
5. Use tags in analytics and exports
Tags are included in all exports and available for grouping in custom dashboards. Use them to segment performance metrics, cost analysis, or error rates by any dimension you choose.
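The SDK calls referenced in step 1 aren't shown above; here is a minimal sketch, assuming a hypothetical `abv` Python client whose `trace()` call accepts a `tags` list and whose traces expose an `update()` method (check the SDK reference for the exact names):

```python
from abv import ABV  # hypothetical client name

client = ABV()

# Attach tags when the trace is created
trace = client.trace(name="checkout-flow", tags=["production", "rag"])

# ...later, update the tags during execution
trace.update(tags=["production", "rag", "beta"])
```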
Why Use Tags?
Feature Flagging & Experiments
A/B testing a new prompt? Tag each variant to measure quality, latency, and cost separately. Filter by tag to compare metrics side by side and make data-driven rollout decisions. Combine with metadata for richer analysis: use tags for simple categories (`prompt:v1`) and metadata for detailed attributes.
Version Tracking Across Deployments
Tag traces with your application version to isolate errors by deployment. Filter by `version:2.3.0` to see only new deployment errors and compare error rates across versions. Set version tags automatically via environment variables for complete deployment visibility.
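A minimal sketch of setting version tags from an environment variable, assuming the hypothetical `abv` client from above and an `APP_VERSION` variable exported by your deployment pipeline:

```python
import os

from abv import ABV  # hypothetical SDK client

client = ABV()

# APP_VERSION is assumed to be set at deploy time, e.g. APP_VERSION=2.3.0
version = os.getenv("APP_VERSION", "unknown")

trace = client.trace(name="api-request", tags=[f"version:{version}"])
```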
Environment Separation
Separate dev, staging, and production traffic for clearer debugging. Filter by `production` for real user traffic or `staging` for pre-release validation. For formal separation with access controls, see Environments.
Technique Identification (RAG, Few-Shot, etc.)
Tag traces by LLM technique (RAG, few-shot, chain-of-thought) to analyze performance and cost. Compare metrics to discover, for example, that RAG costs 3x more than few-shot, or that chain-of-thought has higher latency but better accuracy. Optimize technique selection based on cost and performance data.
Error Categorization
Categorize errors for effective triage: rate limits, validation failures, or unexpected errors. Filter by error type to identify quota issues, bad input, or unexpected failures. Set up alerts on specific error tags to get notified only for critical issues.
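A minimal sketch of tagging by error category, assuming the hypothetical `abv` client and the OpenAI Python SDK's `RateLimitError` (adapt the exception types to your provider and validation library):

```python
from abv import ABV  # hypothetical SDK client
from openai import RateLimitError  # assumes the OpenAI Python SDK

client = ABV()

def process(payload):
    ...  # placeholder for your application logic

def handle_request(payload):
    trace = client.trace(name="request", tags=["production"])
    try:
        return process(payload)
    except RateLimitError:
        trace.update(tags=["production", "error:rate-limit"])  # quota issues
        raise
    except ValueError:
        trace.update(tags=["production", "error:validation"])  # bad input
        raise
    except Exception:
        trace.update(tags=["production", "error:unexpected"])  # everything else
        raise
```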
User Cohorts & A/B Testing
Tag traces with user cohorts to measure adoption and performance across customer segments. Filter by `tier:premium` for paying customers or compare latency across regions. Segment cost analysis by customer tier. Avoid PII in tags: use cohort identifiers (`tier:premium`), not personal information such as a user's email address.
Implementation Guide
Python SDK
- With @observe() Decorator
- With Manual Spans
- Update Current Trace
When to use: for functions already using the `@observe()` decorator. This is the simplest approach and requires minimal code changes.
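The original code sample isn't reproduced here; below is a minimal sketch assuming a hypothetical `abv` SDK that exposes an `@observe()` decorator and an `update_current_trace()` helper (the names are illustrative, check the SDK reference):

```python
from abv import observe, update_current_trace  # hypothetical imports

@observe()  # wraps the function in a trace automatically
def answer_question(question: str) -> str:
    # Attach tags to the trace created by the decorator
    update_current_trace(tags=["rag", "production"])
    return f"answer to: {question}"  # your LLM call goes here
```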
JavaScript/TypeScript SDK
- Setup
- Context Manager
- observe() Wrapper
- Manual Spans
1. Install packages.
2. Add credentials to `.env` (a sketch follows below).
3. Create `instrumentation.ts` to initialize tracing.
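The original snippets aren't reproduced here; below is a minimal `.env` sketch with hypothetical variable names and placeholder values (check the SDK reference for the exact keys):

```
# Hypothetical key names; your SDK's actual variable names may differ
ABV_PUBLIC_KEY=pk-...
ABV_SECRET_KEY=sk-...
ABV_BASE_URL=https://cloud.abv.example
```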
Best Practices
Keep Tags Simple and Consistent
Establish Naming Conventions Early
Define and document tag patterns for your team before scaling:
- Option 1: namespaced tags (e.g. `env:production`, `version:2.3.0`)
- Option 2: simple tags (e.g. `production`, `rag`)
Create a tag dictionary: document your conventions in your team wiki or codebase (see the sketch after this list).
Why it matters: this prevents tag proliferation (`prod` vs `production` vs `prd`), ensures team-wide consistency, and makes onboarding easier.
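A minimal sketch of a tag dictionary kept in the codebase; the module name and constants are illustrative, not part of the ABV SDK:

```python
# tags.py - single source of truth for tag names (hypothetical module)

# Environments
ENV_PRODUCTION = "env:production"
ENV_STAGING = "env:staging"

# Techniques
TECH_RAG = "rag"
TECH_FEW_SHOT = "few-shot"

# Error categories
ERROR_RATE_LIMIT = "error:rate-limit"
ERROR_VALIDATION = "error:validation"

def version_tag(version: str) -> str:
    """Build a version tag, e.g. version_tag("2.3.0") -> "version:2.3.0"."""
    return f"version:{version}"
```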
Never Put PII in Tags
Combine Tags with Metadata Strategically
Tag Dynamically Based on Runtime Conditions
Add tags based on execution paths, not just static configuration; a sketch follows below. Why it matters: dynamic tagging captures what actually happened during execution, making it easier to identify patterns, debug issues, and optimize performance.
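A minimal sketch, again assuming the hypothetical `abv` client from earlier; the branching logic is illustrative:

```python
from abv import ABV  # hypothetical SDK client

client = ABV()

def answer(question: str, docs: list[str]) -> str:
    trace = client.trace(name="qa", tags=["production"])

    if docs:
        # RAG path: context documents were retrieved
        trace.update(tags=["production", "rag"])
        prompt = f"Context: {docs}\n\nQuestion: {question}"
    else:
        # Fallback path: no context available, answer few-shot
        trace.update(tags=["production", "few-shot", "no-context"])
        prompt = f"Question: {question}"

    return prompt  # pass this to your LLM call
```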
Limit the Number of Tags Per Trace
Tags vs Metadata vs Environments
Choosing the right feature for your use case:

| Feature | Best For | Example Use Cases |
|---|---|---|
| Tags | Simple categorization, filtering, experiments | `rag`, `production`, `beta`, `v2.1.0`, `few-shot` |
| Metadata | Structured data, detailed attributes, analytics | `{"tenant_id": "acme", "user_tier": "premium", "prompt_tokens": 1523}` |
| Environments | Formal separation with access controls | Development, Staging, Production projects |
- Tag with environment (`production`) AND use dedicated ABV Environments for formal separation
- Tag with experiment variant (`prompt:v2`) AND include detailed metadata (`{"variant_id": "abc123", "assignment_ts": "2024-01-15T10:30:00Z"}`)
- Tag with feature (`rag`) AND include metadata about the RAG implementation (`{"chunks": 5, "embedding_model": "text-embedding-ada-002"}`)
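A minimal sketch of the second pattern above, assuming the hypothetical `abv` client accepts both `tags` and `metadata` parameters:

```python
from abv import ABV  # hypothetical SDK client

client = ABV()

# Coarse category as a tag, detailed attributes as metadata
trace = client.trace(
    name="prompt-experiment",
    tags=["prompt:v2", "production"],
    metadata={
        "variant_id": "abc123",
        "assignment_ts": "2024-01-15T10:30:00Z",
    },
)
```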
Related Features
Metadata
Add structured key-value data to traces for detailed filtering and analytics
Environments
Separate development, staging, and production with dedicated projects and access controls
Sessions
Group related traces by user journey or workflow for end-to-end visibility
Trace IDs
Track requests across distributed services with custom trace IDs