How Sessions Work
A session groups multiple traces together to represent a complete user journey, conversation, or multi-step job. All traces with the same sessionId are linked, enabling end-to-end analysis.
Session benefits:
- End-to-end visibility: See complete user journeys from start to finish
- Aggregated metrics: Track total cost, latency, and token usage across all traces in a session
- Context preservation: Understand how earlier interactions influence later ones
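To make this concrete, here is a minimal sketch of two traces linked by one session ID, using the @observe() decorator and abv.update_current_trace() covered in the Implementation Guide below; the import path and get_client() accessor are assumptions.

```python
from abv import observe, get_client  # assumed import path

abv = get_client()  # assumed client accessor

@observe()
def handle_turn(message: str) -> str:
    # Every trace tagged with this session ID is linked into
    # one session for end-to-end analysis.
    abv.update_current_trace(session_id="chat-abc-123")
    return "..."  # placeholder for your LLM call

# Two separate traces, one session: the dashboard groups them.
handle_turn("What's the weather in Paris?")
handle_turn("What about tomorrow?")
```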
Conversations (chatbots, assistants)
All messages in a single conversation share the same session ID. Example conversation:
- User: “What’s the weather in Paris?”
- Bot: “It’s 72°F and sunny.”
- User: “What about tomorrow?”
- Bot: “Tomorrow will be 68°F with rain.”
sessionId = "chat-abc-123"Multi-step workflows (pipelines, agents)
Multi-step workflows (pipelines, agents)
All steps in a workflow share the same session ID. Example pipeline:
- Upload document → trace 1
- Extract text → trace 2
- Summarize content → trace 3
- Classify topic → trace 4
sessionId = "doc-processing-xyz"User login sessions
User login sessions
All activity during a single login session shares the same session ID. Example:
- User logs in at 9:00 AM
- Asks 10 questions over 2 hours
- Logs out at 11:00 AM
sessionId = "user-456-login-789"Viewing and Analyzing Sessions
Dashboard: Sessions View
Navigate to the Sessions view in the ABV Dashboard to see all sessions with aggregated metrics. Available metrics:
- Trace count per session
- Total cost (sum of all LLM costs)
- Total duration (first to last trace)
- Error rate
- User/tenant context

Dashboard: Filter Traces by Session
In the Traces view, filter by session ID to isolate specific sessions. Query examples:
sessionId = "chat-abc-789"- Specific sessionsessionId LIKE "chat-%"- All chat sessionssessionId = "doc-xyz" AND status = "error"- Failed traces in session
Public API: Programmatic Access
Fetch session data via the API for custom analysis and reporting. You can fetch a session by ID or list sessions for a user, as sketched below.
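A minimal sketch of both calls, assuming a REST API with a /api/public/sessions endpoint and basic auth via API keys; the host, paths, parameters, and response fields are assumptions, not documented behavior.

```python
import os

import requests

BASE_URL = "https://cloud.abv.example.com"  # assumed host
AUTH = (os.environ["ABV_PUBLIC_KEY"], os.environ["ABV_SECRET_KEY"])  # assumed auth scheme

# Fetch session by ID (assumed endpoint)
session = requests.get(f"{BASE_URL}/api/public/sessions/chat-abc-123", auth=AUTH).json()
print(session.get("traces"))  # assumed response field

# List sessions for a user (assumed query parameters)
resp = requests.get(
    f"{BASE_URL}/api/public/sessions",
    params={"userId": "user-456", "limit": 50},
    auth=AUTH,
).json()
for s in resp.get("data", []):  # assumed response envelope
    print(s.get("id"), s.get("totalCost"))
```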
Why Use Sessions?
Debug Multi-Turn Conversations
Chatbots and assistants build context over multiple turns. When a conversation fails, you need the full history to understand why.
Example: Customer support chatbot failure
Conversation flow:
- User: “I want to return my order”
- Bot: “Sure! What’s your order number?”
- User: “ORDER-123”
- Bot: “I found your order. What’s the reason for the return?”
- User: “The item was damaged”
- Bot: [ERROR] “I couldn’t process your return”
Without sessions:
- You only see trace #6 (the failure)
- No context about what the user asked before
- Can’t reproduce the issue without the full conversation
- You’re left guessing why the bot failed
With sessions:
- Click the session to see all 6 traces in order
- See that the bot stored order_id = "ORDER-123" in context
- Notice the bot attempted to call process_return(order_id, reason) but the API timed out
- Root cause: the return API is slow for order “ORDER-123” (a large order with 10 items)
Benefits:
- Full conversation history for debugging
- See how context accumulated over turns
- Identify which turn introduced the error
- Reproduce issues by replaying the exact conversation
Measure End-to-End Workflow Performance
Multi-step workflows (RAG pipelines, agent workflows, document processing) span multiple traces. Sessions aggregate metrics across all steps.
Example: Document processing pipeline
Workflow:
- Upload document (0.5s, $0.00)
- Extract text with OCR (3.2s, $0.10)
- Chunk into paragraphs (0.3s, $0.00)
- Embed chunks with text-embedding-ada-002 (1.8s, $0.02)
- Summarize with GPT-4 (4.5s, $0.15)
- Classify topic with GPT-3.5 (1.2s, $0.03)
- Format output (0.2s, $0.00)
Session metrics:
- Total duration: 11.7 seconds (sum of all steps)
- Total cost: $0.30 (sum of all LLM calls)
- Bottleneck: Step 5 (GPT-4 summarization) takes 38% of total time
Optimization:
- Switch from GPT-4 to GPT-3.5-turbo for summarization
- New duration: 9.2 seconds (21% faster)
- New cost: $0.18 (40% cheaper)
- Quality: No degradation (validated with evaluations)
Benefits:
- Measure end-to-end latency instead of just individual steps
- Calculate total cost per workflow
- Identify bottlenecks across multi-step processes
- Justify optimizations with concrete data
Analyze User Journeys
Users interact with your application over multiple requests. Sessions group their journey for behavioral analysis.
Example: Onboarding flow
User journey (session “onboarding-user-456”):
- User signs up
- User completes profile
- User asks: “How do I upload documents?”
- Bot explains document upload
- User uploads first document
- User asks: “How do I share this?”
- Bot explains sharing
- User shares document with team
Session metrics:
- Total traces: 8
- Total duration: 12 minutes (from signup to first share)
- Questions asked: 2 (steps 3 and 6)
- Feature usage: Document upload (step 5), sharing (step 8)
Insights:
- Users need help with document upload and sharing (common questions)
- Average time to first share: 12 minutes (can we reduce this?)
- 80% of users who complete onboarding ask at least one question
Actions and impact:
- Add tooltips for document upload and sharing
- Measure impact: Time to first share drops to 8 minutes
- Onboarding completion rate increases from 60% to 75%
Benefits:
- Understand user behavior end-to-end
- Identify friction points in user journeys
- Measure impact of UX changes
- Personalize experiences based on session history
Track Agent Workflows
AI agents execute complex multi-step workflows: planning, tool calls, reflection, iteration. Sessions capture the entire agent execution.
Example: Research agent
Agent workflow (session “research-task-789”):
- Agent: Plan research strategy (LLM call)
- Agent: Search web for “climate change impact” (tool call)
- Agent: Read 3 articles (tool calls)
- Agent: Summarize findings (LLM call)
- Agent: Search for “climate policy solutions” (tool call)
- Agent: Read 2 more articles (tool calls)
- Agent: Generate final report (LLM call)
Session metrics:
- Total traces: 7 (3 LLM calls, 7 tool calls)
- Total cost: $0.45
- Total duration: 23 seconds
- Tools used: Web search (2×), article reader (5×)
Insights:
- The agent made 2 rounds of research (steps 2-4, then 5-7)
- Reading articles took 15 seconds (65% of total time)
- LLM calls were cheap ($0.30 of the $0.45 total)
Optimizations:
- Cache article summaries to avoid re-reading
- Parallelize article reads (5 sequential reads → 2 parallel batches)
- New duration: 12 seconds (48% faster)
- New cost: $0.35 (22% cheaper)
Benefits:
- See full agent reasoning and tool usage
- Measure cost and latency of agent workflows
- Identify inefficient tool usage patterns
- Optimize agent behavior with data
Calculate Cost Per Session
For multi-step interactions, analyze total cost per session rather than per individual trace; a short aggregation sketch follows the example below.
Customer support chatbot example:
- Short sessions (1-2 turns): Lower cost, fast resolution
- Medium sessions (3-5 turns): Moderate cost, typical cases
- Long sessions (6+ turns): Higher cost, complex issues
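A minimal sketch of this aggregation, assuming traces have already been fetched (e.g., via the Public API above) as records with sessionId and cost fields; those field names are assumptions for illustration.

```python
from collections import defaultdict

# Traces as returned by the Public API; field names are assumed.
traces = [
    {"sessionId": "chat-abc-123", "cost": 0.02},
    {"sessionId": "chat-abc-123", "cost": 0.05},
    {"sessionId": "chat-def-456", "cost": 0.01},
]

# Sum cost and count turns per session, then bucket by length
# to spot expensive session patterns.
cost = defaultdict(float)
turns = defaultdict(int)
for t in traces:
    cost[t["sessionId"]] += t["cost"]
    turns[t["sessionId"]] += 1

for sid in cost:
    n = turns[sid]
    bucket = "short" if n <= 2 else "medium" if n <= 5 else "long"
    print(f"{sid}: {n} turns ({bucket}), total ${cost[sid]:.2f}")
```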
Benefits:
- Accurate cost accounting per user journey
- Identify expensive session patterns
- Justify pricing tiers based on session costs
- Optimize model selection by session complexity
Session Replay for Support Handoffs
Support teams need full context when taking over from a chatbot or another agent.
Example: Bot-to-human handoff
Session flow:
- User: “I need a refund” (bot handles)
- Bot: “Can you provide your order number?” (bot handles)
- User: “ORDER-123” (bot handles)
- Bot: “This order requires manager approval. Escalating to human support.” (bot escalates)
- Human agent joins → sees full session history instantly
- Agent: “I see you ordered 10 items and one was damaged. I’ll process your refund now.”
Without session replay:
- Human agent asks the user to repeat everything
- User frustrated: “I already told the bot this!”
- Time wasted re-gathering context
With session replay:
- Human agent clicks session replay
- Sees full conversation history in 5 seconds
- Jumps directly to resolution without asking the user to repeat
Benefits:
- Seamless bot-to-human handoffs
- Reduced user frustration
- Faster time to resolution
- Better support experience
Implementation Guide
Python: Using the @observe() Decorator
Use the @observe() decorator and update the session ID with abv.update_current_trace(). Setup, basic usage, and session ID generation are sketched below.
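A minimal sketch, assuming an abv package that exposes observe and a get_client() accessor (those two names are assumptions; @observe() and update_current_trace() are named in this guide):

```python
import uuid

from abv import observe, get_client  # assumed import path

# Setup: the client reads ABV_PUBLIC_KEY / ABV_SECRET_KEY from the
# environment (assumed configuration mechanism).
abv = get_client()

@observe()
def handle_message(session_id: str, message: str) -> str:
    # Basic usage: tag the current trace with the conversation's
    # session ID so every turn is grouped into one session.
    abv.update_current_trace(session_id=session_id)
    return "..."  # placeholder for your LLM call

def new_session_id(prefix: str = "chat") -> str:
    # Generate session IDs: unique per conversation, stable across turns.
    return f"{prefix}-{uuid.uuid4().hex[:12]}"

session_id = new_session_id()  # e.g. "chat-3f9a2b1c4d5e"
handle_message(session_id, "What's the weather in Paris?")
handle_message(session_id, "What about tomorrow?")
```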
Python: Manual Span Creation
Set session IDs when creating spans manually, or update the session ID on the current trace without a direct span reference. Both variants are sketched below.
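A minimal sketch of both variants; start_as_current_span() and span.update_trace() are assumed names modeled on common tracing SDKs, while update_current_trace() is named in this guide:

```python
from abv import get_client  # assumed import path

abv = get_client()  # assumed client accessor

def extract_text() -> str:
    return "..."  # placeholder for your application logic

# Variant 1: set the session ID via the span's trace.
# start_as_current_span() and update_trace() are assumed APIs.
with abv.start_as_current_span(name="extract-text") as span:
    span.update_trace(session_id="doc-processing-xyz")
    extract_text()

# Variant 2: update the session ID without a direct span reference,
# e.g. from deep inside the pipeline while a span is active.
def deep_in_pipeline() -> None:
    abv.update_current_trace(session_id="doc-processing-xyz")
```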
JavaScript/TypeScript: Context Managers
Use updateActiveTrace() to set session IDs. Configure the SDK (e.g., in instrumentation.ts), then add session IDs inside an active span, as sketched below.
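A minimal sketch, assuming the SDK exports a startActiveSpan() context helper alongside the updateActiveTrace() named here; the package name, startActiveSpan(), and its option shapes are assumptions:

```typescript
// Assumed package name and exports; configure the SDK in instrumentation.ts.
import { startActiveSpan, updateActiveTrace } from "abv";

async function handleChatTurn(
  conversationId: string,
  message: string
): Promise<string> {
  // Run the handler inside an active span so updateActiveTrace()
  // has a current trace to attach the session ID to.
  return startActiveSpan("chat-turn", async () => {
    updateActiveTrace({ sessionId: `chat-${conversationId}` });
    return answer(message);
  });
}

async function answer(message: string): Promise<string> {
  return "..."; // placeholder for your LLM call
}
```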
JavaScript/TypeScript: observe Wrapper
Wrap existing functions with automatic tracing and session IDs, as in the sketch below.
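A minimal sketch, assuming an observe() wrapper export; its exact signature and option names are assumptions, while updateActiveTrace() is named in this guide:

```typescript
import { observe, updateActiveTrace } from "abv"; // assumed exports

// Wrap an existing function: each call produces its own trace.
const processDocument = observe(
  async (docId: string, sessionId: string): Promise<string> => {
    // Attach the trace to the workflow's session.
    updateActiveTrace({ sessionId });
    return extractText(docId);
  },
  { name: "process-document" } // assumed option shape
);

async function extractText(docId: string): Promise<string> {
  return "..."; // placeholder for your application logic
}

// Every step of the pipeline passes the same session ID.
void processDocument("doc-42", "doc-processing-xyz");
```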
JavaScript/TypeScript: Manual Span Creation
Create spans manually and set session IDs on traces, as sketched below.
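A minimal sketch, assuming a startSpan() export that returns a span with updateTrace() and end() methods; these names mirror the Python variant above and are assumptions:

```typescript
import { startSpan } from "abv"; // assumed export

async function summarizeStep(sessionId: string): Promise<string> {
  // Create the span manually; no active context is required.
  const span = startSpan({ name: "summarize-content" }); // assumed API
  try {
    // Set the session ID on the span's trace.
    span.updateTrace({ sessionId }); // assumed API
    return await summarize();
  } finally {
    span.end(); // always close the span, even on errors
  }
}

async function summarize(): Promise<string> {
  return "..."; // placeholder for your LLM call
}
```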
Related Features
User Tracking
Link traces to user accounts to correlate sessions with specific users for support and analysis
Metadata
Attach structured context to sessions for precise filtering and business analysis
Trace IDs
Use custom trace IDs for distributed tracing across microservices within sessions
Cost Tracking
Calculate total costs per session to optimize pricing and model selection