How the Prompt Data Model Works
Understanding the structure and lifecycle of prompts in ABV

Prompt creation with core fields
When you create a prompt (via UI, SDK, or API), you provide:
- name: Unique identifier within your ABV project (e.g., "movie-critic")
- type: Either text (single string) or chat (array of messages with roles)
- prompt: The template content with {{variable}} placeholders
- config (optional): JSON object for model parameters or custom metadata
- labels (optional): Deployment labels like ["production", "staging"]
- tags (optional): Categorization tags like ["movies", "entertainment"]
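Put together, a prompt with these fields can be sketched as a plain record. This is an illustrative data shape with a minimal validation helper, not the ABV SDK's actual create call:

```python
# A plain-data sketch of a prompt record using the core fields above.
REQUIRED_FIELDS = {"name", "type", "prompt"}

def validate_prompt(record):
    """Check that a prompt record carries the required core fields."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    if record["type"] not in ("text", "chat"):
        raise ValueError("type must be 'text' or 'chat'")
    return record

movie_critic = validate_prompt({
    "name": "movie-critic",
    "type": "text",
    "prompt": "As a {{criticLevel}} movie critic, do you like {{movie}}?",
    "config": {"model": "gpt-4o", "temperature": 0.7},
    "labels": ["production"],
    "tags": ["movies", "entertainment"],
})
```

Only name, type, and prompt are required; the rest can be added later as the prompt evolves.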
Automatic version incrementing
When you create a new prompt with an existing name, ABV doesn't overwrite the previous version. Instead, it creates a new version with an incremented version number (2, 3, 4, …). All versions are retained in ABV, providing a complete version history. You can fetch any previous version by version number, compare versions side by side, and roll back to earlier versions. The latest label automatically updates to point to the most recently created version.

Label-based deployment management
Labels are named pointers to specific versions. Instead of fetching prompts by version number (which changes with each update), your application fetches by label (which remains constant).

Default behavior: When you call abv.get_prompt("movie-critic") without specifying a label, ABV returns the version with the production label.

Custom labels: Create labels for different environments (staging, production), tenants (tenant-1, tenant-2), or A/B testing scenarios (variant-a, variant-b).

Reassigning labels: Change which version a label points to without code changes. This is how you deploy new prompt versions or roll back to previous versions.

Variable substitution at compile time
Prompts can contain {{variable}} placeholders that you fill in when compiling the prompt for each request.

For text prompts, variables appear in a single string template:
- Template: "As a {{criticLevel}} movie critic, do you like {{movie}}?"
- Compiled: "As an expert movie critic, do you like Dune 2?"

For chat prompts, variables appear in message content:
- Template: [{"role": "system", "content": "You are a {{criticLevel}} critic"}]
- Compiled: [{"role": "system", "content": "You are an expert critic"}]
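The substitution described above can be sketched as a small helper. This is a naive illustration of the compile step, not ABV's actual implementation:

```python
import re

def compile_prompt(template, variables):
    """Fill {{variable}} placeholders in a text or chat template."""
    def fill(text):
        # \w covers letters, digits, and underscores, matching the
        # variable-name rules described in this document
        return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                      lambda m: str(variables[m.group(1)]), text)

    if isinstance(template, str):  # text prompt -> compiles to a string
        return fill(template)
    # chat prompt -> compiles to an array of messages, roles preserved
    return [{**m, "content": fill(m["content"])} for m in template]
```

For example, compile_prompt("Summarize: {{document}}", {"document": "..."}) fills the placeholder in place, and a chat template returns new message dicts rather than mutating the stored template.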
Config storage for model parameters
The optional config field stores JSON data associated with the prompt. Common use cases:
- Model parameters: {"model": "gpt-4o", "temperature": 0.7, "max_tokens": 1000}
- Tool definitions: Store function calling tools for models that support them
- Supported languages: {"supported_languages": ["en", "fr", "es"]}
- Custom metadata: Any application-specific configuration
Prompt Object Structure
The complete prompt object structure with all fields:
name (required)
Type: String

Description: Unique identifier for the prompt within your ABV project. Names are used to fetch prompts via SDK or API.

Naming conventions:
- Use descriptive, kebab-case names: "customer-support-greeting", "code-review-assistant"
- Include use case context: "summarize-medical-records" rather than just "summarize"
- Avoid version numbers in names (versions are managed automatically)

Examples: "movie-critic", "translate-to-spanish", "sql-query-generator"
type (required)
Type: String enum ("text" or "chat")

Default: "text"

Description: Defines the structure of the prompt content.

Text type (text):
- Prompt is a single string with optional variables
- Ideal for completion models or single-turn interactions
- Compiles to a string
- Example: "prompt": "Summarize: {{document}}"
Chat type (chat):
- Prompt is an array of message objects with role and content
- Designed for conversational models with system/user/assistant roles
- Compiles to an array of message objects
- Example: "prompt": [{"role": "system", "content": "You are helpful"}, {"role": "user", "content": "{{query}}"}]
When to use:
- Use text for simple completions, summarization, translation, single-turn Q&A
- Use chat for multi-turn conversations, role-based interactions, system message instructions
prompt (required)
Type: String (for text prompts) or Array of message objects (for chat prompts)

Description: The actual prompt content, with optional {{variable}} placeholders for dynamic substitution.

Example (text prompt): "As a {{criticLevel}} movie critic, do you like {{movie}}?" When compiled with {criticLevel: "expert", movie: "Dune 2"}, this produces "As an expert movie critic, do you like Dune 2?"

Example (chat prompt): [{"role": "system", "content": "You are a {{criticLevel}} critic"}]. When compiled with {criticLevel: "expert", movie: "Dune 2"}, this produces [{"role": "system", "content": "You are an expert critic"}].

Variable syntax: Use {{variableName}} for placeholders. Variable names can contain letters, numbers, and underscores.

Message roles (chat prompts only): system, user, assistant, function, tool (model-dependent)
config (optional)
Type: JSON object

Default: null or {}

Description: Arbitrary JSON storage for model parameters, tools, or custom metadata. Not used by ABV internally; available for your application to read and use.

Common use cases:
- Model parameters
- Tool definitions (for function calling)
- Custom metadata

Config is versioned: Each prompt version can have different config values, enabling A/B testing of parameters.
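Accessing config in code can be sketched over a plain dict that mirrors the prompt object shape described here (illustrative; not a real SDK object):

```python
def read_config(prompt, key, default=None):
    """Look up a config value, falling back to an application default,
    since config may be null or missing keys."""
    return (prompt.get("config") or {}).get(key, default)

prompt = {"name": "movie-critic", "version": 3,
          "config": {"model": "gpt-4o", "temperature": 0.7}}
```

Because config is versioned with the prompt, read_config(prompt, "temperature") returns whatever the currently fetched version stores, with no code change needed to tune it.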
version (auto-managed)
Type: Integer (1, 2, 3, …)

Description: Automatically incremented version number assigned when creating or updating a prompt. ABV manages versioning automatically; you don't set this field directly.

Version lifecycle:
- First prompt creation: Version 1
- Update (create with same name): Version 2
- Subsequent updates: Version 3, 4, 5…

Immutability: Once created, a version's content never changes. This ensures reproducibility and safe rollbacks.

Version retention: All versions are retained indefinitely unless you explicitly delete them.
labels (optional)
Type: Array of strings

Default: ["latest"] (automatically assigned)

Description: Named pointers to specific prompt versions. Labels enable deployment management without changing code.

Built-in labels:
- production: Default label fetched when no label is specified. Assign this to the version you want in production.
- latest: Automatically maintained by ABV, always points to the most recently created version.

Custom label examples:
- Environment labels: "staging", "development", "qa"
- Tenant labels: "tenant-acme", "tenant-contoso"
- A/B testing labels: "variant-a", "variant-b", "control", "experiment"
- Geographic labels: "us-region", "eu-region"

Multiple labels per version: A single version can have multiple labels (e.g., version 3 might have both "production" and "stable").

Label reassignment for deployment: Change which version a label points to in the ABV UI; your application immediately uses the new version without code changes.
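Fetching by label can be pictured with plain dicts, where labels are just pointers into the set of versions (illustrative only; ABV resolves labels server-side):

```python
# Labels as named pointers into immutable versions.
versions = {1: "v1 content", 2: "v2 content", 3: "v3 content"}
labels = {"production": 2, "staging": 3, "latest": 3}

def get_prompt_content(label="production"):
    """Resolve a label to the content of the version it points to.
    Defaulting to "production" mirrors the SDK's default behavior."""
    return versions[labels[label]]
```

Calling get_prompt_content() returns version 2's content until someone repoints "production", with no change to the calling code.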
tags (optional)

Type: Array of strings

Description: Categorization tags (e.g., ["movies", "entertainment"]) used to organize and filter prompts.
Prompt Types: Text vs Chat
ABV supports two fundamental prompt types with different structures and use cases:
Text-Based Prompts
Structure: Single string with optional {{variables}}

Example template: "As a {{criticLevel}} movie critic, do you like {{movie}}?"

Compilation result (with {criticLevel: "expert", movie: "Dune 2"}): "As an expert movie critic, do you like Dune 2?"

When to use: Choose text-based prompts when you need a single input string for the LLM, without structured conversation roles.

Use cases:
- Simple completions: "Summarize this text: {{document}}"
- Translation: "Translate to French: {{content}}"
- Single-turn Q&A: "Answer this query: {{query}}"
- Code generation: "Generate Python code to {{task}}"
- Classification: "Classify this sentiment: {{review}}"
Chat Prompts
Structure: Array of message objects, each with role and content

Message roles:
- system: Instructions for the LLM's behavior and personality
- user: Messages from the user
- assistant: Messages from the LLM (for multi-turn context)
- function/tool: Function calling results (model-dependent)

Example template: [{"role": "system", "content": "You are a {{criticLevel}} critic"}]

Compilation result (with {criticLevel: "expert", movie: "Dune 2"}): [{"role": "system", "content": "You are an expert critic"}]

When to use: Choose chat prompts when you need structured conversation with system instructions, multi-turn context, or role-based interactions.

Use cases:
- Multi-turn conversations with system instructions
- Role-based interactions (customer support, tutoring, therapy)
- Structured reasoning with chain-of-thought
- Function calling scenarios with tool messages

Variables in chat prompts: Variables can appear in any message's content field, and can even be used in message placeholders for dynamic message insertion.

Versioning and Labels
The relationship between versions and labels is central to ABV's deployment model.

Key concepts:
- Versions are immutable snapshots created sequentially
- Labels are flexible pointers that can be reassigned to different versions
- Deployment is managed by reassigning labels (e.g., moving production from V1 to V3)
How Versioning Works
Automatic version creation:
- Create a prompt with name "movie-critic" → Version 1 created
- Create another prompt with name "movie-critic" → Version 2 created (previous version retained)
- Each update increments the version number

Benefits:
- Reproducibility: Fetching version 1 always returns the same prompt
- Safe rollbacks: Previous versions are always available
- Audit trails: Complete history of prompt changes
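The create-with-same-name behavior can be modeled with a toy in-memory registry; the real ABV backend is assumed to behave analogously, with persistence and concurrency handled server-side:

```python
# Toy registry: auto-incremented version numbers, append-only storage.
class PromptRegistry:
    def __init__(self):
        self._versions = {}  # name -> list of version contents

    def create(self, name, prompt):
        """Creating with an existing name appends a new immutable version."""
        self._versions.setdefault(name, []).append(prompt)
        return len(self._versions[name])  # version numbers start at 1

    def get(self, name, version):
        """Fetching a version number always returns the same content."""
        return self._versions[name][version - 1]

registry = PromptRegistry()
v1 = registry.create("movie-critic", "You are a movie critic.")
v2 = registry.create("movie-critic", "You are an expert movie critic.")
```

Because versions are only appended, never replaced, fetching version 1 after the second create still returns the original content.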
How Labels Work
Labels as pointers: Labels are named references to specific versions. Think of them as Git branches or tags pointing to commits.

Label lifecycle:
- Create version 1, assign production label
- Create version 2, assign staging label (for testing)
- After validation, reassign production label to version 2
- Production traffic now uses version 2, instantly, without code changes

Label reassignment for deployment: The ABV UI and API allow you to change which version a label points to. This is the primary deployment mechanism; no code changes required.

Built-in labels:
- production: Default when fetching without specifying a label
- latest: Automatically updated to newest version with each prompt creation
Deployment Workflows with Labels
Standard deployment workflow:
- Develop: Create new prompt version in ABV UI or via SDK
- Test: Assign staging label to new version
- Validate: Test in staging environment (fetches staging label)
- Deploy: Reassign production label to new version in ABV UI
- Monitor: Watch metrics for the new prompt version via linked traces
- Rollback (if needed): Reassign production back to previous version
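The deploy and rollback steps boil down to label reassignment, sketched here over an assumed label-to-version mapping (illustrative; in ABV this is done in the UI or API):

```python
def reassign(labels, label, version):
    """Point `label` at `version`; return the old version for rollback."""
    previous = labels.get(label)
    labels[label] = version
    return previous

labels = {"production": 1, "staging": 2}
previous = reassign(labels, "production", 2)  # deploy: production -> version 2
reassign(labels, "production", previous)      # rollback: production -> version 1
```

Keeping the previous version number around is all a rollback needs, since old versions are immutable and never deleted.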
A/B testing workflow:
- Create version 2 with variant A content, assign variant-a label
- Create version 3 with variant B content, assign variant-b label
- Application randomly chooses which label to fetch for each user
- After collecting metrics, promote winning variant to production
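A common refinement of the random choice above is sticky bucketing: hash the user id so each user consistently sees the same variant. The label names follow the workflow; the hashing scheme is an illustrative assumption:

```python
import hashlib

def variant_label(user_id, labels=("variant-a", "variant-b")):
    """Deterministically bucket a user into one of the variant labels."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return labels[int(digest, 16) % len(labels)]
```

Deterministic assignment keeps per-user experience stable across requests while still splitting traffic roughly evenly across variants.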
Per-tenant customization workflow:
- Create version for tenant Acme's requirements, assign tenant-acme label
- Create version for tenant Contoso's requirements, assign tenant-contoso label
- Application fetches prompt based on current tenant context
- Each tenant gets customized prompts without separate codebases
Common Use Cases
Storing Model Parameters with Prompts
Scenario: Your prompt requires specific model parameters (temperature, max tokens). You want to version these parameters alongside prompt content.

Solution: Store parameters in the config field.

Benefits: Change model parameters without code deployment, A/B test parameter variations, version parameters with prompt content.
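Assembling a model request from a fetched prompt keeps parameters versioned together with content. The prompt dict below mirrors the object shape described earlier; the downstream client call itself is assumed and not shown:

```python
# Build one request payload from the prompt's content and its config.
prompt = {
    "type": "chat",
    "prompt": [{"role": "system", "content": "You are an expert critic"}],
    "config": {"model": "gpt-4o", "temperature": 0.7, "max_tokens": 1000},
}

# model, temperature, and max_tokens travel with the prompt version,
# so changing them in ABV needs no code deployment.
request = {"messages": prompt["prompt"], **prompt["config"]}
```

Changing temperature in a new prompt version changes the request the next time the application fetches that version, without touching application code.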
Multi-Environment Deployment
Scenario: You need different prompt versions in development, staging, and production environments.

Solution: Use environment-specific labels.

Workflow:
- Create new version, assign development label
- Test locally
- Promote to staging label for QA testing
- After approval, promote to production label
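One way to wire this up is to derive the label from the runtime environment; the APP_ENV variable name and its mapping to labels are assumptions for illustration:

```python
import os

# Map deployment environment to prompt label.
LABEL_BY_ENV = {"development": "development", "staging": "staging"}

def current_label():
    """Unknown or unset environments fall back to the production label."""
    return LABEL_BY_ENV.get(os.environ.get("APP_ENV", ""), "production")
```

Each environment then fetches its own label with identical application code.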
Per-Tenant Prompt Customization
Scenario: Different customers need customized prompts with their specific domain knowledge, tone, or requirements.

Solution: Create versions with tenant-specific labels.

Benefits: Customized experience per customer, no separate codebases, centralized prompt management.
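Resolving the label from tenant context can be as small as the sketch below; the "tenant-&lt;id&gt;" naming follows the examples above, and the known-tenant set is an assumption:

```python
# Tenants with customized prompt versions; everyone else gets production.
CUSTOM_TENANTS = {"acme", "contoso"}

def tenant_label(tenant_id):
    """Tenants without a customized prompt fall back to production."""
    return f"tenant-{tenant_id}" if tenant_id in CUSTOM_TENANTS else "production"
```

The fallback keeps onboarding simple: a new tenant works immediately on the production prompt, and gains a custom version only when one is labeled for them.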
Organizing Prompts with Tags

Use tags (e.g., ["movies", "entertainment"]) to categorize and filter prompts across your project.
Next Steps

- Version Control: Learn how to manage versions and labels for safe prompt deployments
- Get Started: Create your first prompt and integrate it with your application
- Message Placeholders: Use advanced variable substitution in chat prompts
- Config Field: Deep dive into storing and using the config field for model parameters
- Link Prompts to Traces: Track metrics by prompt version through observability integration
- A/B Testing: Compare prompt versions in production with A/B testing