Understanding ABV’s prompt data model is essential for leveraging the full power of prompt management. The data model defines how prompts are structured, versioned, labeled, and configured—directly impacting how you organize prompts, deploy changes, and integrate with your LLM application.

How the Prompt Data Model Works

Understanding the structure and lifecycle of prompts in ABV:

Prompt creation with core fields

When you create a prompt (via UI, SDK, or API), you provide:
  • name: Unique identifier within your ABV project (e.g., "movie-critic")
  • type: Either text (single string) or chat (array of messages with roles)
  • prompt: The template content with {{variable}} placeholders
  • config (optional): JSON object for model parameters or custom metadata
  • labels (optional): Deployment labels like ["production", "staging"]
  • tags (optional): Categorization tags like ["movies", "entertainment"]
ABV automatically assigns version 1 to the first prompt with a given name.
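As a sketch using the Python SDK (the type keyword argument is an assumption matching the field name; the other arguments appear in examples later on this page):

abv.create_prompt(
    name="movie-critic",
    type="text",  # assumed keyword; "text" or "chat"
    prompt="As a {{criticLevel}} movie critic, do you like {{movie}}?",
    config={"model": "gpt-4o", "temperature": 0.7},
    labels=["staging"],
    tags=["movies", "entertainment"],
)
# ABV assigns version 1 on first creation of this name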

Automatic version incrementing

When you create a new prompt with an existing name, ABV doesn’t overwrite the previous version. Instead, it creates a new version with an incremented version number (2, 3, 4…).

All versions are retained in ABV, providing complete version history. You can fetch any previous version by version number, compare versions side-by-side, and roll back to earlier versions.

The latest label automatically updates to point to the most recently created version.
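A short sketch of this lifecycle; the outcomes noted in the comments follow the behavior described above:

# First creation of this name: ABV assigns version 1
abv.create_prompt(name="movie-critic", prompt="You are a movie critic.")

# Same name again: version 2 is created, version 1 is retained
abv.create_prompt(name="movie-critic", prompt="You are a seasoned movie critic.")

# Earlier versions remain fetchable by number
prompt_v1 = abv.get_prompt("movie-critic", version=1)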

Label-based deployment management

Labels are named pointers to specific versions. Instead of fetching prompts by version number (which changes with each update), your application fetches by label (which remains constant).

Default behavior: When you call abv.get_prompt("movie-critic") without specifying a label, ABV returns the version with the production label.

Custom labels: Create labels for different environments (staging, production), tenants (tenant-1, tenant-2), or A/B testing scenarios (variant-a, variant-b).

Reassigning labels: Change which version a label points to without code changes. This is how you deploy new prompt versions or roll back to previous versions.

Variable substitution at compile time

Prompts can contain {{variable}} placeholders that you fill in when compiling the prompt for each request.

Text prompts: Variables in a single string template
  • Template: "As a {{criticLevel}} movie critic, do you like {{movie}}?"
  • Compiled: "As an expert movie critic, do you like Dune 2?"
Chat prompts: Variables in message content across multiple roles
  • Template: [{"role": "system", "content": "You are a {{criticLevel}} critic"}]
  • Compiled: [{"role": "system", "content": "You are an expert critic"}]
Variables enable reusable prompt templates with dynamic content.
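For example, using the compile() method that appears in the usage examples later on this page:

prompt = abv.get_prompt("movie-critic")

# Substitute {{criticLevel}} and {{movie}} at request time
compiled = prompt.compile(criticLevel="expert", movie="Dune 2")
# -> "As an expert movie critic, do you like Dune 2?"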

Config storage for model parameters

The optional config field stores JSON data associated with the prompt. Common use cases:
  • Model parameters: {"model": "gpt-4o", "temperature": 0.7, "max_tokens": 1000}
  • Tool definitions: Store function calling tools for models that support them
  • Supported languages: {"supported_languages": ["en", "fr", "es"]}
  • Custom metadata: Any application-specific configuration
Config is versioned with the prompt: Each version can have different config values, enabling A/B testing of model parameters alongside prompt content.
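A brief sketch of reading version-specific config (the parameter values here are illustrative):

prompt_v1 = abv.get_prompt("movie-critic", version=1)
prompt_v2 = abv.get_prompt("movie-critic", version=2)

# Each version carries its own config, so the same key can differ
t1 = prompt_v1.config.get("temperature")  # e.g. 0.5
t2 = prompt_v2.config.get("temperature")  # e.g. 0.9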

Prompt Object Structure

The complete prompt object structure with all fields:
{
  "name": "movie-critic",
  "type": "text",
  "prompt": "As a {{criticLevel}} movie critic, do you like {{movie}}?",
  "config": {
    "model": "gpt-4o",
    "temperature": 0.5,
    "supported_languages": ["en", "fr"]
  },
  "version": 1,
  "labels": ["production", "staging", "latest"],
  "tags": ["movies"]
}
name

Type: String
Description: Unique identifier for the prompt within your ABV project. Names are used to fetch prompts via SDK or API.
Naming conventions:
  • Use descriptive, kebab-case names: "customer-support-greeting", "code-review-assistant"
  • Include use case context: "summarize-medical-records" rather than just "summarize"
  • Avoid version numbers in names (versions are managed automatically)
Uniqueness: Names must be unique within a project. Creating a prompt with an existing name creates a new version of that prompt.

Examples: "movie-critic", "translate-to-spanish", "sql-query-generator"
type

Type: String enum ("text" or "chat")
Default: "text"
Description: Defines the structure of the prompt content.
Text type (text):
  • Prompt is a single string with optional variables
  • Ideal for completion models or single-turn interactions
  • Compiles to a string
  • Example: "prompt": "Summarize: {{document}}"
Chat type (chat):
  • Prompt is an array of message objects with role and content
  • Designed for conversational models with system/user/assistant roles
  • Compiles to an array of message objects
  • Example: "prompt": [{"role": "system", "content": "You are helpful"}, {"role": "user", "content": "{{query}}"}]
When to use each:
  • Use text for simple completions, summarization, translation, single-turn Q&A
  • Use chat for multi-turn conversations, role-based interactions, system message instructions
prompt

Type: String (for text prompts) or Array of message objects (for chat prompts)
Description: The actual prompt content, with optional {{variable}} placeholders for dynamic substitution.
Example (text prompt):
"prompt": "As a {{criticLevel}} movie critic, do you like {{movie}}?"
When compiled with {criticLevel: "expert", movie: "Dune 2"}:
"As an expert movie critic, do you like Dune 2?"
Example (chat prompt):
"prompt": [
  {
    "role": "system",
    "content": "You are a {{criticLevel}} movie critic"
  },
  {
    "role": "user",
    "content": "Do you like {{movie}}?"
  }
]
When compiled with {criticLevel: "expert", movie: "Dune 2"}:
[
  {"role": "system", "content": "You are an expert movie critic"},
  {"role": "user", "content": "Do you like Dune 2?"}
]
Variable syntax: Use {{variableName}} for placeholders. Variable names can contain letters, numbers, and underscores.

Message roles (chat prompts only): system, user, assistant, function, tool (model-dependent)
config

Type: JSON object
Default: null or {}
Description: Arbitrary JSON storage for model parameters, tools, or custom metadata. Not used by ABV internally—available for your application to read and use.
Common use cases:

Model parameters:
"config": {
  "model": "gpt-4o",
  "temperature": 0.7,
  "max_tokens": 1000,
  "top_p": 0.9,
  "frequency_penalty": 0.0,
  "presence_penalty": 0.0
}
Tool definitions (for function calling):
"config": {
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string"}
          }
        }
      }
    }
  ]
}
Custom metadata:
"config": {
  "supported_languages": ["en", "fr", "es"],
  "max_input_length": 5000,
  "requires_authentication": true
}
Accessing config in code:
prompt = abv.get_prompt("movie-critic")
# Read versioned parameters, falling back to defaults if a key is absent
temperature = prompt.config.get("temperature", 0.7)
model = prompt.config.get("model", "gpt-4o")
Config is versioned: Each prompt version can have different config values, enabling A/B testing of parameters.
version

Type: Integer (1, 2, 3, …)
Description: Automatically incremented version number assigned when creating or updating a prompt. ABV manages versioning automatically—you don’t set this field directly.
Version lifecycle:
  1. First prompt creation: Version 1
  2. Update (create with same name): Version 2
  3. Subsequent updates: Version 3, 4, 5…
Fetching by version:
# Get specific version
prompt_v1 = abv.get_prompt("movie-critic", version=1)
prompt_v2 = abv.get_prompt("movie-critic", version=2)
Immutability: Once created, a version’s content never changes. This ensures reproducibility and safe rollbacks.

Version retention: All versions are retained indefinitely unless you explicitly delete them.
labels

Type: Array of strings
Default: ["latest"] (automatically assigned)
Description: Named pointers to specific prompt versions. Labels enable deployment management without changing code.
Built-in labels:
  • production: Default label fetched when no label is specified. Assign this to the version you want in production.
  • latest: Automatically maintained by ABV, always points to the most recently created version.
Custom labels: Create any labels you need for your workflow:
  • Environment labels: "staging", "development", "qa"
  • Tenant labels: "tenant-acme", "tenant-contoso"
  • A/B testing labels: "variant-a", "variant-b", "control", "experiment"
  • Geographic labels: "us-region", "eu-region"
Label assignment:
# Assign labels when creating
abv.create_prompt(
    name="movie-critic",
    prompt="...",
    labels=["production", "staging"]
)

# Reassign labels later (via UI or API)
# This is how you deploy new versions or roll back
Fetching by label:
# Get production version (default)
prompt = abv.get_prompt("movie-critic")

# Get staging version
staging_prompt = abv.get_prompt("movie-critic", label="staging")
Multiple labels per version: A single version can have multiple labels (e.g., version 3 might have both "production" and "stable").

Label reassignment for deployment: Change which version a label points to in the ABV UI—your application immediately uses the new version without code changes.
tags

Type: Array of strings
Default: []
Description: Categorization tags for organizing and filtering prompts. Unlike labels (which point to specific versions), tags categorize the entire prompt across all versions.
Common use cases:
  • Use case tags: "summarization", "translation", "code-generation"
  • Domain tags: "healthcare", "finance", "customer-support"
  • Team tags: "team-product", "team-engineering"
  • Status tags: "experimental", "production-ready", "deprecated"
Example:
"tags": ["movies", "entertainment", "customer-facing"]
Filtering by tags: Use tags to filter prompts in the ABV UI or via API queries, making it easier to find relevant prompts in large projects.

Tags are shared across versions: When you create a new version, it inherits the prompt’s existing tags, because tags belong to the prompt name rather than to a specific version.

Prompt Types: Text vs Chat

ABV supports two fundamental prompt types with different structures and use cases:
Text prompts

Structure: Single string with optional {{variables}}
Use cases:
  • Simple completions: "Summarize this text: {{document}}"
  • Translation: "Translate to French: {{content}}"
  • Single-turn Q&A: "Answer this query: {{query}}"
  • Code generation: "Generate Python code to {{task}}"
  • Classification: "Classify this sentiment: {{review}}"
Example prompt object:
{
  "name": "movie-critic",
  "type": "text",
  "prompt": "As a {{criticLevel}} movie critic, do you like {{movie}}?",
  "version": 1
}
Compilation result (with {criticLevel: "expert", movie: "Dune 2"}):
"As an expert movie critic, do you like Dune 2?"
When to use: Choose text-based prompts when you need a single input string for the LLM, without structured conversation roles.
Chat prompts

Structure: Array of message objects, each with role and content
Message roles:
  • system: Instructions for the LLM’s behavior and personality
  • user: Messages from the user
  • assistant: Messages from the LLM (for multi-turn context)
  • function/tool: Function calling results (model-dependent)
Use cases:
  • Multi-turn conversations with system instructions
  • Role-based interactions (customer support, tutoring, therapy)
  • Structured reasoning with chain-of-thought
  • Function calling scenarios with tool messages
Example prompt object:
{
  "name": "movie-critic-chat",
  "type": "chat",
  "prompt": [
    {
      "role": "system",
      "content": "You are a {{criticLevel}} movie critic"
    },
    {
      "role": "user",
      "content": "Do you like {{movie}}?"
    }
  ],
  "version": 1
}
Compilation result (with {criticLevel: "expert", movie: "Dune 2"}):
[
  {"role": "system", "content": "You are an expert movie critic"},
  {"role": "user", "content": "Do you like Dune 2?"}
]
When to use: Choose chat prompts when you need structured conversation with system instructions, multi-turn context, or role-based interactions.

Variables in chat prompts: Variables can appear in any message’s content field, and can even be used in message placeholders for dynamic message insertion.

Versioning and Labels

The relationship between versions and labels is central to ABV’s deployment model.

Key concepts:
  • Versions are immutable snapshots created sequentially
  • Labels are flexible pointers that can be reassigned to different versions
  • Deployment is managed by reassigning labels (e.g., moving production from V1 to V3)
Automatic version creation:
  • Create a prompt with name "movie-critic" → Version 1 created
  • Create another prompt with name "movie-critic" → Version 2 created (previous version retained)
  • Each update increments the version number
Immutable versions: Once created, a version’s content and config never change. This ensures:
  • Reproducibility: Fetching version 1 always returns the same prompt
  • Safe rollbacks: Previous versions are always available
  • Audit trails: Complete history of prompt changes
Version comparison: In the ABV UI, view side-by-side diffs between versions to see exactly what changed (prompt content, config, labels).

Version metadata: Each version stores creation timestamp, creator, and optional commit message.

Labels as pointers: Labels are named references to specific versions. Think of them as Git branches or tags pointing to commits.

Label lifecycle:
  1. Create version 1, assign production label
  2. Create version 2, assign staging label (for testing)
  3. After validation, reassign production label to version 2
  4. Production traffic now uses version 2—instantly, without code changes
Built-in label behavior:
  • production: Default when fetching without specifying a label
  • latest: Automatically updated to newest version with each prompt creation
Custom label examples:
# Environment-based labels
staging_prompt = abv.get_prompt("critic", label="staging")
prod_prompt = abv.get_prompt("critic", label="production")

# Tenant-specific labels
acme_prompt = abv.get_prompt("critic", label="tenant-acme")
contoso_prompt = abv.get_prompt("critic", label="tenant-contoso")

# A/B testing labels
variant_a = abv.get_prompt("critic", label="variant-a")
variant_b = abv.get_prompt("critic", label="variant-b")
Label reassignment for deployment: The ABV UI and API allow you to change which version a label points to. This is the primary deployment mechanism—no code changes required.
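The exact reassignment call isn’t shown on this page, so the helper below is hypothetical; in practice, reassign labels in the ABV UI or via whatever label-update endpoint your ABV API exposes:

# Hypothetical sketch: promote version 3 to production
# (substitute the actual ABV label-update API call, or use the UI)
abv.update_prompt_labels(
    name="movie-critic",
    version=3,
    labels=["production"],
)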
Standard deployment workflow:
  1. Develop: Create new prompt version in ABV UI or via SDK
  2. Test: Assign staging label to new version
  3. Validate: Test in staging environment (fetches staging label)
  4. Deploy: Reassign production label to new version in ABV UI
  5. Monitor: Watch metrics for the new prompt version via linked traces
  6. Rollback (if needed): Reassign production back to previous version
A/B testing workflow:
  1. Create version 2 with variant A content, assign variant-a label
  2. Create version 3 with variant B content, assign variant-b label
  3. Application randomly chooses which label to fetch for each user (see the sketch after this list)
  4. After collecting metrics, promote winning variant to production
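A minimal sketch of step 3, assuming hash-based assignment so each user consistently sees the same variant:

import hashlib

def get_ab_prompt(user_id: str):
    # Stable 50/50 split: the same user always lands in the same bucket
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    label = "variant-a" if bucket == 0 else "variant-b"
    return abv.get_prompt("movie-critic", label=label)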
Tenant-specific workflow:
  1. Create version for tenant Acme’s requirements, assign tenant-acme label
  2. Create version for tenant Contoso’s requirements, assign tenant-contoso label
  3. Application fetches prompt based on current tenant context
  4. Each tenant gets customized prompts without separate codebases
Learn more about version control workflows →

Common Use Cases

Model configuration management

Scenario: Your prompt requires specific model parameters (temperature, max tokens). You want to version these parameters alongside prompt content.

Solution: Store parameters in the config field:
{
  "name": "creative-writer",
  "prompt": "Write a creative story about {{topic}}",
  "config": {
    "model": "gpt-4o",
    "temperature": 0.9,
    "max_tokens": 2000,
    "top_p": 0.95
  }
}
Usage:
prompt = abv.get_prompt("creative-writer")

response = openai_client.chat.completions.create(
    model=prompt.config["model"],
    temperature=prompt.config["temperature"],
    max_tokens=prompt.config["max_tokens"],
    messages=[{"role": "user", "content": prompt.compile(topic="dragons")}]
)
Benefits: Change model parameters without code deployment, A/B test parameter variations, version parameters with prompt content.
Environment-specific prompts

Scenario: You need different prompt versions in development, staging, and production environments.

Solution: Use environment-specific labels:
# In development environment
prompt = abv.get_prompt("customer-support", label="development")

# In staging environment
prompt = abv.get_prompt("customer-support", label="staging")

# In production environment (default)
prompt = abv.get_prompt("customer-support")  # Uses "production" label
Workflow:
  1. Create new version, assign development label
  2. Test locally
  3. Promote to staging label for QA testing
  4. After approval, promote to production label
Benefits: Separate testing from production, gradual rollout, safe experimentation.
Multi-tenant customization

Scenario: Different customers need customized prompts with their specific domain knowledge, tone, or requirements.

Solution: Create versions with tenant-specific labels:
# Create tenant-specific versions
abv.create_prompt(
    name="support-greeting",
    prompt="Welcome to {{company}}! I'm here to help with {{product}}.",
    labels=["tenant-acme"],
    config={"company": "ACME Corp", "product": "our enterprise software"}
)

abv.create_prompt(
    name="support-greeting",
    prompt="Hi! Thanks for using {{product}}. How can I assist you?",
    labels=["tenant-contoso"],
    config={"company": "Contoso", "product": "Contoso SaaS"}
)

# Fetch based on tenant context
tenant_id = get_current_tenant()
prompt = abv.get_prompt("support-greeting", label=f"tenant-{tenant_id}")
Benefits: Customized experience per customer, no separate codebases, centralized prompt management.
Organizing prompts with tags

Scenario: You have dozens of prompts across multiple use cases and teams. Finding relevant prompts is difficult.

Solution: Use tags for organization:
# Tag by use case (prompt bodies below are illustrative)
abv.create_prompt(
    name="summarize-article",
    prompt="Summarize this article: {{article}}",
    tags=["summarization", "content"]
)

# Tag by team and domain
abv.create_prompt(
    name="patient-intake",
    prompt="Collect intake details for {{patient_name}}.",
    tags=["healthcare", "team-medical", "production-ready"]
)

# Tag by status
abv.create_prompt(
    name="experimental-rag",
    prompt="Answer using the retrieved context: {{context}}",
    tags=["retrieval", "experimental", "team-research"]
)
Filtering in UI: Use tags to filter prompts in the ABV dashboard, making it easier to find prompts by team, use case, or status.

Benefits: Easy discovery, team collaboration, status tracking, use case categorization.

Next Steps