The prompt config field stores arbitrary JSON alongside your prompt content, enabling you to version model parameters, tool definitions, and custom metadata together with the prompt itself. This decouples configuration from code, allowing non-technical team members to adjust parameters without engineering involvement.

How Prompt Config Works

Understanding config structure, versioning, and usage:

Define config when creating prompts

The config field accepts arbitrary JSON when creating or updating prompts. Config is freeform JSON; ABV doesn't enforce a schema, so store whatever your application needs.

Config is versioned with the prompt

When you create a new prompt version, the config is versioned alongside the prompt content. For example:
Version 1: Prompt + Config {"model": "gpt-4o", "temperature": 0.7}
Version 2: Same Prompt + Updated Config {"model": "gpt-4o-mini", "temperature": 0.8}
Benefits include comparing prompts and configs side-by-side across versions, rolling back to a previous config by reassigning labels, A/B testing parameter variations alongside prompt variations, and an audit trail showing who changed which parameters and when.
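To see what a promotion would change, you can fetch two labeled versions and diff their configs. A minimal sketch, assuming the prompt carries both a production and a staging label (the label argument to get_prompt appears in the workflow examples later on this page):
current = abv.get_prompt("movie-critic", label="production")
candidate = abv.get_prompt("movie-critic", label="staging")

# Report config keys whose values differ between the two versions
changed = {
    key: (current.config.get(key), candidate.config.get(key))
    for key in set(current.config) | set(candidate.config)
    if current.config.get(key) != candidate.config.get(key)
}
print(changed)  # e.g. {"model": ("gpt-4o", "gpt-4o-mini"), "temperature": (0.7, 0.8)}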

Fetch config with the prompt

When your application fetches a prompt, the config is included in the response. Config is cached with the prompt, so no additional network requests are needed.

Update config without code changes

Product teams can modify config directly in the ABV UI:
  1. Navigate to prompt in ABV dashboard
  2. Create new version or edit existing draft
  3. Update config JSON (change temperature, model, etc.)
  4. Assign to staging label for testing
  5. After validation, reassign production label to new version
  6. Application automatically uses new config on next cache refresh
No code deployment required - config changes deploy instantly via label reassignment.
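This works because the application resolves the prompt by label rather than by a pinned version number. A minimal sketch of the fetch side, using the label argument shown in the workflow examples below:
# Always fetch by label; reassigning "production" to a new version
# changes which config this returns, with no deploy.
prompt = abv.get_prompt("movie-critic", label="production")
temperature = prompt.config.get("temperature", 0.7)  # picks up new values on cache refresh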

Creating Prompts with Config

Python SDK Example:
from abvdev import ABV

abv = ABV(api_key="sk-abv-...", host="https://app.abv.dev")

abv.create_prompt(
    name="movie-critic",
    prompt="As a {{criticlevel}} critic, review {{movie}}.",
    config={
        "model": "gpt-4o",
        "temperature": 0.7,
        "max_tokens": 1000,
        "top_p": 0.9,
        "supported_languages": ["en", "fr", "es"]
    },
    labels=["production"]
)
JavaScript/TypeScript SDK Example:
await abv.prompt.create({
  name: "movie-critic",
  prompt: "As a {{criticlevel}} critic, review {{movie}}.",
  config: {
    model: "gpt-4o",
    temperature: 0.7,
    max_tokens: 1000,
    top_p: 0.9,
    supported_languages: ["en", "fr", "es"]
  },
  labels: ["production"]
});

Using Config at Runtime

Python Example:
prompt = abv.get_prompt("movie-critic")

# Access config fields
model = prompt.config.get("model", "gpt-4o")
temperature = prompt.config.get("temperature", 0.7)
max_tokens = prompt.config.get("max_tokens", 1000)

# Use config in LLM call
response = openai_client.chat.completions.create(
    model=model,
    temperature=temperature,
    max_tokens=max_tokens,
    messages=[{"role": "user", "content": prompt.compile(...)}]
)
JavaScript/TypeScript Example:
const prompt = await abv.prompt.get("movie-critic");

// Access config fields
const model = prompt.config?.model ?? "gpt-4o";
const temperature = prompt.config?.temperature ?? 0.7;  // ?? instead of ||, so a valid 0 isn't overridden
const maxTokens = prompt.config?.max_tokens ?? 1000;

// Use config in LLM call
const response = await openai.chat.completions.create({
  model: model,
  temperature: temperature,
  max_tokens: maxTokens,
  messages: [{ role: "user", content: prompt.compile(...) }]
});

Common Config Patterns

Store standard LLM model parameters in config for version-controlled parameter management. Common parameters:
{
  "model": "gpt-4o",
  "temperature": 0.7,
  "max_tokens": 1000,
  "top_p": 0.9,
  "frequency_penalty": 0.0,
  "presence_penalty": 0.0,
  "stop": ["\n\n", "###"]
}
Usage pattern:
prompt = abv.get_prompt("movie-critic")

# Spread config into LLM call
# Note: this only works if config contains exclusively valid API parameters;
# see the filtered variant below if config also holds custom metadata.
response = openai_client.chat.completions.create(
    **prompt.config,  # Unpacks all config fields
    messages=[{"role": "user", "content": prompt.compile(...)}]
)
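If config mixes API parameters with application metadata (such as supported_languages from the earlier example), spreading it directly would raise an error on unknown arguments. A minimal sketch of a filtered spread; the allow-list is an assumption to tailor to your provider:
# Hypothetical allow-list of parameters your chat completions API accepts
ALLOWED_PARAMS = {"model", "temperature", "max_tokens", "top_p",
                  "frequency_penalty", "presence_penalty", "stop"}

llm_params = {k: v for k, v in prompt.config.items() if k in ALLOWED_PARAMS}

response = openai_client.chat.completions.create(
    **llm_params,
    messages=[{"role": "user", "content": prompt.compile(...)}]
)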
Benefits: Experiment with temperature without code changes, switch models (GPT-4o → GPT-4o-mini) via UI, A/B test parameter combinations, roll back to previous parameter sets instantly.
Store function-calling tool definitions in config for versioned tool management. Config with tools:
{
  "model": "gpt-4o",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City name"
            },
            "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location"]
        }
      }
    },
    {
      "type": "function",
      "function": {
        "name": "get_stock_price",
        "description": "Get current stock price",
        "parameters": {
          "type": "object",
          "properties": {
            "symbol": {
              "type": "string",
              "description": "Stock ticker symbol"
            }
          },
          "required": ["symbol"]
        }
      }
    }
  ]
}
Usage:
prompt = abv.get_prompt("assistant-with-tools")

response = openai_client.chat.completions.create(
    model=prompt.config["model"],
    tools=prompt.config["tools"],
    messages=[{"role": "user", "content": prompt.compile(...)}]
)
Benefits: Add new tools without code deployment, modify tool schemas, version tool definitions with prompts, A/B test different tool configurations.
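Because a new config version can introduce tools before the corresponding code ships, it is safer to dispatch tool calls by name and skip unknown ones. A minimal sketch; fetch_weather and fetch_stock_price are hypothetical handlers:
import json

# Hypothetical handlers, one per tool name defined in config
TOOL_HANDLERS = {
    "get_weather": lambda args: fetch_weather(args["location"], args.get("unit", "celsius")),
    "get_stock_price": lambda args: fetch_stock_price(args["symbol"]),
}

for tool_call in response.choices[0].message.tool_calls or []:
    handler = TOOL_HANDLERS.get(tool_call.function.name)
    if handler is None:
        continue  # tool added in config before a handler shipped; skip rather than crash
    result = handler(json.loads(tool_call.function.arguments))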
Define structured output schemas in config for consistent response parsing. Config with JSON schema:
{
  "model": "gpt-4o",
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "movie_review",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "rating": {
            "type": "number",
            "minimum": 0,
            "maximum": 5
          },
          "summary": {
            "type": "string"
          },
          "pros": {
            "type": "array",
            "items": {"type": "string"}
          },
          "cons": {
            "type": "array",
            "items": {"type": "string"}
          }
        },
        "required": ["rating", "summary"],
        "additionalProperties": false
      }
    }
  }
}
Usage:
import json

prompt = abv.get_prompt("movie-critic")

response = openai_client.chat.completions.create(
    model=prompt.config["model"],
    response_format=prompt.config["response_format"],
    messages=[{"role": "user", "content": prompt.compile(...)}]
)

# Parse structured JSON response
review = json.loads(response.choices[0].message.content)
rating = review["rating"]
summary = review["summary"]
Benefits: Enforce structured outputs without code changes, version schema evolution alongside prompts, switch between freeform and structured responses via config.
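Because the schema travels with the prompt, it can also be reused to validate parsed responses client-side. A minimal sketch using the third-party jsonschema package (an assumption; any JSON Schema validator would do):
import json
import jsonschema  # third-party: pip install jsonschema

schema = prompt.config["response_format"]["json_schema"]["schema"]
review = json.loads(response.choices[0].message.content)

# Raises jsonschema.ValidationError if the model output drifts from the schema
jsonschema.validate(instance=review, schema=schema)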
Store application-specific metadata for business logic or feature flags. Custom config:
{
  "model": "gpt-4o",
  "temperature": 0.7,
  "supported_languages": ["en", "fr", "es"],
  "max_input_length": 5000,
  "requires_authentication": true,
  "cost_tier": "premium",
  "feature_flags": {
    "enable_streaming": true,
    "enable_citations": false
  }
}
Usage:
prompt = abv.get_prompt("multilingual-summarizer")

# Check supported languages
if user_language not in prompt.config["supported_languages"]:
    return "Language not supported"

# Enforce input length limit
if len(user_input) > prompt.config["max_input_length"]:
    return "Input too long"

# Check authentication requirement
if prompt.config["requires_authentication"] and not user.is_authenticated():
    return "Authentication required"

# Use feature flags
if prompt.config["feature_flags"]["enable_streaming"]:
    # Use streaming API
    pass
Benefits: Manage feature flags without code deployment, configure business rules per prompt version, store documentation (supported languages, limits).
Store provider-specific parameters for advanced model features. OpenAI-specific config:
{
  "model": "gpt-4o",
  "temperature": 0.7,
  "reasoning_effort": "high",
  "parallel_tool_calls": true,
  "service_tier": "default"
}
Anthropic-specific config:
{
  "model": "claude-sonnet-4-5",
  "temperature": 0.7,
  "max_tokens": 4096,
  "thinking": {
    "type": "enabled",
    "budget_tokens": 2000
  }
}
Google-specific config:
{
  "model": "gemini-2.0-flash-exp",
  "temperature": 0.7,
  "safety_settings": [
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    }
  ]
}
Benefits: Switch LLM providers via config without code changes, use provider-specific features.
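Switching providers through config implies the application routes on the configured model name. A minimal sketch of such a router; the prefix convention and the client instances (openai_client, anthropic_client) are assumptions:
prompt = abv.get_prompt("movie-critic")
model = prompt.config["model"]

if model.startswith("claude"):
    # Anthropic client; max_tokens is required by its Messages API
    response = anthropic_client.messages.create(
        model=model,
        max_tokens=prompt.config.get("max_tokens", 1024),
        messages=[{"role": "user", "content": prompt.compile(...)}],
    )
else:
    response = openai_client.chat.completions.create(
        model=model,
        temperature=prompt.config.get("temperature", 0.7),
        messages=[{"role": "user", "content": prompt.compile(...)}],
    )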

Workflows

Test whether increasing temperature improves response creativity. Setup:
  1. Create variant A with temperature: 0.7, assign variant-a label
  2. Create variant B with temperature: 0.9, assign variant-b label
  3. Application randomly selects variant, uses config from selected prompt
Implementation:
import random

# Fetch both variants
prompt_a = abv.get_prompt("movie-critic", label="variant-a")
prompt_b = abv.get_prompt("movie-critic", label="variant-b")

# Random selection
selected_prompt = random.choice([prompt_a, prompt_b])

# Use config from selected variant
response = openai_client.chat.completions.create(
    model=selected_prompt.config["model"],
    temperature=selected_prompt.config["temperature"],
    messages=[{"role": "user", "content": selected_prompt.compile(...)}]
)
Analysis: Compare quality scores, latency, and user feedback by prompt version.
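If each user should consistently see the same variant, replace the per-request random choice with deterministic bucketing. A minimal sketch that hashes a user ID (an assumption about your user model):
import hashlib

def select_variant(user_id: str):
    # Hash the user ID so a given user always lands in the same bucket
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    label = "variant-a" if bucket == 0 else "variant-b"
    return abv.get_prompt("movie-critic", label=label)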
Migrate from GPT-4o to GPT-4o-mini for cost savings. Create a new version with updated config, test in staging, compare metrics, and promote to production via label reassignment, as sketched below.
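A minimal sketch of the first step, reusing the create_prompt call from earlier and assuming that creating a prompt under an existing name produces a new version:
# New version: same prompt text, cheaper model, staged for testing
abv.create_prompt(
    name="movie-critic",
    prompt="As a {{criticlevel}} critic, review {{movie}}.",
    config={
        "model": "gpt-4o-mini",  # migrated from gpt-4o
        "temperature": 0.7,
        "max_tokens": 1000,
    },
    labels=["staging"],  # reassign "production" here after metrics check out
)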
Add new tools to an assistant without breaking existing deployments. Create a new version with additional tools, ensure function handlers support the new tools (see the dispatch sketch under Common Config Patterns), test in staging, then promote to production.
Use different parameters in development vs production. Development uses faster, cheaper models while production uses higher-quality models: a single codebase with environment-specific config managed in ABV, as sketched below.
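A minimal sketch, assuming development and production labels exist for the prompt and that the environment is exposed via an APP_ENV variable (both assumptions):
import os

# Resolve the label from the deployment environment
env_label = "production" if os.environ.get("APP_ENV") == "production" else "development"

# Same code path everywhere; ABV serves whichever config the label points at
prompt = abv.get_prompt("movie-critic", label=env_label)
response = openai_client.chat.completions.create(
    model=prompt.config["model"],
    temperature=prompt.config.get("temperature", 0.7),
    messages=[{"role": "user", "content": prompt.compile(...)}],
)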

Next Steps