This quickstart walks you through creating your first prompt, fetching it at runtime, and linking it to observability traces. You’ll see how prompt management decouples content from code, enabling rapid iteration without redeployment.

Quick Start Path

Follow these steps to create and use your first managed prompt:

Create ABV account and API key

  1. Create an ABV account (free trial available)
  2. Navigate to your project settings
  3. Create new API credentials
  4. Save your API key securely (starts with sk-abv-...)
Region selection: Choose between US region (https://app.abv.dev) and EU region (https://eu.app.abv.dev) based on your data residency requirements.

Create your first prompt

Choose your preferred method: UI (no code required), Python SDK, TypeScript/JavaScript SDK, or Public API. All methods create the same prompt object with name, version, labels, and optional configuration (model parameters, tags, metadata). See detailed examples in the “Creating Prompts” section below.
Versioning is automatic: if you create a prompt with an existing name, ABV creates a new version instead of overwriting the existing prompt.

Fetch prompt at runtime

Your application fetches prompts at runtime using the SDK. By default, you get the production label (the version you’ve designated for production use).
Client-side caching avoids network latency on most requests: the SDK serves prompts from a local cache while background refreshes keep the cache synchronized with ABV.
Variable substitution: prompts can contain {{variables}} that you fill in when compiling the prompt for each request.
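Cache freshness can typically be tuned per call. A minimal sketch, assuming the SDK exposes a TTL parameter for the local cache (the cache_ttl_seconds name is an illustrative assumption, not confirmed API):

from abvdev import get_client

abv = get_client()  # reads ABV_API_KEY / ABV_HOST from the environment

# Served from the local cache when warm; refreshed in the background.
# cache_ttl_seconds is a hypothetical knob for cache freshness.
prompt = abv.get_prompt("movie-critic", cache_ttl_seconds=300)

# Fill in {{variables}} when compiling for each request
compiled = prompt.compile(criticlevel="expert", movie="Dune 2")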

Link prompts to traces (optional but recommended)

Linking prompts to observability traces enables tracking metrics by prompt version: see exactly which prompt generated each response, compare quality across versions, and measure the impact of prompt changes.
Add one line of code to associate your prompt with the LLM generation span. ABV automatically tracks latency, token usage, costs, and quality scores by prompt version.

Iterate and deploy new versions

When you create a new version of your prompt, assign it labels like staging or production. Your application fetches prompts by label, so updating a label’s target version instantly changes which prompt your application uses, with no code deployment required.
Roll back by reassigning the production label to a previous version. Compare versions in the ABV UI to see exactly what changed.
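As a sketch of what a programmatic rollback could look like (the update_prompt method and its parameters are illustrative assumptions, not confirmed ABV API; the dashboard and API reference document the supported paths):

from abvdev import get_client

abv = get_client()

# Hypothetical sketch: point the production label back at version 2.
# Method name and parameters are assumptions for illustration.
abv.update_prompt(
    name="movie-critic",
    version=2,
    new_labels=["production"],
)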

Creating Prompts

ABV supports four methods for creating prompts, all producing identical results:
UI

The ABV dashboard provides a visual interface for creating and editing prompts. Non-technical team members can iterate on prompts without engineering involvement.
  1. Navigate to the Prompts section in the ABV dashboard
  2. Click Create Prompt
  3. Choose prompt type: Text (single string template) or Chat (structured conversation with roles)
  4. Enter your prompt content with {{variables}} for dynamic substitution
  5. Optionally configure model parameters (temperature, max tokens, etc.)
  6. Assign labels like production to make the prompt immediately available
  7. Save to create version 1
When to use: Product managers iterating prompt content, quick experiments, non-technical stakeholders managing prompts.
Python SDK

Create prompts programmatically with the Python SDK for version-controlled prompt workflows.
Install dependencies:
pip install abvdev python-dotenv
Set environment variables (create a .env file):
.env
ABV_API_KEY=sk-abv-...
ABV_HOST=https://app.abv.dev  # US region
# ABV_HOST=https://eu.app.abv.dev  # EU region
Create a text prompt:
from dotenv import load_dotenv
from abvdev import get_client

load_dotenv()
abv = get_client()

# Create a text prompt (single string template)
abv.create_prompt(
    name="movie-critic",
    type="text",
    prompt="As a {{criticlevel}} movie critic, do you like {{movie}}?",
    labels=["production"],  # directly promote to production
    config={
        "model": "gpt-4o",
        "temperature": 0.7,
        "supported_languages": ["en", "fr"],
    },
)
Create a chat prompt:
# Create a chat prompt (structured conversation with roles)
abv.create_prompt(
    name="movie-critic-chat",
    type="chat",
    prompt=[
        {"role": "system", "content": "You are an {{criticlevel}} movie critic"},
        {"role": "user", "content": "Do you like {{movie}}?"},
    ],
    labels=["production"],
    config={
        "model": "gpt-4o",
        "temperature": 0.7,
    },
)
Versioning: If a prompt with the same name exists, this creates a new version rather than overwriting it.
When to use: Infrastructure-as-code workflows, automated prompt deployment pipelines, version control integration.
TypeScript/JavaScript SDK

Create prompts programmatically with the TypeScript/JavaScript SDK.
Install dependencies:
npm install @abvdev/client dotenv
Set environment variables (create a .env file):
.env
ABV_API_KEY=sk-abv-...
ABV_BASE_URL=https://app.abv.dev  # US region
# ABV_BASE_URL=https://eu.app.abv.dev  # EU region
Create prompts:
import { ABVClient } from "@abvdev/client";
import dotenv from "dotenv";
dotenv.config();

const abv = new ABVClient();

async function main() {
    // Create a text prompt
    await abv.prompt.create({
        name: "movie-critic",
        type: "text",
        prompt: "As a {{criticlevel}} critic, do you like {{movie}}?",
        labels: ["production"],
        config: {
            model: "gpt-4o",
            temperature: 0.7,
            supported_languages: ["en", "fr"],
        },
    });

    // Create a chat prompt
    await abv.prompt.create({
        name: "movie-critic-chat",
        type: "chat",
        prompt: [
            { role: "system", content: "You are an {{criticlevel}} movie critic" },
            { role: "user", content: "Do you like {{movie}}?" },
        ],
        labels: ["production"],
        config: {
            model: "gpt-4o",
            temperature: 0.7,
        },
    });

    console.log("Prompts created successfully");
}

main();
Run your application:
npx tsx filename.ts
Alternative: Constructor parameters (instead of environment variables):
const abv = new ABVClient({
    apiKey: "sk-abv-...",
    baseUrl: "https://app.abv.dev",
});
Versioning: Creating a prompt with an existing name creates a new version.
When to use: Node.js applications, TypeScript infrastructure, JavaScript-based deployment pipelines.
Public API

Create prompts via the HTTP API for integration with any programming language or CI/CD system.
Endpoint: POST https://app.abv.dev/api/public/v2/prompts
Authentication: Include your API key in the Authorization header:
Authorization: Bearer sk-abv-...
Example request:
curl https://app.abv.dev/api/public/v2/prompts \
  --request POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer sk-abv-...' \
  --data '{
    "type": "chat",
    "name": "movie-critic-chat",
    "prompt": [
      {
        "role": "system",
        "content": "You are an expert movie critic"
      },
      {
        "role": "user",
        "content": "Do you like {{movie}}?"
      }
    ],
    "labels": ["production"],
    "config": {
      "model": "gpt-4o",
      "temperature": 0.7
    },
    "commitMessage": "Initial version of movie critic prompt"
  }'
When to use: Languages without ABV SDK support, CI/CD automation, webhook-triggered prompt updates.
View full API reference →

Fetching and Using Prompts

Once created, fetch prompts at runtime in your application:
Python SDK

Fetch and compile text prompts:
from abvdev import ABV

abv = ABV(
    api_key="sk-abv-...",
    host="https://app.abv.dev",
)

# Get current production version
prompt = abv.get_prompt("movie-critic")

# Insert variables into prompt template
compiled_prompt = prompt.compile(
    criticlevel="expert",
    movie="Dune 2"
)
# Result: "As an expert movie critic, do you like Dune 2?"

print(compiled_prompt)
Fetch and compile chat prompts:
# Get current production version of a chat prompt
chat_prompt = abv.get_prompt(
    "movie-critic-chat",
    type="chat"  # type arg infers the prompt type
)

# Insert variables into chat prompt template
compiled_chat_prompt = chat_prompt.compile(
    criticlevel="expert",
    movie="Dune 2"
)
# Result: [
#   {"role": "system", "content": "You are an expert movie critic"},
#   {"role": "user", "content": "Do you like Dune 2?"}
# ]

print(compiled_chat_prompt)
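Because the compiled chat messages use the standard role/content shape, they can typically be passed straight to a chat completion API. Continuing from the example above with the OpenAI client (the model choice is illustrative):

from openai import OpenAI

openai_client = OpenAI(api_key="sk-proj-...")

# compiled_chat_prompt already matches the messages format
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=compiled_chat_prompt,
)
print(response.choices[0].message.content)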
Optional parameters for version control:
# Get specific version
prompt = abv.get_prompt("movie-critic", version=1)

# Get specific label (e.g., staging, production)
prompt = abv.get_prompt("movie-critic", label="staging")

# Get latest version (automatically maintained by ABV)
prompt = abv.get_prompt("movie-critic", label="latest")
Access raw prompt and config:
chat_prompt = abv.get_prompt("movie-critic-chat", type="chat")

# Raw prompt template with {{variables}}
print(chat_prompt.prompt)

# Config object (model parameters, etc.)
print(chat_prompt.config)
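Keeping model parameters in config lets the call site read them instead of hardcoding values, so parameter changes ship with new prompt versions. A sketch continuing from the snippet above, assuming config behaves like a dict (the keys match the creation examples):

from openai import OpenAI

openai_client = OpenAI(api_key="sk-proj-...")

# Drive the LLM call from the stored config rather than hardcoded values
model = chat_prompt.config.get("model", "gpt-4o")
temperature = chat_prompt.config.get("temperature", 0.7)

response = openai_client.chat.completions.create(
    model=model,
    temperature=temperature,
    messages=chat_prompt.compile(criticlevel="expert", movie="Dune 2"),
)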
TypeScript/JavaScript SDK

Fetch and compile text prompts:
import { ABVClient } from "@abvdev/client";
import dotenv from "dotenv";
dotenv.config();

const abv = new ABVClient();

async function main() {
    // Get current production version
    const prompt = await abv.prompt.get("movie-critic");

    // Insert variables into prompt template
    const compiledPrompt = prompt.compile({
        criticlevel: "expert",
        movie: "Dune 2",
    });
    // Result: "As an expert movie critic, do you like Dune 2?"

    console.log(compiledPrompt);
}

main();
Fetch and compile chat prompts:
async function main() {
    // Get current production version of a chat prompt
    const chatPrompt = await abv.prompt.get("movie-critic-chat", {
        type: "chat",
    });

    // Insert variables into chat prompt template
    const compiledChatPrompt = chatPrompt.compile({
        criticlevel: "expert",
        movie: "Dune 2",
    });
    // Result: [
    //   {"role": "system", "content": "You are an expert movie critic"},
    //   {"role": "user", "content": "Do you like Dune 2?"}
    // ]

    console.log(compiledChatPrompt);
}

main();
Optional parameters for version control:
async function main() {
    // Get specific version
    const prompt1 = await abv.prompt.get("movie-critic", {
        version: 1
    });

    // Get specific label
    const prompt2 = await abv.prompt.get("movie-critic", {
        label: "staging",
    });

    // Get latest version (automatically maintained by ABV)
    const prompt3 = await abv.prompt.get("movie-critic", {
        label: "latest",
    });

    const compiledPrompt = prompt3.compile({
        criticlevel: "expert",
        movie: "Dune 2",
    });

    console.log(compiledPrompt);
}

main();
Access raw prompt and config:
async function main() {
    const prompt = await abv.prompt.get("movie-critic", {
        version: 1
    });

    // Raw prompt template with {{variables}}
    console.log(prompt.prompt);

    // Config object (model parameters, etc.)
    console.log(prompt.config);
}

main();

Linking Prompts to Observability Traces

Linking prompts to traces enables tracking metrics by prompt version. See which prompt generated each response, compare quality across versions, and measure the impact of prompt changes.
Python SDK

Using decorators:
from abvdev import ABV, observe

abv = ABV(
    api_key="sk-abv-...",
    host="https://app.abv.dev",
)

@observe(as_type="generation")
def nested_generation():
    prompt = abv.get_prompt("movie-critic")

    # Link prompt to current generation span
    abv.update_current_generation(
        prompt=prompt,
    )

@observe()
def main():
    nested_generation()

main()
Using context managers:
from abvdev import ABV
from openai import OpenAI

abv = ABV(
    api_key="sk-abv-...",
    host="https://app.abv.dev",
)

openai_client = OpenAI(api_key="sk-proj-...")

prompt = abv.get_prompt("movie-critic")
compiled_prompt = prompt.compile(
    criticlevel="expert",
    movie="The Lord of the Rings"
)

with abv.start_as_current_observation(
    as_type='generation',
    name="movie-generation",
    model="gpt-4o",
    prompt=prompt  # Link prompt to generation span
) as generation:
    # Make LLM call
    response = openai_client.chat.completions.create(
        messages=[{"role": "user", "content": compiled_prompt}],
        model="gpt-4o",
    )

    generation.update(output=response.choices[0].message.content)
If a fallback prompt is used (when ABV is unavailable), no link will be created to preserve application reliability.
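A fallback keeps prompt fetching from becoming a hard dependency. A minimal sketch, assuming get_prompt accepts a fallback argument (the parameter name is an assumption; check the SDK reference):

# Hypothetical sketch: serve a local default if ABV is unreachable.
# The fallback parameter name is an assumption, not confirmed API.
prompt = abv.get_prompt(
    "movie-critic",
    fallback="As an {{criticlevel}} movie critic, do you like {{movie}}?",
)
compiled_prompt = prompt.compile(criticlevel="expert", movie="Dune 2")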
TypeScript/JavaScript SDK

Install additional dependencies:
npm install @abvdev/tracing @abvdev/otel @opentelemetry/sdk-node
Set up instrumentation (create instrumentation.ts):
instrumentation.ts
import dotenv from "dotenv";
dotenv.config();

import { NodeSDK } from "@opentelemetry/sdk-node";
import { ABVSpanProcessor } from "@abvdev/otel";

const sdk = new NodeSDK({
  spanProcessors: [
    new ABVSpanProcessor({
      apiKey: process.env.ABV_API_KEY,
      baseUrl: process.env.ABV_BASE_URL,
      exportMode: "immediate",
      flushAt: 1,
      flushInterval: 1,
      additionalHeaders: {
        "Content-Type": "application/json",
        "Accept": "application/json"
      }
    })
  ],
});

sdk.start();
Import instrumentation first in your application:
index.ts
import "./instrumentation"; // Must be the first import
Manual observations:
import "./instrumentation"; // Must be the first import
import { ABVClient } from "@abvdev/client";
import { startObservation } from "@abvdev/tracing";

const abv = new ABVClient();

async function main() {
  const prompt = await abv.prompt.get("movie-critic");

  const generation = startObservation(
    "llm",
    {
      input: prompt.prompt,
      prompt, // link the prompt to this generation span
    },
    { asType: "generation" },
  );

  // Your LLM call here

  generation.end();
}

main();
Context manager approach:
import "./instrumentation";
import { ABVClient } from "@abvdev/client";
import { startActiveObservation } from "@abvdev/tracing";

const abv = new ABVClient();

startActiveObservation(
  "llm",
  async (generation) => {
    const prompt = await abv.prompt.get("movie-critic");
    generation.update({ input: prompt.prompt, prompt }); // link the prompt to the span
  },
  { asType: "generation" },
);
Observe wrapper:
import { ABVClient } from "@abvdev/client";
import { observe, updateActiveObservation } from "@abvdev/tracing";

const abv = new ABVClient();

const callLLM = async (input: string) => {
  const prompt = await abv.prompt.get("my-prompt");

  updateActiveObservation({ prompt }, { asType: "generation" });

  return await invokeLLM(input); // invokeLLM stands in for your application's LLM call
};

export const observedCallLLM = observe(callLLM);
If a fallback prompt is used, no link will be created.

Common Workflows

Scenario: You’ve improved your prompt and want to test it in a staging environment before deploying to production.
Steps:
  1. Create a new version of your prompt (via UI, SDK, or API)
  2. Assign the staging label to the new version
  3. In your staging environment, fetch prompts with label="staging" (see the sketch after these steps)
  4. Test thoroughly, review linked traces and metrics
  5. When satisfied, reassign the production label to the new version
  6. Production traffic immediately uses the new prompt—no code deployment required
Rollback: If issues arise, reassign production to the previous version instantly.
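A common implementation of steps 3 and 6 selects the label from the deployment environment, so the same code serves staging and production. A minimal sketch, with APP_ENV as an illustrative variable name:

import os

from abvdev import get_client

abv = get_client()

# APP_ENV is an illustrative name for your deployment-environment variable
label = "staging" if os.getenv("APP_ENV") == "staging" else "production"

prompt = abv.get_prompt("movie-critic", label=label)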
Scenario: Your product team wants to experiment with prompt phrasing to improve response quality, but every change currently requires engineering involvement.
Steps:
  1. Grant product managers access to the ABV dashboard
  2. Product team creates new prompt versions directly in the UI
  3. They assign staging label to test versions in a non-production environment
  4. After validation (via linked traces and quality metrics), they reassign production label
  5. Changes deploy instantly without engineering involvement
Engineering role: Set up initial prompt fetching code once, then product team manages content independently.
Scenario: Your prompts are currently hardcoded in your application. You want to migrate to ABV for version control and faster iteration.
Steps:
  1. Extract prompts from code and create them in ABV (via UI or SDK)
  2. Assign production label to the initial version
  3. Replace hardcoded prompts with abv.get_prompt() calls (sketched below)
  4. Deploy code change that fetches from ABV instead of using hardcoded strings
  5. Future prompt updates happen without code deployment
Gradual migration: Migrate prompts one at a time, testing each before moving to the next.
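The code change in step 3 is usually a one-for-one swap, as sketched here with the movie-critic example:

# Before: prompt hardcoded in the application
PROMPT_TEMPLATE = "As an {{criticlevel}} movie critic, do you like {{movie}}?"

# After: fetch the managed prompt and compile it per request
from abvdev import get_client

abv = get_client()
prompt = abv.get_prompt("movie-critic")
compiled = prompt.compile(criticlevel="expert", movie="Dune 2")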
Scenario: You have two prompt variants and want to determine which performs better in production.
Steps:
  1. Create both prompt versions in ABV
  2. Configure A/B testing to randomly assign users to version 1 or version 2 (see the sketch below)
  3. Link prompts to traces to track metrics by prompt version
  4. After sufficient data collection, compare quality scores, latency, and costs
  5. Promote the winning version to production label
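One simple assignment scheme hashes a stable user ID into a bucket, so each user consistently sees the same variant. A sketch assuming versions 1 and 2 are the candidates (the hashing is illustrative application code, not an ABV feature):

import hashlib

from abvdev import get_client

abv = get_client()

def get_ab_prompt(user_id: str):
    # Stable assignment: the same user always lands in the same bucket
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return abv.get_prompt("movie-critic", version=bucket + 1)

Track outcomes through linked traces so quality, latency, and cost can be compared per version.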
Learn more about A/B testing →

Related Topics

Integration with Other Features

Prompt Management works seamlessly with ABV’s broader platform:
  • Observability: Link prompts to traces to see which prompt version generated each response and track metrics over time
  • Evaluations: Use Prompt Experiments to compare prompt performance on test datasets with automated scoring
  • SDKs: Fetch prompts programmatically with Python SDK or JS/TS SDK for seamless integration
  • Metrics: Track prompt performance over time in the Metrics Dashboard, comparing versions on quality, cost, and latency
  • LLM Gateway: Route requests through the LLM Gateway with prompts managed centrally for consistent behavior across providers