Core features
Side-by-Side Comparison View
Compare multiple prompt variants side by side. Execute them all at once or focus on a single variant. Each variant keeps its own LLM settings, variables, tool definitions, and placeholders, so you can immediately see the impact of every change.

Open your prompt in the playground
You can open a prompt you created with ABV Prompt Management in the playground.

Save your prompt to Prompt Management
When you’re satisfied with your prompt, you can save it to Prompt Management by clicking the save button.

Open a generation in the playground
You can open a generation from ABV Observability in the playground by clicking the Open in Playground button on the generation details page.
Tool calling and structured outputs
The ABV Playground supports tool calling and structured output schemas, enabling you to define, test, and validate LLM executions that rely on tool calls and enforce specific response formats.

Tool Calling
- Define custom tools with JSON schema definitions
- Test prompts that rely on tools in real time by mocking tool responses
- Save tool definitions to your project

Structured Outputs
- Enforce response formats using JSON schemas
- Save schemas to your project
- Jump into the playground from an OpenAI generation that uses structured output
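To make this concrete, here is a minimal sketch of what a tool definition and a structured-output schema can look like in the widely used OpenAI function-calling format. The tool name, parameters, and schema fields below are hypothetical examples for illustration, not identifiers fixed by ABV:

```python
# Hypothetical tool definition in OpenAI function-calling style.
# The name, description, and parameters are illustrative only.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Hypothetical JSON schema for a structured output: the model is
# required to return an object with a summary and a confidence score.
response_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["summary", "confidence"],
}
```

When testing a prompt that calls such a tool, you would mock the tool's response (for example, a fixed weather payload) so the conversation can continue without executing real code.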
Add prompt variables
You can add prompt variables in the playground to simulate different inputs to your prompt.
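As a sketch of what variable substitution does, assuming a `{{variable}}` placeholder syntax (the exact syntax in the playground may differ), a prompt template is filled like this:

```python
import re

def fill_variables(prompt: str, variables: dict) -> str:
    """Replace {{name}} placeholders with the given variable values.

    Placeholders with no matching variable are left untouched.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        prompt,
    )

template = "Summarize the following {{doc_type}} for a {{audience}} reader."
print(fill_variables(template, {"doc_type": "contract", "audience": "technical"}))
# → Summarize the following contract for a technical reader.
```

Trying several variable sets against the same template is exactly what the playground automates for you.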
Use your favorite model
To use your preferred model, add that provider's API key in the ABV project settings. You can learn how to set up an LLM connection here.
Many LLM providers accept additional parameters when invoking a model. You can pass these in the playground by toggling “Additional Options” in the model selection dropdown. Read this documentation about additional provider options for more information.
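For illustration, the kinds of parameters typically passed under “Additional Options” look like the following. These are common OpenAI-style sampling parameters; the available names and valid ranges vary by provider and are not defined by ABV:

```python
# Common provider parameters in OpenAI-style request payloads.
# Names and ranges are illustrative and provider-specific.
additional_options = {
    "temperature": 0.2,       # lower values = more deterministic sampling
    "top_p": 0.9,             # nucleus sampling probability cutoff
    "max_tokens": 512,        # cap on the number of generated tokens
    "frequency_penalty": 0.5, # discourage repeating the same tokens
}
```

Consult your provider's API reference for the exact set of supported parameters before relying on them.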