Test and iterate on your prompts directly in the ABV Prompt Playground. Tweak the prompt and model parameters to see how different models respond to these changes. This lets you quickly refine your prompts and optimize them for the best results in your LLM app without switching between tools or writing any code.
Compare multiple prompt variants side by side. Run them all at once or focus on a single variant. Each variant keeps its own LLM settings, variables, tool definitions, and placeholders, so you can immediately see the impact of every change.
The ABV Playground supports tool calling and structured output schemas, enabling you to define, test, and validate LLM executions that rely on tool calls and enforce specific response formats.

Tool Calling
- Define custom tools with JSON schema definitions (see the sketch after this list)
- Test prompts that rely on tools in real time by mocking tool responses
- Save tool definitions to your project
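As a concrete illustration, a tool definition of the kind you would paste into the playground's tool editor typically follows the JSON schema format used by OpenAI-style chat completion APIs. The sketch below is a minimal, hypothetical example; the `get_weather` tool, its parameters, and the model name are illustrative assumptions, not part of ABV.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A minimal, hypothetical tool definition in JSON schema form --
# the same shape you would define in the playground's tool editor.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=[weather_tool],
)

# The model responds with a tool call instead of free text; in the playground
# you would mock the tool's return value to continue the conversation.
print(response.choices[0].message.tool_calls)
```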
Structured Output
- Enforce response formats using JSON schemas (see the sketch after this list)
- Save schemas to your project
- Jump into the playground from your OpenAI generation that uses structured output
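To show what a response format schema looks like in practice, here is a minimal sketch using the OpenAI structured output API. The `support_ticket` schema and its fields are hypothetical; in the playground you would define an equivalent schema and save it to your project.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A hypothetical JSON schema that constrains the shape of the model's response --
# analogous to a schema you would define and save in the playground.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "support_ticket",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "category": {"type": "string", "enum": ["billing", "bug", "feature_request"]},
                "summary": {"type": "string"},
                "urgency": {"type": "string", "enum": ["low", "medium", "high"]},
            },
            "required": ["category", "summary", "urgency"],
            "additionalProperties": False,
        },
    },
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "The invoice page crashes when I click export."}],
    response_format=response_format,
)

# The content is valid JSON that matches the schema above.
print(response.choices[0].message.content)
```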
You can use your favorite model by adding its API key in the ABV project settings. You can learn how to set up an LLM connection here. Optionally, many LLM providers accept additional parameters when invoking a model. You can pass these parameters in the playground by toggling “Additional Options” in the model selection dropdown. Read this documentation about additional provider options for more information.
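For example, the kind of values you might enter under “Additional Options” are provider-specific sampling parameters. The keys below are a hypothetical sketch for an OpenAI-compatible model; the parameters actually available depend on the provider you selected.

```python
# Hypothetical additional provider parameters for an OpenAI-compatible model.
# The available keys depend on the provider chosen in the model dropdown.
additional_options = {
    "temperature": 0.2,        # lower values make the output more deterministic
    "top_p": 0.9,              # nucleus sampling cutoff
    "frequency_penalty": 0.5,  # discourage repeated tokens
    "seed": 42,                # reproducible sampling where the provider supports it
}
```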