Why use LLM Evaluation?
LLM evaluation is crucial for improving the accuracy and robustness of language models, ultimately enhancing the user experience and trust in your AI application. Here are the key benefits:

- Quality Assurance: Detect hallucinations, factual inaccuracies, and inconsistent outputs to ensure your AI app delivers reliable results
- Performance Monitoring: Measure response quality, relevance, and user satisfaction across different scenarios and edge cases
- Continuous Improvement: Identify areas for enhancement and track improvements over time through structured evaluation metrics
- User Trust: Build confidence in your AI application by demonstrating consistent, high-quality outputs through systematic evaluation
- Risk Mitigation: Catch potential issues before they reach production users, reducing the likelihood of poor user experiences or reputational damage
Online & Offline Evaluation
Offline Evaluation involves:

- Evaluating the application in a controlled setting
- Typically using curated test Datasets instead of live user queries
- Heavily used during development (often as part of CI/CD pipelines) to measure improvements and catch regressions
- Repeatable, and it yields clear accuracy metrics since you have ground truth, as in the sketch below
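To make this concrete, here is a minimal sketch of an offline evaluation loop; `answer_question` is a hypothetical stand-in for your application, and exact match is only the simplest possible metric:

```python
# Minimal offline evaluation loop: run a curated test dataset through the
# app and compute an accuracy metric against known expected outputs.
# `answer_question` is a hypothetical stand-in for your LLM application.

def answer_question(question: str) -> str:
    # Your LLM application goes here; stubbed for illustration.
    return "Paris" if "France" in question else "unknown"

# Curated test dataset with ground-truth expected outputs.
test_dataset = [
    {"input": "What is the capital of France?", "expected": "Paris"},
    {"input": "What is the capital of Japan?", "expected": "Tokyo"},
]

def run_offline_eval(dataset) -> float:
    correct = 0
    for item in dataset:
        output = answer_question(item["input"])
        # Exact match is the simplest metric; swap in fuzzy matching or an
        # LLM-as-a-Judge evaluator for open-ended outputs.
        if output.strip().lower() == item["expected"].strip().lower():
            correct += 1
    return correct / len(dataset)

accuracy = run_offline_eval(test_dataset)
print(f"Accuracy: {accuracy:.0%}")
assert accuracy >= 0.5, "Regression: accuracy below threshold"  # fail CI if below threshold
```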
Online Evaluation involves:

- Evaluating the application in a live, real-world environment, i.e. during actual usage in production
- Use Evaluation Methods that track success rates, user satisfaction scores, or other metrics on live traffic
- The advantage of online evaluation is that it captures issues you might not anticipate in a lab setting
- Can include collecting implicit and explicit user feedback, and possibly running shadow tests or A/B tests (see the sketch below)
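As a minimal sketch of the feedback side (the `record_feedback` helper and in-memory log are illustrative assumptions; in practice you would attach each feedback event as a Score on the corresponding trace via the SDK):

```python
# Sketch: turn explicit user feedback (thumbs up/down) into a live
# satisfaction metric. In a real setup you would attach each feedback
# event as a Score on the corresponding trace via the ABV SDK; the
# in-memory list here is purely illustrative.

feedback_log: list[dict] = []

def record_feedback(trace_id: str, thumbs_up: bool) -> None:
    # One feedback event per traced LLM response.
    feedback_log.append({"trace_id": trace_id, "value": 1 if thumbs_up else 0})

def satisfaction_rate() -> float:
    if not feedback_log:
        return 0.0
    return sum(f["value"] for f in feedback_log) / len(feedback_log)

record_feedback("trace-123", thumbs_up=True)
record_feedback("trace-456", thumbs_up=False)
print(f"User satisfaction: {satisfaction_rate():.0%}")  # 50%
```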
Core Concepts
| Concept | Description |
|---|---|
| Scores | Scores are flexible data objects that can store any evaluation metric and link it to other objects in ABV. |
| Evaluation Methods | Evaluation methods are functions or tools to assign scores to other objects. |
| Datasets | Datasets are a collection of inputs and, optionally, expected outputs that can be used during Dataset runs. |
| Dataset Runs | Dataset runs are used to run a dataset through your LLM application and optionally apply evaluation methods to the results. |
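To make these relationships concrete, here is an illustrative sketch of a Score as a data object; the field names are assumptions for illustration, so see the Scores Data Model for the actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative shape of a Score: a named metric value linked to another
# object (e.g. a trace or a dataset run item). Field names are assumptions;
# see the Scores Data Model for the actual schema.
@dataclass
class Score:
    name: str                        # e.g. "correctness", "toxicity", "user-feedback"
    value: float                     # numeric here; categorical/boolean also possible
    trace_id: Optional[str] = None   # the object this score is attached to
    comment: Optional[str] = None    # optional reasoning, e.g. from a judge model

score = Score(name="correctness", value=1.0, trace_id="trace-123",
              comment="Answer matches the expected output")
```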
Evaluation Methods
Evaluation methods are functions or tools that assign Scores to other objects. ABV uses Scores to store evaluation metrics; they are flexible enough to represent any evaluation metric.
ABV currently supports automatic scoring through LLM-as-a-Judge (see the sketch below), manual Human Annotation, and fully Custom Scoring via the API/SDKs. We keep adding more evaluation methods, so stay tuned!
- LLM-as-a-Judge
- Human Annotation
- Custom Scores
Learn more about the Scores Data Model.
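To illustrate the LLM-as-a-Judge approach, here is a minimal sketch; `call_llm` is a hypothetical stand-in for whichever model client you use, and the prompt wording is only an example:

```python
# Sketch of LLM-as-a-Judge: a second model grades the application's output.
# `call_llm` is a hypothetical stand-in for your model client of choice.

JUDGE_PROMPT = """You are an impartial judge. Rate the helpfulness of the
answer to the question on a scale from 0 (useless) to 1 (perfect).
Question: {question}
Answer: {answer}
Respond with only the number."""

def call_llm(prompt: str) -> str:
    # Replace with a real model call (OpenAI, Anthropic, local model, ...).
    return "0.9"  # stubbed for illustration

def judge(question: str, answer: str) -> float:
    raw = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return float(raw.strip())

score = judge("What is the capital of France?", "Paris is the capital of France.")
print(score)  # 0.9 -> store as a Score on the trace
```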
Dataset Runs
Dataset Runs are used to run Datasets through your LLM application and optionally apply Evaluation Methods to the results. This lets you evaluate your application systematically and compare the performance of different inputs, prompts, models, or other parameters side-by-side under controlled conditions (see the sketch at the end of this section).

In ABV we differentiate between Native and Remote Dataset Runs. Native Dataset Runs rely on the Dataset, Prompts, and optionally LLM-as-a-Judge Evaluators all being on the ABV platform. Remote Dataset Runs rely only on the Dataset being on the ABV platform; prompts and evaluation methods can be managed off-platform and are run via code. Both require managing the Datasets on the ABV platform.

- Create a Dataset
- Remote Dataset Runs
- Native Dataset Runs

Learn more about the Dataset Runs Data Model.
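Here is a minimal sketch of what a Remote Dataset Run can look like in code; `fetch_dataset`, `my_app`, and `exact_match` are hypothetical names, since in a Remote run the application and evaluation logic live in your own code:

```python
# Sketch of a Remote Dataset Run: loop every dataset item through the
# application and score the result. `fetch_dataset`, `my_app`, and
# `exact_match` are hypothetical names for illustration.

def fetch_dataset(name: str) -> list[dict]:
    # In a Remote run this would pull the dataset from ABV via the SDK.
    return [{"input": "2 + 2?", "expected": "4"}]

def my_app(prompt_version: str, question: str) -> str:
    # Your application; prompt and model are managed in your own code.
    return "4"  # stubbed for illustration

def exact_match(output: str, expected: str) -> float:
    return 1.0 if output.strip() == expected.strip() else 0.0

def run_dataset(dataset_name: str, prompt_version: str) -> float:
    items = fetch_dataset(dataset_name)
    scores = [exact_match(my_app(prompt_version, it["input"]), it["expected"])
              for it in items]
    return sum(scores) / len(scores)

# Compare two prompt versions side-by-side against the same dataset.
for version in ("v1", "v2"):
    print(version, run_dataset("math-questions", version))
```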
Integration with Other Features

Evaluations work seamlessly with other ABV features to provide comprehensive testing and monitoring:

- Prompt Management: Test different prompt versions using Prompt Experiments to find the best-performing prompts
- Observability: Evaluation scores appear directly on traces for real-time quality monitoring
- SDK Support: Create and manage evaluations programmatically with the Python SDK and JS/TS SDK
- Metrics Dashboard: Aggregate evaluation scores in the Metrics section to track quality trends over time
- Data Export: Export evaluation results via the Public API for further analysis (see the sketch below)
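For example, pulling scores for offline analysis might look like the following sketch; the endpoint path, parameters, and auth scheme are assumptions for illustration, so check the Public API reference for the actual contract:

```python
import requests

# Sketch: pull evaluation scores via the Public API for offline analysis.
# Endpoint path, parameters, and auth scheme are assumptions for
# illustration; consult the Public API reference for the actual contract.
ABV_HOST = "https://abv.example.com"  # your ABV instance
API_KEY = "sk-..."                    # your API credentials

resp = requests.get(
    f"{ABV_HOST}/api/public/scores",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"name": "correctness", "limit": 100},
    timeout=30,
)
resp.raise_for_status()
for score in resp.json().get("data", []):
    print(score)
```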
Getting Started
- Set up tracing: Start by instrumenting your application with the Python SDK or JS/TS SDK
- Create datasets: Build test datasets with representative inputs and expected outputs
- Choose evaluation methods: Select from LLM-as-a-Judge, Human Annotation, or Custom Scores
- Run evaluations: Execute dataset runs to evaluate your application systematically
- Monitor and iterate: Track scores over time and improve based on insights