Evaluation is a critical aspect of developing and deploying LLM applications. Teams typically combine several evaluation methods to score the performance of their AI application, depending on the use case and the stage of the development process.

Why use LLM Evaluation?

LLM evaluation is crucial for improving the accuracy and robustness of language models, ultimately enhancing the user experience and trust in your AI application. Here are the key benefits:
  • Quality Assurance: Detect hallucinations, factual inaccuracies, and inconsistent outputs to ensure your AI app delivers reliable results
  • Performance Monitoring: Measure response quality, relevance, and user satisfaction across different scenarios and edge cases
  • Continuous Improvement: Identify areas for enhancement and track improvements over time through structured evaluation metrics
  • User Trust: Build confidence in your AI application by demonstrating consistent, high-quality outputs through systematic evaluation
  • Risk Mitigation: Catch potential issues before they reach production users, reducing the likelihood of poor user experiences or reputational damage

Online & Offline Evaluation

Offline Evaluation involves:
  • Evaluating the application in a controlled setting
  • Typically using curated test Datasets instead of live user queries
  • Heavy use during development (it can be part of CI/CD pipelines) to measure improvements and catch regressions
  • Repeatable runs with clear accuracy metrics, since ground truth is available
Online Evaluation involves:
  • Evaluating the application in a live, real-world environment, i.e. during actual usage in production
  • Using evaluation methods that track success rates, user satisfaction scores, or other metrics on live traffic
  • Capturing issues and behaviors you might not anticipate in a controlled lab setting
  • Collecting implicit and explicit user feedback, and possibly running shadow tests or A/B tests
In practice, successful evaluation blends both: many teams adopt a loop-like approach so that evaluation is continuous and ever-improving. Offline testing validates changes before deployment, production monitoring surfaces real-world issues, and insights from production feed new test cases back into offline evaluation.
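To make the offline half of this loop concrete, here is a minimal sketch of a CI-style offline evaluation: it runs a hypothetical `run_app` function over a small curated dataset with ground truth and computes an exact-match accuracy. The test cases, scorer, and threshold are illustrative and not part of ABV.

```python
# Minimal offline-evaluation sketch: run the application over a curated
# dataset with ground truth and compute a simple accuracy metric.
# `run_app` is a hypothetical stand-in for your LLM application.
from typing import Callable

test_cases = [
    {"input": "What is the capital of France?", "expected": "Paris"},
    {"input": "What is 2 + 2?", "expected": "4"},
]

def exact_match(output: str, expected: str) -> float:
    """Score 1.0 if the expected answer appears in the output, else 0.0."""
    return 1.0 if expected.lower() in output.lower() else 0.0

def evaluate(run_app: Callable[[str], str]) -> float:
    """Run every test case through the app and return the mean score."""
    scores = [exact_match(run_app(case["input"]), case["expected"]) for case in test_cases]
    return sum(scores) / len(scores)

# In a CI pipeline, fail the build if accuracy regresses below a threshold:
# assert evaluate(run_app) >= 0.9
```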

Core Concepts

  • Scores: Flexible data objects that can store any evaluation metric and link it to other objects in ABV.
  • Evaluation Methods: Functions or tools that assign scores to other objects.
  • Datasets: Collections of inputs and, optionally, expected outputs that can be used during Dataset Runs.
  • Dataset Runs: Runs of a Dataset through your LLM application, optionally applying evaluation methods to the results.

Evaluation Methods

Evaluation methods are functions or tools that assign evaluation Scores to other objects. ABV stores evaluation metrics as Scores, which are designed to be flexible enough to represent any metric. ABV currently supports automatic scoring through LLM-as-a-Judge, manual Human Annotation, and fully Custom Scores via the API/SDKs, with more evaluation methods being added regularly. Learn more about the Scores Data Model.
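As a rough illustration of how an LLM-as-a-Judge method produces a Score, the sketch below sends a grading prompt to an LLM and parses a numeric result. `call_llm` is a hypothetical helper standing in for any LLM client, and the grading prompt and parsing are illustrative rather than ABV's built-in evaluator.

```python
# Minimal LLM-as-a-Judge sketch. `call_llm` is a hypothetical function that
# sends a prompt to an LLM and returns its text response.
from typing import Callable

JUDGE_PROMPT = """Rate the following answer for factual correctness on a scale from 0 to 1.
Question: {question}
Answer: {answer}
Respond with only the numeric score."""

def judge_correctness(call_llm: Callable[[str], str], question: str, answer: str) -> float:
    """Ask the judge model for a 0-1 correctness score and clamp the result."""
    response = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    try:
        return max(0.0, min(1.0, float(response.strip())))
    except ValueError:
        # The judge did not return a parseable number; treat as the lowest score.
        return 0.0

# The resulting value can then be attached to the corresponding trace or
# dataset run item as a Score via the ABV API/SDKs.
```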

Dataset Runs

Dataset Runs are used to run your LLM application over Datasets and optionally apply Evaluation Methods to the results. This lets you systematically evaluate your application and compare the performance of different inputs, prompts, models, or other parameters side-by-side under controlled conditions. ABV differentiates between Native and Remote Dataset Runs. Native Dataset Runs rely on the Dataset, Prompts, and optionally LLM-as-a-Judge evaluators all living on the ABV platform. Remote Dataset Runs only require the Dataset to be on the ABV platform; prompts and evaluation methods can be managed off platform and are run via code. Both approaches require managing the Datasets on the ABV platform. Learn more about the Dataset Runs Data Model.
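The following sketch illustrates the shape of a Remote Dataset Run, where only the Dataset lives in ABV and the application plus evaluation method run in your own code. The client object and method names (`abv_client`, `get_dataset`, `create_score`) are hypothetical placeholders; refer to the Python SDK and API reference for the actual interfaces.

```python
# Hypothetical sketch of a Remote Dataset Run: the dataset is managed on the
# ABV platform, while the application and evaluation logic run locally.
# All client method names below are illustrative placeholders.

def run_remote_dataset_run(abv_client, run_app, score_fn, dataset_name: str, run_name: str):
    dataset = abv_client.get_dataset(dataset_name)        # fetch items managed in ABV
    for item in dataset.items:
        output = run_app(item.input)                      # your application, run in your code
        score = score_fn(output, item.expected_output)    # your evaluation method, run in your code
        # Link the result and its score back to the dataset run in ABV:
        abv_client.create_score(
            dataset_item_id=item.id,
            run_name=run_name,
            name="correctness",
            value=score,
        )
```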

Integration with Other Features

Evaluations work seamlessly with other ABV features to provide comprehensive testing and monitoring:
  • Prompt Management: Test different prompt versions using Prompt Experiments to find the best performing prompts
  • Observability: Evaluation scores appear directly on traces for real-time quality monitoring
  • SDK Support: Create and manage evaluations programmatically with the Python SDK and JS/TS SDK
  • Metrics Dashboard: Aggregate evaluation scores in the Metrics section to track quality trends over time
  • Data Export: Export evaluation results via the Public API for further analysis

Getting Started

  1. Set up tracing: Start by instrumenting your application with Python SDK or JS/TS SDK
  2. Create datasets: Build test datasets with representative inputs and expected outputs
  3. Choose evaluation methods: Select from LLM-as-a-Judge, Human Annotation, or Custom Scores
  4. Run evaluations: Execute dataset runs to evaluate your application systematically
  5. Monitor and iterate: Track scores over time and improve based on insights