LLM-as-a-judge is a technique for evaluating the quality of LLM applications by using another LLM as the evaluator. The judge model is given a trace or a dataset entry and asked to score the output and explain its reasoning. Both the score and the reasoning are stored as scores in ABV.

Why use LLM-as-a-judge?

  • Scalable & cost‑effective: Judge thousands of outputs quickly and cheaply versus human panels.
  • Human‑like judgments: Captures nuance (helpfulness, safety, coherence) better than simple metrics, especially when rubric‑guided.
  • Repeatable comparisons: With a fixed rubric, you can rerun the same prompts to get consistent scores and short rationales.

Set up step-by-step

1) Create a new LLM-as-a-Judge evaluator

Navigate to the Evaluators page and click on the Create Evaluator button.

2) Set the default model

Next, you’ll define the default model used for conducting the evaluations. The default is used by every managed evaluator; custom templates may override it. This step requires an LLM Connection to be set up. Please see LLM Connections for more information.
  • Setup: This default model needs to be set up once, though it can be changed at any point if needed.
  • Change: If you change the default model, existing evaluators continue evaluating with the new model; historic results are preserved.
  • Structured Output Support: The chosen default model must support structured output; this is how our system reliably parses the score and reasoning returned by the LLM judge (see the sketch below).
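
To make the structured-output requirement concrete, here is a minimal, hypothetical sketch of a judge call that asks for a JSON verdict and validates it before it would be stored as a score. The model name, rubric, and schema are illustrative assumptions, not ABV's internal implementation.

```python
# Minimal sketch of a structured-output judge call (illustrative only, not ABV's
# internal implementation). Assumes the OpenAI Python SDK and Pydantic are installed.
import json

from openai import OpenAI
from pydantic import BaseModel, Field


class JudgeVerdict(BaseModel):
    """Shape we expect back from the judge: a bounded score plus its reasoning."""
    score: float = Field(ge=0, le=1)
    reasoning: str


client = OpenAI()


def judge(input_text: str, output_text: str) -> JudgeVerdict:
    # JSON mode forces the model to emit parseable output, which is what makes
    # the score and reasoning machine-readable downstream.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any model with reliable structured output works
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an evaluator. Rate how helpful the output is for the given "
                    "input. Respond in JSON with keys 'score' (0-1) and 'reasoning'."
                ),
            },
            {"role": "user", "content": f"Input: {input_text}\nOutput: {output_text}"},
        ],
    )
    return JudgeVerdict.model_validate(json.loads(response.choices[0].message.content))
```

A model without dependable structured output can return free-form text at the last step, which breaks parsing; that is why this requirement matters.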

3) Pick an Evaluator

Now, select an evaluator. There are two options:

Managed Evaluator

ABV ships a growing catalog of evaluators built and maintained by us and partners like Ragas. Each evaluator captures best-practice evaluation prompts for a specific quality dimension—e.g. Hallucination, Context-Relevance, Toxicity, Helpfulness.
  • Ready to use: no prompt writing required.
  • Continuously expanded: we regularly add OSS and partner-maintained evaluators, and plan to support more evaluator types (e.g. regex-based) in the future.

Custom Evaluator

When the library doesn’t fit your specific needs, add your own:
  1. Draft an evaluation prompt with {{variables}} placeholders (input, output, ground_truth …); see the example sketch after this list.
  2. Optional: Customize the score (0-1) and reasoning prompts to guide the LLM in scoring.
  3. Optional: Pin a dedicated model for this evaluator. If no custom model is specified, the default evaluation model is used (see Step 2).
  4. Save → the evaluator can now be reused across your project.
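
To make the {{variables}} mechanism concrete, here is a hypothetical custom evaluation prompt and a tiny render step showing how the placeholders are filled. The template wording and variable names are examples, not a required format; in ABV the substitution is driven by the variable mapping you configure in Step 5.

```python
# Hypothetical custom evaluation prompt with {{variable}} placeholders.
EVAL_PROMPT = """\
You are grading a customer-support answer.

Question: {{input}}
Answer: {{output}}
Reference answer: {{ground_truth}}

Rate the answer's correctness from 0 (wrong) to 1 (fully correct) and explain why.
"""


def render(template: str, variables: dict[str, str]) -> str:
    """Fill {{name}} placeholders; ABV performs an equivalent substitution at run time."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template


print(
    render(
        EVAL_PROMPT,
        {
            "input": "How do I reset my password?",
            "output": "Click 'Forgot password' on the login page.",
            "ground_truth": "Use the 'Forgot password' link on the login screen.",
        },
    )
)
```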

4) Choose which Data to Evaluate

With your evaluator and model selected, you now specify which data to run the evaluations on. You can choose between live production tracing data and Datasets during Dataset Runs.

Live Data

Evaluating live production traffic allows you to monitor the performance of your LLM application in real-time.
  • Scope: Choose whether to run on new traces only and/or existing traces once (for backfilling). When in doubt, we recommend running on new traces.
  • Filter: Narrow the evaluation down to the specific subset of data you’re interested in. You can filter by trace name, tags, userId, and many more attributes. Combine filters freely.
  • Preview: ABV shows a sample of traces from the last 24 hours that match your current filters, allowing you to sanity-check your selection.
  • Sampling: To manage costs and evaluation throughput, you can configure the evaluator to run on a percentage (e.g., 5%) of the matched traces.
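
For intuition, percentage sampling can be thought of as a deterministic coin flip per trace. The sketch below is only a conceptual illustration of how a 5% sample keeps evaluation volume and cost bounded; it is not how ABV implements sampling.

```python
# Conceptual illustration of percentage-based sampling (not ABV's implementation).
# Hashing the trace ID makes the decision deterministic: a given trace is always
# either in or out of the sample, no matter when the check runs.
import hashlib


def should_evaluate(trace_id: str, sample_rate: float = 0.05) -> bool:
    digest = hashlib.sha256(trace_id.encode()).hexdigest()
    # Map the hash onto [0, 1] and compare against the configured sample rate.
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < sample_rate


traces = [f"trace-{i}" for i in range(1000)]
sampled = [t for t in traces if should_evaluate(t)]
print(f"{len(sampled)} of {len(traces)} traces selected")  # roughly 5%
```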

Dataset Runs

LLM-as-a-Judge evaluators can score the results of your Dataset Runs.

Native Dataset Runs: When running Native Dataset Runs through the UI, simply select which evaluators you want to run. The selected evaluators then execute automatically on the data generated by your next Dataset Run.

Remote Dataset Runs: Before running Remote Dataset Runs through the SDKs, you need to set up the evaluators in the UI. Configure each evaluator with the following settings:
  • Dataset: Filter which source dataset the evaluator should run on.
  • Scope: Choose whether to target only new Dataset Runs and/or execute the evaluator on past Dataset Runs (for backfilling).
  • Sampling: To manage costs and evaluation throughput, you can configure the evaluator to run on a percentage (e.g., 5%) of Dataset Run items.

5) Map Variables & Preview Evaluation Prompt

You now need to tell ABV which properties of your trace or dataset item should populate the {{variables}} in the evaluation prompt. For instance, you might map your system’s logged trace input to the prompt’s {{input}} variable, and the LLM response (i.e. the trace output) to the prompt’s {{output}} variable. Getting this mapping right is crucial for a sensible, relevant evaluation.

Live Data

  • Prompt Preview: As you configure the mapping, ABV shows a live preview of the evaluation prompt populated with actual data. This preview uses historical traces from the last 24 hours that matched your filters (from Step 4). You can navigate through several example traces to see how their respective data fills the prompt, helping you build confidence that the mapping is correct.
  • JSONPath: If the data is nested (e.g., within a JSON object), you can use a JSONPath expression (like $.choices[0].message.content) to precisely locate it; see the example below.
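
If you want to check what a JSONPath expression resolves to before saving the mapping, you can test it locally. The sketch below uses the jsonpath-ng library against a chat-completion-shaped payload; the payload structure is an assumption about your own trace data.

```python
# Local sanity check for a JSONPath mapping. The payload below mimics a
# chat-completion-style trace output; your own trace data may be shaped differently.
from jsonpath_ng import parse

trace_output = {
    "choices": [
        {"message": {"role": "assistant", "content": "Paris is the capital of France."}}
    ]
}

expression = parse("$.choices[0].message.content")
matches = [match.value for match in expression.find(trace_output)]
print(matches[0])  # -> "Paris is the capital of France."
```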

Dataset Runs

  • Suggested mappings: The system can often suggest common mappings automatically based on typical field names in datasets. For example, if you’re evaluating for correctness and your prompt includes {{input}}, {{output}}, and {{ground_truth}} variables, we would likely suggest mapping these to the trace input, trace output, and the dataset item’s expected_output respectively.
  • Edit mappings: You can easily edit these suggestions if your dataset schema differs. You can map any properties of your dataset item (e.g., input, expected_output). Further, since dataset runs create traces under the hood, using the trace input/output as the evaluation input/output is a common pattern: think of the trace output as your experiment run’s output. A hypothetical example mapping is sketched below.
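
As a concrete, hypothetical illustration of such a mapping for a correctness evaluator (field names mirror the defaults mentioned above; your dataset schema may differ):

```python
# Hypothetical example of how prompt variables line up with dataset-run data
# for a correctness evaluator. Field names are illustrative.
dataset_item = {
    "input": "What is the boiling point of water at sea level?",
    "expected_output": "100 degrees Celsius (212 degrees Fahrenheit).",
}

trace = {
    "input": dataset_item["input"],                    # what was fed into your app
    "output": "Water boils at 100 °C at sea level.",   # what your app actually produced
}

# Variable mapping as it would be configured in the UI:
variable_mapping = {
    "input": trace["input"],                           # {{input}}        <- trace input
    "output": trace["output"],                         # {{output}}       <- trace output (the run's result)
    "ground_truth": dataset_item["expected_output"],   # {{ground_truth}} <- dataset item's expected_output
}
```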
✨ Done! You have successfully set up an evaluator which will run on your data.
Need custom logic? Use the SDK instead—see Custom Scores or an external pipeline example.

Monitor & Iterate

As our system evaluates your data, it writes the results as scores. You can then:
  • View Logs: Check detailed logs for each evaluation, including status, any retry errors, and the full request/response bodies sent to the evaluation model.
  • Use Dashboards: Aggregate scores over time, filter by version or environment, and track the performance of your LLM application.
  • Take Actions: Pause, resume, or delete an evaluator.