Overview

Evaluation in Opik helps you assess and measure the quality of your LLM outputs across different dimensions. It provides a framework to systematically test your prompts and models against datasets, using various metrics to measure performance.

[Opik Evaluation diagram]

Opik also provides a set of pre-built metrics for common evaluation tasks. These metrics are designed to help you quickly and effectively gauge the performance of your LLM outputs and include metrics such as Hallucination, Answer Relevance, Context Precision/Recall and more. You can learn more about the available metrics in the Metrics Overview section.
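
You can also run a metric directly to gauge a single output. The snippet below is a minimal sketch that assumes the Hallucination metric exposes a score method accepting input, output, and context arguments and returning an object with value and reason fields; refer to the Metrics Overview section for the exact API.

from opik.evaluation.metrics import Hallucination

# Hallucination is an LLM-as-a-Judge metric, so a judge model / API key
# must be configured for the call below to run.
metric = Hallucination()
result = metric.score(
    input="What is the capital of France?",
    output="The capital of France is Berlin.",
    context=["Paris is the capital and largest city of France."],
)
print(result.value)   # numeric score (assumed convention: 1.0 when a hallucination is detected)
print(result.reason)  # judge model's explanation (assumed field)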

::: tip
If you are interested in evaluating your LLM application in production, please refer to the Online evaluation guide. Online evaluation rules allow you to define LLM-as-a-Judge metrics that will automatically score all, or a subset, of your production traces.
:::

Running an Evaluation

Each evaluation is defined by a dataset, an evaluation task and a set of evaluation metrics:

  1. Dataset: A dataset is a collection of samples that represent the inputs and, optionally, expected outputs for your LLM application.
  2. Evaluation task: This maps the inputs stored in the dataset to the output you would like to score. The evaluation task is typically a prompt template or the LLM application you are building.
  3. Metrics: The metrics you would like to use when scoring the outputs of your LLM application.

To simplify the evaluation process, Opik provides two main evaluation methods: evaluate_prompt for evaluating prompt templates, and a more general evaluate method for more complex evaluation scenarios.

To evaluate a specific prompt against a dataset:

import opik
from opik.evaluation import evaluate_prompt
from opik.evaluation.metrics import Hallucination

# Create a dataset that contains the samples you want to evaluate
opik_client = opik.Opik()
dataset = opik_client.get_or_create_dataset("Evaluation test dataset")
dataset.insert([
    {"input": "Hello, world!", "expected_output": "Hello, world!"},
    {"input": "What is the capital of France?", "expected_output": "Paris"},
])

# Run the evaluation
result = evaluate_prompt(
    dataset=dataset,
    messages=[{"role": "user", "content": "Translate the following text to French: {{input}}"}],
    model="gpt-3.5-turbo",  # or your preferred model
    scoring_metrics=[Hallucination()],
)
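
For more complex scenarios, the general evaluate method scores the output of an arbitrary evaluation task instead of a single prompt template. The sketch below reuses the dataset created above; the evaluation_task function, the my_llm_app placeholder, and the "output" key it returns are illustrative assumptions rather than a definitive contract, so see the Evaluate complex LLM applications guide for the exact expectations.

from opik.evaluation import evaluate
from opik.evaluation.metrics import Hallucination

def my_llm_app(text: str) -> str:
    # Hypothetical placeholder for your real LLM application logic.
    return f"Echo: {text}"

def evaluation_task(dataset_item: dict) -> dict:
    # Map each dataset item to the output you want the metrics to score.
    return {"output": my_llm_app(dataset_item["input"])}

result = evaluate(
    dataset=dataset,                    # the dataset created above
    task=evaluation_task,               # called once per dataset item
    scoring_metrics=[Hallucination()],  # metrics applied to each output
)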

Analyzing Evaluation Results

Once the evaluation is complete, Opik allows you to manually review the results and compare them with previous iterations.

[Experiment page screenshot]

On the experiment page, you will be able to:

  1. Review the output provided by the LLM for each sample in the dataset
  2. Deep dive into each sample by clicking on the item ID
  3. Review the experiment configuration to know how the experiment was run
  4. Compare multiple experiments side by side

Learn more

You can learn more about Opik's evaluation features in:

  1. Evaluation concepts
  2. Evaluate prompts
  3. Evaluate complex LLM applications
  4. Evaluation metrics
  5. Manage datasets