Overview
Evaluation in Opik helps you assess and measure the quality of your LLM outputs across different dimensions. It provides a framework to systematically test your prompts and models against datasets, using various metrics to measure performance.
Opik also provides a set of pre-built metrics for common evaluation tasks. These metrics are designed to help you quickly and effectively gauge the performance of your LLM outputs and include metrics such as Hallucination, Answer Relevance, Context Precision/Recall and more. You can learn more about the available metrics in the Metrics Overview section.
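These metrics can also be used on their own, outside of a full evaluation run. The snippet below is a minimal sketch of scoring a single output with the Hallucination metric; it assumes an LLM provider is configured (for example via `OPENAI_API_KEY`), since LLM-as-a-Judge metrics call a model under the hood, and the exact fields on the returned score object may vary slightly between Opik versions.

```python
from opik.evaluation.metrics import Hallucination

# Score a single output against the provided context.
metric = Hallucination()
score = metric.score(
    input="What is the capital of France?",
    output="The capital of France is Berlin.",
    context=["Paris is the capital and largest city of France."],
)
print(score.value)   # e.g. 1.0 when a hallucination is detected
print(score.reason)  # the judge model's explanation
```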
::: tip
If you are interested in evaluating your LLM application in production, please refer to the Online evaluation guide. Online evaluation rules allow you to define LLM-as-a-Judge metrics that will automatically score all, or a subset, of your production traces.
:::
Running an Evaluation
Each evaluation is defined by a dataset, an evaluation task and a set of evaluation metrics:
- Dataset: A dataset is a collection of samples that represent the inputs and, optionally, expected outputs for your LLM application.
- Evaluation task: This maps the inputs stored in the dataset to the output you would like to score. The evaluation task is typically a prompt template or the LLM application you are building.
- Metrics: The metrics you would like to use when scoring the outputs of your LLM application.
To simplify the evaluation process, Opik provides two main evaluation methods: `evaluate_prompt` for evaluating prompt templates and a more general `evaluate` method for more complex evaluation scenarios.
The sections below walk through three ways to run an evaluation:

- Evaluating Prompts
- Evaluating RAG applications and Agents
- Using the Playground
To evaluate a specific prompt against a dataset:
```python
import opik
from opik.evaluation import evaluate_prompt
from opik.evaluation.metrics import Hallucination

# Create a dataset that contains the samples you want to evaluate
opik_client = opik.Opik()
dataset = opik_client.get_or_create_dataset("Evaluation test dataset")
dataset.insert([
    {"input": "Hello, world!", "expected_output": "Hello, world!"},
    {"input": "What is the capital of France?", "expected_output": "Paris"},
])

# Run the evaluation
result = evaluate_prompt(
    dataset=dataset,
    messages=[{"role": "user", "content": "Translate the following text to French: {{input}}"}],
    model="gpt-3.5-turbo",  # or your preferred model
    scoring_metrics=[Hallucination()],
)
```
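Because the dataset above also stores an `expected_output` for each sample, you can pass additional heuristic metrics alongside the LLM judge. The snippet below is a sketch, assuming the `Equals` and `LevenshteinRatio` metrics are available in your Opik version and that the `expected_output` field is picked up as the reference; depending on the version you may need to rename fields or provide a key mapping, so check the Metrics Overview for the exact behaviour.

```python
from opik.evaluation import evaluate_prompt
from opik.evaluation.metrics import Equals, Hallucination, LevenshteinRatio

# Combine an LLM-as-a-Judge metric with heuristic metrics that compare the
# model output against the dataset's expected_output field.
result = evaluate_prompt(
    dataset=dataset,
    messages=[{"role": "user", "content": "Translate the following text to French: {{input}}"}],
    model="gpt-3.5-turbo",
    scoring_metrics=[Hallucination(), Equals(), LevenshteinRatio()],
)
```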
For more complex evaluation scenarios where you need custom processing:
```python
import opik
from opik.evaluation import evaluate
from opik.evaluation.metrics import ContextPrecision, ContextRecall

# Create a dataset with questions and their contexts
opik_client = opik.Opik()
dataset = opik_client.get_or_create_dataset("RAG evaluation dataset")
dataset.insert([
    {
        "input": "What are the key features of Python?",
        "context": "Python is known for its simplicity and readability. Key features include dynamic typing, automatic memory management, and an extensive standard library.",
        "expected_output": "Python's key features include dynamic typing, automatic memory management, and an extensive standard library."
    },
    {
        "input": "How does garbage collection work in Python?",
        "context": "Python uses reference counting and a cyclic garbage collector. When an object's reference count drops to zero, it is deallocated.",
        "expected_output": "Python uses reference counting for garbage collection. Objects are deallocated when their reference count reaches zero."
    }
])

def rag_task(item):
    # Simulate RAG pipeline
    output = "<LLM response placeholder>"
    return {
        "output": output
    }

# Run the evaluation
result = evaluate(
    dataset=dataset,
    task=rag_task,
    scoring_metrics=[
        ContextPrecision(),
        ContextRecall()
    ],
    experiment_name="rag_evaluation"
)
```
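In a real application, `rag_task` would call your actual retrieval and generation pipeline rather than return a placeholder. The sketch below shows one possible structure, using hypothetical `search_index` and `call_llm` stand-ins for your own retriever and model client; decorating each step with `@opik.track` also records a trace for every evaluated item. Here the retrieved context stays internal to the task, since the dataset above already stores a `context` field for the metrics to use.

```python
import opik

# Hypothetical stand-ins for your own retriever and model client.
def search_index(question: str) -> list[str]:
    return ["<retrieved passage placeholder>"]

def call_llm(prompt: str) -> str:
    return "<LLM response placeholder>"

@opik.track
def retrieve_context(question: str) -> list[str]:
    # Each @opik.track-decorated step is recorded as part of the trace.
    return search_index(question)

@opik.track
def generate_answer(question: str, context: list[str]) -> str:
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    return call_llm(prompt)

def rag_task(item):
    context = retrieve_context(item["input"])
    return {"output": generate_answer(item["input"], context)}
```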
You can also use the Opik Playground to quickly evaluate different prompts and LLM models.
To use the Playground, you will need to navigate to the Playground page and:

- Configure the LLM provider you want to use
- Enter the prompts you want to evaluate, including variables in the prompts using the `{{variable}}` syntax
- Select the dataset you want to evaluate on
- Click on the Evaluate button
You will then be able to view the LLM outputs for each sample in the dataset.
Analyzing Evaluation Results
Once the evaluation is complete, Opik allows you to manually review the results and compare them with previous iterations.
In the experiment pages, you will be able to:
- Review the output provided by the LLM for each sample in the dataset
- Deep dive into each sample by clicking on the item ID
- Review the experiment configuration to know how the experiment was run
- Compare multiple experiments side by side
Learn more
You can learn more about Opik's evaluation features in the guides referenced above, including the Metrics Overview section and the Online evaluation guide.