Evaluate your LLM application

Step by step guide on how to evaluate your LLM application

Evaluating your LLM application gives you confidence in its performance. In this guide, we will walk through the process of evaluating complex applications like LLM chains or agents.

This guide focuses on evaluating complex LLM applications. If you are looking to evaluate a single prompt, refer to the Evaluate A Prompt guide.

The evaluation is done in five steps:

  1. Add tracing to your LLM application
  2. Define the evaluation task
  3. Choose the Dataset that you would like to evaluate your application on
  4. Choose the metrics that you would like to evaluate your application with
  5. Create and run the evaluation experiment

1. Add tracing to your LLM application

While not required, we recommend adding tracing to your LLM application. This gives you full visibility into each evaluation run. In the example below, we use a combination of the track decorator and the track_openai function to trace the LLM application.

from opik import track
from opik.integrations.openai import track_openai
import openai

openai_client = track_openai(openai.OpenAI())

# This method is the LLM application that you want to evaluate
# Typically this is not updated when creating evaluations
@track
def your_llm_application(input: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": input}],
    )

    return response.choices[0].message.content

Here we have added the track decorator so that this trace and all its nested steps are logged to the platform for further analysis.

2. Define the evaluation task

Once you have instrumented your LLM application, we can define the evaluation task. The evaluation task takes a dataset item as input and needs to return a dictionary whose keys match the parameters expected by the metrics you are using. In this example, we can define the evaluation task as follows:

def evaluation_task(x):
    return {
        "output": your_llm_application(x['user_question'])
    }

If the returned dictionary does not match the parameters expected by the metrics, you will get inconsistent evaluation results.

3. Choose the evaluation Dataset

In order to create an evaluation experiment, you will need to have a Dataset that includes all your test cases.

If you have already created a Dataset, you can use the Opik.get_or_create_dataset function to fetch it:

from opik import Opik

client = Opik()
dataset = client.get_or_create_dataset(name="Example dataset")

If you don’t have a Dataset yet, you can insert dataset items using the Dataset.insert method. You can call this method multiple times as Opik performs data deduplication before ingestion:

from opik import Opik

client = Opik()
dataset = client.get_or_create_dataset(name="Example dataset")

dataset.insert([
    {"input": "Hello, world!", "expected_output": "Hello, world!"},
    {"input": "What is the capital of France?", "expected_output": "Paris"},
])

4. Choose evaluation metrics

Opik provides a set of built-in evaluation metrics that you can choose from. These are broken down into two main categories:

  1. Heuristic metrics: These metrics are deterministic in nature, for example Equals or Contains
  2. LLM-as-a-judge: These metrics use an LLM to judge the quality of the output; typically these are used for detecting hallucinations or context relevance

In the same evaluation experiment, you can use multiple metrics to evaluate your application:

from opik.evaluation.metrics import Hallucination

hallucination_metric = Hallucination()

Each metric expects the data in a certain format. You will need to ensure that the task you have defined in step 2 returns the data in the correct format.
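
For example, assuming the Hallucination metric scores based on an input and an output (with an optional context), the evaluation task should surface both keys, either directly or via the scoring_key_mapping parameter covered later in this guide. A minimal sketch:

from opik.evaluation.metrics import Hallucination

hallucination_metric = Hallucination()

# Sketch: assuming the Hallucination metric expects `input` and `output`
# (with `context` optional), the task returns both keys so the metric
# receives everything it needs.
def evaluation_task(x):
    return {
        "input": x['user_question'],
        "output": your_llm_application(x['user_question']),
    }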

5. Run the evaluation

Now that we have the task we want to evaluate, the dataset to evaluate on, and the metrics we want to evaluate with, we can run the evaluation:

from opik import Opik, track
from opik.evaluation import evaluate
from opik.evaluation.metrics import Equals, Hallucination
from opik.integrations.openai import track_openai
import openai

# Define the task to evaluate
openai_client = track_openai(openai.OpenAI())

MODEL = "gpt-3.5-turbo"

@track
def your_llm_application(input: str) -> str:
    response = openai_client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": input}],
    )
    return response.choices[0].message.content

# Define the evaluation task
def evaluation_task(x):
    return {
        "output": your_llm_application(x['input'])
    }

# Create a simple dataset
client = Opik()
dataset = client.get_or_create_dataset(name="Example dataset")
dataset.insert([
    {"input": "What is the capital of France?"},
    {"input": "What is the capital of Germany?"},
])

# Define the metrics
hallucination_metric = Hallucination()

evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    experiment_config={
        "model": MODEL
    }
)

You can use the experiment_config parameter to store information about your evaluation task. Typically, teams store information about the prompt template, the model, and the model parameters used to evaluate the application.
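
As an illustration, an experiment configuration might capture the model name, its parameters, and the prompt template; experiment_config is a free-form dictionary, so the keys below are just one possible convention:

# Illustrative only: the keys in experiment_config are up to you.
evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    experiment_config={
        "model": MODEL,
        "temperature": 0.0,
        "prompt_template": "Answer the user's question: {input}",
    },
)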

Advanced usage

Missing arguments for scoring methods

When you face the opik.exceptions.ScoreMethodMissingArguments exception, it means that the dataset item and task output dictionaries do not contain all the arguments expected by the scoring method. The evaluate function works by merging the dataset item and task output dictionaries and passing the result to the scoring method. For example, if the dataset item contains the keys user_question and context while the evaluation task returns a dictionary with the key output, the scoring method will be called as scoring_method.score(user_question='...', context='...', output='...'). This can be an issue if the scoring method expects a different set of arguments.

You can solve this either by updating the dataset item or the evaluation task to return the missing arguments, or by using the scoring_key_mapping parameter of the evaluate function. In the example above, if the scoring method expects input as an argument, you can map the user_question key to the input key as follows:

evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    scoring_key_mapping={"input": "user_question"},
)

Linking prompts to experiments

The Opik prompt library can be used to version your prompt templates.

When creating an Experiment, you can link the Experiment to a specific prompt version:

import opik

# Create a prompt
prompt = opik.Prompt(
    name="My prompt",
    prompt="..."
)

# Run the evaluation
evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    prompt=prompt,
)

The experiment will now be linked to the prompt, allowing you to view all experiments that use a specific prompt.

Logging traces to a specific project

You can use the project_name parameter of the evaluate function to log evaluation traces to a specific project:

evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    project_name="hallucination-detection",
)

Evaluating a subset of the dataset

You can use the nb_samples parameter to specify the number of samples to use for the evaluation. This is useful if you only want to evaluate a subset of the dataset.

evaluation = evaluate(
    experiment_name="My experiment",
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    nb_samples=10,
)

Disabling threading

To evaluate datasets more efficiently, Opik uses multiple background threads. If this is causing issues, you can disable them by setting task_threads and scoring_threads to 1, which will lead Opik to run all calculations in the main thread.
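
For example, a sketch using the parameters described above:

# Run the task and the scoring in the main thread by disabling
# Opik's background worker threads.
evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    task_threads=1,
    scoring_threads=1,
)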

Accessing logged experiments

You can access all the experiments logged to the platform from the SDK with the Opik.get_experiments_by_name and Opik.get_experiment_by_id methods:

import opik

# Get the experiment
opik_client = opik.Opik()
experiment = opik_client.get_experiment_by_name("My experiment")

# Access the experiment content
items = experiment.get_items()
print(items)