Evaluating Opik’s Hallucination Metric

In this guide, we will evaluate the Hallucination metric included in the Opik LLM evaluation SDK. This showcases both how to use the evaluation functionality in the platform and the quality of the Hallucination metric included in the SDK.

Creating an account on Comet.com

Comet provides a hosted version of the Opik platform: simply create an account and grab your API key.

You can also run the Opik platform locally, see the installation guide for more information.

%pip install opik pyarrow pandas fsspec huggingface_hub --upgrade --quiet

import opik

opik.configure(use_local=False)
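
If you are running the Opik platform locally instead, you can point the SDK at your local deployment. A minimal sketch, assuming a local Opik instance is already up (see the installation guide):

import opik

# Use a locally running Opik deployment instead of the hosted Comet.com platform
opik.configure(use_local=True)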

Preparing our environment

First, we will configure the OpenAI API key and create a new Opik dataset.

import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

We will be using the HaluEval dataset, on which, according to this paper, ChatGPT detects 86.2% of hallucinations. The first step is to create a dataset in the platform so we can keep track of the results of the evaluation.

Since the insert method in the SDK deduplicates items, we can safely insert 50 items: if any of them already exist, Opik will simply skip them rather than create duplicates.

import opik
import pandas as pd

client = opik.Opik()

# Create dataset
dataset = client.get_or_create_dataset(name="HaluEval", description="HaluEval dataset")

# Insert items into dataset
df = pd.read_parquet(
    "hf://datasets/pminervini/HaluEval/general/data-00000-of-00001.parquet"
)
df = df.sample(n=50, random_state=42)

dataset_records = [
    {
        "input": x["user_query"],
        "llm_output": x["chatgpt_response"],
        "expected_hallucination_label": x["hallucination"],
    }
    for x in df.to_dict(orient="records")
]

dataset.insert(dataset_records)
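
Because insert deduplicates, re-running the cell above is safe. A minimal sketch: inserting the same records a second time does not create duplicate items.

# Calling insert again with the same records is effectively a no-op:
# Opik deduplicates, so no duplicate items are added to the dataset.
dataset.insert(dataset_records)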

Evaluating the hallucination metric

In order to evaluate the performance of the Opik hallucination metric, we will define:

  • Evaluation task: Our evaluation task will use the data in the Dataset to return a hallucination score computed using the Opik hallucination metric.
  • Scoring metric: We will use the Equals metric to check whether the computed hallucination score matches the expected label.

By defining the evaluation task in this way, we will be able to understand how well Opik’s hallucination metric is able to detect hallucinations in the dataset.
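
Before running the full evaluation, it can help to see what the metric returns on a single example. A minimal sketch using the same Hallucination().score(...) call as the task below; the question/answer pair is made up purely for illustration:

from opik.evaluation.metrics import Hallucination

metric = Hallucination()

# Hypothetical example pair, for illustration only
score = metric.score(
    input="What is the capital of France?",
    output="The capital of France is Marseille.",
)
print(score.value)   # 1 if the output is judged to contain a hallucination, 0 otherwise
print(score.reason)  # the judge model's explanation for the score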

from opik.evaluation.metrics import Hallucination, Equals
from opik.evaluation import evaluate
from opik import Opik
from opik.evaluation.metrics.llm_judges.hallucination.template import generate_query
from typing import Dict


# Define the evaluation task
def evaluation_task(x: Dict):
    metric = Hallucination()
    try:
        metric_score = metric.score(input=x["input"], output=x["llm_output"])
        hallucination_score = metric_score.value
        hallucination_reason = metric_score.reason
    except Exception as e:
        print(e)
        hallucination_score = None
        hallucination_reason = str(e)

    return {
        "hallucination_score": "yes" if hallucination_score == 1 else "no",
        "hallucination_reason": hallucination_reason,
    }


# Get the dataset
client = Opik()
dataset = client.get_dataset(name="HaluEval")

# Define the scoring metric
check_hallucinated_metric = Equals(name="Correct hallucination score")

# Add the prompt template as an experiment configuration
experiment_config = {
    "prompt_template": generate_query(
        input="{input}", context="{context}", output="{output}", few_shot_examples=[]
    )
}

res = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[check_hallucinated_metric],
    experiment_config=experiment_config,
    scoring_key_mapping={
        "reference": "expected_hallucination_label",
        "output": "hallucination_score",
    },
)

We can see that the hallucination metric detects roughly 80% of the hallucinations contained in the dataset, and the experiment view lets us drill into the specific items where hallucinations were not detected.

Hallucination Evaluation
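
If you want to inspect misclassified items outside the Opik UI, here is a quick sketch that reuses evaluation_task and dataset_records from the cells above to spot-check a few records locally:

# Spot-check a handful of records: compare the metric's verdict to the expected label
for record in dataset_records[:5]:
    prediction = evaluation_task(record)
    match = prediction["hallucination_score"] == record["expected_hallucination_label"]
    print(
        f"expected={record['expected_hallucination_label']!r} "
        f"predicted={prediction['hallucination_score']!r} match={match}"
    )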
