Heuristic metrics

Heuristic metrics are rule-based evaluation methods that allow you to check specific aspects of language model outputs. These metrics use predefined criteria or patterns to assess the quality, consistency, or characteristics of generated text.

You can use the following heuristic metrics:

  • Equals – Checks if the output exactly matches an expected string
  • Contains – Checks if the output contains a specific substring (case sensitive or case insensitive)
  • RegexMatch – Checks if the output matches a specified regular expression pattern
  • IsJson – Checks if the output is a valid JSON object
  • LevenshteinRatio – Calculates the Levenshtein ratio between the output and an expected string
  • SentenceBLEU – Calculates a single-sentence BLEU score for a candidate vs. one or more references
  • CorpusBLEU – Calculates a corpus-level BLEU score for multiple candidates vs. their references
  • ROUGE – Calculates the ROUGE score for a candidate vs. one or more references

Score an LLM response

You can score an LLM response by first initializing the metrics and then calling the score method:

from opik.evaluation.metrics import Contains

metric = Contains(name="contains_hello", case_sensitive=True)

score = metric.score(output="Hello world !", reference="Hello")

print(score)
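
The score method returns a score object rather than a bare number. As in the BLEU and ROUGE examples further down this page, you can read the numeric result and an optional explanation from it; a minimal sketch, assuming the same value and reason attributes apply to all heuristic metrics:

# Inspect the fields of the returned score object
print(score.value)   # numeric score, e.g. 1.0 when the substring is found
print(score.reason)  # optional explanation; may be None for simple heuristic checks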

Metrics

Equals

The Equals metric can be used to check if the output of an LLM exactly matches a specific string. It can be used in the following way:

from opik.evaluation.metrics import Equals

metric = Equals()

score = metric.score(output="Hello world !", reference="Hello, world !")
print(score)

Contains

The Contains metric can be used to check if the output of an LLM contains a specific substring. It can be used in the following way:

from opik.evaluation.metrics import Contains

metric = Contains(case_sensitive=False)

score = metric.score(output="Hello world !", reference="Hello")
print(score)

RegexMatch

The RegexMatch metric can be used to check if the output of an LLM matches a specified regular expression pattern. It can be used in the following way:

from opik.evaluation.metrics import RegexMatch

metric = RegexMatch(regex="^[a-zA-Z0-9]+$")

score = metric.score("Hello world !")
print(score)

IsJson

The IsJson metric can be used to check if the output of an LLM is valid JSON. It can be used in the following way:

from opik.evaluation.metrics import IsJson

metric = IsJson(name="is_json_metric")

score = metric.score(output='{"key": "some_valid_sql"}')
print(score)
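
To see the failing case, you can score a string that is not valid JSON. A minimal sketch, assuming the metric reports 0.0 when parsing fails:

# Trailing comma makes this invalid JSON, so the check should fail (score 0.0)
score = metric.score(output='{"key": "value",}')
print(score)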

LevenshteinRatio

The LevenshteinRatio metric can be used to measure how close the output of an LLM is to an expected string. It returns a normalized similarity ratio between 0.0 (completely different) and 1.0 (identical). It can be used in the following way:

from opik.evaluation.metrics import LevenshteinRatio

metric = LevenshteinRatio()

score = metric.score(output="Hello world !", reference="hello")
print(score)

BLEU

The BLEU (Bilingual Evaluation Understudy) metrics estimate how close the LLM outputs are to one or more reference translations. Opik provides two separate classes:

  • SentenceBLEU – Single-sentence BLEU
  • CorpusBLEU – Corpus-level BLEU

Both rely on the underlying NLTK BLEU implementation with optional smoothing methods, weights, and variable n-gram orders.

You will need the nltk library:

pip install nltk

Use SentenceBLEU to compute single-sentence BLEU between a single candidate and one (or more) references:

from opik.evaluation.metrics import SentenceBLEU

metric = SentenceBLEU(n_grams=4, smoothing_method="method1")

# Single reference
score = metric.score(
    output="Hello world!",
    reference="Hello world"
)
print(score.value, score.reason)

# Multiple references
score = metric.score(
    output="Hello world!",
    reference=["Hello planet", "Hello world"]
)
print(score.value, score.reason)

Use CorpusBLEU to compute corpus-level BLEU for multiple candidates vs. multiple references. Each candidate and its references align by index in the list:

from opik.evaluation.metrics import CorpusBLEU

metric = CorpusBLEU()

outputs = ["Hello there", "This is a test."]
references = [
    # For the first candidate, two references
    ["Hello world", "Hello there"],
    # For the second candidate, one reference
    "This is a test."
]

score = metric.score(output=outputs, reference=references)
print(score.value, score.reason)

You can also customize n-grams, smoothing methods, or weights:

from opik.evaluation.metrics import SentenceBLEU

metric = SentenceBLEU(
    n_grams=4,
    smoothing_method="method2",
    weights=[0.25, 0.25, 0.25, 0.25]
)

score = metric.score(
    output="The cat sat on the mat",
    reference=["The cat is on the mat", "A cat sat here on the mat"]
)
print(score.value, score.reason)

Note: If any candidate or reference is empty, SentenceBLEU or CorpusBLEU will raise a MetricComputationError. Handle or validate inputs accordingly.
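
If you want to guard against this during an evaluation run, you can catch the error explicitly. A minimal sketch, assuming MetricComputationError is importable from opik.exceptions (check the import path against your installed Opik version):

from opik.evaluation.metrics import SentenceBLEU
from opik.exceptions import MetricComputationError  # assumed import path

metric = SentenceBLEU()

try:
    # An empty candidate string triggers the error described above
    score = metric.score(output="", reference="Hello world")
    print(score.value, score.reason)
except MetricComputationError as e:
    # Skip or log the sample instead of failing the whole evaluation
    print(f"BLEU could not be computed: {e}")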

ROUGE

The ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metrics estimate how close the LLM outputs are to one or more reference summaries, and are commonly used for evaluating summarization and text generation tasks. The metric measures the overlap between an output string and a reference string, with support for multiple ROUGE types. This metric is a wrapper around the Google Research reimplementation of ROUGE, which is based on the rouge-score library.

You will need the rouge-score library:

pip install rouge-score

It can be used in the following way:

from opik.evaluation.metrics import ROUGE

metric = ROUGE()

# Single reference
score = metric.score(
    output="Hello world!",
    reference="Hello world"
)
print(score.value, score.reason)

# Multiple references
score = metric.score(
    output="Hello world!",
    reference=["Hello planet", "Hello world"]
)
print(score.value, score.reason)

You can customize the ROUGE metric using the following parameters:

  • rouge_type (str): Type of ROUGE score to compute. Must be one of:

    • rouge1: Unigram-based scoring
    • rouge2: Bigram-based scoring
    • rougeL: Longest common subsequence-based scoring
    • rougeLsum: ROUGE-L score based on sentence splitting

    Default: rouge1

  • use_stemmer (bool): Whether to use stemming in ROUGE computation.
    Default: False

  • split_summaries (bool): Whether to split summaries into sentences.
    Default: False

  • tokenizer (Any | None): Custom tokenizer for sentence splitting.
    Default: None
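
For example, to compute ROUGE-2 with stemming enabled: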

from opik.evaluation.metrics import ROUGE

metric = ROUGE(
    rouge_type="rouge2",
    use_stemmer=True
)

score = metric.score(
    output="The cats sat on the mats",
    reference=["The cat is on the mat", "A cat sat here on the mat"]
)
print(score.value, score.reason)

Notes

  • The ROUGE metric is case-insensitive.
  • ROUGE scores are useful for comparing text summarization models or evaluating text similarity.
  • Consider using stemming for improved evaluation in certain cases.