Getting Started

Quickstart

This guide helps you integrate the Opik platform with your existing LLM application, with the goal of logging your first LLM calls and chains to Opik.

Set up

Getting started is as simple as creating an account on Comet or self-hosting the platform.

Once your account is created, you can start logging traces by installing the Opik Python SDK:

$ pip install opik

and configuring it. We recommend running the opik configure command from the command line, which will prompt you for all the necessary information:

$ opik configure

You can learn more about configuring the Python SDK here.
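If you prefer to stay in Python rather than use the command line, the SDK can also be configured programmatically. A minimal sketch, assuming a self-hosted deployment (the use_local parameter shown here is an assumption — check the configuration docs for the exact signature):

```python
import opik

# Point the SDK at a self-hosted Opik deployment instead of Comet-hosted Opik.
# `use_local=True` is an assumed parameter name; verify against the SDK docs.
opik.configure(use_local=True)
```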

Adding Opik observability to your codebase

Logging LLM calls

The first step in integrating Opik with your codebase is to track your LLM calls. If you are using OpenAI, OpenRouter, or any LLM provider that is supported by LiteLLM, you can use one of our integrations:

```python
from opik.integrations.openai import track_openai
from openai import OpenAI

# Wrap your OpenAI client
openai_client = OpenAI()
openai_client = track_openai(openai_client)
```

All OpenAI calls made using the openai_client will now be logged to Opik.
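Conceptually, wrappers like track_openai intercept each client call, record its inputs and outputs, and then delegate to the real method. A minimal sketch of that pattern (illustrative only — not Opik's actual implementation):

```python
import functools

def track_calls(client, method_name, log):
    """Wrap a single client method so every call is recorded before delegating."""
    original = getattr(client, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        result = original(*args, **kwargs)
        log.append({"method": method_name, "kwargs": kwargs, "result": result})
        return result

    setattr(client, method_name, wrapper)
    return client

# Toy client standing in for a real LLM SDK
class FakeClient:
    def complete(self, prompt):
        return f"echo: {prompt}"

log = []
client = track_calls(FakeClient(), "complete", log)
print(client.complete(prompt="hi"))  # delegates to the real method
print(len(log))                      # the call was also recorded
```

A real integration records richer data (model, token usage, latency) and ships it to the Opik backend, but the interception pattern is the same.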

Logging chains

It is common for LLM applications to use chains rather than just calling the LLM once. This is achieved either by using a framework like LangChain, LangGraph or LlamaIndex, or by writing custom Python code.

Opik makes it easy for you to log your chains no matter how you implement them:

If you are not using any frameworks to build your chains, you can use the @track decorator to log your chains. When a function is decorated with @track, the input and output of the function will be logged to Opik. This works well even for deeply nested chains:

```python
from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI

# Wrap your OpenAI client
openai_client = OpenAI()
openai_client = track_openai(openai_client)

# Create your chain
@track
def llm_chain(input_text):
    context = retrieve_context(input_text)
    response = generate_response(input_text, context)

    return response

@track
def retrieve_context(input_text):
    # For the purpose of this example, we are just returning a hardcoded list of strings
    context = [
        "What specific information are you looking for?",
        "How can I assist you with your interests today?",
        "Are there any topics you'd like to explore or learn more about?",
    ]
    return context

@track
def generate_response(input_text, context):
    full_prompt = (
        f"If the user asks a question that is not specific, use the context to provide a relevant response.\n"
        f"Context: {', '.join(context)}\n"
        f"User: {input_text}\n"
        f"AI:"
    )

    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": full_prompt}]
    )
    return response.choices[0].message.content

llm_chain("Hello, how are you?")
```

While this code sample assumes that you are using OpenAI, the same principle applies if you are using any other LLM provider.

Your chains will now be logged to Opik and can be viewed in the Opik UI. To learn more about how you can customize the logged data, see the Log Traces guide.
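To see why decorating nested functions is enough to reconstruct a full chain, consider how a tracing decorator can use the call stack to record parent/child relationships. A minimal sketch (illustrative only — not Opik's actual implementation):

```python
import functools

_stack = []   # spans currently being executed
traces = []   # completed root traces

def track(fn):
    """Toy tracing decorator: records inputs, outputs, and nesting."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": fn.__name__, "input": args, "children": []}
        if _stack:
            _stack[-1]["children"].append(span)   # nest under the caller's span
        else:
            traces.append(span)                   # no caller: start a new trace
        _stack.append(span)
        try:
            span["output"] = fn(*args, **kwargs)
            return span["output"]
        finally:
            _stack.pop()
    return wrapper

@track
def retrieve_context(q):
    return ["some", "context"]

@track
def llm_chain(q):
    return retrieve_context(q)

llm_chain("hello")
print(traces[0]["name"], "->", traces[0]["children"][0]["name"])
```

Because each decorated call attaches itself to whichever decorated call is currently running, the nested structure of your code becomes the nested structure of the trace — no manual span management required.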

Next steps

Now that you have logged your first LLM calls and chains to Opik, why not check out:

  1. Opik’s evaluation metrics: Opik provides a suite of evaluation metrics (Hallucination, Answer Relevance, Context Recall, etc.) that you can use to score your LLM responses.
  2. Opik Experiments: Opik allows you to automate the evaluation process of your LLM application so that you no longer need to manually review every LLM response.