
Quickstart

This guide helps you integrate the Opik platform with your existing LLM application and log your first LLM calls and chains to Opik.

Opik Traces

Set up

Getting started is as simple as creating an account on Comet or self-hosting the platform.

Once your account is created, you can start logging traces by installing the Opik Python SDK:

pip install opik

and configuring the SDK with:

import opik

opik.configure(use_local=False)

tip

If you are self-hosting the platform, simply set the use_local flag to True in the opik.configure method.
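
For example, a self-hosted deployment would be configured like this:

import opik

# Point the SDK at your self-hosted Opik instance instead of the Comet-hosted platform
opik.configure(use_local=True)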

Adding Opik observability to your codebase

Logging LLM calls

The first step in integrating Opik with your codebase is to track your LLM calls. If you are using OpenAI or any LLM provider that is supported by LiteLLM, you can use one of our integrations:

from opik.integrations.openai import track_openai
from openai import OpenAI

# Wrap your OpenAI client
openai_client = OpenAI()
openai_client = track_openai(openai_client)

All OpenAI calls made using the openai_client will now be logged to Opik.
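
For example, a regular chat completion call made through the wrapped client is logged as a trace with no further changes (the model and prompt below are just placeholders):

# This call is traced automatically because the client is wrapped with track_openai
response = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)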

Logging chains

It is common for LLM applications to use chains rather than calling the LLM just once. Chains are typically built either with a framework like LangChain, LangGraph or LlamaIndex, or with custom Python code.

Opik makes it easy for you to log your chains no matter how you implement them:

If you are not using any frameworks to build your chains, you can use the @track decorator to log your chains. When a function is decorated with @track, its inputs and outputs are logged to Opik. This works well even for deeply nested chains:

from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI

# Wrap your OpenAI client
openai_client = OpenAI()
openai_client = track_openai(openai_client)

# Create your chain
@track
def llm_chain(input_text):
    context = retrieve_context(input_text)
    response = generate_response(input_text, context)

    return response

@track
def retrieve_context(input_text):
    # For the purpose of this example, we are just returning a hardcoded list of strings
    context = [
        "What specific information are you looking for?",
        "How can I assist you with your interests today?",
        "Are there any topics you'd like to explore or learn more about?",
    ]
    return context

@track
def generate_response(input_text, context):
    full_prompt = (
        "If the user asks a question that is not specific, use the context to provide a relevant response.\n"
        f"Context: {', '.join(context)}\n"
        f"User: {input_text}\n"
        "AI:"
    )

    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": full_prompt}]
    )
    return response.choices[0].message.content

llm_chain("Hello, how are you?")

While this code sample assumes that you are using OpenAI, the same principle applies if you are using any other LLM provider.
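
If you are building your chains with a framework such as LangChain instead, you can log them through Opik's LangChain integration. The sketch below assumes the OpikTracer callback from opik.integrations.langchain and a langchain-openai chat model, so treat it as a starting point rather than a drop-in snippet:

from opik.integrations.langchain import OpikTracer
from langchain_openai import ChatOpenAI

# OpikTracer is a LangChain callback handler that logs each invocation as a trace
opik_tracer = OpikTracer()

llm = ChatOpenAI(model="gpt-3.5-turbo")

# Pass the tracer in the callbacks so the chain run is sent to Opik
response = llm.invoke("Hello, how are you?", config={"callbacks": [opik_tracer]})
print(response.content)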

info

Your chains will now be logged to Opik and can be viewed in the Opik UI. To learn more about how you can customize the logged data, see the Log Traces guide.

Next steps

Now that you have logged your first LLM calls and chains to Opik, why not check out:

  1. Opik's evaluation metrics: Opik provides a suite of evaluation metrics (Hallucination, Answer Relevance, Context Recall, etc.) that you can use to score your LLM responses.
  2. Opik Experiments: Opik allows you to automate the evaluation process of your LLM application so that you no longer need to manually review every LLM response.