Quickstart
This guide helps you integrate the Opik platform with your existing LLM application. By the end of it, you will have logged your first LLM calls and chains to the Opik platform.

Set up
Getting started is as simple as creating an account on Comet or self-hosting the platform.
Once your account is created, you can start logging traces by installing the Opik Python SDK:
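For example, with pip:

```bash
pip install opik
```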
and configuring the SDK with:
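For example, a minimal sketch using the SDK's configure helper, which prompts for any details it needs (such as your API key and workspace, or the URL of a self-hosted deployment):

```python
import opik

# Prompts for any missing details: API key and workspace for Comet,
# or the deployment URL if you are self-hosting
opik.configure()
```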
If you are using the Python SDK, we recommend running the opik configure command from the command line, which will prompt you for all the necessary information:
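```bash
opik configure
```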
You can learn more about configuring the Python SDK here.
Adding Opik observability to your codebase
Logging LLM calls
The first step in integrating Opik with your codebase is to track your LLM calls. If you are using OpenAI, OpenRouter, or any LLM provider that is supported by LiteLLM, you can use one of our integrations:
- OpenAI (Python)
- OpenRouter (Python)
- LiteLLM (Python)
- Decorator (Python)
- JS / TS
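For example, here is a minimal sketch of the OpenAI integration (the model name and prompt are illustrative):

```python
from openai import OpenAI

from opik.integrations.openai import track_openai

# Wrap the OpenAI client so that every call made through it is traced
openai_client = track_openai(OpenAI())

response = openai_client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)
```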
All OpenAI calls made using the openai_client will now be logged to Opik.
Logging chains
It is common for LLM applications to use chains rather than a single LLM call. Chains are typically built either with a framework such as LangChain, LangGraph, or LlamaIndex, or with custom Python code.
Opik makes it easy for you to log your chains no matter how you implement them:
- Custom Python Code
- LangChain
- LlamaIndex
- Vercel AI SDK (JS / TS)
If you are not using any frameworks to build your chains, you can use the @track decorator to log them. When a function is decorated with @track, the input and output of the function will be logged to Opik. This works well even for deeply nested chains:
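For example, a minimal sketch of a two-step chain (the helper functions, prompt, and model name are illustrative):

```python
from openai import OpenAI

from opik import track
from opik.integrations.openai import track_openai

openai_client = track_openai(OpenAI())


@track
def retrieve_context(question: str) -> str:
    # Illustrative retrieval step; logged as a nested span of the trace
    return "Opik is an open-source platform for logging and evaluating LLM applications."


@track
def answer_question(question: str) -> str:
    # The call to retrieve_context appears nested under this function's span
    context = retrieve_context(question)
    response = openai_client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(answer_question("What is Opik?"))
```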
While this code sample assumes that you are using OpenAI, the same principle applies if you are using any other LLM provider.
Your chains will now be logged to Opik and can be viewed in the Opik UI. To learn more about how you can customize the logged data, see the Log Traces guide.
Next steps
Now that you have logged your first LLM calls and chains to Opik, why not check out:
- Opik’s evaluation metrics: Opik provides a suite of evaluation metrics (Hallucination, Answer Relevance, Context Recall, etc.) that you can use to score your LLM responses.
- Opik Experiments: Opik allows you to automate the evaluation process of your LLM application so that you no longer need to manually review every LLM response.