# Home

The Opik platform allows you to log, view and evaluate your LLM traces during both development and production. Using the platform and our LLM as a Judge evaluators, you can identify and fix issues in your LLM application.

Opik is open source! You can find the full source code on [GitHub](https://github.com/comet-ml/opik), and the complete self-hosting guide can be found [here](/self-host/local_deployment).

## Overview

The Opik platform allows you to track, view and evaluate your LLM traces during both development and production.

### Development

During development, you can use the platform to log, view and debug your LLM traces:

1. Log traces using:

   a. One of our [integrations](/integrations/overview).

   b. The `@track` decorator for Python; learn more in the [Logging Traces](/tracing/log_traces) guide.

   c. The TypeScript Opik SDK; learn more in the [Logging Traces](/tracing/log_traces#logging-with-the-js--ts-sdk) guide.

2. [Annotate and label traces](/tracing/annotate_traces) through the SDK or the UI.

### Evaluation and Testing

Evaluating the output of your LLM calls is critical to ensure that your application is working as expected, and it can be challenging. Using the Opik platform, you can:

1. Use one of our [LLM as a Judge evaluators](/evaluation/metrics/overview) or [Heuristic evaluators](/evaluation/metrics/heuristic_metrics) to score your traces and LLM calls.
2. [Store evaluation datasets](/evaluation/manage_datasets) in the platform and [run evaluations](/evaluation/evaluate_your_llm).
3. Use our [pytest integration](/testing/pytest_integration) to track unit test results and compare results between runs.

### Production Monitoring

Opik has been designed from the ground up to support high volumes of traces, making it an ideal tool for monitoring your production LLM applications. We have stress-tested the application, and even a small deployment can ingest more than 40 million traces per day!

To make it easy to monitor your production LLM applications and identify any issues, we have included:

1. [Online evaluation metrics](/production/rules) that allow you to score all your production traces and quickly spot problems in your production LLM application.
2. [Production monitoring dashboards](/production/production_monitoring) that allow you to review your feedback scores, trace count and token usage over time, at both daily and hourly granularity.

## Getting Started

[Comet](https://www.comet.com/site) provides a managed Cloud offering for Opik; simply [create an account](https://www.comet.com/signup?from=llm) to get started.

You can also run Opik locally using our [local installer](/self-host/local_deployment). If you are looking for a more production-ready deployment, you can use our [Kubernetes deployment option](/self-host/kubernetes).

## Join Our Bounty Program!

Want to contribute to Opik and get rewarded for your efforts? Check out our [Bounty Program](/contributing/developer-programs/bounties) to find exciting tasks and help us grow the platform!

# Quickstart

This guide helps you integrate the Opik platform with your existing LLM application. The goal is to help you log your first LLM calls and chains to the Opik platform.
## Prerequisites

Before you begin, you'll need to choose how you want to use Opik:

* **Opik Cloud**: Create a free account at [comet.com/opik](https://www.comet.com/signup?from=llm&utm_source=opik&utm_medium=colab&utm_content=quickstart&utm_campaign=opik)
* **Self-hosting**: Follow the [self-hosting guide](/self-host/overview) to deploy Opik locally or on Kubernetes

## Logging your first LLM calls

Opik makes it easy to integrate with your existing LLM application. Here are some of our most popular integrations.

### Python function decorator

If you are using the Python function decorator, you can integrate by following these steps.

Install the Opik Python SDK:

```bash
pip install opik
```

Configure the Opik Python SDK:

```bash
opik configure
```

Wrap your function with the `@track` decorator:

```python
from opik import track

@track
def my_function(input: str) -> str:
    return input
```

All calls to `my_function` will now be logged to Opik. This works for any function, even nested ones, and is also supported by most integrations (just wrap any parent function with the `@track` decorator).

### TypeScript SDK

If you want to use the TypeScript SDK to log traces directly:

Install the Opik TypeScript SDK:

```bash
npm install opik
```

Configure the Opik TypeScript SDK by running the interactive CLI tool:

```bash
npx opik-ts configure
```

This will detect your project setup, install the required dependencies, and help you configure environment variables.

Log a trace using the Opik client:

```typescript
import { Opik } from "opik";

const client = new Opik();

const trace = client.trace({
  name: "My LLM Application",
  input: { prompt: "What is the capital of France?" },
  output: { response: "The capital of France is Paris." },
});

trace.end();
await client.flush();
```

All traces will now be logged to Opik. You can also log spans within traces for more detailed observability.

### OpenAI (Python)

If you are using the OpenAI Python SDK, you can integrate by following these steps.

Install the Opik Python SDK:

```bash
pip install opik
```

Configure the Opik Python SDK. This will prompt you for your API key if you are using Opik Cloud, or for your Opik server address if you are self-hosting:

```bash
opik configure
```

Wrap your OpenAI client with the `track_openai` function:

```python
from opik.integrations.openai import track_openai
from openai import OpenAI

# Wrap your OpenAI client
client = OpenAI()
client = track_openai(client)

# Use the client as normal
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello, how are you?"},
    ],
)

print(completion.choices[0].message.content)
```

All OpenAI calls made using this `client` will now be logged to Opik. You can combine this with the `@track` decorator to log the traces for each step of your agent (a short sketch combining the two appears after these integration examples).

### OpenAI (TypeScript)

If you are using the OpenAI TypeScript SDK, you can integrate by following these steps.

Install the Opik TypeScript SDK:

```bash
npm install opik-openai
```

Configure the Opik TypeScript SDK by running the interactive CLI tool:

```bash
npx opik-ts configure
```

This will detect your project setup, install the required dependencies, and help you configure environment variables.
Wrap your OpenAI client with the `trackOpenAI` function:

```typescript
import OpenAI from "openai";
import { trackOpenAI } from "opik-openai";

// Initialize the original OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Wrap the client with Opik tracking
const trackedOpenAI = trackOpenAI(openai);

// Use the tracked client just like the original
const completion = await trackedOpenAI.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello, how can you help me today?" }],
});

console.log(completion.choices[0].message.content);

// Ensure all traces are sent before your app terminates
await trackedOpenAI.flush();
```

All OpenAI calls made using the `trackedOpenAI` client will now be logged to Opik.

### Vercel AI SDK

If you are using the Vercel AI SDK, you can integrate by following these steps.

Install the Opik SDK:

```bash
npm install opik
```

Configure the Opik SDK by running the interactive CLI tool:

```bash
npx opik-ts configure
```

This will detect your project setup, install the required dependencies, and help you configure environment variables.

Initialize the `OpikExporter` with your AI SDK setup:

```ts
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { OpikExporter } from "opik/vercel";

// Set up OpenTelemetry with Opik
const sdk = new NodeSDK({
  traceExporter: new OpikExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

// Your AI SDK calls with telemetry enabled
const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "What is love?",
  experimental_telemetry: { isEnabled: true },
});

console.log(result.text);
```

All AI SDK calls with `experimental_telemetry: { isEnabled: true }` will now be logged to Opik.

### Ollama (Python)

If you are using Ollama with Python, you can integrate by following these steps.

Install the Opik Python SDK:

```bash
pip install opik
```

Configure the Opik Python SDK:

```bash
opik configure
```

Integrate Opik with your Ollama calls using one of the following approaches.

Wrap your Ollama calls with the `@track` decorator:

```python
import ollama
from opik import track

@track
def ollama_call(user_message: str):
    response = ollama.chat(
        model='llama3.1',
        messages=[{'role': 'user', 'content': user_message}]
    )
    return response['message']

# Call your function
result = ollama_call("Say this is a test")
print(result)
```

Or use Opik's OpenAI integration with Ollama's OpenAI-compatible API:

```python
from openai import OpenAI
from opik.integrations.openai import track_openai

# Create an OpenAI client pointing to Ollama
client = OpenAI(
    base_url='http://localhost:11434/v1/',
    api_key='ollama'  # required but ignored
)

# Wrap the client with Opik tracking
client = track_openai(client)

# Call the local Ollama model
response = client.chat.completions.create(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Say this is a test'}]
)

print(response.choices[0].message.content)
```

Or use Opik's LangChain integration with Ollama:

```python
from langchain_ollama import ChatOllama
from opik.integrations.langchain import OpikTracer

# Create the Opik tracer
opik_tracer = OpikTracer()

# Create the Ollama model with Opik tracing
llm = ChatOllama(
    model="llama3.1",
    temperature=0,
).with_config({"callbacks": [opik_tracer]})

# Call the Ollama model
messages = [
    ("system", "You are a helpful assistant."),
    ("human", "Say this is a test")
]
response = llm.invoke(messages)
print(response)
```

All Ollama calls will now be logged to Opik.
See the [full Ollama guide](/integrations/ollama) for more advanced usage.

### Google ADK

If you are using the Google Agent Development Kit (ADK), you can integrate by following these steps.

Install the Opik SDK:

```bash
pip install opik google-adk
```

Configure the Opik SDK by running the `opik configure` command in your terminal:

```bash
opik configure
```

Wrap your ADK agent with the `OpikTracer`:

```python
from google.adk.agents import Agent
from opik.integrations.adk import OpikTracer, track_adk_agent_recursive

# Create your ADK agent
agent = Agent(
    name="helpful_assistant",
    model="gemini-2.0-flash",
    instruction="You are a helpful assistant that answers user questions."
)

# Wrap your ADK agent with the OpikTracer
opik_tracer = OpikTracer()
track_adk_agent_recursive(agent, opik_tracer)
```

All ADK agent calls will now be logged to Opik.

### LangGraph

If you are using LangGraph, you can integrate by following these steps.

Install the Opik SDK:

```bash
pip install opik
```

Configure the Opik SDK by running the `opik configure` command in your terminal:

```bash
opik configure
```

Wrap your LangGraph graph with the `OpikTracer` callback:

```python
from langchain_core.messages import HumanMessage
from opik.integrations.langchain import OpikTracer

# Create your LangGraph graph
graph = ...
app = graph.compile(...)

# Wrap your LangGraph graph with the OpikTracer
opik_tracer = OpikTracer(graph=app.get_graph(xray=True))

# Pass the OpikTracer callback to the invoke functions
result = app.invoke(
    {"messages": [HumanMessage(content="How to use LangGraph?")]},
    config={"callbacks": [opik_tracer]},
)
```

All LangGraph calls will now be logged to Opik.
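As mentioned in the OpenAI (Python) example above, you can combine the `track_openai` wrapper with the `@track` decorator so that each step of a multi-step workflow shows up as nested spans within a single trace. Below is a minimal sketch of that pattern; the `retrieve_context` and `answer_question` functions are hypothetical placeholders for your own application logic.

```python
from openai import OpenAI
from opik import track
from opik.integrations.openai import track_openai

# Wrap the OpenAI client so every LLM call is logged to Opik
openai_client = track_openai(OpenAI())

@track
def retrieve_context(question: str) -> str:
    # Hypothetical retrieval step; replace with your own logic
    return "Paris is the capital and largest city of France."

@track
def answer_question(question: str) -> str:
    # Each @track-decorated call becomes a step under the same trace
    context = retrieve_context(question)
    completion = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content

print(answer_question("What is the capital of France?"))
```

Calling `answer_question` should produce a single trace containing the retrieval step, the answer step and the underlying OpenAI call, which makes it easier to see where an agent goes wrong.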

You can also integrate with Opik faster using our pre-built prompt.

The pre-built prompt will guide you through the integration process, installing the Opik SDK and instrumenting your code. It supports both Python and TypeScript codebases; if you are using another language, just let us know and we can help you out. Once the integration is complete, simply run your application and you will start seeing traces in your Opik dashboard.
Opik has **30+ integrations** with popular frameworks and model providers.

**[View all 30+ integrations →](/integrations/overview)**
## Analyze your traces

After running your application, you will start seeing your traces in your Opik dashboard.
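Most of your analysis will happen in the Opik UI, but you can also pull traces back programmatically. Here is a minimal sketch using the Python SDK, assuming your SDK version exposes the `Opik.search_traces` method described in the SDK reference; the project name is a placeholder for your own project.

```python
import opik

client = opik.Opik()

# Fetch recent traces from a project (project name is a placeholder)
traces = client.search_traces(project_name="your-project-name", max_results=10)

for trace in traces:
    # Print a few basic fields for a quick sanity check
    print(trace.id, trace.name)
```

This kind of script can be handy for spot-checking that your integration is logging what you expect before you move on to annotation and evaluation.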