Opik provides seamless integration with LangChain, allowing you to easily log and trace your LangChain-based applications. By using the OpikTracer callback, you can automatically capture detailed information about your LangChain runs, including inputs, outputs, and metadata for each step in your chain.

You can check out the Colab Notebook if you’d like to jump straight to the code.


Getting Started

To use the OpikTracer with LangChain, you’ll need the opik and langchain packages installed (the examples below also use langchain_openai). You can install them using pip:

$ pip install opik langchain langchain_openai

In addition, you can configure Opik using the opik configure command, which will prompt you for your local server address or, if you are using the Cloud platform, your API key:

$ opik configure
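If you prefer to configure the SDK from Python rather than the CLI, a minimal sketch like the following should also work (the use_local flag and the api_key argument are assumptions about your deployment; pick whichever applies):

import opik

# Point the SDK at a self-hosted Opik instance...
opik.configure(use_local=True)

# ...or, for the Cloud platform, pass your API key instead:
# opik.configure(api_key="your-api-key")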

Using OpikTracer

Here’s a basic example of how to use the OpikTracer callback with a LangChain chain:

from langchain.chains import LLMChain
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
from opik.integrations.langchain import OpikTracer

# Initialize the tracer
opik_tracer = OpikTracer()

# Create the LLM Chain using LangChain
llm = OpenAI(temperature=0)

prompt_template = PromptTemplate(
    input_variables=["input"],
    template="Translate the following text to French: {input}"
)

llm_chain = LLMChain(llm=llm, prompt=prompt_template)

# Generate the translations
translation = llm_chain.run("Hello, how are you?", callbacks=[opik_tracer])
print(translation)

# The OpikTracer will automatically log the run and its details to Opik

This example demonstrates how to create a LangChain chain with an OpikTracer callback. When you run the chain with a prompt, the OpikTracer will automatically log the run and its details to Opik, including the input prompt, the output, and metadata for each step in the chain.
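If you are using the newer LCEL (LangChain Expression Language) style instead of LLMChain, passing the tracer through the config argument should work as well. A minimal sketch, assuming a chat model:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()

prompt = ChatPromptTemplate.from_template(
    "Translate the following text to French: {input}"
)
chain = prompt | ChatOpenAI(temperature=0)

# Pass the tracer via the config dict rather than a callbacks kwarg
result = chain.invoke(
    {"input": "Hello, how are you?"},
    config={"callbacks": [opik_tracer]},
)
print(result.content)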

Setting tags and metadata

You can also customize the OpikTracer callback to include additional metadata or logging options. For example:

from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer(
    tags=["langchain"],
    metadata={"use-case": "documentation-example"}
)

Accessing logged traces

You can use the created_traces method to access the traces collected by the OpikTracer callback:

from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()

# Call your LangChain chain with the tracer here, then:
traces = opik_tracer.created_traces()
print([trace.id for trace in traces])

The traces returned by the created_traces method are instances of the Trace class, which you can use to update the metadata, feedback scores and tags for the traces.

Accessing the content of logged traces

To access the content of logged traces, you will need to use the Opik.get_trace_content method:

import opik
from opik.integrations.langchain import OpikTracer

opik_client = opik.Opik()

opik_tracer = OpikTracer()

# Call your LangChain chain with the tracer here

# Get the content of the logged traces
traces = opik_tracer.created_traces()
for trace in traces:
    content = opik_client.get_trace_content(trace.id)
    print(content)

Updating and scoring logged traces

You can update the metadata, feedback scores, and tags for traces after they are created. To do so, use the created_traces method to access the traces, then update them using the update method and the log_feedback_score method:

from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()

# Call your LangChain chain with the tracer here

traces = opik_tracer.created_traces()

for trace in traces:
    trace.update(tags=["langchain"])
    trace.log_feedback_score(name="user-feedback", value=0.5)

Advanced usage

The OpikTracer object has a flush method that can be used to make sure that all traces are logged to the Opik platform before you exit a script. This method will return once all traces have been logged or once the timeout is reached, whichever comes first.

from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()
opik_tracer.flush()

Important notes

  1. If you are using asynchronous streaming mode (calling the .astream() method), the input field in the trace UI will be empty due to a LangChain limitation for this mode. However, you can find the input data inside the nested spans of this chain (see the sketch after this list).

  2. If you are planning to use streaming with LLM calls and want to calculate the tokens/cost of those calls, you need to explicitly set the stream_usage argument to True:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    stream_usage=True,
)
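To illustrate both notes above, here is a minimal sketch (assuming an OpenAI chat model and an API key in your environment) that streams asynchronously with usage tracking enabled:

import asyncio

from langchain_openai import ChatOpenAI
from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()
llm = ChatOpenAI(temperature=0, stream_usage=True)

async def main():
    # With .astream(), the top-level trace input may appear empty in the UI,
    # but the nested spans still contain the input data
    async for chunk in llm.astream(
        "Translate the following text to French: Hello, how are you?",
        config={"callbacks": [opik_tracer]},
    ):
        print(chunk.content, end="", flush=True)

    # Make sure all traces are sent to Opik before the script exits
    opik_tracer.flush()

asyncio.run(main())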