LangChain
Describes how to use Opik with LangChain
Opik provides seamless integration with LangChain, allowing you to easily log and trace your LangChain-based applications. By using the `OpikTracer` callback, you can automatically capture detailed information about your LangChain runs, including inputs, outputs, and metadata for each step in your chain.
Getting Started
To use the `OpikTracer` with LangChain, you'll need to have both the `opik` and `langchain` packages installed. You can install them using pip:
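For example:

```shell
pip install opik langchain
```

Depending on which LLM provider you use, you may also need a provider package such as `langchain-openai`.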
In addition, you can configure Opik using the `opik configure` command, which will prompt you for the correct local server address or, if you are using the Cloud platform, your API key:
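```shell
opik configure
```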
Using OpikTracer
Here's a basic example of how to use the `OpikTracer` callback with a LangChain chain:
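A minimal sketch, assuming the `langchain-openai` package is installed and an `OPENAI_API_KEY` environment variable is set; the model name, prompt, and input are illustrative:

```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from opik.integrations.langchain import OpikTracer

# The OpikTracer callback logs each step of the chain run to Opik
opik_tracer = OpikTracer()

prompt = PromptTemplate.from_template("Tell me a short joke about {topic}")
llm = ChatOpenAI(model="gpt-4o", temperature=0)
chain = prompt | llm

# Pass the tracer through the `callbacks` entry of the run config
result = chain.invoke({"topic": "bears"}, config={"callbacks": [opik_tracer]})
print(result.content)
```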
This example demonstrates how to create a LangChain chain with an `OpikTracer` callback. When you run the chain with a prompt, the `OpikTracer` will automatically log the run and its details to Opik, including the input prompt, the output, and metadata for each step in the chain.
Setting tags and metadata
You can also customize the `OpikTracer` callback to include additional metadata or logging options. For example:
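A short sketch; the tag and metadata values here are illustrative placeholders:

```python
from opik.integrations.langchain import OpikTracer

# Tags and metadata are attached to every trace this tracer creates
opik_tracer = OpikTracer(
    tags=["langchain", "documentation-example"],
    metadata={"use-case": "documentation", "environment": "dev"},
)
```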
Accessing logged traces
You can use the `created_traces` method to access the traces collected by the `OpikTracer` callback:
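A minimal sketch, assuming `opik_tracer` is the `OpikTracer` instance that was passed to a chain run:

```python
# Each entry is a lightweight Trace handle for a logged run
traces = opik_tracer.created_traces()
print([trace.id for trace in traces])
```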
The traces returned by the `created_traces` method are instances of the `Trace` class, which you can use to update the metadata, feedback scores, and tags for the traces.
Accessing the content of logged traces
In order to access the content of logged traces you will need to use the Opik.get_trace_content
method:
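A sketch under the assumption that a chain has already been run with `opik_tracer`; the trace ID is passed positionally since the exact keyword name is not shown in the surrounding text:

```python
import opik
from opik.integrations.langchain import OpikTracer

opik_client = opik.Opik()
opik_tracer = OpikTracer()

# ... run your chain with the tracer, then fetch the full trace content:
traces = opik_tracer.created_traces()
trace_content = opik_client.get_trace_content(traces[0].id)
print(trace_content)
```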
Updating and scoring logged traces
You can update the metadata, feedback scores and tags for traces after they are created. For this you can use the created_traces
method to access the traces and then update them using the update
method and the log_feedback_score
method:
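A minimal sketch, assuming `opik_tracer` is the tracer used in the run; the tag, score name, and value are illustrative:

```python
traces = opik_tracer.created_traces()
for trace in traces:
    # Attach an extra tag to the trace after the run has completed
    trace.update(tags=["reviewed"])
    # Record a feedback score against the trace
    trace.log_feedback_score(name="user-feedback", value=1.0)
```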
Advanced usage
The `OpikTracer` object has a `flush` method that can be used to make sure that all traces are logged to the Opik platform before you exit a script. This method returns once all traces have been logged or the timeout is reached, whichever comes first.
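For example, at the end of a script:

```python
# Block until all pending traces have been sent (or the timeout is reached)
opik_tracer.flush()
```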
Important notes

- If you are using asynchronous streaming mode (calling the `.astream()` method), the `input` field in the trace UI will be empty due to a LangChain limitation for this mode. However, you can find the input data inside the nested spans of this chain.
- If you are planning to use streaming with LLM calls and you want to calculate LLM-call tokens/cost, you need to explicitly set the `stream_usage` argument to `True`:
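A sketch using `ChatOpenAI` from `langchain-openai` as an example provider; the model name is illustrative:

```python
from langchain_openai import ChatOpenAI

# stream_usage=True makes the model include token-usage data on streamed
# chunks, which is needed to compute tokens/cost for streaming calls
llm = ChatOpenAI(model="gpt-4o", stream_usage=True)
```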