LangChain (Python)
Opik provides seamless integration with LangChain, allowing you to easily log and trace your LangChain-based applications. By using the `OpikTracer` callback, you can automatically capture detailed information about your LangChain runs, including inputs, outputs, metadata, and cost tracking for each step in your chain.
Key Features
- Automatic cost tracking for supported LLM providers (OpenAI, Anthropic, Google AI, AWS Bedrock, and more)
- Full compatibility with the `@opik.track` decorator for hybrid tracing approaches
- Thread support for conversational applications with the `thread_id` parameter
- Distributed tracing support for multi-service applications
- LangGraph compatibility for complex graph-based workflows
Getting Started
To use the `OpikTracer` with LangChain, you’ll need to have both the `opik` and `langchain` packages installed. You can install them using pip:
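For example (the `langchain-openai` package is included here only because the examples below use an OpenAI chat model):

```bash
pip install opik langchain langchain-openai
```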
In addition, you can configure Opik using the `opik configure` command, which will prompt you for your local server address or, if you are using the Cloud platform, your API key:
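```bash
opik configure
```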
Using OpikTracer
Here’s a basic example of how to use the `OpikTracer` callback with a LangChain chain:
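Below is a minimal sketch assuming an OpenAI chat model served through `langchain-openai`; the model name and prompt are illustrative:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from opik.integrations.langchain import OpikTracer

# Create the Opik callback that will log the chain run as a trace
opik_tracer = OpikTracer()

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | llm

# Pass the tracer as a callback so every step of the chain is captured
response = chain.invoke({"topic": "bears"}, config={"callbacks": [opik_tracer]})
print(response.content)
```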
The `OpikTracer` will automatically log the run and its details to Opik, including the input prompt, the output, and metadata for each step in the chain.
For detailed parameter information, see the OpikTracer SDK reference.
Cost Tracking
The `OpikTracer` automatically tracks token usage and cost for all supported LLM models used within LangChain applications.
Cost information is automatically captured and displayed in the Opik UI, including:
- Token usage details
- Cost per request based on model pricing
- Total trace cost
View the complete list of supported models and providers on the Supported Models page.
For streaming with cost tracking, ensure `stream_usage=True` is set:
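A sketch with an OpenAI chat model (the model name is illustrative):

```python
from langchain_openai import ChatOpenAI
from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()

# stream_usage=True asks the provider to include token usage in the stream,
# which Opik needs in order to compute the cost of the call
llm = ChatOpenAI(model="gpt-4o", streaming=True, stream_usage=True)

for chunk in llm.stream("Tell me a short joke", config={"callbacks": [opik_tracer]}):
    print(chunk.content, end="", flush=True)
```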
Setting tags and metadata
You can customize the `OpikTracer` callback to include additional metadata, logging options, and conversation threading:
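A sketch of the available options; the tag, metadata, thread, and project values are illustrative:

```python
from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer(
    tags=["langchain", "production"],
    metadata={"use_case": "documentation-example", "version": "1.0"},
    thread_id="conversation-42",
    project_name="langchain-demo",
)
```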
Accessing logged traces
You can use the `created_traces` method to access the traces collected by the `OpikTracer` callback:
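For example:

```python
from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()

# ... run your chains with `opik_tracer` passed as a callback ...

traces = opik_tracer.created_traces()
for trace in traces:
    print(trace.id)
```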
The traces returned by the `created_traces` method are instances of the `Trace` class, which you can use to update the metadata, feedback scores, and tags for the traces.
Accessing the content of logged traces
To access the content of logged traces, use the `Opik.get_trace_content` method:
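A sketch that reuses the tracer from the previous example:

```python
import opik
from opik.integrations.langchain import OpikTracer

opik_client = opik.Opik()
opik_tracer = OpikTracer()

# ... run your chains with `opik_tracer` passed as a callback ...

for trace in opik_tracer.created_traces():
    content = opik_client.get_trace_content(trace.id)
    print(content)
```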
Updating and scoring logged traces
You can update the metadata, feedback scores, and tags for traces after they are created. To do so, use the `created_traces` method to access the traces, then update them using the `update` and `log_feedback_score` methods:
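A sketch; the tag, metadata, and feedback-score names are illustrative:

```python
from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()

# ... run your chains with `opik_tracer` passed as a callback ...

for trace in opik_tracer.created_traces():
    trace.update(tags=["reviewed"], metadata={"reviewed_by": "docs-example"})
    trace.log_feedback_score(name="user-feedback", value=1.0)
```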
Compatibility with @track Decorator
The `OpikTracer` is fully compatible with the `@track` decorator, allowing you to create hybrid tracing approaches:
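A sketch that reuses the `chain` defined in the earlier example; the LangChain run is logged as part of the trace created by the decorated function:

```python
from opik import track
from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()

@track
def generate_joke(topic: str) -> str:
    # The chain run is attached to the trace created by @track
    response = chain.invoke({"topic": topic}, config={"callbacks": [opik_tracer]})
    return response.content

generate_joke("bears")
```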
Thread Support
Use the `thread_id` parameter to group related conversations or interactions:
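For example (the thread id value is illustrative):

```python
from opik.integrations.langchain import OpikTracer

# All runs logged with this tracer are grouped under the same conversation thread
opik_tracer = OpikTracer(thread_id="user-123-session-1")
```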
Distributed Tracing
For multi-service/thread/process applications, you can use distributed tracing headers to connect traces across services:
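A sketch of the pattern, assuming the headers returned by `opik_context.get_distributed_trace_headers()` (called inside a tracked function) are passed to the second service along with the request:

```python
from opik import opik_context, track
from opik.integrations.langchain import OpikTracer

# Service A: capture the distributed tracing headers of the current trace
@track
def handle_request(question: str) -> dict:
    headers = opik_context.get_distributed_trace_headers()
    # ... send `question` and `headers` to service B ...
    return headers

# Service B: attach the LangChain run to the trace started in service A
def answer_question(question: str, distributed_headers: dict) -> str:
    opik_tracer = OpikTracer(distributed_headers=distributed_headers)
    response = chain.invoke({"topic": question}, config={"callbacks": [opik_tracer]})
    return response.content
```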
Learn more about distributed tracing in the Distributed Tracing guide.
LangGraph Integration
For LangGraph applications, Opik provides specialized support. The `OpikTracer` works seamlessly with LangGraph, and you can also visualize graph definitions:
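A sketch, assuming `app` is a compiled LangGraph graph (for example, the result of `graph.compile()`):

```python
from opik.integrations.langchain import OpikTracer

# Passing the graph definition lets Opik display the graph alongside the trace
opik_tracer = OpikTracer(graph=app.get_graph(xray=True))

result = app.invoke({"messages": ["Hello"]}, config={"callbacks": [opik_tracer]})
```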
For detailed LangGraph integration examples, see the LangGraph Integration guide.
Advanced usage
The `OpikTracer` object has a `flush` method that can be used to make sure all traces are logged to the Opik platform before you exit a script. This method returns once all traces have been logged or the timeout is reached, whichever comes first.
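For example:

```python
from opik.integrations.langchain import OpikTracer

opik_tracer = OpikTracer()

# ... run your chains with `opik_tracer` passed as a callback ...

# Block until all pending traces have been sent to Opik (or the timeout is reached)
opik_tracer.flush()
```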
Important notes
- Asynchronous streaming: If you are using asynchronous streaming mode (calling the `.astream()` method), the `input` field in the trace UI may be empty due to a LangChain limitation for this mode. However, you can find the input data inside the nested spans of this chain.
- Streaming with cost tracking: If you are planning to use streaming with LLM calls and want to calculate LLM call tokens/cost, you need to explicitly set `stream_usage=True`, as shown in the sketch below:
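A minimal sketch with an OpenAI chat model (the model name is illustrative):

```python
from langchain_openai import ChatOpenAI

# Without stream_usage=True, streamed responses do not report token usage,
# so Opik cannot compute the cost of the call
llm = ChatOpenAI(model="gpt-4o", streaming=True, stream_usage=True)
```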