Opik Cookbooks
An open-source collection of notebooks and guides for using the Opik platform.
Guides
If you are looking to learn more about the Opik platform, the quickstart notebook provides a comprehensive overview, covering both the tracing and the evaluation functionality.
Advanced guides
These guides cover more advanced usage of the Opik platform:
In this guide, we evaluate the hallucination metric that is included with the Opik platform.
In this guide, we evaluate the moderation metric that is included with the Opik platform.
Integration examples
Opik provides first-class support for many popular LLM frameworks and providers. Choose your integration below to get started:
LLM Providers
Log all OpenAI LLM calls to Opik (see the minimal sketch after this list)
Log all Anthropic LLM calls to Opik
AWS Bedrock is a managed service for high-performing foundation models
Gemini is a family of multimodal large language models developed by Google DeepMind
Groq provides fast LLM inference for open-source models
Ollama allows you to run open-source LLMs on your local machine
watsonx is IBM’s platform for deploying ML models
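These provider integrations are typically one-line wrappers around the provider's client. As a minimal sketch of the OpenAI integration (assuming the opik and openai packages are installed and Opik has already been configured with your API key and project):

```python
from openai import OpenAI
from opik.integrations.openai import track_openai

# Wrap the OpenAI client; every call made through the wrapped
# client is logged to Opik as a trace.
client = track_openai(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)
```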
Frameworks & Tools
LangChain is a framework for developing applications powered by LLMs (see the sketch after this list)
LlamaIndex is a framework for building agentic applications
Haystack is a framework for building production-ready LLM applications
LiteLLM allows you to call all LLM APIs using the OpenAI format
CrewAI can be used to create AI agent teams that work together to tackle complex tasks
DSPy is an LLM optimization framework for prompt engineering
Guardrails is a framework for detecting and preventing errors in LLM applications
LangGraph is a framework for building agentic applications built by the LangChain team
aisuite provides a simple, unified interface to multiple Generative AI providers
Predibase provides the fastest way to fine-tune and serve open-source LLMs
Ragas is a framework for evaluating Retrieval Augmented Generation (RAG) pipelines
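Most framework integrations follow the same pattern: attach an Opik tracer or callback to the framework's execution hooks. A sketch of the LangChain integration, assuming the opik, langchain-core, and langchain-openai packages are installed and Opik is configured:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from opik.integrations.langchain import OpikTracer

# OpikTracer is a LangChain callback handler that logs every
# chain and LLM invocation to Opik.
tracer = OpikTracer()

prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

result = chain.invoke(
    {"text": "Opik traces and evaluates LLM applications."},
    config={"callbacks": [tracer]},
)
print(result.content)
```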
Don’t see your preferred framework or tool? Open an issue to request it! In the meantime, you can use our SDK’s core logging functions to track your LLM interactions; check out our tracing documentation for details.
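For stacks without a dedicated integration, the SDK's track decorator is enough to log arbitrary functions as traces. A minimal sketch (the function names and return values are hypothetical, for illustration only):

```python
from opik import track

# Any function decorated with @track is logged as a trace; nested
# decorated calls appear as spans within the parent trace.
@track
def retrieve_context(question: str) -> str:
    # Hypothetical retrieval step, stubbed for illustration.
    return "Opik is an open-source LLM evaluation platform."

@track
def answer(question: str) -> str:
    context = retrieve_context(question)
    return f"Based on: {context}"

print(answer("What is Opik?"))
```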