# Home

Opik is an [open-source](https://github.com/comet-ml/opik) logging, debugging, and optimization platform for AI agents and LLM applications. If you're building AI features, you know it's easy to spin up a working prototype but harder to log, test, iterate, and monitor to meet production requirements. Opik gives you all the tools you need to go from LLM observability to action across your AI application footprint and dev cycle. Ship measurable improvements with gorgeous logs, annotation and scoring functions, pre-configured LLM-as-a-judge [eval metrics](/evaluation/metrics/overview), and even [automated agent optimization algorithms](/agent_optimization/overview) to maximize performance.

## End-to-End AI Engineering

Opik is open source! You can find the full source code on [GitHub](https://github.com/comet-ml/opik) and the complete self-hosting guide [here](/self-host/local_deployment).

## Core Functions

Opik integrates with your existing AI stack through your model provider or LLM framework:

- **Tracing**: Traces give you instant visibility into what's working, what's not, and why, with advanced analysis and debugging features built in.
- **Evaluation**: Use LLM-as-a-judge and heuristic eval metrics to score your app or agent on hallucination, context recall, and more.
- **Agent optimization**: Choose from six advanced optimization algorithms to auto-generate and score the best prompts for the steps in your agentic system.
- **Prompt engineering**: Store and version system prompts, compare results live in the [Prompt Playground](/prompt_engineering/playground), and experiment with different models with our LLM proxy.
- **Self-hosting**: Deploy Opik on your own infrastructure with local or Kubernetes deployment options.

A minimal code sketch at the end of this page shows how tracing and metric scoring fit together.

## Video Tutorials

Prefer a visual guide? Follow along as we cover everything from basic setup and trace logging to LLM evaluation metrics, production monitoring, and more.
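To make the tracing and evaluation functions above concrete, here is a minimal sketch that logs a traced function call and scores its output with an LLM-as-a-judge metric. It is an illustration, not the full quickstart: it assumes the `opik` Python package is installed and configured (for example via `opik.configure()`), that credentials for a judge model are available, and the `answer_question` function and its example question are placeholders for your own code. See the [eval metrics](/evaluation/metrics/overview) docs for the supported metrics and parameters.

```python
from opik import track
from opik.evaluation.metrics import Hallucination


@track  # each call to this function is logged as a trace in Opik
def answer_question(question: str) -> str:
    # Placeholder for your real model or agent call.
    return "Paris is the capital of France."


if __name__ == "__main__":
    question = "What is the capital of France?"
    output = answer_question(question)

    # LLM-as-a-judge hallucination check; this calls a judge model,
    # so model credentials must be configured in your environment.
    metric = Hallucination()
    result = metric.score(input=question, output=output)
    print(result.value, result.reason)
```

The same pattern extends to the other core functions: swap in heuristic metrics for deterministic checks, or run the scored prompts through the agent optimization workflows described above.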