Playground
The Opik prompt playground is currently in public preview. If you have any feedback or suggestions, please let us know.
When working with LLMs, there are times when you want to quickly try out different prompts and see how they perform. Opik’s prompt playground is a great way to do just that.
Using the prompt playground
The prompt playground is a simple interface that allows you to enter system, user, and assistant messages and see the output of the LLM in real time.
You can also easily evaluate how different models impact the prompt by duplicating a prompt and changing either the model or the model parameters.

All of the conversations from the playground are logged to the playground project so that you can easily refer back to them later.
Configuring the prompt playground
The playground supports the following LLM providers:
- OpenAI
- Anthropic
- Ollama
- LM Studio (coming soon)
If you would like us to support additional LLM providers, please let us know by opening an issue on GitHub.
Configuring OpenAI and Anthropic
To use OpenAI or Anthropic models, you will need to add your API key to Opik. You can do this by clicking on the Configuration tab in the sidebar and navigating to the AI providers tab. From there, you can select the provider you want to use and enter your API key.
Configuring Ollama
If you are using Ollama, you will need to ensure that Ollama’s security configuration is set up correctly to avoid CORS issues. If you are running Ollama in production, we recommend reviewing the Ollama documentation for advice on best practices.
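If you just need Ollama to accept requests from the browser, one common approach is to set Ollama’s `OLLAMA_ORIGINS` environment variable before starting the server. The origin below is an assumption for a local Opik deployment; replace it with wherever your Opik UI is served from:

```bash
# Allow the Opik UI's origin to call the Ollama API from the browser.
# http://localhost:5173 is an assumed local Opik frontend address;
# replace it with the origin your Opik UI actually runs on.
OLLAMA_ORIGINS="http://localhost:5173" ollama serve
```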
If you are simply looking to use Ollama in the Opik playground, we have released a utility to help you access Ollama from your browser. The Python SDK includes a simple reverse proxy that you can run on your local machine to forward requests to Ollama.
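As a minimal sketch, installing the SDK and starting the proxy looks like the following; the exact flag name is an assumption and may differ between SDK versions, so check `opik --help` for the current invocation:

```bash
# Install (or upgrade) the Opik Python SDK, which ships the proxy utility
pip install --upgrade opik

# Start the local reverse proxy in front of Ollama.
# The --ollama flag is an assumption; run `opik --help` to confirm
# the exact invocation for your SDK version.
opik proxy --ollama
```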
You can then configure the AI providers using the `URL` and `models` parameters returned by the proxy server in the console.

Don’t forget to update the model list with the models supported by your proxy server!
You will need to keep the proxy server running for the playground to work. If it is not running, you will see the error: `Unexpected error`.
Running experiments in the playground
You can evaluate prompts in the playground by adding variables to your prompts using the `{{variable}}` syntax. You can then connect a dataset and run the prompts on each dataset item. This allows both technical and non-technical users to evaluate prompts quickly and easily.
When using datasets in the playground, you need to ensure the prompt contains variables in the mustache syntax (`{{variable}}`) that align with the columns in the dataset. For example, if the dataset contains a column named `user_question`, the prompt must contain `{{user_question}}`, as shown in the sketch below.
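As a sketch of how the column names line up with the prompt variables, here is one way you might create such a dataset with the Opik Python SDK (the dataset name and questions are illustrative):

```python
from opik import Opik

client = Opik()

# Create the dataset if it does not already exist, otherwise reuse it.
dataset = client.get_or_create_dataset(name="playground-questions")

# Each key of an item becomes a dataset column; "user_question" matches
# the {{user_question}} variable used in the playground prompt.
dataset.insert([
    {"user_question": "What is your refund policy?"},
    {"user_question": "How do I reset my password?"},
])
```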
Once you are ready to run the experiment, simply select a dataset next to the Run button and click Run. You will then be able to see the LLM outputs for each sample in the dataset.