watsonx is a next-generation enterprise studio for AI builders to train, validate, tune, and deploy AI models.

You can check out the Colab Notebook if you’d like to jump straight to the code.


Getting Started

Configuring Opik

To start tracking your watsonx LLM calls, you can use our LiteLLM integration. You’ll need to have both the opik and litellm packages installed. You can install them using pip:

$ pip install opik litellm

In addition, you can configure Opik using the opik configure command, which will prompt you for your local server address or, if you are using the Cloud platform, your API key:

$ opik configure
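
If you prefer to configure Opik from Python rather than the CLI (for example, inside a notebook), the SDK also exposes opik.configure(). A minimal sketch, assuming you are targeting the hosted Cloud platform rather than a local server:

import opik

# Should prompt for your API key if one is not already configured when targeting
# the Cloud platform; pass use_local=True instead if you run a local Opik server.
opik.configure(use_local=False)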

If you’re unable to use our LiteLLM integration with watsonx, please open an issue.

Configuring watsonx

In order to configure watsonx, you will need to have:

  • The endpoint URL: Documentation for this parameter can be found here
  • Watsonx API Key: Documentation for this parameter can be found here
  • Watsonx Token: Documentation for this parameter can be found here
  • (Optional) Watsonx Project ID: Can be found in the Manage section of your project.

Once you have these, you can set them as environment variables:

import os

os.environ["WATSONX_ENDPOINT_URL"] = ""  # Base URL of your watsonx instance
os.environ["WATSONX_API_KEY"] = ""  # IBM Cloud API key
os.environ["WATSONX_TOKEN"] = ""  # IAM auth token

# Optional
# os.environ["WATSONX_PROJECT_ID"] = ""  # Project ID of your watsonx instance
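
If you would rather not hard-code secrets in a shared notebook, you can prompt for them at runtime instead. A minimal sketch using Python's standard getpass module (only the secret values are shown; set the remaining variables the same way):

import getpass
import os

# Prompt for secret values instead of embedding them in the notebook
os.environ["WATSONX_API_KEY"] = getpass.getpass("IBM Cloud API key: ")
os.environ["WATSONX_TOKEN"] = getpass.getpass("IAM auth token: ")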

Logging LLM calls

To log LLM calls to Opik, create the OpikLogger callback and register it with LiteLLM. Once that is done, you can call LiteLLM as you normally would:

from litellm.integrations.opik.opik import OpikLogger
import litellm

opik_logger = OpikLogger()
litellm.callbacks = [opik_logger]

response = litellm.completion(
    model="watsonx/ibm/granite-13b-chat-v2",
    messages=[
        {"role": "user", "content": "Why is tracking and evaluation of LLMs important?"}
    ],
)
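
LiteLLM returns an OpenAI-style response object, so you can read the generated text from response.choices while the call itself is logged to Opik by the OpikLogger callback. For example:

# Print the model's reply from the completion above
print(response.choices[0].message.content)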

Logging LLM calls within a tracked function

If you are using LiteLLM within a function tracked with the @track decorator, you will need to pass the current_span_data as metadata to the litellm.completion call:

import litellm
from opik import track
from opik.opik_context import get_current_span_data


@track
def generate_story(prompt):
    response = litellm.completion(
        model="watsonx/ibm/granite-13b-chat-v2",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content


@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    response = litellm.completion(
        model="watsonx/ibm/granite-13b-chat-v2",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content


@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story


generate_opik_story()