watsonx
watsonx is a next-generation enterprise studio for AI builders to train, validate, tune, and deploy AI models.
Getting Started
Configuring Opik
To start tracking your watsonx LLM calls, you can use our LiteLLM integration. You'll need to have both the opik and litellm packages installed. You can install them using pip:
pip install opik litellm
In addition, you can configure Opik using the opik configure command, which will prompt you for your local server address or, if you are using the Cloud platform, your API key:
opik configure
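If you prefer to configure Opik from Python rather than the shell, the SDK also exposes an opik.configure() helper; a minimal sketch:

import opik

# Equivalent to running `opik configure` from the shell; prompts for any
# missing values. Set use_local=True if you are using a self-hosted server.
opik.configure(use_local=False)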
If you’re unable to use our LiteLLM integration with watsonx, please open an issue.
Configuring watsonx
To configure watsonx, you will need:
- The endpoint URL: Documentation for this parameter can be found here
- Watsonx API Key: Documentation for this parameter can be found here
- Watsonx Token: Documentation for this parameter can be found here
- (Optional) Watsonx Project ID: Can be found in the Manage section of your project.
Once you have these, you can set them as environment variables:
import os
os.environ["WATSONX_ENDPOINT_URL"] = "" # Base URL of your WatsonX instance
os.environ["WATSONX_API_KEY"] = "" # IBM cloud API key
os.environ["WATSONX_TOKEN"] = "" # IAM auth token
# Optional
# os.environ["WATSONX_PROJECT_ID"] = "" # Project ID of your WatsonX instance
Logging LLM calls
To log LLM calls to Opik, you will need to create the OpikLogger callback and register it with LiteLLM. Once that is done, you can call LiteLLM as you normally would:
from litellm.integrations.opik.opik import OpikLogger
import litellm
opik_logger = OpikLogger()
litellm.callbacks = [opik_logger]
response = litellm.completion(
    model="watsonx/ibm/granite-13b-chat-v2",
    messages=[
        {"role": "user", "content": "Why is tracking and evaluation of LLMs important?"}
    ],
)
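The call returns LiteLLM's standard OpenAI-style response object, so the generated text is available on the first choice:

print(response.choices[0].message.content)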
Logging LLM calls within a tracked function
If you are using LiteLLM within a function tracked with the @track decorator, you will need to pass the current_span_data as metadata to the litellm.completion call:
from opik import track
from opik.opik_context import get_current_span_data

@track
def generate_story(prompt):
    response = litellm.completion(
        model="watsonx/ibm/granite-13b-chat-v2",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content

@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    response = litellm.completion(
        model="watsonx/ibm/granite-13b-chat-v2",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content

@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story

generate_opik_story()