Using Opik with AWS Bedrock

Opik integrates with AWS Bedrock to provide a simple way to log traces for all Bedrock LLM calls. This works for all supported models, including when you use the streaming API.

Creating an account on Comet.com

Comet provides a hosted version of the Opik platform. Simply create an account and grab your API key.

You can also run the Opik platform locally, see the installation guide for more information.

%pip install --upgrade opik boto3

import opik

opik.configure(use_local=False)
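
If you prefer not to go through the interactive prompt, the same configuration can be provided through environment variables. A minimal sketch, assuming the SDK reads the OPIK_API_KEY and OPIK_WORKSPACE variables, with placeholder values:

import os

# Placeholder values - replace with your own API key and workspace name.
os.environ["OPIK_API_KEY"] = "<your-api-key>"
os.environ["OPIK_WORKSPACE"] = "<your-workspace>"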

Preparing our environment

First, we will set up our Bedrock client. Uncomment the following lines to pass AWS credentials manually, or check out other ways of passing credentials to Boto3 (a sketch using a named profile follows the next cell). You will also need to request access to the model in the AWS console before being able to generate text. Here we are going to use the Llama 3.2 model; you can request access to it on this page for the us-east-1 region.

import boto3

REGION = "us-east-1"

MODEL_ID = "us.meta.llama3-2-3b-instruct-v1:0"

bedrock = boto3.client(
    service_name="bedrock-runtime",
    region_name=REGION,
    # aws_access_key_id=ACCESS_KEY,
    # aws_secret_access_key=SECRET_KEY,
    # aws_session_token=SESSION_TOKEN,
)
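
If you would rather not pass credentials explicitly, Boto3 can also resolve them from a named profile in your local AWS configuration. A minimal sketch, assuming a profile called "bedrock" exists in your ~/.aws/credentials (the profile name is purely illustrative):

import boto3

REGION = "us-east-1"

# Assumes a local AWS profile named "bedrock"; any configured profile name works.
session = boto3.Session(profile_name="bedrock", region_name=REGION)
bedrock = session.client(service_name="bedrock-runtime")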

Logging traces

In order to log traces to Opik, we need to wrap our Bedrock calls with the track_bedrock function:

import os

from opik.integrations.bedrock import track_bedrock

bedrock_client = track_bedrock(bedrock, project_name="bedrock-integration-demo")

PROMPT = "Why is it important to use a LLM Monitoring like CometML Opik tool that allows you to log traces and spans when working with LLM Models hosted on AWS Bedrock?"

response = bedrock_client.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": PROMPT}]}],
    inferenceConfig={"temperature": 0.5, "maxTokens": 512, "topP": 0.9},
)
print("Response", response["output"]["message"]["content"][0]["text"])
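
Besides the generated text, the Converse API response also reports token usage and the stop reason. A minimal sketch of inspecting those fields, assuming the standard Converse response shape:

# Token usage and stop reason returned by the Converse API.
usage = response["usage"]
print("Input tokens:", usage["inputTokens"])
print("Output tokens:", usage["outputTokens"])
print("Total tokens:", usage["totalTokens"])
print("Stop reason:", response["stopReason"])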

The prompt and response messages are automatically logged to Opik and can be viewed in the UI.

Bedrock Integration

Logging traces with streaming
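
The same tracked client can be used with Bedrock's streaming converse_stream API; the streamed response is logged to Opik in the same way.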

def stream_conversation(
    bedrock_client,
    model_id,
    messages,
    system_prompts,
    inference_config,
):
    """
    Sends messages to a model and streams the response.
    Args:
        bedrock_client: The Boto3 Bedrock runtime client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send.
        system_prompts (JSON): The system prompts to send.
        inference_config (JSON): The inference configuration to use.

    Returns:
        Nothing.

    """

    response = bedrock_client.converse_stream(
        modelId=model_id,
        messages=messages,
        system=system_prompts,
        inferenceConfig=inference_config,
    )

    stream = response.get("stream")
    if stream:
        for event in stream:
            if "messageStart" in event:
                print(f"\nRole: {event['messageStart']['role']}")

            if "contentBlockDelta" in event:
                print(event["contentBlockDelta"]["delta"]["text"], end="")

            if "messageStop" in event:
                print(f"\nStop reason: {event['messageStop']['stopReason']}")

            if "metadata" in event:
                metadata = event["metadata"]
                if "usage" in metadata:
                    print("\nToken usage")
                    print(f"Input tokens: {metadata['usage']['inputTokens']}")
                    print(f"Output tokens: {metadata['usage']['outputTokens']}")
                    print(f"Total tokens: {metadata['usage']['totalTokens']}")
                if "metrics" in event["metadata"]:
                    print(f"Latency: {metadata['metrics']['latencyMs']} milliseconds")


system_prompt = """You are an app that creates playlists for a radio station
    that plays rock and pop music. Only return song names and the artist."""

# Message to send to the model.
input_text = "Create a list of 3 pop songs."


message = {"role": "user", "content": [{"text": input_text}]}
messages = [message]

# System prompts.
system_prompts = [{"text": system_prompt}]

# Inference parameters to use.
temperature = 0.5
top_p = 0.9
# Base inference parameters.
inference_config = {"temperature": temperature, "topP": top_p}


stream_conversation(
    bedrock_client,
    MODEL_ID,
    messages,
    system_prompts,
    inference_config,
)

Bedrock Integration

Using it with the track decorator

If you have multiple steps in your LLM pipeline, you can use the track decorator to log a trace for each step. If Bedrock is called within one of these steps, the LLM call will be associated with the corresponding step:

from opik import track
from opik.integrations.bedrock import track_bedrock

bedrock = boto3.client(
    service_name="bedrock-runtime",
    region_name=REGION,
    # aws_access_key_id=ACCESS_KEY,
    # aws_secret_access_key=SECRET_KEY,
    # aws_session_token=SESSION_TOKEN,
)

os.environ["OPIK_PROJECT_NAME"] = "bedrock-integration-demo"
bedrock_client = track_bedrock(bedrock)


@track
def generate_story(prompt):
    res = bedrock_client.converse(
        modelId=MODEL_ID, messages=[{"role": "user", "content": [{"text": prompt}]}]
    )
    return res["output"]["message"]["content"][0]["text"]


@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    res = bedrock_client.converse(
        modelId=MODEL_ID, messages=[{"role": "user", "content": [{"text": prompt}]}]
    )
    return res["output"]["message"]["content"][0]["text"]


@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story


generate_opik_story()

The trace can now be viewed in the UI:

Bedrock Integration
