Gemini - Google AI Studio
Gemini is a family of multimodal large language models developed by Google DeepMind.
Getting Started
Configuring Opik
To start tracking your Gemini LLM calls, you can use our LiteLLM integration. You'll need to have the `opik`, `litellm`, and `google-generativeai` packages installed. You can install them using pip:

```shell
pip install opik litellm google-generativeai
```
In addition, you can configure Opik using the `opik configure` command, which will prompt you for your local server address or, if you are using the Cloud platform, your API key:

```shell
opik configure
```
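If you prefer to do this from Python instead of the CLI, a minimal sketch using the `opik.configure()` helper (it prompts interactively for any values it cannot find, so the exact flow depends on your environment):

```python
import opik

# Interactive equivalent of the `opik configure` CLI command: prompts for
# the Cloud API key, or pass use_local=True for a self-hosted Opik server
# so that no API key is requested.
opik.configure()
```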
If you’re unable to use our LiteLLM integration with Gemini, please open an issue.
Configuring Gemini
In order to configure Gemini, you will need to have:

- Your Gemini API key: see the following documentation page for how to retrieve it.

Once you have it, you can set it as an environment variable:
```python
import os

os.environ["GEMINI_API_KEY"] = ""  # Your Google AI Studio Gemini API key
```
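The empty string above is a placeholder. A quick sanity check can fail fast before any LLM calls are made if the key was never filled in (purely illustrative; `require_gemini_key` is not part of Opik or LiteLLM):

```python
import os

def require_gemini_key() -> str:
    """Return the Gemini API key, raising early if it is not configured."""
    key = os.environ.get("GEMINI_API_KEY", "")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set")
    return key
```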
Logging LLM calls
In order to log the LLM calls to Opik, you will need to create the `OpikLogger` callback. Once the `OpikLogger` callback is created and added to LiteLLM, you can make calls to LiteLLM as you normally would:
```python
from litellm.integrations.opik.opik import OpikLogger
import litellm

opik_logger = OpikLogger()
litellm.callbacks = [opik_logger]

response = litellm.completion(
    model="gemini/gemini-pro",
    messages=[
        {"role": "user", "content": "Why is tracking and evaluation of LLMs important?"}
    ],
)
```
Logging LLM calls within a tracked function
If you are using LiteLLM within a function tracked with the `@track` decorator, you will need to pass the `current_span_data` as metadata to the `litellm.completion` call:
```python
from opik import track
from opik.opik_context import get_current_span_data
import litellm

@track
def generate_story(prompt):
    response = litellm.completion(
        model="gemini/gemini-pro",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content

@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    response = litellm.completion(
        model="gemini/gemini-pro",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content

@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story

generate_opik_story()
```