
Quickstart¶

With Comet, you can effectively manage, visualize, and optimize both traditional ML models and LLM applications throughout their lifecycle, from initial experimentation to ongoing production monitoring.

[Image: Comet dashboard overview, from the demo Comet project]

By adding Comet tracking to your AI workflows, you unlock full reproducibility and lineage for your experiments: Comet logs, and lets you manage, all information associated with each code run.

Follow this quickstart to get started with ML experiment and LLM prompt management on Comet.

Set up on Comet¶

The Comet platform organizes your ML experiments in a four-tier hierarchical structure, consisting of:

  • Organization: one per company.
  • Workspace: typically, one per team.
  • Project: typically, one per ML / LLM workflow.
  • Experiment: one single code execution of the workflow.

As a Comet user, you are automatically given a default workspace, a default ML project, and a default LLM project!

Discover more about navigating and managing the Comet Platform from the Find your way around docs page.
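
As a minimal sketch of how this hierarchy maps to the SDK (assuming the SDK is already installed and configured, as covered in the next sections; the workspace and project names below are placeholders), an experiment can be started in an explicit workspace and project:

import comet_ml

# Placeholder workspace and project names; omit them to fall back to your
# configured defaults
exp = comet_ml.start(
    workspace="your-team-workspace",
    project_name="demo-project",
)
exp.end()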

Get started: Experiment Management¶

1. Install and configure the Comet ML SDK¶

The comet_ml Python package allows you to programmatically set up and interact with your ML experiments on Comet. You can install the library with pip or conda in your local environment, or with the %pip magic inside a notebook:

# With pip:
pip install comet_ml

# Or with conda:
conda install -c anaconda -c conda-forge -c comet_ml comet_ml

# Or inside a notebook cell:
%pip install comet_ml

The Comet ML SDK then authenticates to your user account with an API key. The API key is stored in .comet.config together with other configuration settings (such as workspace and project). You can set up your API key, or update your default configuration, either from the command line or from Python:

comet login

or, from Python:

import comet_ml

comet_ml.login()

Note that, while you can pass the api_key as an argument to login(), this is not recommended for security reasons unless the key is retrieved dynamically, for example from a secure vault.
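
For reference, .comet.config is a simple INI-style file; a minimal example (with placeholder values) looks like this:

# Example ~/.comet.config contents; all values below are placeholders
[comet]
api_key = YOUR-API-KEY
workspace = your-workspace
project_name = your-default-project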

Further details, as well as other configuration approaches, are available in the Configure the SDK page.

2. Log your training run as an experiment¶

Comet defines a training run as an experiment.

For each experiment, you can log three types of information:

  • Metadata: key-value pairs such as metrics, parameters, system info, etc.
  • Assets: unversioned file-like objects such as images, models, curves, confusion matrices, code, etc.
  • Artifacts: versioned assets, typically datasets.

While metadata and assets are associated with a single experiment, artifacts are designed to be shared across multiple experiments for efficiency.
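
For example, the following sketch logs an unversioned asset (an image) to a single experiment, and a versioned dataset artifact that other experiments can later consume; the file paths and names here are placeholders:

import comet_ml

exp = comet_ml.start(project_name="demo-project")

# Unversioned asset: tied to this experiment only (placeholder path)
exp.log_image("plots/confusion_matrix.png")

# Versioned artifact: a dataset that other experiments can reuse
artifact = comet_ml.Artifact(name="demo-dataset", artifact_type="dataset")
artifact.add("data/train.csv")  # placeholder local path
exp.log_artifact(artifact)

exp.end()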

To track your experiments with Comet:

  1. Start an Experiment object.

    import comet_ml

    exp = comet_ml.start(project_name="demo-project")
    

    The complete list of supported arguments is available in the SDK reference documentation.

  2. Log metrics, parameters, visualizations, code, and other assets relevant for your experiment management.

    import math
    import random

    for step in range(100):
        accuracy = math.log(1 + step + random.uniform(-0.4, 0.4)) / 5
        loss = 5 - math.log(1 + step + random.uniform(-0.4, 0.4))
        exp.log_metrics({'accuracy': accuracy, 'loss': loss}, step=step)

    parameters = {'batch_size': 32, 'learning_rate': 0.0001}

    exp.log_parameters(parameters)
    

    Please refer to the Logging Data section for detailed information and examples for each metadata type.

    Tip

    By default, Comet automatically logs a number of relevant metrics, parameters, and visualizations for the most popular ML frameworks (a short example follows these steps).
    Find all references in the Integrations section.

  3. Run your code!

    Comet will automatically track your experiment and stop logging when your code terminates.

    Tip

    Consider adding exp.end() at the end of your code to ensure all logging is completed before exiting.
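
As mentioned in the tip above, popular ML frameworks are auto-logged once an experiment has been started. Below is a minimal sketch of what this looks like with Keras, assuming TensorFlow is installed; the tiny synthetic dataset and model are for illustration only:

import comet_ml
import numpy as np
from tensorflow import keras

# Start the experiment before building the model so Comet can hook into Keras
exp = comet_ml.start(project_name="demo-project")

# Tiny synthetic binary-classification dataset, for illustration only
x = np.random.rand(256, 4)
y = (x.sum(axis=1) > 2).astype(int)

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training metrics and hyperparameters are captured automatically
model.fit(x, y, epochs=5, batch_size=32)

# Ensure all logging is completed before exiting
exp.end()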

3. Visualize the experiment in the UI¶

When the experiment starts, Comet prints a link to its Single Experiment page in the Comet UI. Otherwise, refer to Find your way around for information on how to navigate to your experiment from the comet.com homepage.

From the Experiment page, you can:

  1. Access all metadata, including training metrics and hyperparameters, logged during the training run.

  2. Manage (rename, tag, move, and archive) the experiment.

[Image: Single Experiment page; review the experiment from the demo Comet project]

Please refer to the Experiment Page documentation for more details.

From the Panels page, you can then create custom dashboard views that allow you to visually inspect your experiment run from different perspectives and, if desired, share them with team members.

You can also compare this experiment with other experiments in the demo project by applying Diff on two or more selected experiments.

[Image: Experiments Diff page; compare multiple experiments from the demo Comet project]

Please refer to the Compare Experiments page for more details.

Putting it all together!¶

To run an ML experiment with Comet from your Jupyter notebook, you need to:

  1. Install the Comet ML SDK, and initialize it with your API key.

    %pip install comet_ml

    and

    import comet_ml
    comet_ml.login()
    
  2. Update your training code to start a Comet experiment, and log metrics and parameters to it.

    import math
    import random

    exp = comet_ml.start(project_name="demo-project")

    parameters = {'batch_size': 32, 'learning_rate': 0.0001}
    exp.log_parameters(parameters)

    for step in range(100):
        accuracy = math.log(1 + step + random.uniform(-0.4, 0.4)) / 5
        loss = 5 - math.log(1 + step + random.uniform(-0.4, 0.4))
        exp.log_metrics({'accuracy': accuracy, 'loss': loss}, step=step)
    
  3. Visualize and manage the experiment from the Comet UI.

Note

Comet supports multiple programming languages. Python is the most commonly used language among Comet users, and it is the focus of this quickstart and all other guides.
If you are interested in R or Java, please refer to R SDK and Java SDK respectively for instructions.

Get started: LLM Evaluation (Opik)¶

Tracking traces and chains with Opik is as easy as installing the opik library and adding the track decorator to the functions you want to track.

1. Install and configure the Opik SDK¶

First, install the opik library:

pip install opik

Next, we will configure the Opik SDK to connect to your Comet account:

export OPIK_API_KEY="your_opik_api_key"
export OPIK_WORKSPACE="your_opik_workspace"

Note

Your Opik workspace name is often the same as your user name.
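
Alternatively, the SDK can be configured from Python; a minimal sketch, assuming you run it once in an interactive session:

import opik

# Prompts for your API key and workspace if they are not already set,
# and saves them to a local Opik configuration file
opik.configure()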

2. Trace your LLM application¶

By adding the track decorator to your functions, you can start tracing your LLM application:

from opik import track

@track
def my_function(user_question):
    # My llm application code

    response = "Hello, how are you?"
    return response
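
Nested tracked functions are grouped under the same trace, which is how chains are captured. A hedged sketch with placeholder retrieval and generation steps:

from opik import track

@track
def retrieve_context(user_question):
    # Placeholder retrieval step
    return ["Some relevant context"]

@track
def generate_answer(user_question, context):
    # Placeholder generation step
    return "Hello, how are you?"

@track
def answer_question(user_question):
    # Calls made inside a tracked function show up as nested spans
    context = retrieve_context(user_question)
    return generate_answer(user_question, context)

answer_question("How are you today?")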

3. Learn more about Opik¶

Opik is an open-source platform for tracing, evaluating, and comparing LLM applications. It can even be used to monitor your LLM applications in production!

Note

Learn more about Opik in the Opik documentation.

What's next?¶
