
CLIPDraw Gallery: AI Art Powered by Comet and Gradio

Contribute to a growing gallery of AI-generated art


Jump right in and submit your prompt here

A few weeks ago, we announced a really exciting integration between Comet and Gradio, which allows data scientists and teams to interactively test and validate their ML models via real-time demos, all while tracking experiment metrics, hyperparameters, source code, and more.

There are any number of possible use cases for this integration, but to demonstrate one particularly valuable (and fun) example, our team has been hard at work creating a public gallery space for AI-generated artwork.

Motivation

The advent of extremely large pre-trained Transformer models, such as GPT-3 and CLIP, has led to the rise of a new phenomenon: Prompt Engineering.

These large models are capable of producing all sorts of fantastic outputs; however, the key to making them work is using the right set of instructions as an input. ML Twitter is full of examples of people experimenting with different types of prompts and sharing their results. The issue is that because this knowledge is scattered, there is no easy way for others to learn from and build on past approaches.

We believe that Prompt Engineering will receive a lot of attention in the future as a way to interact with these large generative models. However, if we don’t keep track of what the community is trying and creating, we won’t be able to get very far.

To support the experimentation happening across the ML and Generative Art communities, we’re creating a public Comet Project to track Generative Art created with the CLIPDraw model, along with a Gradio UI for interacting with the model on our community server.

How it Works

Here’s how it works, at a high level. Using Gradio, we were able to create a simple, efficient user interface for generating and visualizing individual model predictions in real time.

We paired these powerful capabilities with Kevin Frans’ CLIPDraw model implementation to create a UI that accepts a text prompt as the core input (along with a few optional parameters, such as the number of training iterations), and produces AI-generated art as an output.
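To make this concrete, here’s a minimal sketch of what a Gradio interface along these lines might look like (not the production app). The generate_drawing function is a hypothetical stand-in for the CLIPDraw generation step, and the blank canvas it returns is only there so the sketch runs end to end:

```python
import numpy as np
import gradio as gr

def generate_drawing(prompt, iterations):
    # Hypothetical stand-in: the real app runs CLIPDraw's optimization loop on
    # the prompt. Here we just return a blank canvas so the sketch runs end to end.
    return np.full((224, 224, 3), 255, dtype=np.uint8)

# A text prompt (plus an iteration count) goes in, a generated image comes out.
gr.Interface(
    fn=generate_drawing,
    inputs=["text", "number"],
    outputs="image",
).launch()
```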

Then, using the integration between Gradio and Comet, we created a Comet project where we log the text prompts, image creation parameters, and generated images for a given run.

When you submit a prompt, it’s queued up as an Experiment in the project, and as a job on our community GPU server, which will continuously run image generation jobs as they come in. Jobs are currently capped at a runtime of 10 minutes, but the intermediate images will be logged to your Experiment.
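As a rough illustration of that logging flow (not the actual server code), here’s a hedged sketch of a job that logs intermediate images to a Comet Experiment and respects the 10-minute cap. The render_step helper and the project name are hypothetical placeholders:

```python
import time
import numpy as np
from comet_ml import Experiment

MAX_RUNTIME_SECONDS = 10 * 60   # jobs are capped at a 10-minute runtime
ITERATIONS = 250                # example iteration count

def render_step(step):
    # Hypothetical stand-in for one CLIPDraw optimization step; the real job
    # updates the stroke vectors and rasterizes the current drawing.
    return np.full((224, 224, 3), 255, dtype=np.uint8)

experiment = Experiment(project_name="clipdraw-gallery")  # assumed project name
start = time.time()

for step in range(ITERATIONS):
    image = render_step(step)
    # Log the intermediate drawing so it shows up under the experiment's Graphics tab.
    experiment.log_image(image, name="drawing", step=step)
    if time.time() - start > MAX_RUNTIME_SECONDS:
        break  # stop once the runtime cap is reached

experiment.end()
```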


Don’t want to wait for the Community Server to finish your job? Try running it yourself in this Colab Notebook


The result is a data-rich public gallery of AI-generated art experiments! But any art gallery is only as good as the artists who populate it. Which leads us to…

How to Contribute

Contributing to this project is simple—visit the public Gradio interface, enter a text prompt, experiment with the optional parameters (as a general rule, increasing the values will result in a richer image but take more time to process), and then hit submit.

Here’s what the parameters mean:

Seed: Set the seed used for initialization in order to reproduce your drawing. This can be any positive integer.

Prompt: The text prompt that the model will use to create the drawing. It can be any type of string. The model will try to maximize the similarity between this prompt and the generated image.

Negative Prompt: The model will try to minimize the similarity between the generated drawing and this prompt.

Stroke Count: The number of Stroke Vectors to use for the drawing.

Iterations: The number of optimization steps to take when creating the drawing.

Use Normalized CLIP: Controls whether the image is normalized when running the optimization process.

Social: If you’d like us to tag you on Twitter with your creation, drop your handle in here.
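For reference, here’s a hedged sketch of how these fields might be recorded against a run using Comet’s log_parameters call; the exact key names and project name used by the gallery are assumptions for illustration:

```python
from comet_ml import Experiment

experiment = Experiment(project_name="clipdraw-gallery")  # assumed project name

# Example values only; the parameter key names are an assumption for illustration.
experiment.log_parameters({
    "seed": 42,
    "prompt": "a watercolor painting of an underwater city",
    "negative_prompt": "",
    "stroke_count": 256,
    "iterations": 250,
    "use_normalized_clip": True,
})

experiment.end()
```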

You’ll then see a Comet URL generated on the right side of your screen—this lets you know that your experiment is in the queue, training is underway, and that you’ll be able to see the output at that URL when training is complete. Here’s a GIF showing that process:


Other cool things!

With Comet’s powerful experiment management tools, there are a few additional goodies inside this project that can be fun to play around with.

Visualize training runs across steps

Inside the experiment run (found via the output URL on the Gradio interface), click on the “Graphics” tab, and you’ll see the evolution of your AI-generated art unfold before your eyes.

You’ll initially see a gallery of static images, but you can also then view a timelapse of this progression by clicking the “Play” button under the “Filter by Step” column.

Here’s a GIF showing this in action:


Compare and “diff” experiment runs

If you’d like to compare outputs on the same text prompt with different optional parameters, you can also do that inside of Comet.

Simply visit the main project page, find and select experiment runs you want to compare, click the “DIFF” button at the top of the table, and then click the “Graphics” tab to see a side-by-side. You can again visualize the progression across model training steps by clicking the play button under the “Filter by Step” column. Here’s a quick look at this in practice:


Share your creations with us!

We’ll be checking in on the public project page to see what folks are submitting and creating, but if you end up with something you can’t resist sharing, let us know!

Take a screenshot or capture a GIF and tag us on Twitter (@Cometml) or LinkedIn (@comet-ml). We’re excited to see what you come up with!


Like what you’re reading?

Subscribe to the Comet Newsletter to stay in the loop with what we’re building and to hear our team’s takes on the latest industry news, research, perspective, and more.

Dhruv Nair

Data Scientist at comet.ml