
Lab Sessions: A new series of experimental AI collaborations

Collage of graphic thumbnails featuring people who have previously collaborated with Google.

Whether we’re developing products, advancing hardware or conducting new research, we know the most important thing is putting technology in the hands of actual people. For years we’ve been collaborating with individuals and communities across a variety of fields to shape emerging technologies to meet their own unique needs.

Labs.google is a place where we continue to experiment with AI: testing hypotheses, learning from one another and creating new technology together. But we also want to create a way to showcase our existing and future collaborations across all kinds of disciplines.

That’s why we’re announcing Lab Sessions, a series of experimental AI collaborations with visionaries – from artists to academics, scientists to students, creators to entrepreneurs. You can view our first three sessions below or at labs.google/sessions.

Dan Deacon x Generative AI

One of our most recent Sessions was with composer and digital musician Dan Deacon. Dan teamed up with Google researchers to create a pre-show performance for Google I/O 2023. He experimented with our text-to-music model, MusicLM, to create new sounds for a new song. He used Bard, our conversational AI, to help him write a guided meditation. And he explored our generative video model, Phenaki, turning his lyrics into visuals projected on stage. Experiment with Bard and MusicLM yourself.

Musician and composer Dan Deacon and a graphic title reads: “Dan Deacon x Generative AI”

Lupe Fiasco x Large Language Models

We’ve also collaborated with rapper and MIT Visiting Scholar Lupe Fiasco to see how AI could enhance his creative process. As we began to experiment with the PaLM API and MakerSuite, it became clear that Lupe didn’t want AI to write raps for him. Instead, he wanted AI tools that could help him in his own writing process.

So we set out to build a new set of custom tools together. Lupe’s lyrical and linguistic techniques brought a whole new perspective to the way we prompt and create with large language models. The final result is an experiment called TextFX, where you’ll find 10 AI-powered tools for writers, rappers and wordsmiths of all kinds. Give it a try, and if you want a closer look at how the experiment was built, check out this blog post or the open-source code.
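For a rough sense of what building on the PaLM API looks like, here’s a minimal sketch of a TextFX-style wordplay prompt sent through the Python client. The model name, prompt and parameters are illustrative assumptions, not the actual TextFX implementation; see the open-source code for the real thing.

```python
# Minimal sketch of prompting the PaLM API from Python.
# The model name, prompt and parameters below are illustrative
# assumptions, not the actual TextFX implementation.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder key

# A simile-style wordplay prompt, loosely in the spirit of TextFX's tools.
prompt = (
    "Write a vivid simile that captures the following idea.\n"
    "Idea: a beat that keeps changing when you least expect it\n"
    "Simile:"
)

completion = palm.generate_text(
    model="models/text-bison-001",  # assumed PaLM text model name
    prompt=prompt,
    temperature=0.8,
    max_output_tokens=64,
)

print(completion.result)  # the generated simile as plain text
```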

Rapper Lupe Fiasco speaks into a microphone and a graphic title reads: “Lupe Fiasco x Large Language Models”

Georgia Tech and RIT/NTID x Sign Language Recognition

Since last year, we’ve been working with a group of students from the Georgia Institute of Technology and the National Technical Institute for the Deaf (NTID) at the Rochester Institute of Technology (RIT) to explore how AI computer vision models could help people learn sign language in new ways.

Together with Google researchers and the Kaggle community, the students came up with a game called PopSignAI. This bubble-launcher game teaches 250 American Sign Language (ASL) signs based on the MacArthur-Bates Communicative Development Inventories, which catalog the first concepts children learn in a language. Ninety-five percent of deaf infants are born to hearing parents, who often do not know ASL.
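Sign language recognition of this kind typically starts by turning camera frames into hand landmarks that a model can then classify. As a rough, hypothetical sketch of that first step (not the PopSignAI code itself), here’s how MediaPipe’s hand-tracking solution extracts landmarks from a single frame in Python; the file name is a placeholder.

```python
# Rough sketch: extracting hand landmarks from one video frame with MediaPipe.
# Illustrative of the general approach only, not the PopSignAI implementation;
# "frame.jpg" is a placeholder file name.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    frame = cv2.imread("frame.jpg")
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # 21 normalized (x, y, z) landmarks per hand; a classifier trained
            # on sequences of these can predict which ASL sign is being made.
            coords = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
            print(len(coords), "landmarks detected")
```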

You can download the game from the Google Play Store or the iOS App Store to try it out yourself, or head over to popsign.org to learn more.

A woman communicates in American Sign Language and a graphic title reads: “Georgia Tech and RIT/NTID x Sign Language Recognition”

Looking forward

It takes people with curiosity, creativity and compassion to harness AI’s potential. We’re excited to share more Lab Sessions and see where we can take AI together.
