The Keyword

8 impressions from co-creating the sounds of the future

[Image: Music AI Report cover]

For nearly a decade, Google has proactively engaged with artists and musicians to explore how emerging technologies can support creativity and help them elevate their artistic practice. Just last year, teams across Google — YouTube, Technology & Society and Google DeepMind — launched a new program to help artists, songwriters and producers get hands-on experience with music AI tools and prototypes to offer their feedback and guidance. And today, we’re revealing key insights gathered from our collaboration with musicians in the YouTube Music AI Incubator program.

The YouTube Music AI Incubator is a key part of our ongoing collaboration with music professionals where, together, we ideate on AI tools that meet their needs and enhance their creative expression. During individual meetings with dozens of global incubator participants, our research scientists, program managers and product specialists shared in-progress prototypes and solicited feedback on them. These prototypes included the Music AI Sandbox, an experimental suite of AI tools announced publicly at this year’s I/O and designed to supercharge the workflows of artists who collaborate with us through the incubator program.

These sessions continue to foster honest, candid feedback that shapes the future of our generative AI music products and features. Today, we’re sharing a new report that synthesizes critical insights and observations from listening and learning alongside these creative experts over the last year.

Here’s a summary of the top eight insights we’ve gathered from participant sessions as they explored the potential of new AI tools to support their artistic expression:

1. Hearing the music reduces skepticism

While many incubator participants initially doubted AI's ability to help produce music with depth, their opinions shifted after listening to AI-generated music. Initial amazement gave way to a shared emphasis: we need to continue building with and for the creative community.

2. Music models can expand imagination

Participants valued experimenting with AI to explore new and unconventional sounds, regardless of musical "right" and "wrong." The unpredictable nature of music AI tools sets them apart from other software used in music production, and participants saw this unpredictability as a catalyst for creative experimentation, allowing artists to uncover new sonic possibilities.

3. Multimodal inputs unlock musical expression

While natural language prompts offered some creative control, participants found them limiting for music creation. Artists expressed a need for diverse approaches that cater to creative preferences beyond text-based interaction. This feedback led us to explore alternative interfaces, such as converting humming and mumble tracks into digital melodies.

4. AI models make it easy to explore complex sounds

Artists found that AI made it easier to experiment with complex sounds, such as crowds chanting or choirs singing, to generate placeholders and get a feel for how a piece might work. This helped them sketch out new ideas to see where to invest in new musical partnerships.

5. Real-time interaction makes generating fun

Interacting with generative models in real time allowed the artists to engage in playful experimentation. Real-time feedback and interaction fostered a dynamic, collaborative relationship between artists and the new AI tools.

6. Different disciplines require different designs

The incubator sessions revealed that artists, songwriters and producers have distinct needs and desired controls when using these tools. While participants see the potential of AI as an inspiration tool, a song starter, a song finisher, or a pitch tool, they desire controls that keep them in the driver's seat so they can leverage their specific skills.

7. Monetization, attribution, and control are critical concerns

Nearly all the artists in the incubator emphasized the importance of addressing monetization, attribution, and control in AI music creation. They raised key questions: Who owns the music? How do songwriters get compensated? The group actively brainstormed solutions and explored different models for tackling these challenges. For YouTube, these aren't hypothetical concerns: we're actively working on responsible approaches to address these key areas.

8. Education and accessibility are keys to empowerment

Musicians indicated that AI tools could be useful for beginners or emerging producers with limited technical knowledge or resources. They imagined AI’s ability to democratize music creation, provide educational opportunities and empower individuals to explore musical expression regardless of their experience or technical background.

Looking ahead

Our focus is on co-creation – working together with artists, songwriters, creatives, storytellers and community members is essential to imagining the possibilities of technology. We believe it’s critical to invite the thoughts and opinions of those who will be using these AI tools most to ensure that the technology is more useful, engaging, and accessible in the long run.

In addition to YouTube’s ongoing partnerships with artists of all kinds, we’ll continue investing in art and technology-focused programs like Magenta Studio, Google Arts & Culture, Artists + Machine Intelligence and Lab Sessions.

Our commitment to fostering dialogue around AI and creativity extends beyond our products. For a closer look at how we've been convening conversations on creativity and AI around the world, check out our Google Arts & Culture exhibit about a discussion we convened with artists in collaboration with The Serpentine Galleries in London. As we continue to explore the intersection of technology and society, we're eager to connect with and learn from a wide range of communities, artistic and beyond.
