From sketches to prototype: Designing with generative AI
What happens when you combine the artistic vision of a world-renowned designer with a leading generative model? At Google DeepMind, we partnered with designer Ross Lovegrove and Creative Director Ila Colombo from Lovegrove Studio, as well as design office Modem, to find out.
Using Gemini and Google DeepMind’s generative image technology, we worked with Lovegrove Studio to fine-tune a model that could act as a prototyping tool in their creative process. The model translates Ross’ distinct design language of organic, fluid structures and biomorphic forms into outputs that reflect his style, while also offering new directions. The result is a bespoke model that can produce fresh ideas true to the studio’s vision, showcasing AI as a powerful collaborative tool for artists.
The challenge: capturing a designer's style
Our goal was to use generative AI to complete a design project — from the initial digital concept to the final, physical product. We chose a chair because it presents a classic design challenge: balancing a fixed function with an evolving form. This choice gave us the freedom to explore style while maintaining utility.
To start, we focused on accurately capturing the nuance of Ross' unique design style. This allowed us to create an AI tool that could truly learn and express the core of his artistic vision.

Our approach: distillation and dialogue
We worked with the studio to curate a high-quality dataset of Ross’ personal sketches, using it to fine-tune our text-to-image model, Imagen. By training the model on the studio’s selected work, we captured the core components of Ross’ design language — the specific curves, structural logic and organic patterns — allowing us to generate new concepts rooted in his unique style.
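At a high level, fine-tuning a text-to-image model on a curated body of work means pairing each image with a caption that encodes the designer's vocabulary. The sketch below illustrates that pairing step only; the style terms, file names, and manifest format are illustrative assumptions, not the actual dataset or training pipeline.

```python
import json

# Hypothetical style vocabulary distilled from the studio's design language.
STYLE_TERMS = ["biomorphic form", "organic lattice", "fluid silhouette"]

def build_manifest(sketches):
    """Pair each sketch path with a caption embedding the style vocabulary,
    producing (image, text) records suitable for text-to-image fine-tuning."""
    records = []
    for path, subject in sketches:
        caption = f"a {subject} with {', '.join(STYLE_TERMS)}, pencil sketch"
        records.append({"image": path, "text": caption})
    return records

manifest = build_manifest([("sketch_01.png", "seating object")])
print(json.dumps(manifest, indent=2))
```

A manifest like this is a common input shape for fine-tuning runs: the captions carry the vocabulary the model should associate with the visual style.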
From the outset, we approached the project as a human-led inquiry. The studio determined the need to prioritize language alongside the visual dataset, working to decode and articulate Ross’ design lexicon to effectively guide the model’s output. We focused on building a specific vocabulary that described the studio’s work, knowing that the right prompts were key to getting meaningful results.
Throughout the process, Lovegrove Studio observed how the model responded to specific terms and used these insights to align the outputs toward the intended design outcomes. This dialogue between designer and AI was a crucial part of the process. We paid close attention to how the model interpreted certain words, using that feedback to refine prompts and steer the output closer to the studio’s vision. We challenged the model to generate a chair without ever using the word “chair,” instead using creative synonyms to produce more diverse outputs and a richer exploration of form and function.
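The "chair without the word chair" exercise can be pictured as systematic prompt construction from functional synonyms and style modifiers. A minimal sketch of that idea — all synonym and modifier lists here are invented for illustration, not the studio's actual lexicon:

```python
# Illustrative prompt builder: describe a chair without ever naming it,
# using functional synonyms to broaden the model's exploration of form.
SYNONYMS = ["seating sculpture", "ergonomic perch", "body-supporting structure"]
MODIFIERS = ["cantilevered", "exoskeletal", "wave-like"]

def diverse_prompts(style="biomorphic, fluid, organic"):
    """Cross every synonym with every modifier, keeping the target word out."""
    prompts = [f"a {mod} {syn}, {style}"
               for syn in SYNONYMS for mod in MODIFIERS]
    assert all("chair" not in p for p in prompts)  # never name the object
    return prompts

for prompt in diverse_prompts()[:3]:
    print(prompt)
```

Varying the indirect descriptions this way yields a spread of prompts, which is one simple route to the richer exploration of form the studio was after.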
The result: a physical prototype
We developed many concepts with this specialized model and the Lovegrove Studio team, then used Gemini to push the creative exploration further, ideating on materials and visualizing the chair in different forms and from different viewpoints.
From sketch generation to the final chair design, co-created by Lovegrove Studio and Google's image model.

Ultimately, this collaborative process produced a chair design that Lovegrove Studio felt was an authentic extension of their work. Together, we created a physical version using metal 3D printing, transforming the AI-generated pixels into a tangible, functional piece of art.
“For me, the final result transcends the whole debate on design. It shows us that AI can bring something unique and extraordinary to the process.” - Ross Lovegrove
The 3D-printed chair generated in partnership with Lovegrove Studio.
