How animators and AI researchers made ‘Dear Upstairs Neighbors’
["What does AI mean for retail?", "How did Nano Banana get its name?", "How can AI help me plan travel?"]


Director Connie He developed the story based on her personal experience with noisy neighbors. In her storyboards she envisioned a series of hallucinations that get more unhinged and ridiculous as the night progresses.

For our main character, Ada, production designer Yingzong Xin created a design that’s quirky and unique, with pushed proportions and an angular shape language.

Ada’s face is extremely expressive. Character model sheet by Yingzong Xin.

Ada’s bedroom is rendered in cool colors, conveying a sense of calm, comfort and sanctuary. Set design by Yingzong Xin.

Ada’s hallucinations have a rough style and neon palette that distinguishes them from the “real world” of her bedroom. Concept art by Yingzong Xin.

The painterly style changes from moment to moment, expressing Ada’s changing emotions through color and texture. Concept art by Yingzong Xin.

In the most intense moments, the abstract expressionist style grows to dominate the entire scene. Concept art by Yingzong Xin.

Images of Ada generated by Imagen after fine-tuning. The fine-tuned model helped the whole team explore Ada as a character.

Left: paintings by Yingzong Xin. Right: stylized animated video generated by Veo after fine-tuning. What Veo learned from our concept art surprised us: not just superficial details like color and texture, but deep artistic concepts like two-point perspective.

Top: Ada’s character design follows strictly two-dimensional rules: her characteristic hair poof and messy bun must always be part of her silhouette, never obscuring her face. Bottom left: a 3D sculpture of Ada’s hair can’t possibly look correct from every angle, because the solid form violates those 2D rules. Bottom right: Veo, after fine-tuning on images of Ada, seamlessly resolves the conflict, smoothly adapting the shapes to keep the silhouette correct as the head turns.

Using text-to-video with the fine-tuned Veo model produced scenes that looked like Ada, but their movement was random, uncontrolled, and often bizarre. Text alone can’t convey the nuance and specificity needed for narrative animated filmmaking.

To create a nuanced performance strong enough to carry the story, our animators used traditional methods. Animator Ben Knight created rough 3D animation for this scene in Maya, and researcher Andy Coenen used fine-tuned Veo models to transform it into the final look.

The video-to-video approach allowed each artist to work in their comfort zone, using their favorite animation tools. Animator Mattias Breitholtz created this rough 2D animation using TV Paint, and researcher Forrester Cole transformed it into the final look frame by frame, using fine-tuned versions of Imagen in a custom ComfyUI workflow.
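
For readers curious what a frame-by-frame pass like that looks like in code, here is a minimal, illustrative sketch in Python. It is not the production pipeline: the stylize_frame function is a hypothetical stand-in for the fine-tuned Imagen step that ran inside the custom ComfyUI workflow, and the directory names are placeholders.

```python
from pathlib import Path

from PIL import Image


def stylize_frame(frame: Image.Image) -> Image.Image:
    """Hypothetical stand-in for the fine-tuned Imagen image-to-image step."""
    # Placeholder so the sketch runs end to end: return the frame unchanged.
    # In the real workflow, this is where a rough 2D frame would be
    # transformed into the film's painterly look.
    return frame


def restyle_sequence(in_dir: str, out_dir: str) -> None:
    """Restyle a numbered sequence of rough-animation frames, one at a time."""
    out_path = Path(out_dir)
    out_path.mkdir(parents=True, exist_ok=True)
    for frame_file in sorted(Path(in_dir).glob("*.png")):
        rough = Image.open(frame_file).convert("RGB")
        final = stylize_frame(rough)
        final.save(out_path / frame_file.name)


if __name__ == "__main__":
    # Placeholder directory names.
    restyle_sequence("rough_frames", "final_frames")
```

One practical upside of structuring the work frame by frame is that any single frame can be inspected, retouched, or re-run without regenerating the whole shot.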

Animator Steven Chao animated Ada and created dynamic low-poly effects in Maya, and researcher Ellen Jiang and director Connie He used fine-tuned Veo and Imagen models to transform these elements into the expressionist look. The staccato rhythm of the changing paint texture adds to the intensity of the action.

To create Ada’s hallucination of a howling dog, we started with a concept painting by Yingzong Xin and used Veo image-to-video to bring it to life. Veo’s first pass (without fine-tuning) was too photorealistic for our film, so we used the fine-tuned version of Veo to bring the shot closer to our intended visual style. The video-to-video workflow allowed us to switch freely between Veo and traditional tools like Premiere.
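
The basic image-to-video step (without fine-tuning) is something anyone can try with the publicly available Veo models. The sketch below is illustrative only: it uses the public google-genai Python SDK rather than the fine-tuned models used on the film, and the model ID, file name, and prompt are placeholders to check against current documentation.

```python
import time

from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

# Placeholder inputs: a concept painting plus a short prompt describing the motion.
with open("howling_dog_concept.png", "rb") as f:
    concept = types.Image(image_bytes=f.read(), mime_type="image/png")

operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # placeholder model ID; check current docs
    prompt="The painted dog lifts its head and howls; rough brushstrokes, neon palette",
    image=concept,
)

# Video generation is a long-running operation, so poll until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("howling_dog_first_pass.mp4")
```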

Using fine-tuned Veo with video-to-video workflows allowed us to iterate on the design of both the dog and the painterly effects around it, exploring stylistic variations with unprecedented freedom and control.

Supervising animator Cassidy Curtis created rough 3D animation for this shot in Maya, and researcher Erika Lu fine-tuned a Veo model to transform it into the final look. To improve the silhouette of Ada’s hair, Lu added a rough mask to indicate the region where more hair was needed, and used Veo to improvise an extra tuft of hair there that fits seamlessly into the rest of the shot.
