When artists and machine intelligence come together
Throughout history, from photography to video to hypertext, artists have pushed the expressive limits of new technologies, and artificial intelligence is no exception. At I/O 2019, Google Research and Google Arts & Culture launched the Artists + Machine Intelligence Grants, providing a range of support and technical mentorship to six artists from around the globe following an open call for proposals. The inaugural grant program sought to expand the field of artists working with machine learning (ML) and, by supporting pioneering artists, to creatively push the boundaries of generative ML and natural language processing.
Today, we are publishing the outcomes of the grants. The projects draw from many disciplines, including rap and hip hop, screenwriting, early cinema, phonetics, Spanish language poetry, and Indian pre-modern sound. What they all have in common is an ability to challenge our assumptions about AI’s creative potential.
Learn more at g.co/hiphoppoetrybot
Hip Hop Poetry Bot by Alex Fefegha
Can AI rap? Alex explores speech generation trained on rap and hip hop lyrics by Black artists. For now, the project exists as a proof of concept: building the experiment in full requires a large, public dataset of rap and hip hop lyrics on which an algorithm can be trained, and no such public archive currently exists. The project therefore launches with an invitation from Alex to rap and hip hop artists to become creative collaborators and contribute their lyrics to a new, public dataset of lyrics by Black artists.
Read more about Neural Swamp
Neural Swamp by Martine Syms
Martine uses video and performance to examine representations of blackness across generations, geographies, mediums, and traditions. For this residency, Martine developed Neural Swamp, a play staged across five screens, starring five entities who talk and sing alongside and over each other. Two of the five voices are trained on Martine’s voice and generated using machine learning speech models. The project will premiere at The Philadelphia Museum of Art and Fondazione Sandretto Re Rebaudengo in Fall 2021.
Play with g.co/nonsenselaboratory
The Nonsense Laboratory by Allison Parrish
Allison invites you to adjust, poke at, mangle, curate and compress words with a series of playful tools in her Nonsense Laboratory. Powered by a bespoke code library and machine learning model developed by Allison, you can mix and respell words, sequence mouth movements to create new words, rewrite a text so that the words feel different in your mouth, or go on a journey through a field of nonsense.
Explore g.co/letmedreamagain
Let Me Dream Again by Anna Ridler
Anna uses machine learning to recreate lost films from the surviving fragments of early Hollywood and European cinema. The outcome? An endlessly evolving, algorithmically generated film and soundtrack. The film will play continuously, never repeating itself, over a period of one month.
Read more about Knots of Code
Knots of Code by Paola Torres Núñez del Prado
Paola studies the history of quipus, a pre-Columbian notation system based on the tying of knots in ropes, as part of a new research project, Knots of Code. The project’s first work is a Spanish-language poetry album from Paola and AIELSON, an artificial intelligence system that composes and recites poetry inspired by quipus, emulating the voice of the late Peruvian poet J.E. Eielson.
Read more about Dhvāni
Dhvāni by Budhaditya Chattopadhyay
Budhaditya brings a lifelong interest in the materiality, phenomenology, political-cultural associations, and sociability of sound to Dhvāni, a responsive sound installation comprising 51 temple bells and conducted with the help of machine learning. An early iteration of Dhvāni was installed at the EXPERIMENTA Arts & Sciences Biennale 2020 in Grenoble, France.