Big applause from the audience even before the concert starts. A man enters the stage bathed in colorful lights and strides to the microphone stand at the center of the platform. The intro begins, and the singer sways to the beat of the electronic music. That this is no ordinary live concert becomes clear at the latest with the first sound of his voice, which is not his own. It is the voice of the artist Holly Herndon, translated live here by an AI (from minute 34:12).
Taking hold of another person’s voice is no longer a thing of the future, thanks to Holly Herndon and her project team of “open-minded people,” as she likes to call them. “Holly+,” for which she was recently awarded the STARTS Prize 2022 and which was created in collaboration with Herndon Dryhurst Studio, Never Before Heard Sounds and Voctro Labs, is a true tour de force of what machine learning can already make possible in music today.
Whether as a pre-recorded audio file via the holly.plus website or, as here, in a live performance: anyone who wants to can borrow Holly’s voice to make music with it. Thanks to the decentralized autonomous organization “Holly+ DAO” built around the project, musicians can not only use her voice and earn money with it, but also finance the further development of these instruments and act as a controlling body when it comes to the appropriate use of her digital twin.
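The article does not spell out how the Holly+ DAO mediates approvals and revenue in technical terms. Purely as an illustration of the general pattern described above, here is a minimal sketch in Python; every class name, quorum, and percentage split is an invented assumption, not the project’s actual rules or contracts.

```python
# Hypothetical sketch of a DAO-style approval and revenue split for a
# digital voice. All names, thresholds, and shares are invented for
# illustration; they are not Holly+'s actual mechanism.
from dataclasses import dataclass, field

@dataclass
class UsageProposal:
    artist: str          # who wants to perform with the voice
    description: str     # what the voice will be used for
    votes_for: int = 0
    votes_against: int = 0

@dataclass
class VoiceDAO:
    quorum: int = 3               # assumed minimum number of votes
    creator_share: float = 0.5    # assumed cut for the voice's creator
    treasury_share: float = 0.1   # assumed cut funding new instruments
    dao_treasury: float = 0.0
    proposals: list = field(default_factory=list)

    def submit(self, proposal: UsageProposal) -> None:
        self.proposals.append(proposal)

    def approved(self, proposal: UsageProposal) -> bool:
        # Members act as the "controlling body": a use is allowed only
        # if enough members vote and a majority is in favor.
        total = proposal.votes_for + proposal.votes_against
        return total >= self.quorum and proposal.votes_for > proposal.votes_against

    def distribute(self, revenue: float, proposal: UsageProposal) -> dict:
        # Split earnings between the creator, the performer, and the
        # treasury that finances further development of the instruments.
        if not self.approved(proposal):
            raise ValueError("usage not approved by the DAO")
        creator_cut = revenue * self.creator_share
        treasury_cut = revenue * self.treasury_share
        performer_cut = revenue - creator_cut - treasury_cut
        self.dao_treasury += treasury_cut
        return {"creator": creator_cut,
                "performer": performer_cut,
                "treasury": treasury_cut}

dao = VoiceDAO()
proposal = UsageProposal(artist="guest musician",
                         description="live set with the borrowed voice")
dao.submit(proposal)
proposal.votes_for, proposal.votes_against = 4, 1
print(dao.distribute(100.0, proposal))
# -> {'creator': 50.0, 'performer': 40.0, 'treasury': 10.0}
```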
In the beginning… There was also a time when you didn’t have a digital twin – was there a particular trigger to create it, or did it evolve over time?
Holly Herndon: Holly+ evolved out of experiments that I was working on around my album PROTO, where my partner and I, along with an ensemble in Berlin, raised what we called an AI baby, Spawn. Spawn learned from a combination of training material from the group. As the tools became more sophisticated, we realized we could create a naturalistic likeness, and the only approach that we felt comfortable with was training on ourselves directly. That’s when I started using just my own training data, and Holly+ was born.
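The interview does not describe how such a voice model is trained. As a rough illustration of the idea of “training on yourself,” here is a minimal PyTorch sketch: a small autoencoder that learns to reconstruct spectrogram frames of one speaker’s recordings, so that audio pushed through the decoder takes on traits of that voice. The model shape, sizes, and random stand-in data are all assumptions, not the actual Holly+ pipeline.

```python
# Hypothetical sketch, NOT the Holly+ system: a tiny autoencoder trained
# only on one speaker's mel-spectrogram frames.
import torch
import torch.nn as nn

N_MELS = 80  # mel-spectrogram bins (assumed)

class TimbreAutoencoder(nn.Module):
    def __init__(self, n_mels: int = N_MELS, bottleneck: int = 16):
        super().__init__()
        # A narrow bottleneck forces the decoder to re-synthesize detail
        # from what it has learned about the training voice.
        self.encoder = nn.Sequential(nn.Linear(n_mels, 128), nn.ReLU(),
                                     nn.Linear(128, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 128), nn.ReLU(),
                                     nn.Linear(128, n_mels))

    def forward(self, mel_frames: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(mel_frames))

# Stand-in for mel frames extracted from the target speaker's recordings.
own_voice_frames = torch.randn(1024, N_MELS)

model = TimbreAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):  # toy training loop
    reconstruction = model(own_voice_frames)
    loss = loss_fn(reconstruction, own_voice_frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```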
Now suddenly other people you may not even know can speak and sing with your voice. What does it actually feel like to pass on your own voice to other people?
Holly Herndon: The project is evolving in stages. Our first instrument models the ‘essence’ of my voice, but isn’t naturalistic. The second instrument we are working on is more naturalistic, so we are creating guard rails for the usage. Eventually it will be freely available for anyone to use.
But by taking small steps towards this, I am able to see issues as they arise, as well as acclimate myself to these ideas. When someone else performs through my voice, of course it isn’t really me. It is that person expressing themselves through me, or through a piece of my identity. I find this extremely exciting because it creates a hybrid that wasn’t possible before, and the decisions that the performer makes are of course different from my own.
In your project you talk about “identity play” – what does identity actually mean to you?
Holly Herndon: Identity play is allowing other people to playfully perform through my digital likeness using machine learning tools trained on information about me. This may be with my voice, or with my visual likeness. Identity is both constructed and inherited. I prefer to focus on the construction aspects!
You use sound as material, copy the human voice with the help of artificial intelligence and transfer it to other people. Is this what will turn the music industry and all its copyright conventions completely upside down in the near future?
Holly Herndon: The music industry and copyright are ripe for change in any case, but of course machine learning will have a profound impact on this. I hope that we can learn from past mistakes around sampling and find a way that fosters experimentation and creativity while fairly compensating people for their contributions to training data.
I believe that many in the arts need to develop a fluent understanding of the capabilities of new machine learning technologies. While I think there is so much to be excited about, I also think artists are not sufficiently prepared for how dramatic some shifts in habits or economies may be. I think many people have become exhausted by hyperbolic claims around tech, but this is different. This is not a drill.
Holly+ features instruments created collaboratively by Herndon Dryhurst Studio, Never Before Heard Sounds, and Voctro Labs. How did this Artistic Exploration collaboration come about?
Holly Herndon: My partner Mat Dryhurst and I have been involved with machine learning for years, so we are aware of many people active in this space. We met Chris and Yotam from NBHS through social media, and they sent us a demo they had been using internally with one of our songs, so it felt really organic to work together.
We were aware of Voctro Labs from watching their lectures on YouTube, and when we reached out to them they were incredibly warm and fun to work with. They are based in Barcelona, so we were able to do a performance there at Sónar with local vocalists Maria Arnal and Tarta Relena using the Voctro Labs software, which was a great experience. It takes a village to create something special, and we are so lucky to have found open-minded people to explore these ideas with.
What’s next for your project? Do you already have ideas or goals that you have set for yourself?
Holly Herndon: I’m working on an album using these new tools as well as more live performances. I am presenting a premiere at Helsinki Festival this summer, where local musicians will be performing through my voice. We are also looking to build some missing infrastructure to help artists participate in what I anticipate is coming.
Holly Herndon (US) is an American multi-disciplinary artist based in Berlin. Her work involves building new technologies to experiment with her voice and image, facilitated by critical research in Artificial Intelligence and decentralized infrastructure. In recent work she has produced an instrument that allows anyone to sing with her voice, distributed governance of her digital voice to the Holly+ DAO, and released the Classified portrait series, generated from what publicly available AI datasets know about her likeness. She has toured her influential albums PROTO (4AD) and Platform (4AD) globally, most recently with a choir composed of human and AI voices. She completed her doctorate in Composition at Stanford University, working with the Center for Computer Research in Music and Acoustics (CCRMA). She makes her research process public through the Interdependence podcast.