Purple Magazine
— The Future Issue #37 S/S 2022

interspecifics

“MUSICAL ORGANISMS”

MUSIC

interview by ALEPH MOLINARI

the latin american collective interspecifics — an art, music, and science research bureau — uses artificial intelligence to capture the bioelectrical and chemical signals of living organisms and natural phenomena. their sonic compositions expand the spectrum of musical possibilities.

ALEPH MOLINARI — Interspecifics is an interdisciplinary art, music, and science collective, right?
INTERSPECIFICS — Yes. That’s one of the definitions. We also think of it as a studio or a boudoir. It was started in 2013 by Paloma López and myself. Later, we added new members, like Emmanuel Anguiano and Felipe Rebolledo, and we also have scientific advisers who are regular collaborators on our projects. They include the microbiologist Fernan Federici, the artist and bio-specialist Maro Pebo [Mariana Pérez Bobadilla], and the physicist and mathematician Carles Tardío Pi, as well as many others.

ALEPH MOLINARI — How did this integration of science and music come about?
INTERSPECIFICS — It came really early for me because I met Miller Puckette, the developer of the famous [music] software Pure Data and Max/MSP. Max has some history of its own: Robert Henke used it to produce live performances, and it was a patch made in Max that inspired the creation of the Ableton software. It’s a low-level, mathematical way to produce sound. I got very inspired by the possibility of programming my own synthesizers. I grew up in Tijuana, so I was very close to the electronic music scene there.

ALEPH MOLINARI — At the time of Nortec Collective and Murcof?
INTERSPECIFICS — Yes, of course. Ramón [Amezcua, aka Bostich] from Nortec is a good friend. I just released a mix for his latest ambient album. I grew up around them, seeing them manipulate machines, and I was very passionate about electricity and its possibility as a language. And, of course, electronic music is tied to this idea. Later, I started Interspecifics, and the project focused on exploring the possibilities of microbiology, physics, and other sciences to use sound as a way to make visible that which is not accessible to the human interface or senses.

ALEPH MOLINARI — Hence the name Interspecifics.
INTERSPECIFICS — Exactly. We chose the name because of the idea of collective knowledge, not only as information, but also as collective experience. Our senses are limited, and technology has the ability to expand us. We also introduced the idea of the observation of otherness — for example, the way that bacteria, plants, and even rocks perceive reality. This can be translated into human experience, as a result of which you can open up doors that are usually not clear or accessible to the human spectator.

ALEPH MOLINARI — Your project Aire V.3 uses real-time data on contaminants generated in major cities around the world. I found the music to be surprisingly harmonious, modern, and instrumental. Is this a way of creating beauty out of the destruction of the world?
INTERSPECIFICS — It’s more about the behavior of the pollutants, rather than a moral judgment on them. The way pollutants are created is related to the rhythms of human behavior, which is actually quite repetitive and structured. The philosopher Henri Lefebvre wrote an important book called Rhythmanalysis. In it, he refers to humans as clocks, and the way they behave as a macro-rhythmical system that correlates economics, politics, and society. This was a big inspiration for us: to translate the sound of pollutants in a way that actually reflects macro-rhythmical systems, and not to place any ethical connotation on that. The aesthetic decision was to follow the nature of pollutants and the way they rise in the atmosphere. For example, you’re going to have very clear air from midnight to 10 in the morning. Then, the pollutants start increasing in different ways. In Mexico City, for example, ozone is an important pollutant, and it is related to the sun. If there is a lot of sun, there will be more ozone, and it will trigger other kinds of pollutants, including monoxides. There is a direct correlation.

ALEPH MOLINARI — So, the higher frequency of pollutants changes the patterns of the music?
INTERSPECIFICS — Yes. They have a higher frequency. They have different cycles. In the case of Aire V.3, everything is built using modular and analog synthesizers, so we treat the pollutants as electrical signals, and then I can control, for example, an LFO [low-frequency oscillator] that is connected to a specific sound wave. By building these blocks, I can get very interesting timbral, harmonic, and tonal responses from these inputs. In a way, we built an instrument for these phenomena to play.
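
To make that signal chain concrete, here is a minimal Python sketch of the same idea in software rather than on modular hardware — the hourly ozone values and the mapping ranges are invented for illustration, not taken from Aire V.3:

    import numpy as np

    SR = 44100  # audio sample rate in Hz

    # Hypothetical hourly ozone readings (parts per billion), low overnight
    # and rising with the sun, as described above.
    ozone_ppb = np.array([12, 10, 9, 9, 11, 18, 30, 48, 70, 85, 92, 88], float)

    def sonify(readings, seconds_each=1.0):
        """Treat each reading as a control signal: scale it into the rate of
        a low-frequency oscillator (LFO) that modulates a carrier's pitch."""
        chunks = []
        for value in readings:
            t = np.arange(int(SR * seconds_each)) / SR
            lfo_rate = 0.5 + 7.5 * value / readings.max()  # 0.5-8 Hz
            lfo = np.sin(2 * np.pi * lfo_rate * t)         # the control signal
            freq = 220.0 + 40.0 * lfo                      # modulated pitch
            phase = 2 * np.pi * np.cumsum(freq) / SR       # integrate frequency
            chunks.append(0.3 * np.sin(phase))
        return np.concatenate(chunks)

    audio = sonify(ozone_ppb)  # one float array, ready to write out as a WAV

The data acts as the performer: the pollutant level never produces sound directly, it only sets the speed of the oscillator that plays the instrument.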

ALEPH MOLINARI — And is the impulse always part of the environment, or is it human nature that is being reflected?
INTERSPECIFICS — It’s nature or natural phenomena or phenomena in general. For us, humans are always relevant to our projects because, at the end of the day, what we are looking for is a construct of reality. Our consciousness is what feels all the things that we are perceiving and living. Even the latest trends in quantum physics state that the observer is always modifying reality. It’s a very important subject for us, and we want to explore it from different perspectives.

ALEPH MOLINARI — In your other projects, like Speculative Communications, you turn toward the microscopic by using a bio-tracker that generates sounds and music through organisms. In integrating nonhuman organisms as the creators of the sound, are you exploring a new type of interspecies dynamic?
INTERSPECIFICS — Yes. We believe those organisms are also part of the collective, so we wanted to create a space to put these ideas into action. What happens when we observe these microsystems while, at the same time, they are being observed by an artificial intelligence, learning from each other? It’s like putting into action a system of correlations and multispecies musical compositions because it’s not only us making aesthetic decisions — it’s also the nature of these bacteria. Their morphological formations — and the way they build complex behaviors and architectures — are stimulated by observation, and their responses trigger this compositional system. In some ways, we were trying to create a multilevel collaboration between the organic, the artificial, and the cognitive processes that are at the intersection of these three species.

ALEPH MOLINARI — So, music is not exclusively created by humans — it can be created by any of these organisms?
INTERSPECIFICS — The universe is music. Everything that exists inside this universe has the capacity to produce music, but we need the tools to observe those processes and separate frequencies of interest from noise. These organisms are constantly creating order. They’re challenging the idea of entropy, where the world goes into chaos. They’re trying to fight chaos by creating beautiful patterns as a way of communicating among themselves. In a deeper sense, we want to understand their language and music so that we can make those connections.

ALEPH MOLINARI — By creating music with other inputs and organisms, are you trying to move away from the anthropocentrism that characterizes our present era?
INTERSPECIFICS — Sure. It’s important to make the point that everything around us has an intelligence and a mode of expression. We need to observe this otherness and be more aware of the ways they create, so we can get inspired and move away from the idea that we are the pinnacle of intelligence and creativity.

ALEPH MOLINARI — Does your music have an ecological meaning, too?
INTERSPECIFICS — Of course [laughs]. The disruption of the planet is a very important subject for us. The notion that we are the most intelligent species on the planet is very sad because we actually are not. There are so many problems that we cannot solve. I think it’s not about being against humanity — it’s about trying to give us a break, like “Hey, humanity. It’s okay if you’re not the smartest ones. Take a break, observe others, learn from the world.”

ALEPH MOLINARI — If you could register the sounds or the music of any living organism, what would you choose?
INTERSPECIFICS — Actually, there was research done in Japan in the 1990s where they were able to record the acoustic emissions of Bacillus bacteria in a laboratory. In order to do that, you need super-high-definition microphones and really sophisticated amplifiers. It’s a dream for us to be able to record sound at the nanoscale, so we can actually hear the motion of cellular structures, the flow of protons at the atomic level in unicellular organisms, the acoustic sound of the inner roots of trees. Everything is making sound all the time, but we have no access to this information.

ALEPH MOLINARI — I once heard the roots of trees while on San Pedro. And I heard a pattern of the trees moving with the wind. It was truly incredible because everything had a correlation. Everything is interconnected in nature.
INTERSPECIFICS — Oh, wow.

ALEPH MOLINARI — I want to talk to you about artificial intelligence as well — because you work with novel ways of creating music by registering the behavioral patterns of microorganisms and then programming AI platforms. Are these means of generating new musical structures, or do these structures resemble those of the past?
INTERSPECIFICS — In the case of AI and this specific project, we needed a “super observer” because bacteria are growing 24/7, even when you are sleeping. So, we decided to build an observer that has the ability to track the bacteria in a continuous process of learning, and that’s where machines are better than us — at sustained, continuous processes. There’s also the aspect of understanding the patterns because, in this case, we’re using a system of “supervised learning.” We train the machine with the kind of musical composition that we know because to teach something, it needs to exist first. But we can explore a process of breaking those rules a little bit and take another direction. Our main goal is not to make music that never existed before, but rather to create a sense of nostalgia that is connected to something completely out of our range of understanding. It creates an uncanny feeling. We want to trigger this type of psychoacoustic phenomenon in the spectators, so that they start asking questions.
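
As a toy illustration of that supervised step — with invented morphology features and note labels standing in for what the bio-tracker actually measures — the mapping might look like this in Python:

    import numpy as np

    # Hypothetical training pairs: (colony area, edge roughness) labeled with
    # MIDI notes chosen by us in advance — the "music that already exists."
    features = np.array([[0.2, 0.1], [0.8, 0.3], [0.5, 0.9], [0.9, 0.8]])
    notes = np.array([60, 64, 67, 72])  # C4, E4, G4, C5

    def predict_note(observation):
        """1-nearest-neighbor: sound the note whose training example is the
        closest match to what the observer sees in the culture right now."""
        distances = np.linalg.norm(features - observation, axis=1)
        return notes[np.argmin(distances)]

    # A new frame from the (hypothetical) bio-tracker:
    print(predict_note(np.array([0.75, 0.35])))  # prints 64, i.e., E4

The human choices live in the training pairs; the bacteria supply the observations that decide which of those choices is played, moment to moment.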

ALEPH MOLINARI — That’s really interesting. Your other project, Recurrent Morphing Radio, is an endless music production machine that uses AI. Is this meta creative agent you developed a critique of contemporary capitalist culture?
INTERSPECIFICS — Totally. It’s inspired by one of our heroes, Alvin Lucier, who recently passed away. In 1969, Lucier made an amazing piece called I Am Sitting in a Room, in which he showed that sound and space are interwoven in a deep way. He starts by reading a text into a microphone: “I am sitting in a room different from the one you are in now.” He then plays the recording back into the room through a speaker and re-records it, over and over, so that the room’s resonances gradually turn his voice into harmonies and tones. At some point, he is still reading, but you can only hear the harmonics of his voice all around the space. We thought about this process in a more cultural sense: what happens when a platform like Spotify gives you all these recommendations, but the music it recommends is almost the same as what you already like? It’s a repetition of something you already know.
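
Lucier’s feedback process can be simulated numerically — a minimal sketch, with a made-up damped resonance standing in for the room and noise standing in for the voice; the piece itself, of course, used a real room and tape:

    import numpy as np

    SR = 44100

    # A made-up "room": a damped oscillation that resonates near 440 Hz.
    t = np.arange(2048) / SR
    room_ir = np.exp(-40.0 * t) * np.sin(2 * np.pi * 440.0 * t)

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(SR)  # stand-in for the spoken voice

    # Play the recording back into the room and re-record it, over and over.
    for _ in range(30):
        signal = np.convolve(signal, room_ir)[: len(signal)]
        signal /= np.abs(signal).max()  # keep the recording level constant

    # After enough passes, only the room's resonance survives.
    spectrum = np.abs(np.fft.rfft(signal))
    peak = np.fft.rfftfreq(len(signal), 1 / SR)[np.argmax(spectrum)]
    print(f"dominant frequency after about 30 passes: {peak:.0f} Hz")

Each pass multiplies the spectrum by the room’s response one more time, so whatever the room favors grows exponentially louder relative to everything else.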

ALEPH MOLINARI — A mirror of choices. A filter bubble, as they call it.
INTERSPECIFICS — Exactly! What happens if we train a machine with the filter bubble? Maybe it’s going to produce the same music, but because it’s so similar and has the same harmonic content, it starts breaking apart into noise — into a harmonic, spectral display.

ALEPH MOLINARI — It’s an aberration of the filter bubble, of this mirror of choices that we live with in technology.
INTERSPECIFICS — Exactly. The problem is not choice. The problem is homogenization in cultural practices. If we continue along this line, we’re going to end up with a formless mess of sounds and cultural devices that amounts to a super-postmodern aesthetic. We started this project when Zuri Maria Daiß from CTM Festival was in the process of creating the festival The Disappearance of Music at the Haus der Kulturen der Welt in Berlin. She asked what is happening culturally with all these platforms and algorithms. We took this as a personal question, and our response was a super-fast-forward into the future that became Recurrent Morphing Radio.

ALEPH MOLINARI — So, the platform uses neural networks to create a type of distorted sonic heterotopia?
INTERSPECIFICS — Yes, but the distortion is not in the neural network. The distortion is in the bias that the neural network is fed, and the bias comes from the data of the streaming platforms. We are just extracting this data and seeing the repetitions. After training the system thousands of times, we get a harmonic distortion of the auto-tuned vocals that is super noisy. At the same time, it’s almost like an opera. For me, it’s an amazing exercise because, as a sound artist, you always want to work with noise. It’s an important space. But here, mainstream music is turning in front of our eyes into a degraded version of itself.
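
That degradation has a simple statistical analogue — a sketch in which a one-dimensional distribution stands in for music and repeated refitting stands in for a model retrained on its own output:

    import numpy as np

    rng = np.random.default_rng(7)

    # "Original music": a small sample from a distribution with real variety.
    data = rng.normal(loc=0.0, scale=1.0, size=20)

    # Fit a model to the data, then replace the data with the model's own
    # output — a recommendation loop feeding on itself, generation after
    # generation.
    for generation in range(101):
        mu, sigma = data.mean(), data.std()
        if generation % 20 == 0:
            print(f"generation {generation:3d}: spread = {sigma:.3f}")
        data = rng.normal(mu, sigma, size=20)

    # In expectation, the spread shrinks each round: the recursion erodes
    # the diversity of the original data, leaving a narrower copy of a copy.

The neural network trained on the filter bubble behaves the same way at a vastly larger scale: each pass keeps only what the last pass already emphasized.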

ALEPH MOLINARI — Do you think that music in the future is going to be a morphed version of the archives of the past?
INTERSPECIFICS — I don’t know. For me, the problem, especially in the production process, is that music stopped being an art and became a commodity. In the cultural sense, music went from being a fine art to being mostly part of the entertainment industry. The limitations on what music can be in this industry are economic, not aesthetic or technical. In order for us to take music to the next level, we need to detach it from its current commodity value and bring it back into a state of fine art, where artists are actually encouraged to experiment. I don’t want to hear the same chord playing over and over again. By assuming that the public is not capable of listening to more interesting music, we are limiting ourselves and limiting the public.

ALEPH MOLINARI — Yes, and then the platforms heavily format the type of music that they feature. Tracks that are 12 minutes long, or that work with minimal frequencies, get lost or pushed to the bottom because they mean fewer streams for the platform. At the same time, pop music makes biological sense because the whole organism vibrates with excessive bass. It has a certain narrative and a certain curve that gets people hooked.
INTERSPECIFICS — Yes, but it’s more complex because pop music is built on a series of psychoacoustic phenomena that make the human neural system respond quickly. Same as social networks. We are becoming endorphin junkies. But these stimuli are not actually good for the human brain. They cause the brain to deteriorate and make it lazy about processing deeper information. It’s actually quite worrying. It’s a real social problem we have right now. People are not capable of going slower because they want the fix right here, right now, in this moment. For me, the future of music will mean being conscious of the possibilities of making human beings a better species. It’s going from being a musician to being a shaman. Every time you play a concert, you are creating a ritual, and this ritual will bring society into a different state of mind. It’s going to open doors and change the course of people’s lives because that’s what music has done since its origin.

ALEPH MOLINARI — Do you think that there will be more music created from biometric impulses of humans?
INTERSPECIFICS — Sure. There is a lot being done already in that field. In our case, we are creating music that uses the responses of the human brain, stemming from the EPO [erythropoietin hormone] produced in the frontal lobe of the brain. You can actually activate a hyper-state of consciousness in humans by tracking their neural responses. And you can get to this state using psychedelics, but there are a lot of studies that demonstrate that you can recreate this through sonic means. It breaks apart our ideas of education. I think we’re going to move away from stimulation, in the sense that our bodies as systems can be influenced and developed by using sonic impulses.

ALEPH MOLINARI — Do you think we will develop technology that allows us to interpret nonhumans — like animals, insects, and plants — and communicate back to them?
INTERSPECIFICS — Sure. That work has been in process for a long time. There are scientists who are working in an experimental way on what is called the state of continuous communication among organisms. They are finding that we are communicating all the time on a bioelectrical level. The problem is that humans developed language, which replaced all the other communication systems that we had available. Even Guglielmo Marconi, the pioneer of wireless telegraphy, was involved in these nuanced spaces of telepathic communication. He was inspired to create a mechanical and electronic process that mimics the brain’s capacity to wirelessly transmit information.

ALEPH MOLINARI — Telepathy.
INTERSPECIFICS — Exactly. At some point — and we don’t know when this happened — we started losing all these sensorial and biological abilities, and we put our senses into the machines we started building. I think there should be some sort of equilibrium between our biological mechanisms of communication and perception and our natural ability to develop technology.

ALEPH MOLINARI — What do you think is going to happen when we integrate both, when we have Neuralink [a brain-machine interface] implanted in or connected to our brains? Will it expand or diminish the capacities of humans?
INTERSPECIFICS — There are always two options: the lazy one and the long, developmental one. Neuralink is a lazy solution for something that we need to work on in a deeper way. The pace of development and consumerism demands these kinds of shortcuts. Instead of installing a Neuralink, we just need to activate the part of the brain that is actually capable of doing the same thing on its own. But that’s not as profitable.

END

INTERSPECIFICS AT MUTEK MONTREAL, 2018, PHOTO ELLA RINALDO
DIGITAL IMAGES GENERATED BY AN ARTIFICIAL INTELLIGENCE ENGINE PROGRAMMED BY INTERSPECIFICS
