Purple Magazine
— The Brain Issue #33 S/S 2020



an early purple contributor, joshua dexter is a new york-based writer, curator, art critic, and art historian. he is the author of art is a problem: selected criticism, interviews and curatorial projects (1986-2012)

Shouldn’t it be just as easy to upgrade our brains using either chip implants or lab-grown brain neurons as it currently is to upgrade software, apps, computers, operating systems, mobile devices, food, sneakers, sex, sleep, refrigerators, hotel rooms, dishwashers, toilets, cars, airplane seats, body parts, and just about anything else? Well, it turns out that we are actually accelerating toward that future much quicker than I had imagined. Beyond the realm of the theoretical, real science and applied technology may be in the process of catching up to science-fiction scenarios that have envisioned sophisticated brain-machine interfaces. It appears that two predominant directions are being pursued: the growing of brain neurons in the laboratory that could be used to augment AI brains for robots; and the implantation of chips and electrodes in human brains (a cyborg model) that would augment and amplify human intelligence, and address brain diseases and neurological disorders.

There is long-standing biotech and bioengineering research into potential applied interconnectivity between humans and computers, and the modern notion of cybernetics has been with us since the middle of the 20th century, while deep learning and AI have been developing for a number of decades. Anxieties have emerged regarding the ethics of meddling with the human brain (including the thorny issue of genetic engineering), as well as the ethics of creating worker robots that displace human workers before they can be retrained for other jobs — and before society rethinks what human labor can be. If what we produce as humans can outthink and outsmart us, as AI systems already do in certain domains, then we may be stupider than we think. On the other hand, if AI can be harnessed to find pragmatic solutions to problems that humans have created, such as climate change and corruption, without exacerbating those human-made crises and problems, then perhaps AI will serve the public good.

In recent years, extraordinary progress has been made in the science of growing networks of brain neurons in the lab for use in robots, as well as the development of implants for human brains. The implant research has been focused on brain-computer interfaces to provide deep-brain stimulation therapy for people with Parkinson’s disease, for example. In 2012, Kevin Warwick, a British scientist who has conducted state-of-the-art chip-implant research on his own body, wrote in The Future of Artificial Intelligence and Cybernetics: “We regard the robot simply as a machine. But what if the robot has a biological brain made up of brain cells (neurons), possibly even human neurons? … It’s clear that connecting a human brain, by means of an implant, with a computer network could in the long term open up the distinct advantages of machine intelligence, communication, and sensing abilities to the implanted individual.” Warwick also reflected upon the ethical dilemmas and political repercussions of enhancing human brains, in terms of the commercialization of such technologies and the potential socioeconomic inequities that would result if only the wealthy could afford to upgrade their brains.

On the one hand, our brains will have to expand their cognitive powers to keep up with technological advancement, and on the other, artificial brains are being developed for nonhuman or post-human technological devices that may make certain aspects of the human brain redundant. There is already abundant evidence that AI, using the power of supercomputing, can outmatch the human brain. And let’s not forget that IBM’s Deep Blue supercomputer, a precursor to AI, already defeated chess grandmaster Garry Kasparov in 1997. More recently, in 2015, Google DeepMind’s AlphaGo program defeated a professional human Go player, using deep learning to invent unprecedented moves. In 2016, Elon Musk founded Neuralink, a company focusing on brain-machine interfaces (BMIs) aimed at helping restore sensory and motor function to people with neurological disorders. This would involve the implantation into the brain of thousands of tiny electrode threads, using a neurosurgical robot. There could be many other applications of this technology. Musk claims that in order for humans to keep pace with the evolution of AI — which he deems a threat to humans — we will need the ability to plug our brains/minds into computational machines and networks to maintain some degree of control over AI’s emerging dominance — or, at the very least, a degree of symbiotic interconnectivity. There seems to be a desire to find a way for us to still control the technology through sophisticated human-machine interfaces before the technology somehow controls us. This form of what might be characterized as technological paranoia even informs part of Musk’s reasoning for establishing Neuralink: “Even under a benign AI, we will be left behind. With a high bandwidth brain-machine interface, we will have the option to go along for the ride.”

Science and science fiction have often intersected in stimulating ways. Isaac Asimov, author of I, Robot, was a scientist and professor of biochemistry, and although his stories are informed by hard science, they can be understood as a form of speculative science. Philip K. Dick and William Gibson (a cyberpunk pioneer who coined the term cyberspace) envisioned dystopian futures in which humans were at the mercy of their own technological creations, such as androids and computer networks, which had evolved, so to speak, beyond the organic capacities of the human brain and human species. Film culture has offered many visions of technology fomenting post-human worlds, as in Blade Runner, Terminator, The Matrix, Strange Days, and eXistenZ. In the 1995 movie Johnny Mnemonic, directed by the artist Robert Longo and based on a short story by Gibson, a future is envisioned wherein there is no data security due to widespread hacking. Human couriers such as the protagonist Johnny Mnemonic (played by Keanu Reeves) have data uploaded to their brains, where it is supposedly more secure. The brain is therefore reimagined as a corporeal hard drive. Johnny’s existential dilemma is that he has uploaded so much excess data that he may destroy himself if he can’t download it. This can be understood as an allegory of data overload in the dawning of the Internet era, and the film is prescient in its anticipation of our contemporary crisis of hyper-informational deluge in the attention-distraction economies.

The irony is that we continuously reproduce these conditions as we generate content-data for the Internet and social media platforms, which are harvested and extracted, and subsequently recirculated back to us in the form of advertisements to entice us to engage in consumerism. We are devouring ourselves in this feedback loop. Disconnection from this circuitry might prove fatal since our brains are hardwired into these logics.

As we produce our own data, our behavioral/cognitive, consumerist, and ideological/political predilections are continuously tracked, and anxieties about AI are accelerating. This may be fueled by legitimate fears that AI will enable totalitarian governments to surveil citizens with increasingly repressive precision (e.g., China), that democratic governments using such technology will become less democratic, or that capitalism will be governed by AI beyond our horizon of understanding. Since Big Brother’s AI brain may be more powerful than our human brains, what happens to all of us little brothers? Likewise, there are fears that increasing automation will make redundant large numbers of workers in various sectors. And yet, to invoke Marshall McLuhan, technology is an extension of ourselves. So, it’s a bit of a catch-22 or a vicious circle: as we create increasingly complex technologies, our brain’s cognitive capacities will need to keep pace, and yet the only way to achieve this is to produce more technology that functions as an extension of our brains. What also comes to mind is Fredric Jameson’s 1984 speculation about postmodern hyperspace, wherein he mused that humans might need to grow new organs (bigger brains?) in order to cognitively map and navigate our mutating realities.

Neuroscientist David Eagleman has written: “If we can replicate human consciousness, if it turns out to be possible to download somebody’s brain and reconstruct it on a different substrate, then in theory you could put it inside of a robot. One of the questions is about the ethics of doing that.” Will it be possible to upload your consciousness, your mind, and the contents of your brain into an android host so that you will achieve a certain kind of virtualized immortality? Will you know that you are a consciousness inhabiting a robot? Will it matter to you? And what kind of you will you be? The machine-consciousness version of yourself? Perhaps this will be sufficient if it guarantees a certain kind of perpetual sense of (cognitive) existence. The remake of Westworld features androids — human replicants — used to determine if human consciousness can be sustained in perpetuity, in multiple futures. Ultimately, the question becomes more existential than technological because it is not clear if the androids are developing their own android consciousness in conflict with the minds of their human progenitors. There are various fantasies surrounding the technological feasibility of uploading human consciousnesses into computer programs, where our minds, so to speak, would be stored until some point in the future when they could be downloaded into a cloned or digital brain, thus allowing humans to achieve technological transcendence or virtual immortality.

In Michel Houellebecq’s 2005 novel, The Possibility of an Island, the narrator, Daniel, is the original of a series of cloned Daniels who seem to exist as the same person at different moments in past, present, and future time — provocatively conjecturing that the brain’s consciousness can also be cloned. Houellebecq seems to be warning us about such desires. And what is the point of storing and/or cloning our brains for a hypothetical future if there may no longer be an Earth that is habitable for us? Or are we assuming that we’ll evolve into an enlightened and benign technological post-human species of pure hyper-digital intelligence, as envisioned in Steven Spielberg’s A.I.? Speaking for my brain, I’m not counting on that future.
