Purple Magazine
— The Brain Issue #33

the world brain

essay by MARK ALIZART

independent art curator, writer, and author of a recent book on computer science as a new religion, l’informatique céleste, mark alizart was curator of cultural programs for the centre pompidou (2001-06) and associate director of the palais de tokyo (2006-11) in paris

At the end of the 1930s, H.G. Wells delivered a series of talks predicting the birth of a “World Brain.” Possibly because he was a witness to the invention of microfilm and the inauguration, in 1927, of the first transatlantic telephone service, linking New York to London, the British science-fiction writer thought that advances in communications and information storage would give rise to a worldwide encyclopedia. It would be accessible to all and, he hoped, serve as the basis for a collective intelligence such as would end war among men.

In the 1960s, Arthur C. Clarke, cowriter of the screenplay for Stanley Kubrick’s 2001: A Space Odyssey and witness to the birth of modern computing, did Wells one better, predicting that by 2100 the universal and omniscient encyclopedia would be accessible by personal computer and feed an artificial intelligence that would be consulted on the great issues of worldwide governance.

In those same years, moreover, a French Jesuit, paleontologist, and specialist in the history of evolution, Pierre Teilhard de Chardin, prophesied that our “biosphere” would become a “noosphere” (from noos, the Greek word for “mind” or “spirit”). As the collective intelligence grew, the planet would shed its green coat of forests and don a new coat of light, symbol of humanity’s passage into the light of god himself.

Subsequent events have not belied their words, or at least not entirely. Our planet is covered with underwater cables and radio waves transporting billions of bits of information per second, which allow us to share the same news instantaneously. The Internet, Wikipedia, and even rudimentary forms of artificial intelligence actually exist. From space, the Earth resembles, or even mirrors, what Teilhard de Chardin imagined: at night, the highways — with their streetlamps and headlights — snake their way around the megalopolises of Los Angeles, Tokyo, or Lagos, making the Earth look like a body run through with a network of nerves and veins, and coursing with the phosphorescent blood of information.

But it’s also true that a huge part of what they thought would come our way, if not the biggest part, did not happen. Nobody believes anymore what many hoped after the fall of the Berlin Wall: that all men could at last be part of the same “global village” (Marshall McLuhan), of which “We Are the World” would be the anthem. We have seen not the emergence of an enlightened collective intelligence, but the return of the maddest religious superstition. Instead of the civilization of communication, what has formed is a constellation of algorithmic bubbles in which closed groups speak only among their own members. Nor do we live as believers who have entered into the divine light of truth; we stand instead like deer caught in the dazzling headlights of fake news.

Some believe we need to be a bit more patient. The trans-humanists of Silicon Valley have not lost hope of seeing the “singularity” come to be. Once artificial intelligence, as fed by the world’s knowledge, awakens at last, they say, humanity as we know it — or, more precisely, the animal, primitive, “reptilian” part of humanity that keeps it from attaining the higher stages of spiritual life — will come to a definitive end.

Others, computer scientists among them, are more pessimistic. Indeed, according to one of the fundamental theorems governing logical thought, there exist problems whose solutions are effectively “undecidable.” A mathematician, and this applies equally to a computer trying to solve a hard problem, cannot help but make choices that do not stem from deductive reasoning. He must proceed by induction and verify that the hypotheses he formulates arbitrarily are consistent with the problem at hand. Along the way, he may even find that several contradictory hypotheses are simultaneously tenable. In that case, he is faced with just as many coexisting new formal systems, each coherent in its own right, with no way to decide among them.

Kurt Gödel, the author of what is now known as the “incompleteness theorem,” was so upset to discover that what he pursued as “the truth” was not a block of certainty immune to human caprice that he literally let himself starve to death. He couldn’t bear the existence of undecidable truths that condemned humanity to division and even, he thought, delivered it to the devil (diabolos, the “divider”). True, this does not concern all truths (“2+2=4” is not undecidable; neither is an established historical fact), but it does concern many truths we depend upon for our lives, like the solution to well-known logical problems such as: “If a barber shaves all those, and only those, who do not shave themselves, does the barber shave himself?”, of which an equivalent nearer to us would be, say: “Should we grant liberty to the enemies of liberty?”
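The barber’s predicament can even be checked mechanically. A minimal sketch (my illustration, not part of the essay’s sources): enumerate both possible answers to “does the barber shave himself?” and verify that each one violates the rule.

```python
# Russell's barber paradox by brute force. The rule: the barber shaves
# exactly those who do not shave themselves. Applied to the barber
# himself, the rule demands that he shave himself iff he does not.
def consistent(barber_shaves_himself: bool) -> bool:
    required = not barber_shaves_himself  # what the rule demands of him
    return barber_shaves_himself == required

# Neither answer survives the rule: the list of consistent answers is empty.
print([answer for answer in (True, False) if consistent(answer)])  # -> []
```

Neither True nor False is consistent, so the question has no answer within the rule’s own terms; that self-reference is the shared engine of the barber puzzle and of “liberty for the enemies of liberty.”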

Fortunately, not all of Gödel’s contemporaries went to such extremes. There is another way to interpret the incompleteness theorem that does not lead to the naive scientism of trans-humanism but also does not succumb to the despair of logicism. According to the mathematician Emil Post, the incompleteness theorem meant above all that truth contains an irreducible kernel of creativity, and therefore of life. For Alan Turing, it meant that truth is alive.

Alan Turing is considered the father of theoretical computer science and artificial intelligence. Paradoxically, however, his work is most astonishing precisely where it acknowledges the limits of computers. Turing was the first to suggest that computers might one day be able to think only if they managed to do something other than mechanically apply ready-made procedures to well-posed problems. Otherwise, they would come up against the impasse of incompleteness. In other words, computers could think only if designed to imitate life.

Following Turing’s lead, biologists no longer believe that the brain is a computer. It is now considered, at minimum, a “population of computers” (neurons) in a chaotic environment (saturated with competing electrical impulses and chemical signals) from which a collective truth emerges by a process closely comparable to natural selection. Indeed, some believe that the brain is DNA in another form, which would hardly be surprising, since DNA is already a sort of little brain that processes information, interpreting and encoding life’s molecular grammar.
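The selection process invoked here fits in a few lines. In this toy (my illustration, with arbitrary numbers; not a model of real neurons), a population of candidate guesses converges on a hidden target through selection and mutation alone, with no single agent ever deducing the answer.

```python
# Toy "truth by selection": random guesses compete to approximate a
# hidden target; the closest half survives and reproduces with small
# mutations. The collective converges without any deduction.
import random

random.seed(0)  # deterministic toy run
target = 42     # the hidden "truth" (arbitrary choice)

# Start from 20 random guesses; nobody knows the target.
population = [random.randint(0, 100) for _ in range(20)]

for generation in range(100):
    # Selection: keep the half of the population closest to the target.
    population.sort(key=lambda g: abs(g - target))
    survivors = population[:10]
    # Reproduction: each survivor spawns a slightly mutated copy.
    population = survivors + [g + random.randint(-3, 3) for g in survivors]

best = min(population, key=lambda g: abs(g - target))
print(best)  # the population has drifted to, or very near, the target
```

Because the best guess is always retained, error can only shrink; the “answer” is a property of the population’s history, not of any one member, which is the point of the analogy.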

In like manner, researchers in artificial intelligence, striving to create the brain that Arthur C. Clarke foresaw, long ago gave up trying to write a single master program that would render a computer intelligent as if with the wave of a magic wand. Instead, they design “adversarial networks” that produce thought by putting each other to the test.
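The adversarial principle itself can be sketched with a toy stand-in (my illustration, not an actual neural network): a “forger” tries to match a “detector’s” call while the detector tries to dodge it. Each side best-responds to the other’s history, and their mutual testing pushes both toward the unpredictable 50/50 strategy.

```python
# Matching pennies by fictitious play: two adversaries that each
# best-respond to the other's observed history. Neither can settle on
# a fixed strategy; the contest drives both toward 50/50 play.
rounds = 10000
forger_heads = 0    # times the forger has played heads so far
detector_heads = 0  # times the detector has played heads so far

for t in range(1, rounds + 1):
    # Forger wants to MATCH: play the detector's historical majority.
    f = 1 if detector_heads * 2 >= t - 1 else 0
    # Detector wants to MISMATCH: play against the forger's majority.
    d = 0 if forger_heads * 2 >= t - 1 else 1
    forger_heads += f
    detector_heads += d

# Both frequencies hover near one half: neither side can be predicted.
print(forger_heads / rounds, detector_heads / rounds)
```

Neither player can commit to a fixed answer without being exploited; what stabilizes is the contest itself, which is, loosely, how adversarial networks wring structure out of each other.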

By considering truth as a branching of species and subspecies of truth that sprout by chance over the course of events, we can understand that truth, although distributed in the multiple varieties that perform it singularly at every point of history, is nevertheless unitary. “The” truth is the unicity of the vital impetus that encourages the survival of increasingly adapted, resilient, and fertile truths, just as evolution is the history of a progressive complexification of life’s forms, from the first protozoans to the great apes. It is therefore no longer absurd to imagine that humanity does indeed tend asymptotically toward concord and peace, despite the undecidability of truths, or even thanks to it.

Watching fascism and Stalinism spread their death, Teilhard de Chardin remarked that the “noosphere” could not be the kingdom of god if it were a totalitarianism of the mind; it had to be a place where all could enter freely and through love. With him, we might say that it is not simply because we do not all think the same thing that the World Brain has failed. On the contrary. Like our own brain, the World Brain thinks well, and even improves, only insofar as it can multiply points of view and set them in perspective. The whole universe is a brain.

END

AUREL SCHMIDT, DNA, 2015, COLORED PENCIL ON PAPER, COPYRIGHT AUREL SCHMIDT
