Purple Magazine
— The Future Issue #37 S/S 2022

artificial intelligence: the eclipse of human discretion


artwork by YNGVE HOLEN

American writer and urbanist Adam Greenfield focuses on technology (Artificial Intelligence, blockchain, surveillance, etc.) and its intersection with design and culture.

Taken together, the practical efforts we’ve discussed in this book — the massive undertakings of data collection and analysis, the representation of the world in models of ever-increasing resolution and sophistication, and the development of synthetic discretion — have a distinct directionality to them. As groups of people, each acting for their own reasons, bring these discrete capabilities together and fuse them in instrumental ensembles, we finally and suddenly arrive at the place where we must have known we were headed all along: the edge of the human. We have hauled up at the shores of a general artificial intelligence, competent to take up the world as it is, derive meaning from its play of events, and intervene in its evolution, purposively and independently.

For some, this has been a conscious project. At every step of the way, their efforts have been marked by wishful thinking, sloppy reasoning and needless reductionism. Distressingly often, the researchers involved have displayed a lack of curiosity about any form of intelligence beyond that which they recognized in themselves, and a marked lack of appreciation for the actual depth and variety of human talent. The project to develop artificial intelligence has very often nurtured a special kind of stupidity in some of its most passionate supporters — a particular sort of arrogant ignorance that only afflicts those of high intellect, as if, when the Dunning-Kruger effect appears in the very bright, it strikes with tenfold force. But for all these home truths, it has also made very significant progress toward its goals.

There is some truth to what AI supporters argue: that over time, as research succeeds at mastering some aspects of the challenge of teaching a machine to think, those aspects are then no longer thought of as “true artificial intelligence,” which is progressively redefined as something perpetually out of reach. For some of the more enthusiastic, it must feel like the prospect of recognition for their achievements is forever receding.

As one supposedly impossible goal after another yields to the advance of automation, falling one after another with the flat clack of dominoes, many of us cling to the subconscious assumption, or hope, that there are some creative tasks technical systems will simply never be able to perform. We tend to think of these in terms of some access to the ineffable that is putatively, distinctly and uniquely human, whether that access takes the form of artistic inspiration or high spiritual refinement.

The essence of learning, though, whether human or machinic, is developing the ability to detect, recognize and eventually reproduce patterns. And what poses problems for this line of argument (or hope, whichever it may be) is that many if not all of the greatest works of art — the things we regard as occupying the very pinnacle of human aspiration and achievement — consist of little other than patterns. Richly elaborated and varied upon though they may be, there is nothing magic about them, and nothing in them that ought to prevent their characterization by sufficiently powerful processing engines. The humanist in me recoils at what seems like the brute-force reductionism of statements like this, but beyond some ghostly “inspiration,” it’s hard to distinguish what constitutes style other than habitual arrangements, whether those be palettes, chord progressions, or frequencies of word use and sentence structure. And these are just the sort of features that are ready-made for extraction via algorithm.
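To make that claim concrete: in the case of prose, "frequencies of word use and sentence structure" really are mechanically extractable. A minimal sketch in Python (the feature set here is illustrative only, not any particular stylometric system's):

```python
from collections import Counter
import re

def style_features(text):
    """Extract crude stylometric features: the 'habitual arrangements'
    (word frequencies, sentence length) that partly constitute style."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    freqs = Counter(words)
    return {
        "avg_sentence_length": len(words) / max(1, len(sentences)),
        "top_words": [w for w, _ in freqs.most_common(3)],
    }

sample = ("The sea was calm. The sky was calm too. "
          "Calm, always calm, the long grey evening.")
features = style_features(sample)
print(features["top_words"][0])  # the writer's most habitual word: "calm"
```

Real stylometric and feature-extraction pipelines use far richer features (character n-grams, syntactic trees, learned embeddings), but the principle is the same: style resolves into countable regularities.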

Everyone will have their own favorite examples of an art that seems as if it must transcend reduction. For me, it’s the vocal phrasing of Nina Simone. When sitting in a quiet hour, listening to all the ache and steel of life in her voice, it’s virtually impossible for me to accept — let alone appreciate, or find tolerable — the notion that everything I hear might be flattened to a series of instructions and executed by machine. I feel much the same way, in vastly different registers, about the cool curves of an Oscar Niemeyer structure, about Ruth Asawa’s sculpture, the all-but-anonymous but nevertheless distinctive hand behind the posters of Atelier Populaire or the final words of James Joyce’s “The Dead” — about every work or act of human craft I’ve ever encountered that sent a silent thrill of recognition, glee and rightness running through me.

As I say, these are my examples; you’ll surely have your own, and I’m sure they do similar things to you. That we feel these shivery things in the presence of the works that move us feels like, must be, evidence of inspiration, if not of a soul plugged right into the infinite.

But we know by now that such structures can be detected, modeled, emulated and projected with relative ease. Something as seemingly intuitive as a Jackson Pollock canvas yields to an analysis of painterly density, force, and velocity. Similarly, entirely new Bach compositions can be generated, passages of music Bach himself never thought nor heard, simply from a rigorous parametric analysis of the BWV [Bach Works Catalog]. Nor is it simply the icons of high culture that fall before such techniques. One of the redemptive beauties of the human condition is that just about any domain of endeavor can become an expressive medium in the right hands, and someone working in just about any of them can aspire to the condition of art. Every designer has their go-to moves, every storyteller their signature tropes, and every trial lawyer their preferred patterns of precedent and emphasis. Given only sufficient processing power, though, sufficiently well-trained feature-extraction algorithms, and access to a sufficiently large corpus of example works, abstracting these motifs is not much more than trivial.

A recent project called The Next Rembrandt set out to do just this, and in at least the coarsest sense, it succeeded in its aims. A team of engineers and data modelers sponsored by Microsoft and the Dutch bank ING plumbed the painter’s corpus “to extract the features that make Rembrandt Rembrandt,” deriving from them parameters governing every aspect of his work, from his choice of subject and lighting angle to the precise proportions of the “typical Rembrandt eye or nose or ear.” Having crunched the data, they arrived at their “conclusive subject” — “a Caucasian male with facial hair, between the ages of thirty and forty, wearing black clothes with a white collar and a hat, facing to the right” — and then used this data set projectively, to create a portrait of someone who never existed, in the unique style of a master three and a half centuries in the ground.

You might quail, as I do, at the disrespectful, even obscene act of reanimation implicit in the project’s tag line (“347 years after his death, the next Rembrandt painting is unveiled”). You will very likely cringe, as you should, at the absence of any possibility that the historical Rembrandt Harmenszoon van Rijn might consent to being used in this way. Your soul might die a little death at the bathos and utter banality of the lesson ING evidently derived from their sponsorship of this effort: “Next Rembrandt makes you think about where innovation can take us — what’s next?” But to my eye, anyway, the generated painting does capture something of Rembrandt’s soulfulness, that characteristic sense of seeming to have been captured on the cusp of a moment I associate with his work. If you shuffled this portrait into a stack with authentic Rembrandts, and asked me to come back in a year and pick out the one among them that had been produced de novo via the intercession of a generative algorithm, I’m not at all sure I’d be able to. And, of course, now that the algorithm has been developed, it can be used at will to generate any number of pastiches of equal precision, accuracy and detail, an entire unspeakable postmortal œuvre — not, in other words, the next Rembrandt, but the one after that, and an endless succession of ones to follow.

This feels like a disturbing precedent. But curiously enough, the 340-odd authenticated Rembrandts known to exist present the would-be replicator with a relatively constrained parameter space. Consider by contrast the game of go. Its 19×19 board admits some 2×10¹⁷⁰ legal positions — as commentators seem contractually obligated to note, many, many times more configurations than there are atoms in the universe. Perhaps this unfathomable void offers still greater scope for poetry, and a final preserve for the human?
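The arithmetic behind that stock comparison is easy to verify. Each of the 361 points on the board can be black, white, or empty, giving a raw upper bound of 3³⁶¹ configurations; the exact legal-position count (a result from combinatorial research, not established in this text) is on the order of 2×10¹⁷⁰. Python's arbitrary-precision integers make the check trivial:

```python
ATOMS_IN_UNIVERSE = 10 ** 80       # a common order-of-magnitude estimate

raw_configurations = 3 ** 361      # each point black, white, or empty
digits = len(str(raw_configurations))
print(digits)                      # 173 digits, i.e. about 10**172

# Even the far smaller count of *legal* positions (~2e170) dwarfs the
# atom estimate; in fact it exceeds its square.
print(raw_configurations > ATOMS_IN_UNIVERSE ** 2)  # True
```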

Go is a positional game, a game of perfect information; there is no room in it for the operations of chance. Patterns of domination unfold across its board in unforgiving black and white, as one stone after another is placed on the grid of points, each one claiming territory and radiating influence to the points beyond. The set of ramifying possibilities represented by each successive move is a hypergraph far too deep to be swept by brute-force calculation techniques, like those which IBM’s Deep Blue used to defeat grandmaster and longtime world chess champion Garry Kasparov in 1997, and so for many years it was thought that mastery in go would long remain the province of human intuition.

Up until very recently, this seemed like a safe bet. At the time chess fell to computational analysis, a mediocre human player could still hold off the most advanced go program available, even if that program had first been granted a significant handicap of four or five stones; even Deep Blue didn’t have anything like the processing power necessary to sound the game’s boundless depths. There is, of course, much more to go than simply its degree of permutational complexity. But this was the quality that made it irresistible to artificial intelligence researchers, some of the brightest of whom took it up on a professional level simply so they could get a better sense for its dynamics.

A few of the most dedicated wound up working together at a London-based subsidiary of Google called DeepMind, where they succeeded in developing a program named AlphaGo. AlphaGo isn’t just one thing, but a stack of multiple kinds of neural network and learning algorithms laminated together. Its two primary tools are a “policy network,” trained to predict and select the moves that the most expert human players would make from any given position on the board, and a “value network,” which plays each of the moves identified by the policy network forward to a depth of around thirty turns, and evaluates where Black and White stand in relation to one another at that juncture. These tools are supplemented by a reinforcement-learning module that allows AlphaGo to hive off slightly different versions of itself, set them against one another, and derive the underlying strategic lessons from a long succession of training games played between the two.
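The division of labor between those two networks can be caricatured in a few lines. This is a hypothetical sketch, with random stubs standing in for the deep networks DeepMind actually trained; only the control flow (propose candidates, score the positions they lead to, pick the best) reflects the description above:

```python
import random

def policy(state, legal_moves):
    """Stand-in for the policy network: propose a few candidate moves.
    (Random here; in AlphaGo, a network trained on expert play.)"""
    return random.sample(legal_moves, min(3, len(legal_moves)))

def value(state):
    """Stand-in for the value network: score a position for the player
    to move. (Random here; in AlphaGo, a learned evaluation.)"""
    return random.random()

def choose_move(state, legal_moves, apply_move):
    """Pick the policy candidate whose resulting position scores best."""
    candidates = policy(state, legal_moves)
    return max(candidates, key=lambda move: value(apply_move(state, move)))
```

Self-play then closes the loop: two copies of the system play each other, and the games they generate become training data for better policy and value functions.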

For all its capability, Deep Blue was a machine of relatively conventional architecture. In defeating Kasparov, it relied on a brute-force tree search, a technique in which massive amounts of processing power are dedicated to mapping out every conceivable move accessible from the current state of play. It is, no doubt, far easier to say this in retrospect, but there's something merely mechanical about it. It doesn't feel anything like intelligence, because it isn't anything like intelligence. Deep Blue was a special-purpose engine exquisitely optimized for — and therefore completely useless at anything other than — the rules of chess. By contrast, AlphaGo is a general learning machine, here being applied to the rules of go simply because that is the richest challenge its designers could conceive of, the highest bar they could set for it.
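The brute-force tree search described here is, at heart, the minimax algorithm: enumerate every move, recurse on every reply, and back the scores up the tree. A generic sketch (the toy game in the example is invented purely for illustration):

```python
def minimax(state, depth, maximizing, moves, score):
    """Exhaustive game-tree search: explore every line of play to
    `depth` plies and back up the best achievable score."""
    next_states = moves(state)
    if depth == 0 or not next_states:
        return score(state)
    results = [minimax(s, depth - 1, not maximizing, moves, score)
               for s in next_states]
    return max(results) if maximizing else min(results)

# Toy game: the "state" is a number, each move adds 1 or 3, the
# maximizer wants it high, the minimizer low; the score is the number.
best = minimax(0, 2, True, lambda s: [s + 1, s + 3], lambda s: s)
print(best)  # 4: maximizer plays +3, minimizer then plays +1
```

Deep Blue's edge lay in doing exactly this, with aggressive pruning and special-purpose hardware, over hundreds of millions of chess positions per second.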

In March 2016, in a hotel ballroom in Seoul, DeepMind set its AlphaGo against Lee Sedol, a player of 9-dan — the highest rank. Lee has been playing go professionally since the age of 12, and is regarded among cognoscenti as one of the game’s all-time greatest players. His mastery is of a particularly counterintuitive sort: he is fond of gambits that would surely entrain disaster in the hands of any other player, including one called the “broken ladder” that is literally taught to beginners as the very definition of a situation to avoid. And from these vulnerable positions Lee all but invariably prevails. A book analyzing his games against Chinese “master of masters” Gu Li is simply titled Relentless.

In Seoul, Lee fell swiftly, losing to AlphaGo by four games to one.

Here is DeepMind lead developer David Silver, recounting the advantages AlphaGo has over Lee, or any other human player: “Humans have weaknesses. They get tired when they play a very long match; they can play mistakes. They are not able to make the precise, tree-based computation that a computer can actually perform. And perhaps even more importantly, humans have a limitation in terms of the actual number of go games that they’re able to process in a lifetime. A human can perhaps play a thousand games a year; AlphaGo can play through millions of games every single day.” Understand that here Silver is selling AlphaGo considerably short. A great deal of what he describes — that it doesn’t tire, that it can delve a deep tree, that it can review and learn from a very large number of prior games — is simply brute force. That may well have been how Deep Blue beat Kasparov. It is not how AlphaGo defeated Lee Sedol.

For many, I suspect, The Next Rembrandt will feel like a more ominous development than AlphaGo. The profound sense of recognition we experience in the presence of a Rembrandt is somehow more accessible than anything that might appear in the austere and highly abstract territorial maneuvering of go. But there was something almost numinous about AlphaGo’s play, an uncanny quality that caused at least one expert observer of its games against Lee to feel “physically unwell.”

It is true that human beings invented go, and elaborated its rules and traditions over some 2,500 years. So perhaps we should consider that the true achievement isn’t the ability to play within the universe bounded by its ruleset, however exceptionally well, but imagining something that resonant, that satisfying and that beautiful in the first place. There is, no doubt, something to this — that we have nothing to fear from the rise of artificial intelligence until and unless it should begin to design games we find as captivating as go. But remember that the stack of neural networks and modules called AlphaGo was designed for the general acquisition of abstract knowledge — and that even as you read these words, it is still learning, still improving, still getting stronger.

Whether most of us quite realize it or not, we already live in a time in which technical systems have learned at least some skills that have always been understood as indices of the deepest degree of spiritual attainment. The questions this raises have rarely been more pointed than in the case of a Yaskawa industrial robot, trained in 2015 to perform precision feats with a Japanese fighting sword as part of a promotional campaign called the Bushido Project.

To accomplish this act of training, master swordsman and Guinness world record-holder Isao Machii was garbed in a full-body motion-capture suit, and recorded in high resolution as he performed the basic moves of his chosen art. (The narration of the promotional video Yaskawa released is careful to refer to this art as iaijutsu, the technical craft of swordfighting, as opposed to iaido, the Way of the Sword; as we’ll see, the distinction will become important.) This abstraction of lived, bodily human knowledge was transferred to the control unit of a Yaskawa Motoman MH24, a high-speed six-axis manipulator generally deployed in assembly, packaging, and material-handling applications.

Rather astonishingly, Yaskawa chose to refer to this effort as the “Bushido Project.” They would be perfectly aware that — as opposed to a more technical description of the swordfighting skills involved — the word bushido has the most provocative resonances. Japanese cultural activities with names ending in –do aren’t positioned as mere pastimes, but as profound spiritual investigations into a single subject understood as life in microcosm.

Bushido, understood properly, is nothing less than the Way of the Warrior. Its virtues are those of duty, of reciprocal obligation, of self-control verging on abnegation of the self, and of being prepared at any and every moment to throw one’s life away to protect that of one’s master and house — all those qualities extolled at length in the Hagakure, the classic manual of samurai discipline. As supposedly entwined with the equally ineffable Japanese national spirit, yamato-damashii, bushido is a modern invention, with obvious appeal to the authoritarian state that successfully invoked it toward a variety of domestic and external ends between the 1920s and the end of the Second World War. However the concept may have been abused for political purposes, though, as constructed bushido is unquestionably something that resides in the human heart, or does not.

This matters when we describe a machine, however casually, as possessing this spirit.

At the time his feats of swordsmanship were captured by digital apparatus, Isao Machii had trained at the advanced level for some twenty-two years. Performed without ego or attachment, each stroke of his sword will be complete, perfect, whole and in harmony with the inmost nature of things. This necessarily raises some fairly profound questions when that same gesture is digitized in high resolution, and rendered as an instruction set that any articulated industrial machine with the necessary motive power and degrees of freedom can reproduce.

Once the necessary code is uploaded, any robot can perform the thousand cuts as well as Machii. Better, even: tirelessly, unweakeningly, ceaselessly, with uptime measured in strings of nines. It needn’t be a Japanese robot, serve Japanese masters, nor in any way partake of yamato spirit. It will nevertheless be capable of drawing a sword through whatever material it encounters until the blade itself is worn away, or becomes useless through ablation.

Of more concern is the notion that this digitized instruction set is a package. It can travel over any network, reside in and activate any processing system set up to parse it. We may joke, uneasily, about the lack of foresight implicit in teaching a global mesh of adaptive machines the highly lethal skills of a master swordsman. But it also points toward a time when just about any human skill can be mined for its implicit rules and redefined as an exercise in pattern recognition and reproduction, even those seemingly most dependent on soulful improvisation.

One final thought. We’re already past having to reckon with what happens when machines replicate the signature moves of human mastery, whether the strokes of Rembrandt’s brush or those of Machii’s sword. What we now confront is the possibility of machines transcending our definitions of mastery, pushing outward into an enormously expanded envelope of performance. And in many ways even this is already happening, as algorithmic systems, set free to optimize within whatever set of parameters they are given, do things in ways no human being would ever think to.

Consider the curiously placeless quality of drone footage, its unnerving smoothness and grace perhaps deriving from the fact that often there is no specific human intention behind the capture of a particular image or sequence of images. The target is automatically perceived, acquired, reframed, and captured, all of it accomplished with a steadiness of hand so far beyond the human norm that it is instantly recognizable.

Consider what go master Fan Hui said about move 37 of the second game in AlphaGo’s defeat of Lee Sedol: “It’s not a human move. I’ve never seen a human play this move. So beautiful.” The AI player, unbound by the structural limitations, the conventions of taste or the inherent prejudices of human play, explores fundamentally different pathways — and again, there’s an aesthetic component to the sheer otherness of its thought.

Consider the intriguing image that was not long ago circulated on Twitter by the entrepreneur Jo Liss: a picture of a load-bearing bracket, before and after a computational process of “topological optimization” has been applied to its design. The difference between before and after is stark. The preoptimized bracket looks unexceptional. It sports holes here, here and here allowing it to be bolted to other components. Clearly capable of performing to spec independent of orientation, it’s recognizably designed for standardization, ease of stocking and use by unskilled labor. In short, it’s the kind of thoroughly generic, entirely fungible part you might find five hundred to a bin down at the neighborhood hardware store.

The “after” is Lovecraftian.

It is, no doubt, effective — almost by definition, more fit for its purpose than anything we’d come up with on our own. But it is decidedly, even aggressively strange. And it stands as a reminder that should autonomous systems develop their own logics of valuation and justification, they may not necessarily be so easy for human beings, or the infrastructures we’ve designed with our needs and limitations in mind, to mesh with, plug into or make sense of.

As it starts to condition the texture of everyday experience, this push past our own standards of beauty, resonance, or meaning will do strange things to us, summoning up registers of feeling we’ll find hard to describe with any accuracy. I have little doubt that we’ll feel occasional surges of shocked delight at the newness, and yet essential correctness, of something forged by an intelligence of the deepest alterity — an image, a spatial composition, a passage of music, some artform or expressive medium we don’t yet have the words for — and these may be among the precious few sources of joy and wonder in a rapidly ruining world.

I have equally little doubt that we’ll more often find ourselves numbed, worn down by the constant onslaught of novelty when we have more pressing things to worry about. We’ll feel pride that these intelligences have our DNA in them, however deeply buried in the mix it may be, and sorrow that they’ve so far outstripped the reach of our talents. It’s surely banal to describe the coming decades as a time of great beauty and greater sadness, when all of human history might be described that way with just as much accuracy. And yet that feels like the most honest and useful way I have of characterizing the epoch I believe we’ve already entered, once it’s had time to emerge in its fullness.

By virtually any meaningful standard, we would appear to be a long way from having to worry about any of this. Systems based on current-generation learning algorithms routinely stumble when presented with situations that are even slightly different from the ones their training has prepared them for, and fold completely before the kind of everyday ambiguities of interpretation that adults generally breeze through without noticing.

And this is true on many fronts. A test for machinic intelligence called the Winograd Schema, for example, asks candidate systems to resolve the problems of pronoun disambiguation that crop up constantly in everyday speech. Sentences of this type (“I plugged my phone into the wall because it needed to be recharged”) yield to common sense more or less immediately, but still tax the competence of the most advanced natural-language processing systems. Similarly, for all the swagger of their parent company, Uber’s nominally autonomous vehicles seem unable to cope with even so simple an element of the urban environment as a bike lane, swerving in front of cyclists on multiple occasions during the few days they were permitted to operate in San Francisco. In the light of results like this, fears that algorithmic systems might take over much of anything at all can easily seem wildly overblown.

As DeepMind taught us, however — with their AlphaGo significantly improving its play overnight, between the games of its series with Lee Sedol — algorithmic systems are able to learn quickly. The lesson of Tesla’s Autopilot, where data from each individual car is continuously used to refine the performance of the entire fleet, is that algorithmic systems are increasingly able to learn from one another. And unlike us human beings, who find it increasingly difficult to take in new knowledge as we grow older, any algorithm able to learn at all can keep doing so indefinitely, folding hundreds or thousands of human days of study into each 24-hour period, and doing so for as long as its trainers allow.

For all the flaws it’s so easy to diagnose right now, the available evidence suggests that autonomous algorithmic systems will acquire an effectively human level of cognitive capability in the relatively near future, far more quickly than the more skeptical among us might imagine. More to the point, it is not at all clear what event or process (short of the complete collapse of complex civilization on Earth) might permanently prevent them from doing so.

I don’t know what it will feel like to be human in that posthuman moment. I don’t think any of us truly do. Any advent of an autonomous intelligence greater than our own can only be something like a divide-by-zero operation performed on all our ways of weighing the world, introducing a factor of infinity into a calculus that isn’t capable of containing it. I understand full well why those who believe, however foolishly, that their advantage will be at a maximum under such circumstances, and their dominance made unassailable, are in such a hurry to get there. What I can’t understand is why anyone else is.


excerpt from radical technologies: the design of everyday life, published by verso, london, 2017

all artwork courtesy of the artist/galerie neu, berlin/modern art, london/neue alte brücke, frankfurt, photos stefan korte

