Purple Magazine
— The Revolutions Issue #40 F/W 2023

evgeny morozov

interview

by ALEPH MOLINARI

artworks by HARMONY KORINE

 

The political theorist and writer uncovers the dangers of AI: the insidious neoliberal ideology fueling Silicon Valley’s delusional promise to solve ALL of humanity’s problems while perpetuating voracious and destructive capitalism.

 

ALEPH MOLINARI — This issue of Purple is about how we can redirect the revolutions that are happening now — in technology, gender, psychedelics — toward a better world. Artificial intelligence is one of these revolutions that has exploded over the past year, and everybody’s using it in a very casual way. Let’s start with the basics. How would you define AI?

EVGENY MOROZOV — We inherited this clunky term from the Cold War. The concept emerged in the mid-1950s in the military milieu, in the context of automating cognitive tasks related to warfare (e.g., identifying ships in satellite imagery). As with many terms once-fashionable during the Cold War — remember the “domino theory”? — the ambiguity of “artificial intelligence” as a concept enabled collaboration between the military, the academy, and the private sector. But while other parts of the Cold War vocabulary lost their relevance after 1989, “AI” found another lease of life in Silicon Valley. For me, the prevalence of “AI” as a term that structures so many of our public debates is a testament to how much conceptual ground we have lost to the military and the private sector: we seem unable to invent a better, more current language liberated from the imagery of the Cold War.

ALEPH MOLINARI — Do you think that AI reflects the current reality of technology?

EVGENY MOROZOV — Today’s systems are neither artificial nor intelligent. They are not “artificial” because so much human labor and training goes into them. And they’re not “intelligent,” either, at least if we use human intelligence as the benchmark. Here I follow psychoanalysts like the Chilean Ignacio Matte Blanco, with their insistence on the centrality of emotions to thought. The AI systems that are being built today ignore this dimension completely. Their intelligence is about rules — and not about deciding when and how to suspend or change them. Why glorify what these machines do by calling it “intelligence”? And if it’s neither artificial nor intelligent, why call it AI?

ALEPH MOLINARI — Then what would you call these systems?

EVGENY MOROZOV — These are mostly predictive systems. Sure, one can rebrand them as “generative AI,” but, ultimately, these systems excel at predicting patterns based on statistical analysis. So, large language models [LLMs], generative systems for art and music, etc. — these are just predictive machines. No one is denying their utility, even if it’s certainly more limited than what Silicon Valley boasts. Once we account for those limited uses, we could — and probably should — have a conversation about what kind of political economy, geopolitics, and cultural policy can give us better statistical predictive machines. I’m not at all against it. But it’s not the momentous revolution that Silicon Valley makes it out to be.

ALEPH MOLINARI — Going back to the nature of intelligence, the intelligence of the machine doesn’t have emotions or a reaction to the real, to context, present time…

EVGENY MOROZOV — Most people working on AI don’t even get that far, believing that emotions don’t matter where intelligence is concerned; they see intelligence as being about rules, heuristics, classification mechanisms. But emotions shape what and how we classify, consciously and unconsciously; often, they help suspend the rigid classificatory schemas that have outlived their usefulness — for us personally and for society as a whole. Whether something is a utilitarian instrument or part of an art exhibit depends on what other objects or concepts we lump it with. The art world is full of these kinds of resignifications: what used to be a urinal (Fountain, 1917, signed by R. Mutt) can become an art object by Marcel Duchamp. And what used to be an art object can become a urinal again, right? Our everyday intelligence is like this: it’s context-dependent, with emotions — the stuff of history, trauma, memory — affecting the whole process. One can consume the whole of Wikipedia — as today’s LLMs have done — and still fail to understand the meaning of the Vietnam War or the fall of the Berlin Wall; that meaning is positional and contextual. Dumping more data into an even bigger server farm doesn’t magically teach computer systems about meaning.

ALEPH MOLINARI — I like what you say about how this reclassification of knowledge is part of human intelligence. So, what about AI’s potential in the creation of art? Will artists become prompt artists?

EVGENY MOROZOV — For those who work with creative fiction, nonfiction, screenplays, or podcasts, tools like ChatGPT are of limited help. They can help you craft better sentences, but everything else remains the product of your training, feelings, emotions, reading, interpretation. Predictive systems don’t really help much with that. They are novel and interesting platforms, but most of what is produced with AI in the driving seat is bad music, bad poetry, and bad art. I don’t buy into the idea that artists will become prompt engineers, because this reduces art to the mastery of technique. But art is just as much about imagination, hermeneutics, forgetting, and remembering; if you don’t have these, technique is useless.

ALEPH MOLINARI — Many of the people who are developing these technologies seem concerned about the evolution of these systems and are asking for a moratorium. Where are the fears about AI coming from?

EVGENY MOROZOV — It’s a combination of factors. Some of these people, with reputations to defend, simply want to say that they have warned us. And they frame their predictions so vaguely that if anything goes badly with the LLMs and ChatGPTs of the world, they will be able to say — five years down the road — that it’s not their fault. They don’t want to be like the climate scientists working for the energy companies: they want to proactively go on the record and say that they’ve rung the alarm bells.

ALEPH MOLINARI — So, they’re seeking a preemptive pardon.

EVGENY MOROZOV — They’re beholden to an ideology that refuses to acknowledge capitalism and market competition as the ground zero of today’s AI. This capitalist elephant in the room is hard to ignore, so they have to resort to apocalyptic talk. That’s how our conversation turns into a sci-fi salon, where we talk about Terminator’s Skynet and the killer robots, debating how we can make these systems safer. What we don’t talk about is who owns these systems, who trained them, with what data, etc. These are big no-nos. Capitalism will kill us faster than AI.

ALEPH MOLINARI — So, this sci-fi narrative is meant to distract us from the actual concerns and politics of the technology and its regulation?

EVGENY MOROZOV — I’m not saying that these tech companies are doing it consciously. That’s why I use the word “ideology.” They have internalized the view that there is no alternative for solving humanity’s problems other than the technological solutions that Silicon Valley is promoting. And since there is no alternative, they can’t even conceive of something other than the market furnishing the “AI goods.” And so, they end up talking about all this science fiction, which heavily overestimates progress in these tools. Such a focus also distracts us from grappling with the real damage caused by AI systems, with these firms being the only ones building these tools and selling them to everybody — governments, armies, public services, hospitals, etc. And doing it based on a business model that has not yet even been proven successful: even companies like OpenAI are losing money.

ALEPH MOLINARI — We know they’re going to be used for surveillance, for military purposes, and an ultra-capitalist agenda. What are the actual dangers of developing these AI systems?

EVGENY MOROZOV — Well, I don’t see real dangers in developing these systems. If trained with proper data and in a sustainable way, and inserted into a system where people are not actually exploited and forced to compete, they can be wonderful tools. The real danger — the one that worries me the most — is that firms like OpenAI, Google, and Microsoft are going to consolidate their power in the digital economy and become even bigger. So, the question is not really about the dangers inherent in the systems; these can be contained. The dangers come from a political model that allows this industry to sell their (often dubious) services to gullible public institutions across the world. This — more than sci-fi robots — is what might radically transform the systems that we’ve taken for granted — from education to health care to transportation.

ALEPH MOLINARI — So, is the ultimate fear of AI that it becomes the most voracious manifestation of capitalism?

EVGENY MOROZOV — I think there is a certain ideological current within capitalism — let’s call it “neoliberalism” — that is coextensive with the dominant ideology of Silicon Valley, which I’ve described as “solutionism.” So, there’s a marriage between the neoliberal belief that there is no alternative and that everything should be privatized, outsourced, and oriented toward efficiency, and the ideology that Silicon Valley is capable of solving problems more effectively than everyone else — that if we only leave it alone and let its start-ups do whatever they want, they would solve all of the problems of humanity, from climate change to obesity to hunger. The current AI enthusiasm will make the link between the two even stronger.

ALEPH MOLINARI — There’s a view of AI as a savior of humanity, a technological panacea, to aid us in reaching solutions for global warming or medical issues. On the other hand, there is an opposite vision invoking the science-fiction dangers. But it seems to me that AI is a form of techno-utopianism. Are there any redeeming aspects of this technology?

EVGENY MOROZOV — There are some redeeming aspects, yes, but they don’t herald a world where Sam Altman, the cofounder of OpenAI, is our point person for curing cancer and preventing climate change. I think that there is plenty of good stuff we can do once we understand the limits of these tools, especially once we imagine alternative models for the public provision of those technologies. We have public institutions with huge cultural reservoirs. We have libraries in France, Germany, and Italy that possess millions of books and that have curators, scholars, and professional staff curating these archives. Why assume that a start-up in Palo Alto would build a better cultural model than these institutions? Why delegate it to Silicon Valley, with their drive toward optimizing profits, staying competitive, raising money? Why take all these risks?

ALEPH MOLINARI — Would you say that these technologies are detrimental to the development of human intelligence and the capacity for complex thought?

EVGENY MOROZOV — For me, intelligence is not just a function of what happens in individual heads, which, sadly, is the assumption shared by most people in the AI industry. Intelligence is also a function of what happens in our institutions, how we create culture, how we enable people to collaborate, how we innovate, not just in terms of start-ups but also in terms of ideas and practices. There’s so much we can do — in terms of new laws and policies — to make culture and education more accessible. In accepting that today’s commercially driven AI is here to stay and that we just need to tame it, we are going for the low-hanging fruit being dangled in front of us by investors.

ALEPH MOLINARI — Who are the main actors participating in this AI race? We know of Google and Microsoft with OpenAI, and I’m sure governments are participating as well, but where is the money coming from?

EVGENY MOROZOV — There are several lesser-known AI start-ups like Anthropic, Inflection AI, and Stability AI that cover different parts of the AI landscape. A good chunk of the money that’s chasing AI used to be in crypto. After the financial crisis of 2008, a lot of funds — sovereign wealth funds and pension funds — started parking their money in venture capital funds and, through them, in the tech industry at large. Today, some of that money is flowing into AI, and it’s happening at breakneck speed. We hear crazy stories of start-ups (like the Paris-based Mistral AI) that can raise a hundred million euros in just a few weeks. Yet, just like with crypto, often it’s real money chasing fake projects with fake visions.

ALEPH MOLINARI — What about job substitution with the proliferation of these systems? Will it be like other technologies in the past that eliminated countless jobs but then created entirely new industries?

EVGENY MOROZOV — The thing to keep in mind is that today’s digital economy is so surreal, fickle, and unpredictable that even big platforms like OpenAI might last just a few years. It’s very hard to predict what the overall effect will be; sure, jobs might disappear — but that doesn’t mean the robots that replace them will actually be functional. Remember, even giants like OpenAI survive at the mercy of their investors — their business models are mostly untested. I vividly remember much talk about a digital world where SoftBank’s Vision Fund — a huge pot of money from Saudi Arabia, Japan, and other places — was supposed to transform everything… And yet, just a few years later, it’s a shadow of its former self, the start-ups in its ecosystem dying (if not already dead). This digital economy is a very fragile thing.

ALEPH MOLINARI — So, what’s the endgame? Is it to create a digital companion or system that tells you what to buy, where to go, how to behave?

EVGENY MOROZOV — I don’t think that there’s an endgame that’s different from how capitalism has worked for centuries. It’s just a bunch of companies trying to build profitable services and competing with each other. We have a shallow understanding of the digital economy over the past few decades because we tend to think that it revolves around consumers and advertising. The reality is that companies like Google, Amazon, and Microsoft increasingly make money by selling services to other companies and governments. That’s how cloud computing and cybersecurity services have been monetized in the past two decades. And that’s how it will be with them trying to monetize AI services. Me, I’m still waiting for Uber’s self-driving cars… The only person who has unquestionably got rich while we all wait is their ousted ex-CEO, now investing in dark kitchens and other questionable projects. We can only guess where the likes of Sam Altman will end up in a couple of years…

END

 

 

Harmony Korine, Twitchy and Friends 2, 2018, oil on canvas, Copyright Harmony Korine, Courtesy of Gagosian. Photo Rob McKeever

Harmony Korine, Twitchy Coupe, 2018, oil on canvas, Copyright Harmony Korine, Courtesy of Gagosian. Photo Rob McKeever
