Resisting AI

An Anti-fascist Approach to Artificial Intelligence

“AI presents a technological shift in the framework of society that will amplify austerity while enabling authoritarian politics,” says Dan McQuillan on the first page of this book. He goes on to make the case that AI’s main moves are to segregate and divide, to make predictions of the future based on an extension of the past—that is, to preserve, and to increase, a status quo of inequality. It’s not so much that AI is fascist, he explains, as that it is highly suitable to fascist aims, that it is a useful tool for those who desire to bring fascism about. But there’s hope: we can learn to be attuned to when and how AI is recruited into authoritarian politics, and we can counter that recruitment through commoning, mutual aid, and solidarity. One of the best things I’ve read about how to understand—and respond to—the ways in which technology is changing.

Related writing

Smoke screen

Interrogating the story behind “artificial intelligence.”

Reading notes

An engine of precaritization

It’s become a cliché to say we live in uncertainty, but lately I’ve wondered if the better word is really precarity. That is, it’s not merely that a lot of things that seemed stable and predictable no longer are, but that a lot of things that seemed comforting and supportive have vanished. Layoffs; the end of emergency pandemic protections; ice storms and atmospheric rivers and drought; the slow-motion collapse of our healthcare industry—I could go on, but it seems everywhere you look there’s something else pushing some risk calculation higher.

Including, of course, AI:

Applied AI is not so much a means of prediction as an engine of precaritization.

McQuillan, Resisting AI, page 52

What this means, I think, is that AI as we know it today is designed to shift risks from systems to individuals, from the collective to the isolated. McQuillan notes this move in the various platforms (e.g., Uber moves risk from the company to its drivers; Google shifts the burden of knowing what’s real from itself to its users), but once you see it there, it becomes impossible not to notice it everywhere: in healthcare systems that determine which people deserve care, in flood maps that determine which neighborhoods get to rebuild, in generated lists of which jobs will be eliminated. McQuillan continues:

The scale of AI operations behind these precaritizing platforms is truly spectacular, with Uber’s routing engine dealing with 500,000 requests and hundreds of thousands of GPS points per second. Watching videos about these feats of engineering, it’s impossible not to be struck by the irony that such magnificent achievements are directed largely at the immiseration of ordinary workers. A further irony is that the aim of much of the data capture and algorithmic optimization is to further precaritize their conditions, hence the use of Uber’s data to develop self-driving cars, and Amazon’s use of data to increase the robotization of its warehouses: thanks to the affordances of AI, the data treadmill not only maximizes extraction of value from each worker but uses the same activity to threaten their replacement.

McQuillan, Resisting AI, page 54

As much as this is about the technology itself, I think it also reveals something instructive about the people making the technology: so much of the conversation around professional burnout anchors on overwork—on the sheer quantity of work and relative scarcity of rest. But I’m convinced that a lack of faith in the work itself—or, worse, the recognition that the work is doing harm—is at least as much a contributing factor. Likely even more so. Tech companies haven’t spent decades building up grand stories of how they are changing the world for the better for no reason. And I suspect the rapid evaporation of any credibility those tales once held will be a bigger disruption than any of them have planned for. At what point do you realize the engine you are building is the one that’s preparing to run you over?