Issue 1/2024 - ArtGPT


Intelligence as Concentration of Power

Interview with Meredith Whittaker, President of Signal, about AI and Big Tech

Yannick Fritz


Meredith Whittaker is president of the non-profit Signal Foundation, which oversees the eponymous messaging app. She assumed this position in 2022; before that, in 2021, she advised Lina Khan, chair of the US Federal Trade Commission, on corporate concentration of power and the harms of AI. With her assessment of the current problems of AI and her work at Signal, she takes a counter-position to many prominent tech representatives who at times warn of the future dangers of AI with doomsday scenarios. Whittaker was also a research professor at NYU and co-founder and faculty director of NYU’s AI Now Institute. In 2019, after 13 years at the company, she left Google, having been one of the co-organizers of the “Google Walkout”.

Yannick Fritz: Before working at Google, you studied rhetoric and literature as an undergraduate at Berkeley, so I brought with me a quote from science fiction author Ted Chiang. Commenting on technology and the prevalent fears of “superintelligent self-replicating AI”, Chiang said: “I tend to think that most fears about AI are best understood as fears about capitalism […] As fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.”[1] What do you make of this and of the fears and risks associated with AI today?

Meredith Whittaker: Chiang’s comments make a lot of sense to me. I think it’s an anxiety about how technologies are deployed to control us. I am interested in whose problems technology has solved traditionally and which questions are answered with technology. And ultimately, who gets to make the decisions about what technological approaches to these problems get developed, designed, maintained, built, deployed, and used. After all, it comes back to relationships of power.

Fritz: OpenAI’s CEO Sam Altman is one of many tech executives who pose as scared of AI’s capabilities, while it was recently revealed that OpenAI lobbied for certain formulations in the EU AI Act currently under discussion, effectively watering down possible regulations. ChatGPT, for instance, is now not considered “high-risk”.[2] The resulting milder requirements would include a restriction on the generation of illegal content, something that OpenAI already attends to through outsourcing. How do such practices clash with imaginaries of “risky, autonomous AI”, and what kind of labor dynamics are we dealing with here?

Whittaker: Whether or not Sam or any individual AI scientist truly fears these future superintelligent machines and the associated existential risk is not something I can comment on. I think religion is really powerful in people’s lives. That aside, there’s a clever way companies instrumentalize these fears. These ghost stories are effectively advertisements for a technology that only a handful of companies have. I don’t think there’s a military in the world that doesn’t want access to a hyper-destructive superintelligent system. However, when these companies talk to regulators, the existing systems are treated as separate from such speculative futures. At the same time, existing technologies are framed as part of an unassailable march of progress towards these hyper-intelligent systems; this framing belongs to a trope we’ve heard in Silicon Valley for years now: “regulation stifles innovation.” In effect, they can have their cake and eat it too.
But I don’t think we can discuss OpenAI without discussing its parent company Microsoft. There’s a “bigger is better” paradigm in AI. AI technologies are effectively competing on scale. So the bigger the “compute”, the bigger the data, the bigger the model, the more performant we assume it to be. That means that organizations like OpenAI, or startups, can only survive if they are tethered to the infrastructural resources of a Microsoft, Google, Amazon, or Nvidia. We’re talking about hundreds of millions of dollars for a training run, and millions and millions of dollars to pay the labor required to coax and cajole and inform these systems so that these generative AI models behave in a way that is acceptable for polite business, for liberal society. Because of this, the imperative to make a return on investment is very pressing. That is also why we need to pay attention to where our anxiety should lie in this picture. Microsoft is licensing these systems via hidden business-to-business Azure cloud contracts, whether with startups or with government organizations. We have very little insight into where these systems are being used or how they’re affecting us in a real material way. And there are a number of harms that can occur from the ways that those in power are likely to deploy them on subject populations, whether those are workers or students or citizens.

Fritz: If we ponder the effects of AI on workers, there are those subjected to algorithmically determined work schedules, but also all the click workers training these models in the first place, who often have to work through trauma-inducing data. In this sense, the notion of risk becomes more ambiguous than it’s oftentimes made out to be.

Whittaker: There’s a recent estimate that 100 million people are or have been employed in some capacity to calibrate these systems, for instance by labelling data, which is incredibly traumatic work. They are basically employed as a human buffer zone, absorbing the racist, misogynist, ugly, violent language that these systems should not reproduce. In this sense, the story of automation is a classic story of labor arbitrage, in which people are hired to tend to or monitor these so-called automated systems in locations where they don’t have to be paid a lot, while the minority world trades on the story of automation to attract venture capital investments. The very distracting narratives surrounding AI erase the labor that happens at every level of these systems, at every step of the supply chain. Here I think we need to point to the Writers Guild of America and other labor struggles that are actually playing out around the power asymmetry between those who decide to employ these systems as mechanisms for labor control and degradation, and those who are demanding a say in how these systems are used.
Moreover, the view of human beings as fungible, numerable entities that can then be controlled remotely is fundamentally embedded in the blueprints for computation. Originally, these labor control mechanisms were developed in plantation slavery, and I think we’ve extrapolated from that model. Ultimately, we are looking at systems that are (I don’t want to use an adjective that makes it seem like I’m complimenting them) incredibly good at labor control and social control.

Fritz: You described the struggle of the Writers Guild of America, which is fighting for stricter regulations on the use of AI in its field, as a frontline in the struggle for meaningful AI regulation.[3] How do struggles like this link to the historicization of computation you just began to sketch?

Whittaker: My argument is that computation historically was a way of importing technologies for disciplining and managing workers, technologies that were honed and constructed on the plantation. Today, we’re using their successors as templates for mechanically organizing work and stripping away agency from workers.[4] What led up to classical computing were Charles Babbage’s analytical engine and its precursor, the difference engine. Babbage is widely known for his computational theories, which are actually deeply implicated in and inseparable from his writing on economics. The “engines” were originally developed during the decline of the British Empire, when the need arose to accurately calculate the logarithmic tables whose errors were being blamed for losses of the British fleet.

In this context of empire, Babbage himself argued from the perspective of the capitalist: the less skill you can attribute to a worker, the less you need to pay them and the more easily they can be controlled and, ultimately, automated. So as a capitalist you need this epistemic authority of defining skill, to label the worker’s labor “low skilled” and then to imbue the machines with skill, or at least with a program to do a certain task. The objective function of Babbage’s work at the time was to find a way to solve the “labor question”, that is, how to mechanize and control labor and workers in service of the British Empire after the abolition of slavery. Slavery here needs to be seen as a practice of treating workers as objects, treating human beings as objects, and justifying that treatment through narratives of racialization. We can’t talk about work without incorporating slavery as a nucleus. There was a rich exchange between plantation management guides and industrial management guides, and that was the environment that Babbage was working in.

Fritz: With this history in mind, one could think that, at least today, more transparency would help disclose the labor dynamics that go into the production of an AI system. In general, such calls for transparency often follow the idea that observation produces the insights required to govern systems and to hold them accountable.[5] Considering that AI and machine learning are often imagined as a “black box”, a peek inside seems like an intuitive way to understand their workings. As openness and transparency also feature heavily in the rhetoric of OpenAI, how should we understand the AI industry with regard to these notions?

Whittaker: Without the ability to act on that information, without agency, transparency is a flex. It’s an expression of power; it is not actually an affordance that informs governance or much else. Today, instead of actually ceding power or control, we are given a small window into these systems. Obviously, though, I don’t believe that AI systems can be made open even in this limited sense. I think transparency is a fairly flimsy concept when we’re talking about systems of centralized control and infrastructures in the hands of corporations.

Fritz: What do these infrastructures entail?

Whittaker: Infrastructure isn’t just chips. Infrastructure is data infrastructure, is labor infrastructure, is the sort of sedimentary layers of standards and practices that attend to all of that. And currently, it is primarily the big companies that control or set the pace for that. My working hypothesis in my academic research is that this “bigger is better” impulse is not new, that it predated AI, and that the deep learning wave from around 2012 was bolted onto it. If you look at the surveillance advertising industry, which is the tech industry as it stands today, Google’s, Microsoft’s, or Meta’s fetish for scale existed long before. While I worked at Google, the executives would say all the time: “We serve billions, not millions.” Scale was the bar against which you would measure progress. And that enabled a vision of organizing all of the world’s information, meaning surveilling the entire world, collecting all of that data, taking ownership of all of it, in order to enable wide-scale search, in order to enable Google’s reach across the globe. “Bigger is better” requires more and more data; it calls for the creation and collection of data where there isn’t data. I think the AI that we’re dealing with now is a product of that, not the other way around. “Bigger is better” is now how these companies compete. It’s large-scale computers, large-scale data, it’s the ability to attract talent. And it’s the fact that they have defined the labor processes, the norms, the standards, the fact that Google owns the [development framework] TensorFlow, which works on their proprietary chips, and that Meta effectively owns and directs PyTorch[6]: all of this is the water in which AI research swims.

Fritz: Going back to the quote we started with, many of the issues associated with AI seem to be a matter not only of technological innovation but also of social change. What is your outlook on the impacts and on the regulation of such technology?

Whittaker: The regulatory debate is very tricky. 70% of the cloud market is dominated by US firms. Caught between the US and China, Europe is showing a lot of nervousness around its AI Act. There is a desire for both: to claim space in the AI industry, especially for Germany and France, and, simultaneously, to prevent the US and its companies from further entrenching their hegemonic power. Europe doesn’t want to be a client of the US, but it also doesn’t want regulation that might prevent its own companies from emerging as mythical national champions. The thing is, this handful of US companies grew to that size because they are built on the aforementioned model of surveillance. This is not something we will see emerge organically in Europe. Besides, we are seeing this nervousness around US hegemony at a time when everyone is talking about Trump and rising authoritarian tendencies. But what we’re not seeing is an enforcement of the GDPR [General Data Protection Regulation] in a way that would ban surveillance advertising, or a use of these tools to meaningfully cut the roots of this business model. In my view, that is what we need to do, while empowering labor and shifting the locus of power within the regulatory landscape. To do any of this, we will need very fierce social movements; it will need to become more painful for politicians not to act. Right now, hundreds of millions of dollars are flowing into lobbying and influence campaigns, and I think we don’t have a counterbalance.

This interview was edited for length and clarity.

[1] Ezra Klein Interviews Ted Chiang, in: The New York Times, March 30, 2021, https://www.nytimes.com/2021/03/30/podcasts/ezra-klein-podcast-ted-chiang-transcript.html.
[2] Billy Perrigo, Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation, in: Time, June 20, 2023, https://time.com/6288245/openai-eu-lobbying-ai-act/.
[3] Meredith Whittaker, AI, Privacy, and the Surveillance Business Model, in: re:publica 2023, June 5, 2023, https://re-publica.com/de/session/ai-privacy-and-surveillance-business-model.
[4] See: Meredith Whittaker, Origin Stories: Plantations, Computers, and Industrial Control, in: Logic(s), vol. 19, May 17, 2023, https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/.
[5] Mike Ananny/Kate Crawford, Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability, in: New Media & Society, vol. 20, no. 3, March 2018, https://doi.org/10.1177/1461444816676645.
[6] TensorFlow and PyTorch were developed by Google and Meta research teams as frameworks for machine learning, which are used, for example, in natural language processing or computer vision applications.