by Kate Crawford
read in Apr 2021
book info on goodreads
In her book, “Atlas of AI”, Kate Crawford tries to highlight how AI development today is largely based on the assumptions of Cartesian dualism. She argues that AI is primarily understood as disembodied intelligence that is removed from any social or material relations and produces knowledge that is independent from the world.
“What Is AI? Neither Artificial nor Intelligent”
She then problematises this assumption by showing how AI is neither artificial nor intelligent. On the contrary, AI is embodied and always socially and materially situated. Further, AI is not rational or autonomous. It could not achieve anything without the computationally intensive training that goes into developing it. Because AI is neither artificial nor intelligent, Crawford argues, we need to consider the multiple interlaced systems of power and dominant interests that go into the design of such systems.
To show these systems of power, Crawford develops what she refers to as an atlas that goes beyond the conventional maps of AI: an expanded view of the multiple topological levels across which AI is extracting and exploiting.
“This is an expanded view of artificial intelligence as an extractive industry. The creation of contemporary AI systems depends on exploiting energy and mineral resources from the planet, cheap labor, and data at scale.”
The six spaces or regimes that she relates to one another, and that overlap in the development of AI systems, are: Earth, Labor, Data, Classification, Affect, and State.
While the different stories and journeys are very interesting to read and highlight many of the issues that the digital transformation of society is dealing with, I feel that Crawford sometimes drifts a bit too far from AI as the core technology of the book. I understand that it is precisely the point of her atlas technique to show how seemingly unrelated topics are in fact related to AI development, but some topics that she discusses seem to cover issues of society more broadly. Data, classification, humans' role in providing those classifications, affect recognition, and governments' use of such tools are clearly linked to AI, but other points that the book talks about, such as rare earth extraction, time coordination in Amazon warehouses, or eventually the new space race triggered by private companies, seem to be a bit too far removed from it.
An interesting point that Crawford makes is related to the notion of AI (and technology more broadly) becoming infrastructure. I have increasingly read about technology becoming infrastructure in IS and management literature. Importantly, the concept was originally developed in the IS field in the paper by Star and Ruhleder (1996), “Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces”. Using the same dimensions as the original authors, Crawford argues that AI is becoming increasingly opaque, is composed of mundane, everyday actions, and has the potential for serious negative, unintended consequences.
The most interesting takeaways from the book, for me, revolved around her discussion of the privatisation of time, construction of biased training datasets, and the underlying assumptions of classification.
Work time is increasingly being modulated by algorithmic scheduling systems. The workforce that is building such systems is heavily biased toward young male workers who are committed to working around the clock, encoding such beliefs into the technologies. This standard of work hours, however, relies on the unpaid or underpaid care work of others. Such algorithmic scheduling and monitoring systems narrow the range at which the relation between work and time is observed. Time, whether at the micro-level of time clocks in factories or at the macro-level of planetary time spans, is a source of power for corporations.
People who are represented in AI training datasets are seen as part of a technical resource. The context of the data is deemed irrelevant and radically stripped away from the data. This is what Crawford calls the shift from image (or information) to infrastructure. This has important implications, because these datasets shape the epistemic boundaries of AI systems and thus decide how AI sees the world. Crucially, there seems to exist a genealogy of AI training datasets. That means that AI datasets build on other datasets and thereby inherit the same issues and biases.
Classifications can never be neutral or objective. Classifications are always situated in a social, economic, and political context that shapes them and that they, in turn, shape as future social and material worlds. Every dataset thus reflects a particular view of the world. It means that certain assumptions had to be made in order to reduce the complexity of the world that it tries to represent. When such systems are then used to predict future classes, they are reconstructing particular social patterns based on these assumptions and worldviews. AI simply assumes that at a certain level of granularity, by zooming in close enough, things become sufficiently commensurate so that their similarities and differences become identifiable to the AI, even when in reality their characteristics are uncontainable. This categorisation actually has real effects on humans, who try to change their behaviours in order to fit into a different category.
Crawford ends with an interesting note on how to move forward and fight against the power regimes of AI. She argues for a renewed politics of refusal: instead of asking where AI will be applied next simply because it can be applied, one should be asking why it should be applied in the first place. This questions the fundamental idea underlying capitalism and AI development, namely that everything that can be predicted should be predicted to maximise profits.
“Refusal requires rejecting the idea that the same tools that serve capital, militaries, and police are also fit to transform schools, hospitals, cities, and ecologies, as though they were value neutral calculators that can be applied everywhere.”
Overall, “Atlas of AI” is a very interesting book, as it provides a very deep account of artificial intelligence. Starting with the extraction of rare earth materials all the way to the space race funded by Big Tech companies, the book describes a genealogical account of how different global issues and courses of action are interconnected.