What is in the box? Almost everyone Madhumita Murgia speaks to seems to want to know. If an algorithmic "black box" is going to make important decisions about our health, education, and human rights, it would be good to know exactly what is inside it. As long as it remains a mystery, what happened to hundreds of families in the Netherlands in the late 2010s can happen again: children placed on lists of potential future offenders on the strength of flawed and even racist data. And what do you do if you discover your child has been added to one?
Murgia, the Financial Times' first artificial intelligence editor, is interested in these Kafkaesque absurdities and how they play out on a human level. She reminds us, again and again in this troubling book, that code is not neutral.
This isn't a story about ChatGPT or other large language models and their imminent impact on everything from Hollywood to homework, though that's a small part of it. Instead, it explains how the everyday algorithms we have already learned to live alongside are changing us: from the people paid (though not much) to make sense of huge datasets, to the unintended consequences of the biases contained within those datasets. It is the story of how the AI systems built on that data come at the expense of some people, typically already marginalised individuals and communities (like the young immigrant worker delivering your Big Mac). It is also the story of how many of us (like you, ordering McDonald's on Uber Eats) benefit from their labour.
The scope Murgia reports on here is vast, reflecting her day-to-day work. She explores the comically basic ways in which AI systems are trained: workers in a Kenyan business district labelling road signs to teach self-driving cars to recognise them. She also shows how flaws in AI systems affect the end product: delivery apps have cut drivers' pay by failing to account for delays caused by roadworks or cycling uphill.
Murgia also argues that we are seeing the emergence of a new data colonialism. While this work certainly lifts many subcontracted AI workers out of poverty, the wealth it creates is not equitably distributed. Add to that the sheer monotony, the inability to deviate from instructions, the regional disparities in wages and job security, and the PTSD that comes from being forced to look at the worst images on the internet so that the rest of us don't have to. Crucially, because the AI supply chain is fragmented, many of these workers have no idea what the purpose of their work is, or even who they are working for.
One Kenyan lawyer Murgia meets sees algorithmic training as another version of Bangladesh's garment industry, which supplies Western fast-fashion brands. Or even the realm of designer goods: "Factory workers just think, all I make is shoes. They don't know that their shoes sell for $3,000."
There is also some optimism. Murgia, a former Wired reporter, explores AI's potential to improve health outcomes. And Hiba's family, refugees from Fallujah, Iraq, used their jobs as data workers to finance their new life in Bulgaria. There are heartening stories, too, of gig-economy workers in China who, frustrated at being exploited, secretly organised to regain some of the autonomy they had sacrificed on the "altar of the algorithm."
But the bass note here is pessimistic. We are long past the techno-optimism of the early 2000s: while government officials wonder how AI can help streamline health and human services, thousands of people are left asking whether they can still make a living alongside it. Worse, as the Chinese government's facial recognition systems and Xinjiang's pre-emptive detention lists show, this is a dystopia we are already living in.