Over the last few years my reading has been largely shaped by my research with The Riz Test, a project that Dr Sadia Habib and I co-founded back in 2019 to help measure how Muslims are portrayed in the arts. As the project has evolved, my focus has turned to the downstream effects of problematic representation, specifically how machine learning models often use film/TV scripts and subtitles as training data, and how this can result in biased outcomes in AI systems.
I've given talks and run workshops to this end, and it's something that keeps me up at night, as it's often marginalised and under-represented communities that disproportionately suffer most. So when I read the highly recommended and much-cited Atlas of AI, I felt a visceral sense of relief: here was a world expert in the field of AI beautifully articulating the problem I had been struggling to shape in my head.
In her book, Dr Kate Crawford, whose accolades include being a principal researcher at Microsoft Research and the co-founder and director of research at the AI Now Institute at NYU, charts the true cost of AI. She explores the inherently extractive and exploitative nature of AI and machine learning. The book is accessible to non-techies while remaining authoritative for techies, and is painfully familiar to both in various guises. Dr Crawford argues that while much is made of the term Big Data, analogous to Big Oil, the analogy is rarely extended to the exploitative similarities between the two.
AI is framed as an extractive process at every layer: raw minerals are extracted from the earth to build the hardware that runs AI systems; data is coercively and often unethically extracted from humans (from social media through to non-consensual systems at airports); and labour is extracted from workers (Amazon warehouse workers, gig-economy workers). In each layer of the stack, minorities and the historically disenfranchised disproportionately suffer most, and the book explores this in depth with many examples.
The AI and ML industries, including otherwise celebrated firms such as Palantir, are shown to lean hard into the tried and tested colonial enterprise of classifying and defining the world through a predominantly western and white gaze - an inherently colonial and demonstrably racist endeavour with echoes of phrenology. This aspect of the book spoke to me on a deep level; my personal research is centred on how biased data results in biased algorithms.
Dr Crawford ultimately seeks to highlight the asymmetric power relationship that tech firms have over those they seek to measure and observe, thereby further entrenching that power dynamic. Ethical considerations are largely voluntary and performative: AI ethics frameworks are created by industry associations, essentially allowing firms to mark their own homework.
The book is an excellent primer for anyone looking to learn more about the hidden by-products of the AI/ML industry and the downstream realities of problematic AI systems. It definitely warrants a second read.