Last Updated on June 13, 2023 by themigrationnews
Meredith Broussard (2019), Artificial Unintelligence: How Computers Misunderstand the World, First MIT Press Paperback Edition, 248 Pages, ISBN 9780262537018
“Artificial Unintelligence: How Computers Misunderstand the World” by Meredith Broussard is a thought-provoking exploration of the inner workings and limitations of technology, and of the biases embedded in the data sets on which it is built. In this book, Broussard challenges the common assumption that computers are infallible and delves into the reasons behind their frequent misunderstandings. While the book covers a wide range of themes related to technology, this review focuses specifically on the important topics of racial and gender diversity, and on the significant issue of racial bias in migratory practices and border control systems, as addressed by the author.
Perpetuating Inequalities: Technology’s Role in Reinforcing Racial Bias
The book addresses the pressing issue of diversity, or the lack thereof, within the technology sector. The author adeptly highlights the detrimental consequences that arise from the homogeneity of perspectives in the development of Artificial Intelligence (AI) systems. She emphasises that AI algorithms, shaped predominantly by a narrow subset of the population, tend to reflect and perpetuate existing biases and inequalities in society.
The author argues that computer systems are built from data sets and algorithms, and that they function accordingly. She says, “There is no consciousness inside a computer; there’s only a collection of functions running silently, simultaneously.” (pg. 17) By relying on historical data that reflects societal biases and systemic discrimination, these algorithms perpetuate inequality, disproportionately affecting racial minority groups. This bias further marginalises communities already facing socio-economic challenges and hinders their opportunities for fair and just migration.
Through insightful analysis and examples, Broussard elucidates how biases can become embedded within algorithms, leading to discriminatory outcomes. By excluding diverse voices from the creation process, AI systems can reinforce and exacerbate existing social inequalities and racial disparities in migratory processes. Reliance on biased data or flawed algorithms can produce racially biased decision-making, such as in visa approvals or refugee resettlement assessments, as well as disproportionate harm and limited opportunities for marginalised racial groups.
Emphasising the urgent need to address racial bias in migratory practices, the author underscores the importance of fostering diverse and inclusive teams within the tech industry. She argues that the solution lies in incorporating racial diversity into the development of the algorithms and AI systems that impact migratory processes: “We need more diverse voices at the table when we create technology.” (pg. 87) This diversity of perspective is needed at every level, from developers and designers to decision-makers and funders, to ensure that the systems we build reflect the values and needs of the people they serve.
By including diverse perspectives, experiences, and cultural contexts at all levels of decision-making, from algorithm design to deployment, these technologies can more effectively address the needs and challenges faced by racially marginalised populations, thereby addressing the complexities of the migrating world.
Stereotypes and Inequalities: The Harmful Effects of Discriminatory Algorithms
Broussard examines the ethical implications of social responsibility and racial bias in migratory practices. The book asks readers to question the consequences of relying solely on technology-driven decision-making without human oversight and to critically examine the underlying assumptions, biases, and potential harm associated with AI technologies.
It calls for a careful examination of the impact of discriminatory algorithms, which can perpetuate racial stereotypes, reinforce inequalities, and hinder opportunities for racially marginalised communities, and it encourages the implementation of safeguards against unintended consequences. Broussard warns that automated systems can multiply existing prejudices and “tend to make a set of the same predictable mistakes that impede progress and reinforce inequality.” (pg. 7)
The book goes beyond critique by offering potential solutions to address racial bias in migratory practices. Broussard highlights the importance of transparency, accountability, and inclusivity in the development and deployment of AI systems. This book has the potential to encourage policymakers, immigration authorities, and technology developers to work towards more equitable migratory practices that consider the diverse needs and experiences of racial minority groups.
Guided by Fairness and Equity: The Vision for a More Inclusive Future
This book is a compelling exploration of the racial biases ingrained within migratory practices and the role of technology in perpetuating these inequalities. It serves as an eye-opening critique of the consequences of racial bias in algorithmic decision-making and a call for a more inclusive and equitable approach to technology development.
Meredith Broussard expertly exposes the inherent biases and limitations of AI systems and underscores the urgent need for diverse representation to build more equitable and inclusive technologies. This book not only highlights the challenges we face but also offers a roadmap for creating a more inclusive and ethically sound future.
The book serves as a wake-up call to policymakers, border management authorities, and AI developers to critically assess the impact of algorithms on racial minorities and to take proactive steps to mitigate bias and foster more equitable migratory systems. Broussard asserts, “Algorithms don’t work fairly because people embed their unconscious biases into algorithms.” (pg. 156) To create better systems, then, we need to build them with input from the people who will use them and be affected by them. We need a process that centres equity, justice, and the representation of marginalised voices, and that works to “uncover injustice and inequality embedded in today’s computational systems.” (pg. 47) By adhering to these principles, decision-makers and technology developers can work together to create a more equitable and just migratory system that considers the diverse needs and experiences of racial minority groups.
Limiting Discourse: Oversimplification of Technical Concepts
While the book contributes greatly to the existing literature on racial bias in computational systems, its oversimplification of technical concepts may be criticised by seasoned practitioners in the field of AI as lacking depth. Approaches to Machine Learning and Deep Learning, which are crucial advancements in the field, are not explicitly addressed or thoroughly examined. This omission can leave scientifically inclined readers feeling unsatisfied and deprived of a comprehensive understanding of the complexities involved in AI development and its various methodologies.
That said, “Artificial Unintelligence” is written for individuals without technical expertise interested in the intersection of technology, ethics, and social justice, urging them to critically examine the role of AI in perpetuating or dismantling existing biases and inequalities, and work towards a future where technology and migratory practices are guided by fairness, justice, and respect for racial diversity.
Bushra Ali Khan is a trained anthropologist and an experienced researcher, writer and editor specialising in migration and refugee studies. Bushra is a graduate of the School of Oriental and African Studies (SOAS), University of London, Senior Sub-Editor at The Week Junior magazine, and Indo-Pacific Chair at a Brussels-based think tank with expertise in Afghanistan, India, and the EU. Bushra is also a mid-level consultant based in London, UK.