With our increasing reliance on data-driven solutions to everyday problems and the growing automation of everyday activities, the digital and the human are entwined. Information systems are placed in charge of performing human tasks and used to guide human actions based on their decisions, all in an attempt to increase ‘efficiency’. We see this in facial recognition, contact tracing, and digital ID-based authentication. In Weapons of Math Destruction, Cathy O’Neil examines this growing reliance on algorithms for decision-making and the errors of exclusion it produces.
“Weapons of Math Destruction”, or WMDs, are mathematical models and algorithms that quantify human traits and use those measurements to make decisions traditionally made by humans. O’Neil points out that these models rest on choices made by humans themselves. Even when built with the best intentions, she argues, they encode “human prejudice, misunderstanding, and bias into the software systems” that manage our lives.
The book begins with the example of Sarah Wysocki, a highly regarded teacher who was fired on the basis of a low score from a teacher-assessment tool adopted by her school. The decision was bizarre to her, since she had consistently received positive reviews from parents, students, and her principal. More shocking still was the lack of any accountability mechanism through which she could seek recourse. As O’Neil writes, “The human victims of WMDs, we’ll see time and again, are held to a far higher standard of evidence than the algorithms themselves.”
These opaque systems, with no accountability mechanisms and harmful feedback loops, further perpetuate inequality. When developing technical systems, the underlying assumption of infallibility needs to be questioned, along with an acknowledgment that these systems are man-made. In many cases, human biases are encoded into them, with real effects on real people who often do not hold the same power and privilege in society. O’Neil points out the fallacy of treating machine-generated results as equivalent to on-ground reality. When such systems govern “critical life moments” such as going to college, receiving welfare, borrowing money, or finding a job, an error can be costly for those at the receiving end.
While exploring known cases of algorithmic bias affecting human lives, O’Neil sheds light on the value-laden decisions that these machines codify. In a country like India, with its widespread digital divide and uneven literacy, a careful and principled approach is crucial to prevent a deepening of the gap between rich and poor. O’Neil argues that we need to impose human values on these systems, even at the cost of ‘efficiency’ as it is understood today. She cites the example of a model that might be programmed to ensure that various ethnicities or income levels are represented within groups of voters or consumers. By advocating for algorithmic audits that measure the impact of these systems, she aims to open the black box. She reminds us that these models are constructed not just from data but from the choices we make about which data to pay attention to, and which to leave out. She ends the book on a positive note, urging the reader to ask more questions, check their inherent biases, and engage with the ethics of these systems to bring about fairness and accountability.
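To make the idea of an algorithmic audit concrete, here is a minimal sketch, not from the book itself, of one widely used audit check: the “four-fifths rule” for disparate impact, which compares favorable-outcome rates across groups. The data, group labels, and function names below are invented for illustration.

```python
# A minimal sketch of one common audit check: the "four-fifths rule"
# for disparate impact. All data below is invented for illustration.

from collections import Counter

def selection_rates(outcomes):
    """Compute the favorable-outcome rate per group.

    outcomes: list of (group, selected) pairs, where selected is a bool.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 (the conventional four-fifths threshold) is a
    red flag that the model's decisions merit human review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Invented example: loan approvals recorded as (group, approved).
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: audit this model.")
```

In this toy data, group A is approved two-thirds of the time and group B one-third, giving a ratio of 0.5; a real audit would, of course, involve far more than a single summary statistic, but the point is that such measurement is possible once the system’s outputs are opened to scrutiny.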
Informative and engaging from start to finish, the book leaves one with the urge to take action and to examine one’s own unconscious biases more critically. It also leaves the reader with a sense of the transformative power of technology and the need to leverage it in a manner that ushers in an inclusive, non-discriminatory digital future.
Karthik Venkatesh is a Research Coordinator with The Dialogue and can be reached at karthik.v@thedialogue.co