EECS 649

Evan Powell
2 min read · Feb 24, 2021

University of Kansas

Blog #3

So far in O’Neil’s Weapons of Math Destruction, she has shown how people use crude data to form informal models, often subconsciously, that help with everyday life. These models can guide decisions as small as what to eat. But models can also be crafted for far more powerful tasks. O’Neil describes dynamic models, which use given information to predict an outcome. What makes these models dynamic is that they interpret their outcomes and feed that data back into themselves, learning from their ‘mistakes’. Dynamic models are often the esoteric, valuable systems designed and employed by FANG companies and the like.
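The predict-observe-update cycle described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the class name, the learning rule, and all numbers are my own, not from the book): the model makes a prediction, sees the real outcome, and folds the error back into its own state.

```python
# A minimal, hypothetical sketch of a "dynamic" model in O'Neil's
# sense: it predicts, observes the real outcome, and feeds the
# error back into itself. The learning rule here is illustrative.

class DynamicModel:
    def __init__(self, learning_rate=0.1):
        self.estimate = 0.0          # the model's current belief
        self.learning_rate = learning_rate

    def predict(self):
        return self.estimate

    def update(self, observed):
        # Learn from the 'mistake': nudge the estimate toward the
        # observed outcome by a fraction of the prediction error.
        error = observed - self.predict()
        self.estimate += self.learning_rate * error

model = DynamicModel()
for outcome in [10, 10, 10, 10]:
    model.update(outcome)
# After several updates, model.estimate has drifted toward 10.
```

Each pass through the loop is one turn of the cycle: the outcome the model just produced (or failed to produce) becomes the data that reshapes it.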

But like anything so powerful, these complex models can be a double-edged sword: a model is only as ethical as its architects. O’Neil explains how bias can creep into these models, like an evil whisper in the ear, and push them toward crude, binary judgments. The example she uses in the first chapter is the LSI-R, or Level of Service Inventory-Revised, a questionnaire prisoners fill out. While the form does not ask about race directly (that would be illegal), it asks questions whose answers penalize those not born wealthy and privileged; it damages people born into and around a system that is hard to escape. Within such a model, O’Neil explains, a pernicious feedback loop takes hold in a community, turning the system into quicksand. Again, this modeling is only as moral as those who design it, and perhaps the antithesis of an immoral model is the modeling of baseball. Yes, baseball. Baseball modeling could be seen as a gold standard because it deals only in transparent statistics: strikes, balls, hits, outs, and so on.
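The quicksand dynamic can be made concrete with a toy simulation. Everything below is hypothetical and the numbers are invented for illustration only: a higher score brings more scrutiny, more scrutiny produces more recorded incidents, and those records push the score higher still, so a small initial gap between two groups widens every round.

```python
# Hypothetical toy simulation of the feedback loop O'Neil describes.
# All coefficients and starting scores are made up for illustration.

def run_loop(initial_score, rounds=5):
    score = initial_score
    history = [score]
    for _ in range(rounds):
        scrutiny = score * 0.5               # scrutiny scales with the score
        recorded_incidents = scrutiny * 0.4  # more scrutiny, more records
        score = score + recorded_incidents   # records feed back into the score
        history.append(score)
    return history

low = run_loop(1.0)   # group that starts with a low score
high = run_loop(2.0)  # group that starts with a higher score

# The absolute gap between the two groups grows every round,
# even though the update rule treats both groups "identically".
```

The update rule never mentions which group is which, yet the model's own output keeps manufacturing the evidence that justifies its next output, which is exactly the trap the LSI-R example points at.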

While models can be an important tool for extending humanity’s grasp ever further, we must ensure we build systems that cannot be used to inject or perpetuate anything we know to be wrong.
