
Are We Guilty of AI Mathwashing?


“The claim that algorithms will classify more ‘objectively’ ... cannot simply be taken at face value given the degree of human judgment still involved in designing the algorithms, choices which become built-in.”
- Jenna Burrell, How The Machine Thinks

Corporations have grown ever louder in announcing how they’re integrating algorithms into their operations to serve customers better. For example, many banks blast marketing messages about how they use ML algorithms to approve credit card applications in a few minutes, provide the lowest insurance premium quotes within a day, or automate stock recommendations tailored to one’s unique risk profile. They believe that using algorithms in their operations will burnish their reputations as efficient and objective. In their minds, algorithms will provide the holy grail of “algorithmic objectivity over biased human decision-making”. This attitude is called “automation bias”.

Automation bias, or “AI Mathwashing”, is the human tendency to readily accept the automated decisions of AI systems over and above everything else. It’s based on the notion that “algorithms are not biased because they involve math”. Many consider algorithms appealing because they possess a veneer of “infallibility and objectivity” compared to the flawed and biased judgments of humans.

Yet, algorithms are far from neutral. There is no such thing as “algorithmic objectivity”. This is because a plethora of human choices go into “defining features, pre-classifying training data, and adjusting thresholds and parameters”, as Burrell noted. The software engineers who design algorithms make myriad decisions that reflect, consciously or unconsciously, their values, biases and preferences, and those choices get baked into the mathematical models they build. Unfortunately, many algorithms have produced unfair and discriminatory outcomes, negatively impacting people in high-stakes life events.
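To make this concrete, here is a minimal, hypothetical sketch in Python of how one such design choice, the approval threshold, decides who gets approved. The applicants, the single “credit_score” feature, and the two cutoff values are illustrative assumptions, not real lending criteria.

```python
# Hypothetical illustration: the threshold is a human design choice,
# and changing it changes the outcome, even though the math is identical.

applicants = [
    {"name": "A", "credit_score": 640},
    {"name": "B", "credit_score": 700},
    {"name": "C", "credit_score": 610},
]

def approve(applicant, threshold):
    """Approval is just a comparison; the cutoff is chosen by a person."""
    return applicant["credit_score"] >= threshold

for threshold in (620, 680):  # two equally "mathematical" design choices
    approved = [a["name"] for a in applicants if approve(a, threshold)]
    print(f"threshold={threshold}: approved {approved}")
```

Both runs are equally “objective” arithmetic, yet one approves two applicants and the other approves one. The choice of cutoff, like the choice of features and training labels, is where human judgment enters.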

That algorithms can produce unfair and discriminatory outcomes is troubling, especially given their inherent opacity. It is difficult, if not nearly impossible, to pinpoint exactly where in an algorithm prejudicial biases are baked in. Thus, it becomes hugely challenging to determine how the algorithm can be tweaked to correct those biases.

To counter automation bias, companies must ensure that the diverse perspectives of the stakeholders impacted by their AI systems are represented. This means that impacted stakeholders have seats at the table so that their values and preferences are considered when algorithms are designed and deployed. Companies achieve this through stakeholder consultations held as part of regular privacy impact assessments (PIAs) or algorithmic impact assessments (AIAs).

Companies should also ensure that there is always a human-in-the-loop overseeing their AI systems. A human should always make the final call on the AI’s automated decisions, especially in cases with high-stakes consequences for individuals. It is only when these measures are taken that companies can truly deploy AI for good.
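As a lightweight illustration of what such a human-in-the-loop gate might look like, the sketch below routes high-stakes cases to a person for the final call. The “high_stakes” flag, the case names, and the review queue are assumptions made for this example, not a prescribed design.

```python
# Hypothetical sketch: never let the model finalize a high-stakes decision.

from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    model_says: str    # e.g. "approve" or "deny"
    high_stakes: bool  # e.g. a loan denial or insurance claim rejection

human_review_queue = []

def finalize(decision: Decision) -> str:
    """Automate only low-stakes outcomes; queue high-stakes ones for a human."""
    if decision.high_stakes:
        human_review_queue.append(decision)
        return "pending human review"
    return decision.model_says

print(finalize(Decision("case-001", "approve", high_stakes=False)))
print(finalize(Decision("case-002", "deny", high_stakes=True)))
```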

Learn more about the dynamic interaction of AI technology, ethics and law.
Take the Skills4Good Responsible AI Program!