
AI Explainability: Looking Outside the Black Box


“When we seek to evaluate the justifications for decision-making that relies on a machine learning model, we are actually asking about the institutional and subjective process behind its development.”
- Andrew Selbst and Solon Barocas, The Intuitive Appeal of Explainable Machines

When we demand an explanation for an AI system’s automated decisions, we counter-intuitively need to look “outside the black box”. Relying on our human intuitions by merely looking “inside the black box” will only get us so far. Our intuitive ability to make sense of phenomena is severely limited by our domain knowledge and life experiences. If a model produces outcomes that seem weird or fall outside the universe of what makes sense to us, our intuitive response is to dismiss those outcomes as nonsensical and useless.

But that is hubris. We now know that these models identify complex statistical relationships among thousands of data points that our human minds cannot even fathom; they exceed our human ability to make sense of complex phenomena. It is therefore not sufficient to rely on our intuitions alone. What then do we do?

Selbst and Barocas suggest that we venture off and look “outside the black box” for explanations of why the AI system’s decision-making rules are what they are. How? By requiring both developers and their employers to document all the value-laden choices, policy decisions and trade-offs they made before they created the model. Documentation provides the explanations we need to understand which specific factors shaped those choices, decisions and trade-offs, and it helps us determine whether the resulting rules are justified.
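
As a concrete illustration, here is a minimal sketch of what such documentation could look like if kept in code alongside a model. The DesignDecision schema and the example entries are hypothetical, not drawn from Selbst and Barocas; they simply show how value-laden choices, rejected alternatives and trade-offs might be recorded in an auditable form.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class DesignDecision:
    """One value-laden choice made while building the model (hypothetical schema)."""
    decision: str            # what was chosen
    alternatives: List[str]  # options that were considered and rejected
    rationale: str           # why, including the trade-offs accepted
    decided_by: str          # who is accountable for the choice
    decided_on: date         # when the choice was made

# Illustrative entries: the kind of record that lets a reviewer later ask
# whether the model's decision-making rules are justified.
decision_log = [
    DesignDecision(
        decision="Exclude zip code as an input feature",
        alternatives=["Include zip code", "Use coarser region buckets"],
        rationale="Zip code can proxy for protected attributes; the accuracy cost was accepted.",
        decided_by="Model risk committee",
        decided_on=date(2019, 3, 12),
    ),
    DesignDecision(
        decision="Set the approval threshold at 0.7",
        alternatives=["0.5 (more approvals)", "0.9 (fewer false approvals)"],
        rationale="Balances regulatory exposure against customer acquisition targets.",
        decided_by="Product and compliance leads",
        decided_on=date(2019, 4, 2),
    ),
]

for entry in decision_log:
    print(f"{entry.decided_on} | {entry.decision} ({entry.decided_by})")
```

A log like this does not open the black box itself; it records the institutional process around the model, which is exactly where Selbst and Barocas locate the justification for its rules.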

When we seek explanations through documentation, we need to be mindful of how much unconscious bias and subjectivity get baked into an AI system’s decision-making rules. Developers are far from objective, even though they work with verifiable math equations: they choose from a full spectrum of value-based alternatives when they design rules. Companies likewise juggle a myriad of trade-offs (e.g., talent shortages, rising costs, regulatory constraints on data privacy) as they give varying policy directions to their developers. These realities are not obvious, but they strongly shape the outcomes of decision-making AI systems.

This makes clear how crucial it is to have diverse team members designing algorithms together. Diversity of thought is one way to ensure that various perspectives come to the fore, so that algorithmic decision-making produces fair and positive outcomes.

Learn more about the dynamic interaction of AI technology, ethics and law.
Take the Skills4Good Responsible AI Program!