Mind the AI Responsibility Gap
By: Josephine Yam, J.D., LL.M., MA Phil (AI Ethics) 10/3/21 10:11 PM

Who is liable for accidents caused by a self-driving car? Is it the driver? The car manufacturer? Or the software company that created the self-driving technology? Many liability frameworks and ethical approaches differ in how they attribute blame to the people and machines involved in an accident.
One particularly interesting, if controversial, ethical approach is the Hybrid Responsibility framework. According to David Gunkel, this is a middle-ground way of distributing responsibility for an outcome among all participants, both humans and autonomous machines, in the long chain of events that produced the car accident. The rationale seems reasonable enough. After all, this is how we live in the 21st century: humans interacting with humans, humans interacting with machines, and machines interacting with machines in complex and dynamic ways.
Under this approach, it seems inappropriate to attribute responsibility for an outcome jointly caused by humans and autonomous machines primarily to the humans, because the car was autonomous. It likewise seems inappropriate to attribute the blame primarily to the machine, because the car was designed by humans. The consequence is that “no one or nothing is accountable for anything”, precisely because the Hybrid Responsibility framework disperses and fragments responsibility across so many participants.
What I find particularly unsettling is the premise that “no one or nothing is accountable for anything”. It does not accord with our sense of justice, which demands that agents be held responsible for the outcomes of their decisions and actions. The examples Gunkel uses to show how the Hybrid Responsibility approach works, climate change and the 2008 financial crisis, are quite alarming. It offends our sense of justice to simply throw up our hands and conclude that no one and nothing can be held responsible for these global catastrophes. After all, these events were caused by such a complex entanglement of people, corporate interests and public institutions that figuring out who did what would be too messy and complicated. Never mind that these events have wreaked havoc, and continue to do so, on the lives of billions of people. And never mind that a few elite corporate interests grossly enriched themselves by polluting the air we breathe or by convincing us that everyone could own a home worth twenty times their annual income.
Because AI is likewise a global phenomenon, one that is massively transforming how we live and work, it is imperative that a regulatory framework for algorithmic accountability be put in place to close this responsibility gap. Such a framework would hold all participants who contributed to an outcome jointly responsible for it, each in proportion to their contribution.
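To make “proportionate contribution” concrete, here is a minimal sketch in Python. It assumes, purely hypothetically, that each participant’s causal contribution to an outcome could be reduced to a numeric score; in practice, producing such scores is exactly the hard legal and ethical question, so treat this as an illustration of the allocation rule, not a proposal for how to measure contribution.

```python
# A minimal, hypothetical sketch of proportional responsibility allocation.
# Assumes each participant's causal contribution can be scored numerically,
# which is itself a contested assumption in law and ethics.

def allocate_responsibility(contributions: dict[str, float]) -> dict[str, float]:
    """Split responsibility for an outcome in proportion to each
    participant's contribution score, so that the shares sum to 1."""
    total = sum(contributions.values())
    if total <= 0:
        raise ValueError("No positive contributions recorded; allocation is undefined.")
    return {participant: score / total for participant, score in contributions.items()}

# Hypothetical example: a self-driving car accident with three participants.
shares = allocate_responsibility({
    "driver": 1.0,           # e.g., failed to supervise the autopilot
    "manufacturer": 2.0,     # e.g., inadequate sensor redundancy
    "software_vendor": 3.0,  # e.g., a defect in the perception model
})
print(shares)  # {'driver': 0.1666..., 'manufacturer': 0.3333..., 'software_vendor': 0.5}
```

The allocation step is simple normalization; as the article argues, the hard work lies in deciding who belongs on the list of participants and how their contributions are scored in the first place.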
Let’s not repeat our failure of allowing the players primarily responsible for climate change and the 2008 financial crisis to get away scot-free. A robust algorithmic accountability framework will close the AI responsibility gap and help ensure that AI’s benefits and burdens are allocated in a just and fair manner.
Learn more about the dynamic interaction of AI technology, ethics and law.
Take the Skills4Good Responsible AI Program!