
Is AI Really Neutral?


AI is driving the fourth industrial revolution. Its growing ability to make autonomous decisions faster, better, and cheaper than humans is profoundly shaping our lives. Driver-assist features make our cars safer. Computer vision makes disease diagnosis more accurate. Machine translation enables us to communicate across oceans despite language barriers.

But for all its promise to make life better, AI can be perilous. It can decide whether a criminal defendant goes free or stays in jail. It can decide whether an individual qualifies for a home loan based on her postal code. It can decide which soldiers and civilians live or die.

For all its promises and perils, however, many people believe that AI is merely a neutral tool. It is neither good nor bad. Its benefits or harms depend on what it is used for. AI, they reason, can be compared to a hammer. It can be used for good (e.g., to build a house) or for evil (e.g., to break into a car and steal it). The hammer’s value will be judged by the goal of the person wielding it.

Thus, they argue that whether AI is good or bad for society ultimately depends on the goals and motivations of those who design and deploy it. On this view, if a company uses AI to teach Mandarin or Spanish to young students, then AI is good. But if a company uses AI to create autonomous robot weapons, then it is bad.

The belief that AI is neutral oversimplifies its tremendous power in our society. Given the never-ending headlines about companies deploying biased and discriminatory AI, it is incumbent upon us to be vigilant. How? By scrutinizing how AI is designed while it is still on the drawing board, not merely by evaluating how it is used once deployed. When engineers design the complex algorithms that automate decision-making, we should take a closer look at their design choices. Why? Because engineers will be embedding, consciously or unconsciously, their values and biases into those algorithms. And these are algorithms that make high-stakes decisions that can change people’s lives for better or worse, forever.
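
To make this concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from the article: the Applicant class, the feature functions, and the postal-code scenario are invented assumptions, chosen only to show that deciding which inputs a loan-approval model may see is itself a value judgment, made before any data is processed.

```python
# Hypothetical illustration: how one feature-selection choice in a
# loan-approval model embeds the designer's values. All names here
# (Applicant, features_v1, features_v2) are invented for this sketch.

from dataclasses import dataclass


@dataclass
class Applicant:
    income: float
    debt_to_income: float
    postal_code: str  # often a proxy for race and class


def encode_postal(code: str) -> float:
    # Crude deterministic encoding, purely for illustration.
    return float(sum(ord(c) for c in code) % 1000)


def features_v1(a: Applicant) -> list[float]:
    # Design choice A: include postal code. It may raise raw accuracy,
    # but it imports historical segregation patterns into every decision.
    return [a.income, a.debt_to_income, encode_postal(a.postal_code)]


def features_v2(a: Applicant) -> list[float]:
    # Design choice B: exclude postal code. Some predictive signal is
    # lost, but a proxy for protected attributes is not institutionalized.
    return [a.income, a.debt_to_income]


applicant = Applicant(income=52_000.0, debt_to_income=0.31, postal_code="M5V 2T6")
print(features_v1(applicant))  # the model sees where she lives
print(features_v2(applicant))  # the model does not
```

Neither choice is neutral. Including the postal code encodes one set of values; excluding it encodes another. Either way, the decision is the engineer’s, not the machine’s.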

So is AI really neutral? The answer is absolutely not. It is inherently biased. As Brent Mittelstadt and his fellow researchers wrote in the article “The Ethics of Algorithms: Mapping the Debate”:

“Algorithms inevitably make biased decisions. An algorithm’s design and functionality reflect the values of its designer and intended uses... Development is not a neutral, linear path; there is no objectively correct choice at any given stage of development, but many possible choices... As a result, ‘the values of the author [of an algorithm], wittingly or not, are frozen into the code, effectively institutionalizing those values.’”

Learn more about the dynamic interaction of AI technology, ethics and law.
Take the Skills4Good Responsible AI Program!