
Reducing Bias In AI-Based Financial Services


In November 2019, Apple Card came under fire for allegedly being “sexist.” High-profile tech entrepreneur David Heinemeier Hansson and Apple co-founder Steve Wozniak took to Twitter to complain that they had been granted significantly higher credit limits than their wives, even though the women had strong credit scores.

The outrage over the issue raised concerns about how the algorithms that increasingly determine people’s access to financial services can produce discriminatory outcomes.

Fast forward to March 2021: an investigation into the matter found that Goldman Sachs, which managed the Apple Card, did not use discriminatory practices or violate any fair lending laws.

The New York State Department of Financial Services, however, said that the “inquiry stands as a reminder of disparities in access to credit that continue nearly 50 years after the passage of the Equal Credit Opportunity Act. The data used by creditors in developing and testing a model can perpetuate unintended biased outcomes.”

Unintended Bias in Financial Algorithms

Artificial Intelligence (AI) is no doubt transforming the financial services industry — delivering new efficiencies by speeding up complex decisions and processes. While the benefits are clear, there is also the danger that unconscious bias can manifest itself in financial algorithms. If stakeholders do not build fairer models, AI risks replicating and amplifying the stereotypes historically ascribed to people of color, women, and other vulnerable populations.

Here’s how financial institutions and leaders can ensure that their AI-powered services do not lead to biased, racist, and sexist outcomes.


Design for Privacy and Human Rights

If left unchecked, the use of big data and AI can threaten the prohibition of discrimination, the right to equality, and the right to privacy. That’s why there must be transparency, accountability, and safeguards on how algorithms are designed and deployed.

Stakeholders must carefully consider how they can best leverage machine learning without adversely impacting fundamental rights.

Guidelines must be adopted to ensure companies uphold human rights standards. The European Commission, for example, has proposed a new regulatory framework on AI that aims to ensure the protection of fundamental rights and user safety.

In its proposal, the EU commission said that the “proportionate and flexible rules will address the specific risks posed by AI systems” and “ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

Perform Algorithmic Impact Assessments


Extensive audits of algorithms will minimize the risk of unintended consequences. This involves running different scenarios, proactively identifying bias in AI systems, and building effective controls to ensure that algorithms are developed and used with full respect for human rights.
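As one illustration of the kind of check an algorithmic audit might run, the sketch below computes a simple disparate impact ratio on loan-approval decisions and applies the widely used "four-fifths rule" as a flagging threshold. The data, function names, and threshold here are illustrative assumptions, not any regulator's or auditor's actual methodology.

```python
# Minimal sketch: a disparate impact check on model approval decisions.
# The 0.8 threshold (the "four-fifths rule") is a common heuristic for
# flagging possible adverse impact; it is not a legal determination.

def approval_rate(decisions):
    """Fraction of applications approved; decisions are 1 (approve) / 0 (deny)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group approval rate to the higher one.
    A value below 0.8 is commonly treated as a red flag for review."""
    rate_a = approval_rate(group_a)
    rate_b = approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 of 10 approved -> 0.80
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 4 of 10 approved -> 0.40

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("Flag for review: possible disparate impact")
```

A real audit would go further — testing multiple protected attributes, intersectional groups, and additional fairness metrics (e.g., equal opportunity) — but even a check this simple can surface disparities like the Apple Card credit-limit gap before a model reaches customers.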

Skills4Good’s Responsible AI & Data Audit Programs can help firms evaluate their AI technology and assess how the automated decision-making process may impact their customers’ rights. Skills4Good uses the Privacy and Human Rights by Design for AI Audits to support companies in creating a set of principles for the responsible adoption of AI.

Build A Responsible AI Culture

Embedding diversity and inclusion into a financial company’s workplace culture helps prevent inherent societal biases from creeping into machine learning systems. With diverse teams leading AI projects, financial companies can help ensure that the outcomes of their AI solutions are inclusive.

Skills4Good’s Responsible AI Courses help companies train their employees to be responsible AI stewards who can understand, anticipate, and mitigate relevant issues such as bias and discrimination.

The controversy over Apple’s allegedly ‘sexist’ card shows how machine learning algorithms can exacerbate inequality and discrimination. Financial services firms must strike the right balance between innovation and human rights protection, and ensure that their AI systems respect and uphold human rights such as privacy, equality, and non-discrimination.

Find out how Skills4Good’s Responsible AI & Data Audit Programs - which include Privacy Impact and Human Rights Impact Assessments - empower companies to harness AI without harming individual human rights.