“Black Skin, White Masks… The metaphor becomes the truth. You have to fit a norm, and that norm is not you.”
That was how Joy Buolamwini, a graduate student at MIT, described the results of her 2018 breakthrough study that revealed how facial recognition technology sold by tech giants can show bias against women and people of color. Buolamwini says the Artificial Intelligence (AI) systems she evaluated had error rates of no more than 1% for lighter-skinned men. For darker-skinned women, however, the errors soared to 35%.
Human Insight Into AI Is Essential
The impact of gender and racial bias in machine learning systems is profound, especially at a time when companies are looking at face identification and other AI technology to streamline the recruitment process. The question then is not whether AI should be used in the recruitment process, but rather how it should be used.
Here’s how businesses should keep humans in the loop to get the best out of AI and achieve fair outcomes.
Be Transparent and Accountable
Business leaders must develop strong, ethical policies when introducing AI platforms into the business. A case in point is Microsoft’s responsible AI governance approach.
The tech giant shows how a governance model can operationalize ethical AI in day-to-day work and support culture change. The program features rules that standardize Microsoft’s AI requirements, along with training and practices that help employees act on six principles (fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability) when developing and deploying AI systems.
Regularly Audit AI Systems To Mitigate Bias
Learning algorithms are only as strong as the data they use. That’s why AI systems must be regularly audited to ensure that the hiring algorithms are trained on data sets that include a diverse range of people.
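An audit like the one described above can be sketched in a few lines of code. The example below is a minimal, illustrative sketch, not any vendor's actual tooling: it computes per-group selection and error rates for a hypothetical hiring model's decisions, then applies the EEOC "four-fifths rule" heuristic, which flags a group whose selection rate falls below 80% of the highest group's rate. The group labels, records, and function names are all assumptions for illustration.

```python
# Illustrative fairness audit sketch. All data, group labels, and
# function names are hypothetical assumptions, not a real product's API.
from collections import defaultdict

def audit_by_group(records):
    """records: list of (group, predicted_hire, actually_qualified) tuples.
    Returns per-group selection rates and error rates."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "errors": 0})
    for group, predicted, actual in records:
        s = stats[group]
        s["n"] += 1
        s["selected"] += predicted                # model said "hire"
        s["errors"] += int(predicted != actual)   # model disagreed with ground truth
    return {
        g: {
            "selection_rate": s["selected"] / s["n"],
            "error_rate": s["errors"] / s["n"],
        }
        for g, s in stats.items()
    }

def four_fifths_check(results):
    """Flag groups whose selection rate is below 80% of the highest
    group's rate (the EEOC 'four-fifths rule' heuristic)."""
    top = max(r["selection_rate"] for r in results.values())
    return {g: r["selection_rate"] >= 0.8 * top for g, r in results.items()}

# Toy audit data: (group, model_predicted_hire, actually_qualified)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
results = audit_by_group(records)   # group B is selected far less often
flags = four_fifths_check(results)  # {"A": True, "B": False}
```

In this toy data, group B's selection rate (0.25) is well under 80% of group A's (0.75), so the heuristic flags it for review. A real audit would go much further (statistical significance, intersectional groups, and error-type breakdowns as in Buolamwini's study), but even this simple per-group comparison surfaces the kind of disparity a single aggregate accuracy number hides.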
Skills4Good’s AI & Data Audit Programs, for example, take a human-rights-based approach to AI. The programs help business leaders assess and improve the algorithms they design and deploy in order to prevent bias and discrimination, and to respect human rights such as the right to equality and non-discrimination. They also enable leaders to ensure that their algorithms are aligned with human rights law, so they can mitigate risks to their businesses and advance social justice in society.
Prioritize A People-Centered Approach
AI is a powerful tool that should augment the human skillset and complement decision making, not replace the human resources team. Humans still have the upper hand when it comes to emotional intelligence: recruiters are better equipped than machines to genuinely connect with an applicant and assess whether a candidate is a good fit for a company.
Unilever has used AI tools to screen entry-level candidates through neuroscience-based games and recorded interviews analyzed by AI technology. Candidates who pass the AI screening go on to an in-person interview, where a hiring manager gives them valuable feedback on how they did in the recruitment process and then decides whether they are the right fit for the job.
Finding The Right Balance
The bias Buolamwini documented in facial recognition technology underscores that human insight is just as important in the recruitment process as it ever was.
In her study, Buolamwini explains that companies must take concrete steps to ensure that AI software is fair, transparent and responsible. By maintaining human oversight when integrating AI into the hiring process, business leaders can build stronger relationships with candidates and employees of all backgrounds, genders, and orientations.
Learn more about how Skills4Good can help your company deploy Responsible AI through its human-rights-based approach to algorithmic impact assessments and data audits. Book a call today!