
Addressing AI Bias in Healthcare Algorithms


Baltimore software engineer Avery Smith says that without inclusive data, the algorithms driving new skin cancer detection software in dermatology can be dangerous for people of color. Smith has spent years studying how Black people with melanoma have been underserved in healthcare. The research hit close to home: his wife, LaToya, an African American woman, died of melanoma in 2011.

Statistics show that while melanoma is most common in fair-skinned people, African Americans are more likely to die from it. In his paper, Smith warns that relying on machine learning for skin cancer screening can worsen racial disparities, because the algorithms are trained on data drawn primarily from fair-skinned populations in the United States, Australia, and Europe.

“Though the disease is more prevalent among Caucasian patients, that does not mean that patients of darker skin types should be excluded from potential benefits of early detection through machine learning,” Smith said.

Artificial intelligence (AI) is showing great promise in healthcare, with doctors turning to the technology to provide reliable, accurate diagnoses, automate administrative work, and improve patient care. The technology, however, is also vulnerable to the social, economic, and systemic biases that have been entrenched in society for generations. Without careful implementation, AI can “learn” the wrong values and reinforce bias in every decision.

Here’s how stakeholders can ensure that AI reduces health disparities rather than exacerbating them.

Prioritize Algorithmic Fairness

Human rights to equality and non-discrimination should be key elements in designing, validating, and implementing medical AI systems. Algorithms need to be trained on data sets drawn from diverse populations so that they do not perform worse for underrepresented groups. Stakeholders must step up efforts to build the infrastructure needed to deliver the large, diverse data sets required to train medical algorithms, and ensure that women and minority groups are not underrepresented as study participants.
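
As a small illustration of the kind of check this implies, the Python sketch below summarizes how well each group is represented in a training data set so that gaps can be spotted before a model is trained. The column names (such as "sex" and "ethnicity") are hypothetical assumptions for the example, not part of any product described in this article.

```python
# Hypothetical sketch: summarize group representation in a training dataset.
# Column names ("sex", "ethnicity") are illustrative assumptions, not a real schema.
import pandas as pd

def representation_report(df: pd.DataFrame, columns: list[str]) -> dict:
    """Share of training records per group, for each demographic column of interest."""
    return {col: df[col].value_counts(normalize=True).sort_values() for col in columns}

# Usage with a hypothetical training table:
# train_df = pd.read_csv("training_records.csv")
# for col, shares in representation_report(train_df, ["sex", "ethnicity"]).items():
#     print(col, shares.to_dict())  # very small shares flag groups at risk of being underserved
```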

Building diverse programming teams also helps to reduce bias and scale solutions for underrepresented populations. AI designers and testers with diverse backgrounds bring different perspectives to a project, and that intellectual diversity improves the likelihood of detecting and correcting bias while boosting creativity and productivity.

Google’s AI breast cancer screening tool, for example, shows how fairness and inclusivity can be incorporated into every aspect of the development process. The algorithm was trained on mammogram images from about 90,000 female patients in the US and UK, and it performed better than human radiologists.
Compared with radiologists, the AI system reduced missed cases of breast cancer in the US by 9.4% and in the UK by 2.7%. The system also reduced false-positive readings in the US by 5.7% and in the UK by 1.2%.


Test For Bias

Frequently auditing AI algorithms and testing them for bias helps ensure accuracy and fairness across groups defined by ethnicity, gender, age, and health insurance status. Leaders must go the extra mile to integrate bias testing into the quality assurance processes for AI algorithms so that hidden bias does not have an unintended impact on certain populations.
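
One way to make such bias testing concrete is to compute a model's error rates separately for each group and flag large gaps. The sketch below does this under assumed column names ("label", "pred", "group") and an illustrative gap threshold; a production audit would use the organization's own schema and clinically validated thresholds.

```python
# Hypothetical sketch: per-group false-negative rates for a binary screening model.
# Column names ("label", "pred", "group") and the 5-point gap threshold are assumptions.
import pandas as pd

def false_negative_rate_by_group(df: pd.DataFrame) -> pd.Series:
    """Rate of missed cases (pred == 0) among true positives (label == 1), per group."""
    positives = df[df["label"] == 1].copy()
    positives["missed"] = positives["pred"].eq(0)
    return positives.groupby("group")["missed"].mean()

def audit_flags_gap(df: pd.DataFrame, max_gap: float = 0.05) -> bool:
    """True if the worst- and best-served groups differ by more than max_gap."""
    fnr = false_negative_rate_by_group(df)
    return (fnr.max() - fnr.min()) > max_gap

# Usage with a held-out evaluation table containing true labels, predictions, and group:
# eval_df = pd.DataFrame({"label": [...], "pred": [...], "group": [...]})
# if audit_flags_gap(eval_df):
#     print("Bias audit flag: investigate before deployment.")
```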

Skills4Good’s Responsible AI & Data Audit Programs help companies develop a framework for managing risks and controls for AI solutions. Skills4Good facilitates the development of effective protection mechanisms through a human rights impact assessment approach and meaningful algorithmic auditing.

Reducing bias in AI-based healthcare systems also requires close consultation and cooperation among doctors, researchers, manufacturers, developers, and funders. The Alliance for Artificial Intelligence in Healthcare, a nonprofit organization founded in December 2018, brings together developers, device manufacturers, researchers, and other professionals to advance the safe and fair use of AI in medicine. By working with a wide array of participants across the healthcare spectrum, the alliance is able to establish responsible, ethical, and reasonable standards for the development and implementation of AI in the industry.

Keep Humans In The Loop

Empathy cannot be replaced by any healthcare algorithm. AI must be used to support human decision-making, not replace it. Oversight of AI-system quality helps ensure that biased algorithms do not lead to poor diagnoses and care recommendations.
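
A common way to keep humans in the loop, shown in the hypothetical sketch below, is to route low-confidence model outputs to a clinician rather than acting on them automatically. The 0.85 confidence threshold and the function names are assumptions for illustration, not a prescribed design.

```python
# Hypothetical sketch: route low-confidence predictions to a clinician instead of
# acting on them automatically. The 0.85 threshold is an illustrative assumption.
def triage(prediction: str, confidence: float, review_queue: list) -> str:
    """Accept high-confidence predictions; queue everything else for clinician review."""
    if confidence >= 0.85:
        return prediction
    review_queue.append((prediction, confidence))  # clinician makes the final call
    return "refer_to_clinician"

# Usage:
# queue = []
# decision = triage("melanoma_suspected", 0.62, queue)
# print(decision, queue)  # low confidence -> "refer_to_clinician", queued for review
```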

Skills4Good’s Responsible AI Courses can help companies train their employees to study algorithms they use, identify bias, and correct problems they discover.

As Avery Smith’s story and paper point out, AI bias is a serious issue that must be addressed to prevent health risks and even death. To keep AI from amplifying existing inequalities, stakeholders must understand how bias can creep into algorithms and how it can be mitigated through "Privacy by Design" and "Human Rights by Design" approaches to designing and deploying AI.

Find out more about how Skills4Good’s Responsible AI and Data Audit Programs, which include Privacy Impact Assessments and Human Rights Impact Assessments, can guide providers in evaluating and improving the AI systems they will encounter in the evolving healthcare environment.