Overcoming AI Bias: Understanding, Identifying and Mitigating Algorithmic Bias in Healthcare

With artificial intelligence (AI) rapidly unlocking new possibilities in healthcare, we take a look at the human- and data-driven biases that are unintentionally integrated into AI technologies. Solving AI bias will be critical to earning the trust required from patients, clinicians, regulators and the general public to drive AI adoption and application in healthcare.

But to understand how the industry can begin to identify, mitigate and prevent AI bias, we need to first recognize what it is and where it comes from.

What is AI bias?

AI bias is defined as “the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation and amplifies inequities in health systems” [1]. AI bias is most often associated with data generalizability, which arises when the data used to train an algorithm is not representative and the outputs therefore cannot be generalized confidently or safely. But there are several other ways that bias can be introduced and encoded in the algorithms that drive AI technologies.

AI bias already showing up in healthcare

Several instances of algorithmic bias have already been shown to have direct and harmful impacts on the health and safety of patients:

  • A widely used cardiovascular risk scoring algorithm was shown to be much less accurate when applied to African American patients, likely because approximately 80% of its training data represented Caucasian patients [2].
  • AI models that predict cardiovascular disease and cardiac events are much less accurate for female patients when trained on primarily male data sets [3].
  • Chest X-ray-reading algorithms trained primarily on male patient data were significantly less accurate when applied to female patients [4].
  • Algorithms for detecting skin cancer, trained largely on data from light-skinned individuals, are much less accurate at detecting skin cancer in patients with darker skin [5].
  • In the U.S., a widely used population health algorithm produced racial disparities because it predicted healthcare costs rather than illness [6].

The growing body of evidence of AI bias is now getting the attention of legislators and regulators. In the U.S., support for the Algorithmic Accountability Act, which would require companies to assess their AI systems for risks of unfair, biased or discriminatory outputs, is growing. Similar regulations have already been proposed or are in development across Europe, as well as in China.

What are the sources of AI bias?

Human biases built into AI design

The fundamental imperfection of AI lies in its human inception. Bias enters at the very beginning, when the humans developing an algorithm choose which problem to solve based on their own perceptions of priority.

In radiotherapy, this built-in human bias takes the shape of which indications get focus and funding for the development of AI tools, as well as which treatments are the focus of research on AI support. The indications and treatments that AI developers prioritize do not necessarily reflect the actual incidence, urgency or potential value of these indications and treatments.

Another human-driven bias in radiotherapy concerns the development of AI-supported decision-making tools that essentially answer the question, “Which treatment is right for this patient?” Factors like cost/affordability, quality of life, or loss of function may be weighed differently by men vs. women, old vs. young, people of different socioeconomic backgrounds, etc. In many cases, those creating the algorithms are not fully accounting for these variables — and are instead making definitive value judgments that code their own biases into the algorithm.
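
This design choice can be made concrete with a toy example. The sketch below, with entirely invented numbers and treatment names, shows how the weights a developer hard-codes for survival, quality of life and affordability determine which treatment an algorithm recommends; different stakeholders would choose different weights.

```python
# A toy illustration (all numbers invented) of how hard-coded preference
# weights become value judgments inside a treatment-recommendation score.
treatments = {
    "Treatment A": {"survival": 0.9, "quality_of_life": 0.5, "affordability": 0.3},
    "Treatment B": {"survival": 0.7, "quality_of_life": 0.9, "affordability": 0.8},
}

def recommend(weights: dict) -> str:
    """Return the treatment with the highest weighted score."""
    def score(name: str) -> float:
        return sum(weights[k] * v for k, v in treatments[name].items())
    return max(treatments, key=score)

# A developer who hard-codes survival as the only priority recommends A...
print(recommend({"survival": 1.0, "quality_of_life": 0.0, "affordability": 0.0}))
# ...while a patient weighing quality of life and cost heavily may prefer B.
print(recommend({"survival": 0.4, "quality_of_life": 0.4, "affordability": 0.2}))
```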

The data generalizability problem

More data means smarter AI. There has been tremendous progress toward open data sharing practices in recent years, giving AI developers access to enormous public data sets to train and develop their algorithms. But AI is limited by what it has seen, and it doesn't know what it doesn't know. The problem is that many populations, including several notably vulnerable and historically underserved groups, remain underrepresented in the data sets used to train healthcare AI tools [3]. This underrepresentation spans gender, race, and ethnicity, as well as socioeconomic status and sexual orientation.

Beyond access to underrepresented populations, most healthcare organizations are simply not collecting the breadth of metadata needed to obtain a representative sample. Information on race and ethnicity, socioeconomic status or sexual orientation is often not associated with patient health records, making it impossible to analyze and assemble a representative data set across these important variables.
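
Where that metadata does exist, one practical first step is an audit of how well a training cohort reflects the population a model will serve. The sketch below is a minimal example in Python using pandas; the attribute name, group labels and reference shares are hypothetical placeholders for whatever demographic metadata an organization actually captures.

```python
# A minimal sketch of a representativeness audit: compare the demographic
# make-up of a training cohort against a reference population.
import pandas as pd

def representation_gap(train_df: pd.DataFrame, attribute: str,
                       reference: dict) -> pd.DataFrame:
    """Compare each group's share of the training data to its share
    of the reference population."""
    observed = train_df[attribute].value_counts(normalize=True)
    rows = []
    for group, expected_share in reference.items():
        observed_share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(observed_share, 3),
            "expected_share": expected_share,
            # A ratio below 1.0 flags underrepresentation vs. the population.
            "representation_ratio": round(observed_share / expected_share, 2),
        })
    return pd.DataFrame(rows)

# Hypothetical usage, with census-style shares for the target population:
# cohort = pd.read_csv("cohort.csv")
# print(representation_gap(cohort, "race_ethnicity",
#                          {"White": 0.60, "Black": 0.13,
#                           "Hispanic": 0.19, "Asian": 0.06}))
```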

Biased humans + incomplete data = algorithmic bias

Built-in human biases and data generalizability issues combine to produce algorithmic bias. This kind of algorithmic bias in healthcare technology is particularly hard to see because it typically reinforces longstanding institutional biases. Race, ethnicity, and socioeconomic status already affect health outcomes because of deeply ingrained institutional biases; if an algorithm produces poorer health outcomes among these groups, it is extremely difficult to determine whether the bias comes from the algorithm, from the existing biased factors, or from both. Deep learning algorithms present the greatest potential benefits but also the greatest potential risks, because the “black box” self-learning model makes it extremely difficult to determine how the AI arrives at its output, and thus hard to identify or correct for bias that may develop.
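
One way to surface this kind of hidden disparity is to stratify a model's performance by demographic group instead of reporting a single aggregate score. The following minimal sketch, using synthetic toy data and invented group labels, shows how a per-group accuracy check can expose a gap that an overall accuracy figure would hide.

```python
# A minimal sketch of a subgroup performance check: stratify a model's
# accuracy by a protected attribute to surface gaps an aggregate hides.
import numpy as np

def subgroup_accuracy(y_true: np.ndarray, y_pred: np.ndarray,
                      groups: np.ndarray) -> dict:
    """Return per-group accuracy plus the worst-vs-best gap."""
    scores = {}
    for g in np.unique(groups):
        mask = groups == g
        scores[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    scores["gap"] = max(scores.values()) - min(scores.values())
    return scores

# Toy data: predictions are wrong ~10% of the time for group A but ~30%
# of the time for group B, so the check should report a gap near 0.2.
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
y_true = rng.integers(0, 2, size=1000)
flip = rng.random(1000) < np.where(groups == "A", 0.10, 0.30)
y_pred = np.where(flip, 1 - y_true, y_true)
print(subgroup_accuracy(y_true, y_pred, groups))
```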

Defining a path to mitigate AI bias

Many experts agree that there will always be bias in AI, much as there is bias in all human decision-making. The key will be finding the balance between the potential benefits of AI and its risks, and awareness of the bias problem is essential. Here are three ways to mitigate AI bias:

1. Research and development

Models should be built, and data collected, to be representative of the population they are intended to serve. An inclusive development process should also take a multidisciplinary approach, bringing in statisticians and methodologists who have the tools to understand and address data bias and generalizability challenges, as well as clinicians who understand what that data represents from a patient care perspective. Those developing healthcare AI technologies should also consider bringing in representatives from underrepresented populations to consult on design and development, pointing out potential sources or results of bias.

2. Data collection

The training data set needs to be representative of the population, and specific attention needs to be paid to increasing data representation among historically underserved, underrepresented, and other minority groups. Data sharing remains critical, but data privacy will remain the main barrier to broader open data sharing policies. AI innovators should look to tackle the anonymization and generalizability problems using synthetic data, which is likely to give AI developers increasing access to large, representative data sets in the future. It is also important to consider data-driven approaches to optimizing clinical trial designs, which can reduce socioeconomic disparities during data capture and help improve trial diversity [7].
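
Full synthetic data generation is beyond the scope of this post, but the underlying goal of steering a cohort toward representative proportions can be illustrated with simple stratified resampling. The sketch below assumes a pandas DataFrame with a hypothetical demographic column; it is an illustration of the idea, not a substitute for proper synthetic data or recruitment strategies.

```python
# A minimal sketch of rebalancing a cohort by stratified resampling so
# each group matches a target share of the output data set.
import pandas as pd

def resample_to_target(df: pd.DataFrame, attribute: str,
                       target_shares: dict, n_total: int,
                       seed: int = 0) -> pd.DataFrame:
    """Sample each group so its share of the output matches target_shares."""
    parts = []
    for group, share in target_shares.items():
        pool = df[df[attribute] == group]
        n = int(round(share * n_total))
        # replace=True lets scarce groups be oversampled to their target size.
        parts.append(pool.sample(n=n, replace=len(pool) < n, random_state=seed))
    # Concatenate and shuffle the rebalanced cohort.
    return pd.concat(parts).sample(frac=1.0, random_state=seed)

# Hypothetical usage, with an invented "sex" column:
# balanced = resample_to_target(cohort, "sex", {"F": 0.5, "M": 0.5},
#                               n_total=10_000)
```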

3. Algorithm development & application

Following open science principles, developers must be willing to open up their algorithms and AI technologies to the same level of regulatory and public review as other interventional healthcare technologies. One of the most promising results of an open science approach to algorithm design is the potential for transparent, deterministic algorithms to be applied by providers on smaller, local data sets. In other words, the general AI model can be trained on a broadly representative data set, then given to a specific user or provider to be applied to the patient data that represents the patients they will actually treat.
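
One way this train-globally, adapt-locally pattern can work in practice is to recalibrate a globally trained model on a provider's own patients. The sketch below, using scikit-learn and synthetic stand-in data, applies a simple Platt-style recalibration; the cohort sizes and features are invented for illustration.

```python
# A minimal sketch of the "train globally, adapt locally" pattern: a model
# fitted on a broad cohort is recalibrated on one provider's local data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-ins for a broad multi-site cohort and one provider's local data.
X_global, y_global = rng.normal(size=(5000, 8)), rng.integers(0, 2, 5000)
X_local, y_local = rng.normal(size=(300, 8)), rng.integers(0, 2, 300)

# 1. Global model trained on the broadly representative data set.
global_model = LogisticRegression(max_iter=1000).fit(X_global, y_global)

# 2. Local recalibration: fit a one-feature logistic model on the global
#    model's scores so predicted risks match the local patient mix.
local_scores = global_model.decision_function(X_local).reshape(-1, 1)
calibrator = LogisticRegression().fit(local_scores, y_local)

def local_predict_proba(X: np.ndarray) -> np.ndarray:
    """Risk estimates adapted to the local population."""
    scores = global_model.decision_function(X).reshape(-1, 1)
    return calibrator.predict_proba(scores)[:, 1]

print(local_predict_proba(X_local[:5]))
```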

Even a well-considered algorithm review process rings hollow on claims of transparency if the algorithm is sealed inside a “black box.” Developers can validate the “fairness” of the algorithm up to the point of implementation, but there is little way to monitor or control what happens from that point on as the algorithm “learns” within the black box. This is why regulators, including the FDA, have already begun to indicate that deterministic algorithms and explainable AI, encompassing interpretability, trustability and liability, are the only way to fully vet AI for clinical use.
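
Explainability tooling helps make such vetting concrete. One widely used, model-agnostic technique is permutation importance, which measures how much a model's performance degrades when each input feature is shuffled. The sketch below uses scikit-learn on synthetic data; the feature names are invented for illustration.

```python
# A minimal sketch of model-agnostic explainability via permutation
# importance: features whose shuffling hurts performance most are the
# ones the model actually relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
# The outcome depends on features 0 and 1 only; 2 and 3 are pure noise.
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# The two informative features should show clearly higher importance.
for name, imp in zip(["age", "risk_score", "noise_a", "noise_b"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```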

With great power comes collective responsibility

There is little doubt that AI will come to benefit all of society in different ways, but it is also clear that further work is required to reduce potential harm. Within the healthcare space, we all have a shared interest in working together to harness this power with great care, a challenge that will often require working outside of purely competitive business mindsets.

AI will reduce bias in the long term

Lest it seem that AI is introducing bias into the healthcare space, we must remember that institutional biases already have dire effects across the modern healthcare landscape, in ways both clear and proven and complex and unseen. The real risk with AI is not that it will create new biases, but that it will perpetuate or amplify existing ones.

By following best practices for transparency and inclusivity, and by making the necessary collective commitment to mitigating AI bias, the healthcare industry as a whole can push things in the other direction, fighting back against the bias already implicit within the healthcare system. The biggest source of bias is and always will be the human factor: the implicit and explicit biases that shape human decision-making in healthcare. Managed carefully and with shared responsibility, data-driven algorithms have the powerful potential to significantly mitigate this human-factor bias, making AI a positive force not only for expanding clinical possibilities, but for expanding equitable access and delivery of healthcare across the globe.

Listen to Professor Jean-Emmanuel Bibault's experience of AI in healthcare.

References
  1. Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias: implications for health systems. J Glob Health. 2019 Dec;9(2):010318. doi: 10.7189/jogh.09.020318. PMID: 31788229; PMCID: PMC6875681.
  2. Igoe K (2023) Algorithmic Bias in Health Care Exacerbates Social Inequities — How to Prevent It. Harvard T.H. Chan School of Public Health. https://www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care/
  3. Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in big data and AI for health care: A call for open science. Patterns (N Y). 2021 Oct 8;2(10):100347. doi: 10.1016/j.patter.2021.100347. PMID: 34693373; PMCID: PMC8515002.
  4. Larrazabal AJ, Nieto N, Peterson V, Milone DH, Ferrante E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc Natl Acad Sci U S A. 2020 Jun 9;117(23):12592-12594. doi: 10.1073/pnas.1919012117. Epub 2020 May 26. PMID: 32457147; PMCID: PMC7293650.
  5. Lashbrook, A (2018) AI-Driven Dermatology Could Leave Dark-Skinned Patients Behind. The Atlantic. https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/
  6. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi: 10.1126/science.aax2342.
  7. Flatiron Health (2023) ‘Driving breakthroughs in cancer care: Key takeaways from ASCO 2023’ https://flatiron.com/resources/asco-2023-key-takeaways-from-improved-oncology-clinical-trials-to-ai
