Navigating Algorithmic Bias in Healthcare AI: The Imperative for Explainable AI Models

This piece has been authored by Sanjana S, Symbiosis Law School, Hyderabad

Introduction

In the ever-advancing landscape of healthcare, Artificial Intelligence (AI) stands as a transformative force, promising to enhance diagnostic accuracy, treatment recommendations, and patient care. As per Statista's data, the AI healthcare sector, valued at $11 billion in 2021, is anticipated to surge significantly, reaching a staggering $187 billion by 2030. Healthcare AI systems thus have the potential to revolutionize the medical field. However, this technological leap forward is not without its challenges, and one of the most pressing is algorithmic bias. The impact of non-inclusive training data on diagnostic accuracy cannot be overstated: the omission of diverse representation within training datasets can produce significant disparities in diagnostic accuracy, particularly across racial groups. This not only compromises the effectiveness of medical algorithms but also risks exacerbating existing healthcare inequalities.

This article will first undertake a comprehensive exploration of the origins and mechanisms through which algorithmic biases infiltrate healthcare AI systems. These origins encompass the absence of inclusive data, systemic inequities, and the opaqueness inherent in black-box AI models. Subsequently, the article discusses the ramifications of biased healthcare AI within the health sector, from both a social and a legal perspective. Finally, it argues for the proactive adoption of industry best practices, specifically explainable AI and the use of synthetic data, as a strategic intervention to alleviate this pervasive issue within the healthcare sector.

Examining the causes and mechanisms underlying algorithmic bias in healthcare AI

Algorithmic bias in healthcare AI is not a hypothetical concept; it is a very real and pervasive issue that affects individuals' daily lives. Simply put, algorithmic bias in healthcare AI refers to the prevalence of unjust and discriminatory consequences in medical decisions made by AI systems. It arises when algorithms, frequently trained on biased data, show inequalities in predicting or diagnosing health issues across demographic groups. Bias, whether deliberate or accidental, can infiltrate AI algorithms, resulting in inaccurate diagnoses, inequities in treatment, and potentially life-altering repercussions. Compounding this issue, black-box AI models exacerbate the situation by failing to offer explanations for the outcomes or diagnoses they generate.

To provide additional clarity, consider an example of algorithmic bias in healthcare AI. In October 2019, an investigation revealed that an algorithm deployed across US hospitals to assess the likelihood that patients would require additional medical attention, applied to over 200 million individuals, exhibited a significant bias in favor of white patients over black patients in estimating medical needs. Although the algorithm did not explicitly consider race as a variable, it relied heavily on a factor closely associated with race: medical cost history. The underlying assumption was that healthcare expenses comprehensively reflect an individual's medical needs. Notably, owing to various factors (such as black patients receiving lower-quality care or lacking comprehensive health insurance coverage), black patients tended to have lower healthcare costs than white patients with similar medical conditions.
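
To make this mechanism concrete, the following is a minimal, hypothetical sketch in Python of how a cost-based proxy label can disadvantage a group that incurs lower costs at the same level of medical need. Every number and group label is invented for illustration; this is not the actual algorithm from the investigation.

```python
# A minimal, hypothetical simulation of proxy-label bias: cost stands in
# for medical need, but group B incurs lower costs at the same need level.
# All numbers are invented for illustration; this is not the real model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.choice(["A", "B"], size=n)           # two demographic groups
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true (unobserved) medical need

# Assumption: group B incurs ~30% lower cost at equal need
# (e.g., access barriers), so cost is a biased proxy for need.
cost = need * np.where(group == "B", 0.7, 1.0) + rng.normal(0.0, 0.1, n)

# A program that ranks patients by cost and flags the top decile for
# extra care will systematically under-flag group B.
flagged = cost >= np.quantile(cost, 0.9)
for g in ("A", "B"):
    in_g = group == g
    print(f"group {g}: share flagged = {flagged[in_g].mean():.1%}, "
          f"mean need of flagged = {need[flagged & in_g].mean():.2f}")
# Group B is flagged less often, and only its sickest members are flagged,
# mirroring the bias the 2019 investigation uncovered.
```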

Fortunately, following this scrutiny, researchers collaborated with Optum and reduced the bias by 80%. This intervention was crucial in preventing the perpetuation of severe discriminatory biases within the AI system. Had the bias gone unaddressed, black patients would have been less likely to be recommended more comprehensive treatment programs, additional health resources, and increased care from trained providers.

Systemic Inequities in Training Datasets for Artificial Intelligence Models

Algorithmic bias in healthcare AI stems from various underlying factors. For one, there are the inherent challenges posed by systemic inequities embedded in societies and health systems. The lack of contextual specificity in health systems further contributes to algorithmic bias: the varying designs, objectives, and diverse populations served by different health systems make it difficult to develop a universally applicable AI model. Moreover, insufficient data collected for underrepresented socio-economic groups produces imbalanced machine learning datasets, hindering accurate predictions for those groups. For example, Manipal Hospital's collaboration with IBM Watson for Oncology, intended to assist healthcare professionals in diagnosing and treating seven specific cancer types, revealed that Watson's dataset lacked diversity. This led to a notable bias towards U.S. patients and their healthcare standards, potentially limiting the system's effectiveness for a global and diverse set of cancer patients, and it has hindered physicians in exploring personalized cancer care options for patients outside the United States.
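
The kind of skew described above can be surfaced with a simple representation audit before training. The sketch below, with entirely hypothetical counts, regions, and target shares, compares group proportions in a training dataset against the population a deployed model would serve:

```python
# A minimal sketch of a representation audit: compare group shares in a
# training dataset against the population the deployed model will serve.
# Counts, regions, and target shares below are hypothetical.
from collections import Counter

training_groups = (["US"] * 8200 + ["India"] * 450 +
                   ["Brazil"] * 200 + ["Nigeria"] * 150)
served_population_share = {"US": 0.05, "India": 0.40,
                           "Brazil": 0.25, "Nigeria": 0.30}

counts = Counter(training_groups)
total = sum(counts.values())
for region, target in served_population_share.items():
    share = counts[region] / total
    flag = "  <-- underrepresented" if share < 0.5 * target else ""
    print(f"{region:>8}: train {share:6.1%} vs served {target:6.1%}{flag}")
```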

In another instance, Nicholson Price, a scholar affiliated with the University of Michigan's Institute for Healthcare Policy and Innovation, contends in his paper titled "Exclusion Cycles: Reinforcing Disparities in Medicine" that entrenched biases against minoritized populations contribute to self-reinforcing cycles of exclusion in healthcare[1]. The central argument concerns the cyclical dynamics of research participation and recruitment, which extend to the realm of AI: in the majority of research studies, minorities are frequently overlooked or excluded.

Such insufficient data can also lead to inaccurate diagnoses. A notable example involves a study that found racial disparities in skin cancer diagnoses: algorithms trained on datasets predominantly composed of images from lighter-skinned individuals demonstrated significantly lower accuracy in detecting skin cancer in images of darker-skinned patients. This highlights how biases in training data can produce disparities in diagnostic accuracy across racial groups, potentially exacerbating healthcare inequalities. Moreover, the absence of explanatory mechanisms in these systems, attributable to the use of black-box AI models, prevents medical professionals from understanding the reasons behind such erroneous outputs.
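
The following toy experiment, using synthetic data and an invented image-derived feature, sketches how such an accuracy gap can emerge when one group dominates the training set; it illustrates the dynamic rather than reproducing the cited study:

```python
# Minimal sketch: group imbalance in training data yields unequal
# diagnostic accuracy. Data, feature, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, threshold):
    # One image-derived feature; the feature-to-diagnosis threshold
    # differs between groups (a stand-in for skin-tone effects on imaging).
    x = rng.normal(0, 1, (n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training set: 95% group A (threshold 0), only 5% group B (threshold 1).
xa, ya = make_group(9500, 0.0)
xb, yb = make_group(500, 1.0)
model = LogisticRegression().fit(np.vstack([xa, xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets, one per group.
for name, thr in [("A", 0.0), ("B", 1.0)]:
    xt, yt = make_group(5000, thr)
    print(f"group {name} accuracy: {model.score(xt, yt):.2f}")
# Typical result: group A scores near-perfect while group B scores
# noticeably lower, because the learned boundary fits the majority group.
```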

Lastly, historical healthcare biases lead to inadequate recruitment of minority groups for research studies. This insufficient engagement perpetuates the perception that such patients are less interested in research, which in turn discourages future recruitment, creating a self-reinforcing cycle of exclusion. As a result, these exclusionary practices in data collection produce insufficient and biased training datasets for AI systems, perpetuating discriminatory patterns in the predictions, classifications, and recommendations made by the AI and reinforcing existing biases in clinical care.

Black-box ML/DL Models

A black-box deep learning system can be described as an artificial intelligence model whose inputs and internal operations remain concealed from users and relevant stakeholders. These models function opaquely, arriving at medical conclusions or decisions without offering explicit insight into the underlying processes or reasoning that led to those outcomes.

Figure 1: Interpretable ML/DL Models vs. Black Box ML/DL Models. Source: Aaron Hui, ‘Ethical Challenges of Artificial Intelligence in Health Care: A Narrative Review’.

Black-box models, characterized by their opacity, make it difficult to identify and rectify biases, particularly those producing a higher frequency of errors among patients from underrepresented or marginalized groups. This phenomenon, often termed “uncertainty bias,” can perpetuate existing healthcare inequities and worsen outcomes for vulnerable patients. Avoiding bias is difficult precisely because black-box models are opaque and training data are imperfect, which underscores the importance of representativeness, richness, and accurate labeling in the training process.
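
To illustrate the contrast drawn in Figure 1, here is a minimal sketch on synthetic data with hypothetical clinical features: a linear model exposes per-feature weights a clinician can inspect, whereas an ensemble offers only a bare prediction. It is an illustration of the distinction, not of any specific clinical system.

```python
# Minimal sketch contrasting an interpretable model with a black-box one
# on the same synthetic data. Feature meanings are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))                      # e.g. age, BP, biomarker
y = (1.5 * X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)

interpretable = LogisticRegression().fit(X, y)
black_box = RandomForestClassifier(n_estimators=100).fit(X, y)

print("logistic coefficients:", interpretable.coef_[0].round(2))
# -> weights roughly proportional to [1.5, -0.5, 0.0]: the reasoning
#    behind each prediction is legible to a human reviewer.
print("random forest prediction:", black_box.predict(X[:1]))
# -> a bare class label; the 100-tree vote behind it offers no
#    comparably direct, human-readable rationale.
```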

Legal Implications of AI Implementation in Healthcare

The intersection of artificial intelligence and healthcare introduces a critical dimension of legal scrutiny, particularly concerning the potential misdiagnoses stemming from biased AI systems. As these systems increasingly play a role in clinical decision-making, the legal landscape will grapple with the ramifications of erroneous outcomes, addressing questions of accountability, patient rights, and the ethical implications of algorithmic bias in the healthcare domain.

As indicated in the Draft National Strategy for Artificial Intelligence, India actively welcomes AI integration in healthcare. Acknowledging the potential to tackle enduring healthcare challenges, the government has collaborated with technology firms such as Microsoft and with healthcare providers. This commitment is exemplified by initiatives like the diabetic retinopathy early-detection pilot using AI and the International Centre for Transformational Artificial Intelligence (a research centre advancing AI-led solutions in healthcare), showcasing the country’s dedication to harnessing technology for enhanced healthcare outcomes. However, owing to the lack of explicit rules governing AI technology, India confronts legal issues in the AI healthcare industry. Notably, specific AI health technologies, such as Artificial Intelligence Medical Devices (AIMD), fall under the Medical Devices Rules, 2017, categorized as software. While this classification aids the registration and certification process, it falls short of ensuring the safety of these devices. Furthermore, there is currently no provision for addressing medical malpractice claims that may arise from healthcare AI systems.

In the United States, the Biden administration issued a presidential executive order on October 30, 2023, establishing new guidelines and standards for AI safety that also apply to healthcare AI. The order prioritizes protecting privacy, advancing equity and civil rights, advocating for consumers and workers, promoting innovation and competition, and reinforcing American leadership in artificial intelligence, including in healthcare. In the UK, the use of AI in healthcare lacks specific legislation, relying instead on general laws such as the UK Medical Devices Regulations 2002 and the Data Protection Act 2018. This absence of dedicated regulation makes it challenging to address the nuanced implications of AI applications and underscores the need for comprehensive legal frameworks tailored to AI technology, particularly in the health sector.

The absence of robust regulations governing AI in healthcare raises significant concerns. As AI progresses swiftly in this sector, there is an increasing need to establish comprehensive regulations. These regulations should not only address accountability for potential negative impacts but also ensure the development of high-quality, unbiased, and reliable AI systems. Policy frameworks must be carefully crafted to prevent and mitigate biases, fostering an environment where AI technologies contribute positively to healthcare outcomes while upholding ethical standards.

Mitigating Algorithmic Bias in Healthcare AI: Explainable AI and the usage of synthetic data

Explainable AI

Explainable AI (XAI) refers to the ability of artificial intelligence systems to provide explicit and transparent justifications for their decisions. In healthcare, XAI addresses the complexity and opacity of machine learning systems by providing explanations that medical practitioners can understand and rely upon. XAI helps minimize bias in healthcare AI in two ways. First, it provides institutional explanations that answer practitioners’ concerns about the dependability and credibility of AI systems: these explanations demonstrate why a practitioner should trust the system, with the goal of assuaging worries about the organizations supervising medical AI design. By maintaining openness about the system’s assumptions, XAI helps prevent biases and errors from going unaddressed, contributing to a more egalitarian and unbiased healthcare AI field. Second, XAI aids in discovering and correcting biases in AI algorithms through post-hoc explanations, which allow stakeholders to scrutinize the factors that most influence the system’s judgments. These technical explanations are critical for stakeholders involved in specifying or optimizing the model, enabling the detection and correction of model biases.
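
As a concrete illustration of a post-hoc explanation, the sketch below applies permutation importance, one common model-agnostic technique chosen here purely for illustration (the XAI literature describes many others), to a model trained on synthetic data with hypothetical feature names:

```python
# Minimal sketch of a post-hoc explanation via permutation importance.
# Synthetic data; feature names and the cost-proxy leak are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
features = ["age", "blood_pressure", "prior_cost", "biomarker"]
X = rng.normal(size=(2000, 4))
# Suppose the label leaks a cost proxy (column 2), as in the Optum case.
y = (X[:, 2] + 0.3 * X[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda pair: -pair[1]):
    print(f"{name:>15}: {imp:.3f}")
# Seeing 'prior_cost' dominate the ranking would flag exactly the kind of
# proxy reliance that stakeholders need to detect and correct.
```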

Figure 2: A visual comparison flowchart illustrating the distinctions between black-box and explainable artificial intelligence (AI) and their respective impacts on the user experience. The upper branch delineates the functioning of a black-box model, which typically furnishes outcomes only as classes, such as identifying whether an image is COVID or non-COVID. By contrast, the middle and bottom branches depict two explainable AI (XAI) approaches, showcasing alternative methods that offer transparency and insight into the decision-making process. Source: Ahmad Chaddad, ‘Survey of Explainable AI Techniques in Healthcare’.

Synthetic data

Synthetic data refers to artificially generated data that mimics real-world patient information without being derived directly from actual persons. Such artificially produced datasets are designed to retain the statistical features and patterns of genuine patient data, allowing machine learning models to be trained and improved without jeopardizing patient privacy or importing biases from the original data sources. In healthcare AI, synthetic data is especially useful for tackling difficulties such as data scarcity, privacy concerns, and bias mitigation during model construction. By diversifying training datasets, synthetic data can mitigate the impact of skewed or underrepresented patient populations and foster more equitable and accurate AI models.
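
The sketch below illustrates one deliberately simple way to generate such data: fitting a parametric distribution to an underrepresented group’s records and sampling new rows from it. Production systems typically use richer generators (e.g., GANs or copula models); all sizes and values here are hypothetical.

```python
# Minimal sketch of generating synthetic records for an underrepresented
# group by sampling from a distribution fitted to its real records.
# A deliberately simple parametric generator; all data are hypothetical.
import numpy as np

rng = np.random.default_rng(4)

# "Real" training data: 9,500 majority-group rows, only 300 minority rows.
majority = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(9500, 2))
minority = rng.normal(loc=[1.0, -0.5], scale=1.2, size=(300, 2))

# Fit a simple distribution to the minority records and sample new rows
# that preserve its statistical structure without copying any real patient.
mean = minority.mean(axis=0)
cov = np.cov(minority, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=9200)

balanced = np.vstack([majority, minority, synthetic])
print("before:", majority.shape[0], "vs", minority.shape[0])
print("after :", majority.shape[0], "vs",
      minority.shape[0] + synthetic.shape[0])
```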

Conclusion

Artificial Intelligence (AI) is poised to be a revolutionary force in healthcare, promising advancements in diagnosis, treatment recommendations, and patient care. This transformative journey, however, encounters challenges, the most prominent being algorithmic bias. This article has examined the origins and mechanisms of algorithmic bias in healthcare AI and explored its societal, legal, and ethical implications. The absence of robust legislation governing AI in healthcare underscores the necessity for comprehensive frameworks that ensure the development of unbiased, reliable, and high-quality AI systems. Alongside such frameworks, the proactive adoption of industry best practices, in particular Explainable AI models and the integration of synthetic data, offers promising potential to mitigate biases, fostering an environment where AI contributes positively to healthcare outcomes while upholding ethical standards.


[1] The paper also emphasizes how these biases extend beyond Black patients to include other minority groups, such as Native American patients, transgender patients, individuals with certain disabilities, and even women, despite being a numerical majority.

