Research Focus: AI Ethics in Healthcare

Lead Author: Aishah Khan and the ADAPT in SC AI Ethics and Acceptance Team

Artificial Intelligence (AI) is a technology that can simulate human intelligence and decision-making processes. While this field has seen a rapid rise in popularity and an exponential rate of technical evolution over the past decade, the sentiment behind “self-sufficient” automation has existed for thousands of years, dating back to ancient philosophical origins. The use of AI in medicine began in the mid-twentieth century, when researchers used a series of explicit if-then rules to recommend antibiotic prescriptions (e.g., MYCIN) and built other clinical decision support tools. Over the past several decades, this has advanced to more sophisticated algorithms that apply machine learning (ML), deep learning (DL), and natural language processing (NLP), training models on large data sets to accomplish increasingly complex tasks.

In the healthcare context, the potential applications of AI are vast. From clinical outcome prediction to autonomous robotic surgery, AI has already begun to revolutionize medical care in countless ways and will continue to do so. However, there are a series of ethical considerations that researchers, developers, medical practitioners, and the general public must weigh before developing and adopting AI in applied healthcare settings. Some of these considerations are outlined below.

Informed Consent and Data Privacy. Training ML, DL, and NLP models requires large data sets. In healthcare, these data often include sensitive patient information. The ethical implications of using these data must be examined, and transparent reporting is needed on how, under what circumstances, and by whom they are used. Patients need to be aware of when and how their data are being used for AI product development. Additionally, once informed consent has been obtained, researchers and developers must follow the corresponding rules to remain compliant with legal standards of confidentiality. One common method is to remove all personally identifiable information from a data set (i.e., anonymization) before putting it into use, as sketched below.
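As a simplified illustration, the Python sketch below drops direct identifier columns from a hypothetical patient data set; the column names and records are invented, not drawn from any real system.

```python
# A minimal sketch of anonymization: dropping direct identifiers from
# a patient data set before it is used for model development. All
# column names and records here are hypothetical.
import pandas as pd

# Hypothetical raw records containing direct identifiers.
records = pd.DataFrame({
    "patient_name": ["A. Smith", "B. Jones"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "age": [54, 61],
    "diagnosis_code": ["E11.9", "I10"],
})

# Columns that directly identify a patient and must be removed.
IDENTIFIER_COLUMNS = ["patient_name", "ssn"]

def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the data with direct identifiers dropped."""
    return df.drop(columns=IDENTIFIER_COLUMNS)

training_data = anonymize(records)
print(training_data)  # age and diagnosis_code remain; identifiers do not
```

Note that dropping direct identifiers is only a first step: combinations of quasi-identifiers (such as age, ZIP code, and rare diagnoses) can still re-identify patients, so real de-identification pipelines must go further.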

Transparency and Explainability. To build trust in and buy-in to AI technology, there must be transparency and explainability at every step of the process, from design and development to application in practical settings. A common issue with AI is that many algorithms are a “black box”: we can see what is input into the model and what is generated as output, but we do not know the process by which the model derived its conclusion. This issue is especially relevant in medical settings, where AI may be used for high-stakes decisions like diagnosing patients. A patient may not accept a diagnosis from an AI model if there is no explanation or transparency as to how that decision was made. Likewise, a doctor might mistrust the diagnosis without sufficient transparency into the underlying reasoning.
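One family of techniques for partially opening the black box is post-hoc explanation. As a simplified illustration, the sketch below uses scikit-learn's permutation feature importance on a synthetic data set to report which inputs most influence a trained model; the feature names are hypothetical, and this is only one of many explanation methods, not a complete answer to the transparency problem.

```python
# A minimal sketch of post-hoc explainability: permutation feature
# importance measures how much each input influences a trained model.
# The data set and feature names are synthetic/hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in for de-identified clinical features and outcomes.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "blood_pressure", "bmi", "glucose"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops;
# a larger drop means the model leans more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Reporting such importance scores alongside a prediction gives patients and clinicians at least a partial account of what drove the model's conclusion.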

Bias. While there is a common notion that AI algorithms make objective decisions, the opposite is often the case. Since people develop and train these models, human biases and stigma, as well as biases in the data sets used, can carry over into their products. On the developers’ end, it is important to recognize and acknowledge one’s own biases in order to put measures in place that exclude them, or at least make it transparent where they can enter the process. Awareness of these biases is essential to avoid overlooking issues that may exist in the resulting devices.

A related risk of implementing machine learning algorithms in healthcare is that bias in the data set used to train a model will be reflected in the model’s output. If data sets that have historically underrepresented the broader population are used to train AI models, then the products built on those models will have discrimination built in from the outset. Given the large existing disparities between patients of different demographics within the United States and elsewhere, developers and medical practitioners should use representative data sets to counter potential underlying biases in AI model development and deployment. While fairness regularization can be applied during training to attempt to de-bias models, another common approach is to make the bias of an existing model visible, as in the sketch below. Collecting data sets that are diverse and accurately representative of the broader population should be prioritized.
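As a simplified illustration of making bias visible, the sketch below compares a model’s positive-prediction rate across two demographic groups, a basic demographic parity check; the predictions and group labels are synthetic, and a real audit would use actual model outputs and a richer set of fairness metrics.

```python
# A minimal sketch of a bias audit: compare how often a model flags
# patients in each demographic group (demographic parity). The
# predictions and group labels here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model outputs: 1 = flagged for intervention.
predictions = rng.integers(0, 2, size=1000)
# Hypothetical demographic group label for each patient.
groups = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])

# Positive-prediction rate within each group.
rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
print("Positive-prediction rate by group:", rates)

# A large gap between groups signals disparate treatment and warrants
# auditing both the training data and the model itself.
disparity = abs(rates["A"] - rates["B"])
print(f"Demographic parity difference: {disparity:.3f}")
```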

Access and Allocation. While AI solutions can positively impact medical care and health outcomes in many ways, it is important to consider whether and how patients can access and use the technology, and how it will be allocated across populations. Existing healthcare inequities can be exacerbated if AI algorithms or devices are not effective across broad populations. Ensuring that biomedical technology is scientifically valid, affordable, and user-friendly can help AI reach the populations that stand to benefit most from its implementation in their care.

These are just some of the many ethical considerations that must be addressed when developing and using AI in healthcare settings. For a more in-depth examination of the ethics and governance of AI in healthcare, consider taking a course developed by the World Health Organization (WHO). It is free and readily accessible to anyone who wishes to learn more about the subject. This link will take you directly to the course: https://openwho.org/courses/ethics-ai

As part of the ADAPT in SC Project, we on the AI Ethics and Acceptance team are doing our part to aid in the development of trustworthy AI. To this end, we are working on a) training the entire ADAPT in SC team on the ethical use of AI in healthcare by selecting and disseminating training materials and other relevant resources (like the one provided above), b) creating a biomedical ethics framework and identifying critical ethical decision points for all stakeholders in the process, and c) assessing stakeholder acceptance of, and willingness to use, AI-enabled biomedical devices in order to address barriers to adoption.

We are at the beginning of a new era in medicine and healthcare, one in which AI will become increasingly immersed in daily clinical practice and embedded in a plethora of biomedical devices. It is our responsibility to see that this immersion meets the highest possible ethical standards, ensuring safety and continued benefit for all stakeholders involved.