RE: Identifying and Mitigating Bias in Machine Learning in Medicine
We thank Dr. Richardson for raising this important point. We agree that the 3 articles in this series alluded to, but did not discuss in depth, this issue, and we very much agree that it should be a major consideration for implementing machine learning in medicine. Machine learning may perpetuate and exacerbate existing biases, which are latent either in clinical practice or in the data used to develop solutions. Those applying machine learning solutions in clinical contexts must make concerted efforts to identify and mitigate these biases. The U.S. Food and Drug Administration, Health Canada, and the United Kingdom's Medicines and Healthcare products Regulatory Agency jointly released 10 guiding principles for Good Machine Learning Practice for Medical Device Development (https://www.canada.ca/en/health-canada/services/drugs-health-products/me...). Among these is a recommendation that "Clinical Study Participants and Data Sets Are Representative of the Intended Patient Population". This is a good starting point, but efforts to identify and mitigate bias must not be limited to data collection and preparation. Bias mitigation must also occur during model development, model evaluation, and solution design and deployment (https://...
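One concrete form bias assessment can take during model evaluation is disaggregating performance metrics across patient subgroups rather than reporting a single aggregate score. The sketch below is illustrative only: the data are synthetic and the subgroup labels hypothetical, but it shows how a sensitivity (true-positive-rate) gap between groups can be surfaced before deployment.

```python
# Illustrative sketch: auditing a model's sensitivity across patient
# subgroups during evaluation. All data are synthetic; group labels
# and the metric choice are assumptions for demonstration only.

from collections import defaultdict

def true_positive_rate_by_group(y_true, y_pred, groups):
    """Compute sensitivity (TPR) separately for each subgroup."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0})
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 1:  # only positive cases contribute to sensitivity
            if pred == 1:
                counts[g]["tp"] += 1
            else:
                counts[g]["fn"] += 1
    return {
        g: c["tp"] / (c["tp"] + c["fn"])
        for g, c in counts.items()
        if c["tp"] + c["fn"] > 0
    }

# Synthetic evaluation set: true label, model prediction, subgroup.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

rates = true_positive_rate_by_group(y_true, y_pred, groups)
# A large sensitivity gap between subgroups would flag the model
# for further investigation before clinical deployment.
```

In practice such audits would span multiple metrics (sensitivity, specificity, calibration) and clinically meaningful subgroups, but the principle is the same: evaluation must be stratified, not only aggregate.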
Competing Interests: Amol Verma reports receiving a fellowship in Compassion and Artificial Intelligence from AMS Healthcare, in support of the present manuscript. Dr. Verma also reports receiving a Pathfinder Project grant from The Vector Institute (in support of the current work) and is a part-time employee of Ontario Health (which had no role in the work discussed in this paper). Amol Verma and Muhammad Mamdani led the development and implementation of the CHARTwatch early warning system at St. Michael's Hospital, Toronto, Ontario, Canada.
References
- Amol A. Verma, Joshua Murray, Russell Greiner, et al. Implementing machine learning in medicine. CMAJ 2021;193:E1351-E1357.
RE: Machine learning and medicine - three-part series
While machine learning and machine-learned models are technologies that will revolutionize medicine and provide increasing opportunities to improve health outcomes, I was disappointed that none of the articles in the series discussed what is known to be a major, and topical, issue with algorithms and machine learning: they replicate the social biases that exist in the systems they support (Nelson, 2019). If biased data are used, biased algorithms follow. Populations that are over- or under-represented in data will experience continued marginalization and the failure of machine learning. Examples of bias in health abound: a recent article in Science (Obermeyer, 2019) demonstrated that the use of an algorithm resulted in much sicker Black patients being given the same severity score as White patients, and boys are given higher pain scores than girls based on a cultural norm of "stoic" males (Mende-Siedlecki, 2021). Most at risk of continued violence through medical interactions are Indigenous and Black patients, who experience a disproportionate share of health inequities in our systems; these inequities are known and highlighted "In Plain Sight". Part of these inequities are systemic and structural, and part result from anti-Indigenous bias in clinicians, as a recent cross-sectional study in Alberta demonstrated (Roach, 2021). Furthermore, clinicians will be less likely to consider this bias if presented by an artificial intelligence, as there is an implicit trust in the o...
Competing Interests: None declared.
References
- Nelson GS. Bias in artificial intelligence. North Carolina Medical Journal. 2019 Jul 1;80(4):220-2.
- Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019 Oct 25;366(6464):447-53.
- Mende-Siedlecki P, Lin J, Ferron S, Gibbons C, Drain A, Goharzad A. Seeing no pain: Assessing the generalizability of racial bias in pain perception. Emotion. 2021 Mar 4.
- Roach PM, Ruzycki SM, Hernandez S, Carbert A, Holroyd-Leduc J, Ahmed S, Barnabe C. Prevalence and Characteristics of Anti-Indigenous Bias Among Albertan Physicians: A Cross-Sectional Survey. Available at SSRN 3889371.
RE: Implementing machine learning in medicine
We laud the authors for their usable framework and propose that it will be useful in guiding the use of machine learning in paramedicine. Paramedic clinical decision-making is well positioned to benefit from machine learning: Canadian paramedic services hold exceedingly large data repositories. These repositories are not just large, but rich in structured patient data features (clinical, non-clinical, administrative) such as primary complaint, medications, detailed physical assessments, vital signs (including cardiac monitoring), physiological scores, paramedic interventions, and time stamps that encode a sequence of events. These are the ideal preconditions for constructing accurate prediction models. Given that paramedics need to make accurate clinical decisions when patient presentations are complex, machine learning algorithms could inform point-of-care treatment and more appropriate non-ED transport destinations. To test the accuracy of machine learning algorithms in predicting future patient outcomes in the prehospital field, integration of paramedic and hospital emergency department (ED) data is required. Assembling and housing integrated data is a barrier, but feasible if paramedic services partner with data scientists and data centres.
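The integration step described above can be thought of as a record linkage: joining prehospital features to ED outcomes on a shared encounter identifier, so that each transport yields a (features, label) pair for supervised learning. The sketch below is purely illustrative; the field names and record structures are hypothetical, not a real paramedic or hospital schema.

```python
# Hypothetical sketch of linking paramedic records to ED outcomes.
# Field names ("encounter_id", "admitted", etc.) are illustrative
# assumptions, not an actual data standard.

paramedic_records = [
    {"encounter_id": 101, "primary_complaint": "chest pain",
     "heart_rate": 118, "systolic_bp": 92},
    {"encounter_id": 102, "primary_complaint": "fall",
     "heart_rate": 84, "systolic_bp": 135},
]

ed_outcomes = {
    101: {"admitted": True},
    102: {"admitted": False},
}

# Join prehospital features to ED outcomes on the shared encounter ID,
# producing (features, label) pairs suitable for model training.
training_rows = [
    ({k: v for k, v in rec.items() if k != "encounter_id"},
     ed_outcomes[rec["encounter_id"]]["admitted"])
    for rec in paramedic_records
    if rec["encounter_id"] in ed_outcomes
]
```

In a real deployment this join would require privacy-preserving linkage across institutions, which is precisely the data-governance barrier the letter identifies.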
For example, machine learning could play a fundamental role in developing and implementing new care models for paramedics. In April 2021, the Ontario government launched new pilot projects to expand paramedi...
Competing Interests: None declared.
References
- Verma AA, Murray J, Greiner R, Cohen JP, Shojania KG, Ghassemi M, Straus SE, Pou-Prom C, Mamdani M. Implementing machine learning in medicine. CMAJ. 2021 Aug 30;193(34):E1351-E1357. DOI: https://doi.org/10.1503/cmaj.202434
RE: Implementing machine learning in medicine
Verma and colleagues address the potential utility of machine intelligence in patient care from a constructive and optimistic perspective. However, from a public health perspective, the progressive integration of AI into the practice of medicine and the workings of society at large should also be read as a cautionary tale.
There is the issue of job displacement from humans to machines. At a societal level, this may affect the health of many as they lose the ability to work, harming both their economic well-being and their personal happiness. The health sciences are speculated to be a sector particularly vulnerable to the replacement of humans by machines.
The advancement of AI proceeds predominantly in secret, directed by governments (notably with a defense agenda), by corporations with a competitive, for-profit orientation, and possibly by organizations that mean us harm.
The evolution of artificial intelligence may advance faster than linearly, possibly at an exponential rate. This will include the move from narrowly focused AI to AI with more general capability. In this progression, general AI may focus on the further enhancement of its own intelligence, advancing into the realm of "superintelligence," i.e., an ability much greater than that of humans.
There are many unknowns regarding the future of AIs - how independent of us they may become and what the impact of their evolution ma...
Competing Interests: None declared.
References
- Ford M. Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books; 2015. 317 p.
- Barrat J. Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: Thomas Dunne Books; 2013. 322 p.
- Bostrom N. Superintelligence: Paths, Dangers, Strategies. New York: Oxford University Press; 2014. 352 p.