“Healthy” Machine Learning Models Could Reduce Bias


May 2, 2024

Written by Doug Wallace

This medical news interview highlights the risk of biased training data producing inherently biased medical AI models. It also touches on the ethics of machine learning more broadly and on approaches to minimizing bias.

Biased Input = Biased Output
In an interview discussing the ethics of AI use in medicine, Dr. Marzyeh Ghassemi of MIT’s Department of Electrical Engineering and Computer Science breaks down how biased input data can lead to bias in AI model output. She expressly highlights minority groups and biologic females as being at higher risk of machine learning bias, given the bias present in existing data sets, and emphasizes the importance of an ethical approach to designing AI models. Of further note, she defines the concept of “automation bias”: in early studies, the risk of over-reliance on algorithms persisted even when clinicians were aware that the output could be erroneous.

How will this change my practice?
This article is unlikely to change my current practice, but it raises well-thought-out concerns about the risk of bias in both AI systems and clinicians as we enter the inevitable age of AI models in medicine. We owe it to our patients to support efforts to maintain high-quality, unbiased medical AI content generation going forward.

AI Developers Should Understand the Risks of Deploying Their Clinical Tools, MIT Expert Says. JAMA. 2024 Feb 27;331(8):629-631. doi: 10.1001/jama.2023.22981. PMID: 38324320.
