Come for cognitive dissonance, stay for the existential despair.
AI Bias in Healthcare: Using ImpactPro as a Case Study for Healthcare Practitioners’ Duties to Engage in Anti-Bias Measures – Out Now from the Canadian Journal of Bioethics
My first published academic article came out today! Check out the peer-reviewed AI Bias in Healthcare: Using ImpactPro as a Case Study for Healthcare Practitioners’ Duties to Engage in Anti-Bias Measures at the link below.
I will be working on AI ethics all summer, so expect more of that in the future. In the meantime, there are other AI articles in my archives if you have the inclination!
Dr. Lynne Sargent (they/she) holds a Ph.D. in Applied Philosophy. They specialize in applied ethics relating to technology and vulnerable groups, and have completed projects on subjects such as the ethical implications of assistive technology, social media use in long-term care homes, teaching microethics to healthcare practitioners using poetry, and the duties of healthcare practitioners to mitigate AI bias in diagnosis and care settings.
4 thoughts on “AI Bias in Healthcare: Using ImpactPro as a Case Study for Healthcare Practitioners’ Duties to Engage in Anti-Bias Measures – Out Now from the Canadian Journal of Bioethics”
[…] there can be standards of truth and expertise, even if these systems may be historically flawed and problematically biased I think being aware of the ways in which misinformation can create impairment and lead to flawed […]
[…] written before about what I think are the deeper issues around AI, but today I wanted to come back to that, specifically thinking about Chat GPT and the […]
[…] see this being possible with the current state of the technology. ML algorithms are inherently biased by their data-sets in a way that is different from an artist’s potential influences. An artist’s potential […]
[…] of the ethical questions behind artificial intelligence and machine learning are clear: the issues of bias in datasets, the technically and conceptually difficult concept of fairness, and the existential risks of […]