AI in healthcare can do amazing things. It can help doctors diagnose their patients, streamline patient care, and help people receive the best care possible. But what happens when AI in healthcare goes awry?
Today, our guest is Dr. Muhammad Aurangzeb Ahmad. Muhammad discusses the promise of AI in healthcare, but also reminds us of its limitations. Sometimes, AI can be biased, especially against certain minority populations and women. Sometimes, we humans may not understand why AI makes the decisions it does. And other times, AI can be just plain wrong.
But by knowing the limitations of AI in healthcare, we can also improve how it works, find ways to combat problems and biases, and help doctors and AI systems work together.
Muhammad is an Affiliate Assistant Professor in the Department of Computer Science at the University of Washington and a Research Scientist at KenSci, an AI in healthcare company based in Seattle. His research focuses on accountability of AI, AI in healthcare, and AI from cross-cultural and ethical perspectives. He has a PhD in Computer Science from the University of Minnesota.
If you are a patron of the podcast, be sure to check the Patreon page throughout this month for bonus content from this episode.
The background music you heard consists of clips from:
eighteen pieces (soda) by Soda (c) copyright 2008 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/soda/16738
SkyDub by Psykick (c) copyright 2016 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/Psykick/52937
Start To Grow (cdk Mix) by Analog By Nature (c) copyright 2013 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/cdk/43815 Ft: Jeris
Reusenoise (DNB Mix) by spinningmerkaba (c) copyright 2017 Licensed under a Creative Commons Attribution (3.0) license. http://dig.ccmixter.org/files/jlbrock44/56531