The world today generates an immense amount of data. Companies gather data on our buying habits, our location, and how we spend our time. The sheer volume of this data is too much for human analysts to dig through alone. Instead, they use machine learning algorithms. These algorithms sift through big data to analyze your routines, infer your race and religion, and perhaps even surmise private or sensitive information about you. But is there bias in machine learning algorithms?
Just because machine learning algorithms are “math” does not make them immune to incomplete data sets, incorrect assumptions, and false inferences. So where does bias in machine learning algorithms come from? What can we do about it? And can these algorithms be improved? From the frequent-buyer card at your local grocery store to the Trump Administration’s Extreme Vetting Initiative, we discuss bias in machine learning algorithms and whether we should trust them.
Joining us on the podcast is Dr. Joshua Kroll. Joshua is a computer scientist and Postdoctoral Research Scholar at the School of Information at the University of California, Berkeley. He studies automated decision-making systems, including machine learning, with a focus on fairness, accountability, and transparency.