Ep. 16: The Robotic Moral Code – Programming Ethics into Machines with guest Dr. Don Howard

A robotic moral code may seem like an odd thing to talk about. But there are now real situations in which robots, machines, and artificial intelligence need an ethical framework. From self-driving cars to autonomous weapons systems to robots in healthcare, machines will be called upon to make ethical decisions. Even Google Maps displays a certain degree of morality in the routes it chooses!

How are engineers and programmers approaching this problem? One way is to program in a set of ethical rules that tells the robot what to do in certain situations. This “top-down” approach is easier to implement, but it is limited: there is no way to anticipate every ethical dilemma a robot may encounter. On the other hand, engineers could employ a “bottom-up” approach, in which the robot learns what is moral on its own, much as a child learns while growing up. This approach is more powerful, but it is also unpredictable.
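To make the contrast concrete, here is a minimal, hypothetical sketch of what a top-down rule set might look like in code. Everything in it (the Rule class, the decide function, the example rules) is an illustrative assumption for this post, not a real robotics API or any system Don describes.

```python
# A hypothetical "top-down" moral code: the designer hard-codes a
# priority-ordered rule table, and the robot consults it at decision time.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    description: str
    applies: Callable[[dict], bool]  # does this rule match the situation?
    action: str                      # what the robot should do if it does

# Rules are checked in priority order; the first match wins.
RULES = [
    Rule("never harm a human",
         lambda s: s.get("human_in_path", False),
         "stop"),
    Rule("obey a direct instruction",
         lambda s: s.get("instruction") is not None,
         "follow_instruction"),
    Rule("default: continue the current task",
         lambda s: True,
         "continue"),
]

def decide(situation: dict) -> str:
    for rule in RULES:
        if rule.applies(situation):
            return rule.action
    return "halt"  # unreachable given the catch-all rule; kept as a safeguard

print(decide({"human_in_path": True}))   # -> "stop"
print(decide({"instruction": "fetch"}))  # -> "follow_instruction"
```

The limitation of the top-down approach is visible right in the sketch: any situation the designer did not anticipate falls through to a generic default, which is exactly why no fixed rule table can cover every ethical dilemma a robot may encounter.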

We can then take it a step further. Who determines what is moral? How would we deal with immoral robots? Do robots have the potential to be more moral than humans? Today, we talk to Dr. Don Howard, a professor of philosophy at Notre Dame and a fellow and former director of the Reilly Center for Science, Technology, and Values. Don has spent a lot of time thinking about ethics in robots, and he has some good examples of how robots will learn the new robotic moral code.
