Inside the Mind of a Robot


by Elizabeth Fernandez

Rule 1

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Rule 2

A robot must obey the orders given it by a human being except where such orders would conflict with the First Law.

Rule 3

A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

So go Isaac Asimov’s “Three Laws of Robotics”, which feature in many of his futuristic stories and govern human-robot interactions. These laws guide the ethical and practical decision-making of robots, and are designed to keep both robots and humans safe.

But even within the constructs of Asimov’s stories, situations are rarely so cut and dried. Loopholes were found, and robots acted in strange, unpredictable, and sometimes dangerous ways. Such a simple set of rules, while seemingly infallible on the surface, led to very real problems.

Understanding how to govern human-robot interactions is no longer solely the realm of science fiction. Robots are increasingly present in our society – they vacuum our floors, they assemble our appliances, and they explore other planets. In the future, they will crop up more and more in law enforcement, in hospitals, and in our homes. These robots don’t even need physical bodies – systems that use artificial intelligence (AI) and machine learning algorithms are already at work in operating systems, sorting through huge amounts of data to draw conclusions about weather patterns and climate change, working on curing cancer, and deciding which ads to show us on Facebook.

Because of this, many people are asking themselves – how can we define a real-world ethical standard that robots and AI systems should abide by? Such rule sets are notoriously difficult to establish. Alan Winfield, a roboticist at Bristol Robotics Laboratory in the UK, showed that even a simple set of ethical laws can lead to some interesting consequences in robots. Winfield and his colleagues devised an experiment in which a robot was given only two rules to follow, in order: 1. protect any humans, and 2. move towards its goal area. As the robot headed towards its goal, it encountered a “human” about to fall into a hole. With only one human in peril, the robot was able to veer temporarily from its goal and push the human out of harm’s way. Once the human was safe, the robot proceeded to its goal once again.

But something very interesting happened when not one but two humans were in danger. Sometimes the robot was fast enough to save both of them. Other times, the robot successfully saved one of the humans, but then realized it was too far away to save the other. In those cases, the robot didn’t even attempt to save the second human, and instead headed towards its goal. But in the most surprising example, the robot froze, unable to decide which of the humans to save. While the robot was frozen by indecision, both humans fell into the hole. (A video of the experiment is available online.) An experiment like this illustrates the difficulty of programming ethical decision-making algorithms into the mind of a robot.
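
To make the two-rule setup concrete, here is a minimal sketch of how such a priority controller might be written. The function names, coordinates, and rescue range are illustrative assumptions, not Winfield’s actual code; the point is only that a naive “protect humans first, then pursue the goal” policy has no tie-breaker when two humans are equally in peril.

```python
# Minimal sketch of a two-rule priority controller (illustrative only; not
# Winfield's implementation). Rule 1: protect any human in peril.
# Rule 2: otherwise, move towards the goal.

RESCUE_RANGE = 5.0  # assumed maximum distance at which a rescue is still possible


def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def choose_action(robot_pos, goal_pos, humans_in_peril):
    if not humans_in_peril:
        return ("move_toward", goal_pos)          # Rule 2: nothing to rescue

    reachable = [h for h in humans_in_peril if distance(robot_pos, h) <= RESCUE_RANGE]
    if len(reachable) == 1:
        return ("push_to_safety", reachable[0])   # Rule 1: one clear choice
    if len(reachable) > 1:
        # Two equally urgent humans and no tie-breaking rule: the controller
        # has nothing to choose between them -- the "frozen by indecision"
        # failure described above.
        return ("dither", None)
    return ("move_toward", goal_pos)              # no one is close enough to help


# One human in peril: the robot detours to save them.
print(choose_action((0, 0), (10, 0), [(2, 2)]))
# Two humans in peril at comparable distances: the naive controller dithers.
print(choose_action((0, 0), (10, 0), [(2, 2), (2, -2)]))
```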


The day is fast approaching when the first robot will need to make an ethical decision. Let’s take one example – self-driving cars. These cars are no longer a dream of the future – Google’s self-driving cars are already on the streets of Mountain View, California, and Austin, Texas, and car companies like Tesla and BMW have released, or will soon release, their own models. It is projected that within a mere four years, millions of self-driving cars will be on the roads.

On the surface, the idea behind designing a successful self-driving car is simple enough – surround the car with sensors and protect the occupants by safely avoiding obstacles. Such a self-driving car potentially provides a safer ride than its human-driven counterpart – it could come to a safe and gradual stop for a yellow light, avoid hitting a car in front of it that slammed on its brakes, and be free from reckless driving and road rage. Its sensors would be unclouded by the problems that often affect a human driver – fatigue, impairment by drugs or alcohol, or simply a delayed response time. These new self-driving cars would be “cognitive” – somewhere between a “dumb” machine and human-level sentience. In short, these cars would be able to make decisions on their own. Such self-driving cars could revolutionize our streets – allowing the blind or elderly increased mobility, providing easy and inexpensive taxi fleets, transforming cross-country shipping routes, and perhaps even serving as a stepping stone to autonomous airplanes (which could potentially be much safer than those piloted by humans).

Since these cars will be making their own decisions, it is easy to imagine a situation that could pose an ethical dilemma for a self-driving car. For example, imagine a self-driving car transporting its two happy occupants down a street. Meanwhile, a refrigerator being carried by a pick-up truck ahead becomes dislodged and flies through the air at the self-driving car. Colliding with the refrigerator would certainly lead to severe injury for the occupants of the car. However, if the car were to swerve out of the way, it would hit a small child who ran into the street after a rogue soccer ball, likely resulting in the death of the child. A choice like this would require the car to make an “ethical decision”.

Far more terrifying scenarios can be dreamed up regarding Lethal Autonomous Weapon Systems, or LAWS. These militarized robots would make decisions on the battlefield without input from a human operator, selecting targets or identifying and killing a specific person. Such robots could penetrate locations that are impossible for a human operator or even a drone to reach, and could make exceedingly quick decisions. Here, ethical decision-making algorithms are even more important to establish. Even a robot simply carrying supplies for the military would hypothetically need the means to defend itself against enemy capture, and a robot in that situation would need to be able to tell the difference between a combatant soldier and a curious child.

LAWS and self-driving cars are just two examples of robots feasible in the near future that would need to implement an ethical decision-making tree. One way to do this is to provide the robot with a set of rules to follow. Asimov’s laws of robotics are only one example – real-life variants could be quite complicated. Such rule sets offer little flexibility, and many real-life situations could fall outside the narrow parameters the rules define.
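
As an illustration of how brittle a fixed rule set can be, here is a hedged sketch of a hand-written rule set for a driving scenario. The rules, thresholds, and scenario fields are invented for this example; the point is that anything the rule authors did not anticipate falls through to a default.

```python
# Hedged sketch of a fixed, hand-written rule set for a self-driving car.
# The rules and scenario fields are invented for illustration; a real rule set
# would be far larger and would still leave gaps.

def rule_based_decision(scenario):
    if scenario.get("pedestrian_ahead") and scenario.get("can_brake_in_time"):
        return "brake"
    if scenario.get("pedestrian_ahead") and scenario.get("adjacent_lane_clear"):
        return "swerve"
    if scenario.get("obstacle_ahead"):
        return "brake"
    # Any situation the authors did not anticipate falls through to a default,
    # which is exactly the inflexibility described above.
    return "maintain_course"


# A pedestrian ahead, no time to brake, and no information about the next lane:
# the rule set silently defaults to doing nothing unusual.
print(rule_based_decision({"pedestrian_ahead": True, "can_brake_in_time": False}))
```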

Alternatively, robotic programmers could use machine learning algorithms, such as neural nets. These robots would be programmed with an initial rule set, but as the robot progressed through its “life”, it would use ambiguous, real-life data to develop consistently better ways of dealing with problems. In essence, the robot would “learn” on its own how to deal with the situations it encounters. Such machine learning could fill in the gray areas that a finite set of rules might not cover. However, machine learning is, by nature, unpredictable. The decisions the robot makes are not encoded in the original program, so it would be very difficult to predict how it would react to an unexpected situation.
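
A minimal sketch of the learning-based alternative might look like the following. The feature names, reward signal, and update rule are illustrative assumptions rather than any particular system’s algorithm; the point is that the robot’s final behaviour depends on the experiences it happens to have, not just on its initial program.

```python
# Illustrative sketch of a robot that starts from an initial "rule" (a default
# weight vector) and adjusts its decision weights after every experience.
import random

weights = {"risk_to_human": -5.0, "progress_to_goal": 1.0}  # initial programming


def score(action_features):
    """Higher score means the action looks more attractive to the robot."""
    return sum(weights[k] * v for k, v in action_features.items())


def learn(action_features, observed_outcome, learning_rate=0.1):
    """Nudge each weight toward whatever the world rewarded or punished."""
    for k, v in action_features.items():
        weights[k] += learning_rate * observed_outcome * v


# Two robots with identical starting code but different "lives" end up with
# different weights, and therefore different decisions -- the unpredictability
# described above.
for _ in range(100):
    experience = {"risk_to_human": random.random(), "progress_to_goal": random.random()}
    learn(experience, observed_outcome=random.uniform(-1.0, 1.0))

print(weights)
print(score({"risk_to_human": 0.5, "progress_to_goal": 0.5}))  # differs run to run
```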

David Burke is a research lead in machine learning at Galois, a company that explores ways to increase the trustworthiness of critical systems, making them more predictable and reliable. Burke envisions a future in which robots are present in many areas of our lives and may even outnumber humans. Rather than waiting until problems arise, he believes it is important to start thinking now about how to program ethics into robots.


Where do we even begin? Ethics and morality are very complex, and we are not used to thinking of them mathematically, in a way that could be programmed into a robotic system. But it’s a useful exercise – by thinking about how we could program morality into a robot, we are forced to take a deeper look at what we consider morality in the first place. Whether we program a robot’s ethics with a few lines of computer code or with a complex neural net, we must ask – can morality be described by mathematics? And what can the systems robots use to establish their ethical codes teach us about our own morality?

In a very simple way, morality can be viewed as a resource allocation problem. As humanity developed, individuals constantly had to ask themselves – how much food do I keep for myself, and how much do I share with my tribe? How much of my reputation do I risk to blow the whistle on my company? How many sacrifices should my family make to protect the environment? Defining morality this way also makes it possible to create a non-human-centric morality – one that a robot could follow.
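
As a toy illustration of that framing, the question “how much food do I keep, how much do I share?” can be written as a small optimization. The utility function and the altruism weight below are invented for the example; the only point is that the trade-off can be expressed mathematically at all.

```python
# Toy formalisation of "morality as resource allocation": split a fixed amount
# of food between yourself and the tribe to maximise a weighted sum of benefits.
# The square-root utilities and the altruism weight are illustrative assumptions.

def best_split(total_food=10, altruism_weight=0.6):
    best_alloc, best_utility = None, float("-inf")
    for keep in range(total_food + 1):
        share = total_food - keep
        # Diminishing returns for hoarding; weighted benefit for sharing.
        utility = keep ** 0.5 + altruism_weight * share ** 0.5
        if utility > best_utility:
            best_alloc, best_utility = (keep, share), utility
    return best_alloc, best_utility


print(best_split())  # with these made-up weights, keeping 7 and sharing 3 wins
```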

But this is not enough. An artificial intelligence would have to understand which decisions are the important ones. If a decision were important, the robot would, in a way, need to care about the outcome. Humans prioritize decisions with something we don’t often associate with ethics – our emotions. In humans, emotions are fundamentally linked with morality. Burke says, “Emotions tell us what to care about, and warn us when something we care about is at stake.” He continues: “Emotions help us to organize our lives and tell us what matters and what doesn’t matter.” Certain people, after suffering a type of brain lesion, lost the capacity to feel emotions. They found making even the most mundane decisions – what clothes to wear or what to choose from a menu – nearly impossible, since they no longer had any sense of what was important to care about. If an artificial intelligence were making an important decision, it would need to, at least in its own way, understand the gravity of the choice and care about the outcome. With emotion stripped from the situation, there is no way for a self-driving car, say, to differentiate between hitting a rabbit and hitting a child. Emotions in an artificial intelligence system may be very different from what is found in a human, but some robotic framework for emotions may be needed for a robot to truly grasp the nuances of morality.
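
One way to picture such a “robotic framework for emotions” is as a set of care weights that tell the system how much each possible outcome matters. The entities, probabilities, and weights below are invented for illustration; without some weighting like this, the rabbit and the child really would look the same to the controller.

```python
# Illustrative sketch of care weights standing in for emotion: the controller
# ranks options by how much it has been built to care about what might be harmed.
# All entities, weights, and probabilities here are invented for the example.

CARE = {"child": 1000.0, "adult": 1000.0, "rabbit": 1.0, "property": 0.1}


def expected_harm(outcome):
    """Sum of (how much we care about it) x (probability it is harmed)."""
    return sum(CARE.get(entity, 0.0) * p_harm for entity, p_harm in outcome.items())


def least_harmful(options):
    return min(options, key=lambda option: expected_harm(option[1]))


options = [
    ("swerve_left",  {"rabbit": 0.9}),               # almost certainly hits the rabbit
    ("swerve_right", {"child": 0.2}),                # small chance of hitting the child
    ("brake_hard",   {"adult": 0.05, "property": 1.0}),
]
print(least_harmful(options)[0])  # with these care weights: "swerve_left"
```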

To build an artificial intelligence with this level of complexity, we will likely need more than rule sets composed of thousands of lines of code filled with statements saying “in this situation, do that”. Such a robot would need a more complex system, perhaps one built on machine learning or a neural net.

But neural nets make some people very nervous. An artificial intelligence built on a neural net is a complex mix of its initial programming, its environment, the situations it encounters, and many, many unknown variables. In short, designers no longer have complete control of the final result.

Could people ever trust an artificial intelligence built around a neural net? Perhaps, if such systems were released gradually and earned our trust, just as we gradually extend trust to a fellow human.

But things could always go wrong. Imagine a situation in the not-so-distant future, when a robot, relying on a neural net, makes a decision that results in the death of a child.

Who would be held responsible? Would it satisfy us to punish the robot, by decommissioning it or by cutting it in half? Would it be reasonable to punish the engineers and computer scientists who developed the initial code that led to the development of the neural net? If the robot instead relied on a rule set, it would be easy to diagnose the problem. A computer programmer could look at the lines of code and find the one statement that caused everything to go terribly wrong. However, for systems that utilize machine learning such as a neural net, diagnosing the problem may be so hard as to be impossible. Herein lies the fundamental problem, but also the strength, of a neural net – there is no way to predict what the robot would experience, and hence learn, and therefore, it is impossible to predict with certainty how the robot would react in any given situation. According to Burke, neural nets “are impossible to interrogate. It’s analogous to saying if you did something spiteful to a friend of yours, I can’t go into your brain and find the offending neuron. It’s not like it was just one neuron that is the evil part – it’s the whole complex system.”

One could imagine this robot on trial. During its closing statements, the robot would stand and say, “Yes, the actions I took resulted in the death of this child, but all that I am and all that I do is a product of my original programming and my experiences. If I were in the same situation again, I would have done the same thing. I couldn’t have done it any other way.”

The thought experiment of a robot on trial leads us to ask – are we much different? Are we, along with the actions we take, merely a product of our initial programming (our genetics) and our experiences? When we question the odd actions of others, we are often told that we should “walk a mile in their shoes”. If we were in the same position as another person, would we take the same odd, dangerous, or morally ambiguous course of action? Burke says, “You can’t blame the machine for doing something wrong. It’s only doing its original programming plus its experiences. You can’t say, ‘No machine, you have free will, and you could have risen above your programming and your experiences and done something else’. Yet, we make that claim about human beings all the time.”

It’s a chilling thought. We are no longer dealing solely with a difficult programming problem. We are now beginning to touch on something much deeper – philosophical questions humanity has grappled with for hundreds of years. Are we humans indeed “special”, made up of more than our genetics or our experiences? Do we indeed have free will and the ability to make our own decisions, despite our backgrounds? Or are we much like that robot on the witness stand trying to drum up our own defense, ascribing our actions and the actions of others to the decisions made by the complex neural nets in our own heads?

Right now, the idea of ethics within artificial intelligence is speculative, but it’s not frivolous. We live in a world where robots and neural nets will become increasingly prevalent, and it’s important to begin the conversation about these seemingly impossible questions. The answers can be fascinating, provocative, and chilling. Developing intelligence that can think on its own holds a mirror up to our own humanity, giving us a new view of what it truly means to be human.

Article © Elizabeth Fernandez, 2016
