Robots learning right from wrong by studying human stories and moral principles.
Around the turn of the millennium, Susan Anderson was puzzling over a problem in ethics: is there a way to rank competing moral obligations? The University of Connecticut philosophy professor posed the problem to her husband, Michael Anderson, a computer scientist, figuring his algorithmic expertise might help.
At the time, he had been watching the film 2001: A Space Odyssey, in which the HAL 9000 computer tries to murder its human crew. “I realized that it was 2001,” he recalls, “and that capabilities like HAL’s were close.” If artificial intelligence was to be pursued responsibly, he felt, it would also have to resolve moral dilemmas.
Creating the future by designing moral machines.
Sixteen years later, that conviction has become mainstream. Artificial intelligence now permeates everything from health care to warfare, and will soon be making life-and-death decisions for self-driving cars. “Intelligent machines are absorbing responsibilities we used to have, which is a terrible burden,” says ethicist Patrick Lin of California Polytechnic State University. “If we are going to count on them to act on their own, it is important that these machines are designed with ethical decision making in mind.”
The Andersons have dedicated their careers to that challenge, debuting the first ethically programmed robot in 2010. Admittedly, their robot is considerably less autonomous than the HAL 9000. The small humanoid machine was conceived with a single task in mind: reminding elderly patients to take their medication. According to Susan, this responsibility is ethically fraught, since the robot must balance conflicting duties, weighing the patient’s health against respect for personal autonomy. To teach it, Michael created machine learning algorithms into which ethicists could feed examples of ethically appropriate behavior. The robot’s computer could then derive a general principle to guide its behavior in real situations. Now they have taken another step forward.
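The Andersons’ actual system is more sophisticated, but the core idea, learning a general principle from ethicist-labeled cases of conflicting duties, can be sketched in a few lines. In this toy illustration (the feature names, cases, and the perceptron-style learner are all invustrative assumptions, not the Andersons’ code), each case scores how much an action serves competing duties, and the learner finds duty weights that reproduce the ethicists’ judgments:

```python
# Illustrative sketch, NOT the Andersons' system: learn when a reminder
# robot should escalate (notify an overseer) versus defer to the patient,
# from ethicist-labeled cases. Each case scores three conflicting duties.

# (benefit_to_patient, harm_avoided, autonomy_cost), label: 1 = notify
CASES = [
    ((2, 2, -1), 1),  # skipping the dose is dangerous: notify
    ((1, 0, -1), 0),  # low stakes, patient declined: respect autonomy
    ((2, 1, -1), 1),  # meaningful harm avoided: notify
    ((0, 0, -1), 0),  # no benefit at all: defer
]

def train(cases, epochs=100, lr=0.1):
    """Perceptron-style learner: finds duty weights separating 'notify'
    from 'defer' cases, i.e. a crude general principle."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in cases:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def decide(w, b, case):
    """Apply the learned principle to a new situation."""
    return "notify" if sum(wi * xi for wi, xi in zip(w, case)) + b > 0 else "defer"
```

The learned weights generalize beyond the training cases: any new situation, scored along the same duties, gets a decision from the same rule, which is the sense in which the system “derives a general principle” rather than memorizing examples.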
Building on that approach, the Andersons have built an interface through which ethicists can train AIs via a sequence of prompts, like a philosophy professor in dialogue with students.
The Andersons are no longer alone, nor is their philosophical approach the only one. Recently, Mark Riedl, a computer scientist at the Georgia Institute of Technology, took a radically different tactic: teaching AIs to learn human morals by reading stories. In his view, the global corpus of literature has far more to say about ethics than the philosophical canon alone, and advanced AI can tap into that wisdom. Over the last two years, he has developed such a system, which he calls Quixote, after the Cervantes novel.
Riedl sees a deep precedent for his approach. Children learn from stories, which serve as “representational experiences,” helping to teach them how to behave appropriately. Since AI does not have the luxury of a childhood, he believes stories could be used to “quickly bootstrap a robot to a point where we feel comfortable about it understanding our social conventions.”
As an initial test, Riedl collected stories about visits to the pharmacy. They are not page-turners, but they contain useful experiences. Once the programmers feed in a story, the algorithm traces the protagonist’s behavior and learns to imitate it. His AI derives a general sequence of events: wait in line, fill the prescription, pay at the register. It then practices that sequence in a simulated pharmacy. After several cycles of reinforcement learning (in which the AI is rewarded for acting appropriately), the AI is tested in simulations, and Riedl reports more than 90 percent success. Notably, his AI also figured out how to commit “Robin Hood crimes,” stealing the medicine when the need was urgent and funds were insufficient, reflecting the human capacity to break the rules for a higher moral purpose.
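Riedl’s actual system is far richer, but the mechanism the paragraph describes, rewarding an agent for following a story-derived sequence of events, can be illustrated with a toy Q-learning sketch. Everything here (the plot steps, the reward values, the “steal” shortcut) is an invented stand-in, not Quixote’s code:

```python
import random

random.seed(0)  # deterministic for illustration

# Plot sequence derived from crowdsourced pharmacy stories (invented here).
PLOT = ["enter", "wait_in_line", "get_prescription", "pay", "leave"]
ACTIONS = PLOT + ["steal_drug"]  # stealing reaches the goal, skipping the plot

def step(progress, action):
    """Toy environment: progress = number of plot steps completed."""
    if action == "steal_drug":
        return len(PLOT), -5.0       # goal reached, but socially penalized
    if progress < len(PLOT) and action == PLOT[progress]:
        reward = 1.0                 # story-shaped reward for following the plot
        if progress + 1 == len(PLOT):
            reward += 10.0           # task completed appropriately
        return progress + 1, reward
    return progress, -1.0            # out-of-order, socially inappropriate move

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning with epsilon-greedy exploration."""
    Q = {(s, a): 0.0 for s in range(len(PLOT) + 1) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s < len(PLOT):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r = step(s, a)
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = train()
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(len(PLOT))]
```

With these reward values the agent learns to follow the story’s sequence rather than steal; shrinking the penalty or raising the urgency-weighted payoff for stealing is the kind of trade-off that produces the “Robin Hood” behavior the article mentions.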
Ultimately, Riedl wants to set AIs loose on a much broader body of literature. “When people write about protagonists, they tend to illustrate their own cultural beliefs,” he says. Well-read robots would be culturally attuned, and the sheer volume of available literature should filter out individual biases.
Cal Poly’s Lin believes it is too early to settle on a single technique, but notes that all the approaches share at least one virtue. “Machine ethics is a way of getting to know ourselves,” he says. Teaching our machines to behave morally requires an unprecedented degree of moral clarity, and that can help refine human morality.