As robots become ever more present in daily life, the question of how to control their behavior naturally arises. Does Asimov have the answer?
In 1942, the science fiction author Isaac Asimov published a short story called Runaround in which he introduced three laws governing the behavior of robots:
1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given by human beings, unless such orders conflict with the First Law.
3. A robot must protect its own existence as long as doing so does not conflict with the First or Second Law.
He later introduced a fourth, or zeroth, law that takes precedence over the others:
0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.
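The laws form a strict priority hierarchy: each law yields to the ones above it. One way to picture that ordering is a short sketch in code (entirely illustrative; the function name and boolean flags are hypothetical, not anything Asimov specifies):

```python
# Illustrative only: Asimov's laws modeled as a priority-ordered rule check.
# A candidate action is a dict of hypothetical boolean flags. The precedence
# of the laws is captured by checking them in order and reporting the first
# one violated; the "unless it conflicts with a higher law" clauses, which
# would require comparing alternative actions, are omitted for brevity.

def first_violated_law(action):
    """Return the number of the highest-priority law the action
    violates (0-3), or None if the action passes all of them."""
    checks = [
        (0, action["harms_humanity"]),   # Zeroth Law
        (1, action["harms_human"]),      # First Law
        (2, action["disobeys_order"]),   # Second Law
        (3, action["endangers_self"]),   # Third Law
    ]
    for law_number, violated in checks:
        if violated:
            return law_number
    return None
```

In this toy model an action that both harms a human and disobeys an order is judged under the First Law, never the Second, which is the whole point of the ordering.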
Asimov’s laws of robotics have since become a key part of a science fiction culture that has gradually entered the mainstream. In recent years, robotics has made rapid technological advances that bring Asimov’s kind of advanced robot closer. Robots increasingly work alongside people on factory floors, drive cars, fly aircraft, and even help around the home. This raises an interesting question: as robots progress, will we need a set of Asimov-like laws to govern their behavior?
Today we get a response of sorts from Ulrike Barthelmess and Ulrich Furbach at the University of Koblenz in Germany. These two review the history of robots in society and argue that our fears about their potential to destroy us are unfounded, so Asimov’s laws are not needed, they say. The word robot comes from the Czech word robota, meaning forced labour, and first appeared in a play by the Czech author Karel Capek in 1924. The anglicised version spread quickly after that, along with the idea that these machines could all too easily destroy their creators, a theme that has since become common in science fiction. But Barthelmess and Furbach argue that this fear of machines is rooted far more deeply in our culture.
While science fiction stories often use plots in which robots destroy their creators, this theme has a long history in literature. In Mary Shelley’s Frankenstein, for instance, a monster made of human body parts turns against Frankenstein, his creator, because Frankenstein refuses to make him a mate. Then there is the 16th-century Jewish narrative of the Golem, in one version of which a rabbi builds a creature out of clay to protect the community, promising to disable it for the Sabbath. But the rabbi forgets, and the golem becomes a destructive monster. Barthelmess and Furbach argue that the religious undertone in both of these stories is that human beings are prohibited from acting like God, and that any attempt to do so will always be punished by the creator. Similar episodes appear in Greek mythology, where figures such as Prometheus and Niobe are also punished for their arrogance towards the gods. Stories like these have thus been part of our culture for thousands of years, and science fiction authors play on this deep-rooted fear in their stories about robots.
Of course, there are real human-machine conflicts. During the industrial revolution in Europe, for example, there was great fear of machines and their manifest ability to change the world in ways that profoundly affected many people. Barthelmess and Furbach point out that a movement to destroy weaving machines began in England in the 18th century and became so severe that Parliament made demolishing machines a capital crime, and a group known as the Luddites even fought battles against the British army. “There has been some kind of technophobia that has led to battles against machines,” they say. It is not beyond the realms of possibility that a similar antagonism could develop towards the new generation of robots that are set to take over the highly repetitive tasks currently performed by human workers in factories all over the world, particularly in Asia. However, Asia’s attitude toward robots is very different.
Countries like Japan lead the world in developing robots for automated factories and as human assistants, partly because of Japan’s aging population and the well-known health-care problems it will produce in the not-too-distant future. That attitude may be embodied by Astro Boy, a fictional robot that Japan’s Ministry of Foreign Affairs named its envoy for safe overseas travel in 2007. Given this, Barthelmess and Furbach argue that what we fear about robots is not the possibility that they will take over and destroy us but the possibility that other people will use them to destroy our way of life in ways we cannot control.
They point out, in particular, that many robots will protect us by design. Automated vehicles and aircraft, for example, are designed to drive and fly more safely than human operators ever can, so we will be safer using them than not using them. An important exception is the growing number of robots specifically designed to kill humans. The US, in particular, uses drones in foreign countries for targeted killings, and the legality of these actions, not to mention their morality, is still fiercely debated. But Barthelmess and Furbach suggest that people are ultimately still responsible for these killings and that international law, rather than Asimov’s laws, should be able to deal with the issues that arise, or adapt to do so. They end their discussion by looking at the potential convergence between people and robots in the near future. The idea here is that humans will incorporate technologies such as extra memory or processing power into their own bodies and eventually fuse with robots. At that point, ordinary law will have to cope with these people’s behavior and actions, and Asimov’s laws will be obsolete.