No discussion of science fiction can be complete without the Three Laws of Robotics of Isaac Asimov, considered by many to be the father of hard science fiction. They are among the most common themes running through science-fiction writing, especially when dealing with the subject of robots.
The Three Laws of Robotics are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
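The hierarchy in the laws above, where each law yields to the ones before it, can be read as a lexicographic preference over candidate actions. The following is a minimal Python sketch of that reading only; the `Action` fields and the example actions are hypothetical illustrations, not anything from Asimov or this article:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # First Law concern
    disobeys_order: bool = False  # Second Law concern
    harms_self: bool = False      # Third Law concern

def choose(actions: list) -> Action:
    # Lexicographic comparison encodes the hierarchy: breaching a higher
    # law outweighs breaching any combination of lower ones, so a robot
    # prefers to disobey an order (Second Law) rather than harm a human
    # (First Law), and sacrifices itself (Third Law) before either.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.harms_self))

options = [
    Action("obey order to attack", harms_human=True),
    Action("refuse the order", disobeys_order=True),
]
print(choose(options).name)  # → refuse the order
```

The tuple comparison does the work here: `False` sorts before `True`, so any action that avoids violating the First Law beats every action that violates it, regardless of the lower laws.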
While the Three Laws are quite evidently rudimentary in nature (Asimov was no lawyer, after all), many ethicists and roboticists accept them as a starting point for discussions on the applications of artificial intelligence and on governing the conduct of robots towards humans.
The Three Laws are believed to have their genesis in general expectations about human behaviour. In one of his short stories, Evidence, Asimov expounds on this through the protagonist, pointing out that humans are typically expected to refrain from harming one another, the basis for the First Law. With reference to the Second Law, humans are generally expected to obey authority figures such as the police, judges, and ministers, unless obeying them would conflict with the first principle of not harming another human. Lastly, humans are not expected to harm themselves, except perhaps when sacrificing themselves in pursuit of the first and second principles. Note that when it comes to humans, none of these is set in stone. Society has always accepted certain exceptions, such as soldiers killing other soldiers during wars and conflicts, people refusing to follow orders when such orders are blatantly immoral, or euthanasia, which is now legal in several jurisdictions. When it comes to machines, however, there is a clear moral ambivalence, some would say even fear, about imbuing them with free will beyond a point.
The Three Laws can only serve as a foundation. Indeed, it is sometimes said that they are already obsolete. Asimov himself demonstrated twenty-nine variations in his writing and even propounded a Zeroth Law, which would override the three other laws. Rules always follow the advent of technology, and we are only at the beginning of imagining, let alone understanding, what we are capable of in terms of creating machines that today look like Honda's ASIMO or Sony's Aibo (now discontinued) but that someday may look like us, talk like us, and act like us.
New challenges will come to light, especially given that some of the most promising developments in robotics are taking place under military control (the United States military plans to have a fifth of its combat units fully automated by the year 2020), where the Three Laws are not likely to find much purchase. There is little doubt, however, that the creators of robots with military applications will build in some level of protection for their users and compatriots; imagine the hullabaloo if an autonomous machine were to cause American casualties. Another very practical challenge is posed by the use of robotic assistants for the elderly, a field of engineering being pioneered in Japan because of its rapidly ageing population. Recently, Google announced that its fleet of automated cars had driven 140,000 km across California with minimal human intervention, and that they had been involved in only one minor accident, in which the Google car was rear-ended by a human-driven one. Liability will be a major issue: when an autonomous machine is programmed with the ability to learn from its circumstances, who is responsible for its actions? Its owner, its user, or its creator? Or will the machine itself someday be recognised to have rights and responsibilities?
(Abhishek Shinde is a New Delhi-based lawyer. This post was first published on myLaw.net here.)