Three Laws of Robotics
The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov, first introduced in his 1942 short story “Runaround,” that form an organizing principle and unifying theme for his robot-based fiction.
The Laws are built into almost all of the positronic robots appearing in his fiction and cannot be bypassed; they are intended as a safety feature. Many of Asimov’s robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation it finds itself in.
The Three Laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
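The three Laws form a strict hierarchy: each Law yields to the ones above it. As a toy illustration (my own sketch, not anything from Asimov's fiction), this priority structure can be modeled as a lexicographic preference over candidate actions, where any harm to humans outweighs any disobedience, and any disobedience outweighs any damage to the robot itself:

```python
# Toy sketch of the Laws as a lexicographic ordering. The Action fields
# and scoring are invented for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    human_harm: int    # First Law: harm caused (or allowed) to humans
    disobedience: int  # Second Law: degree of disobeying human orders
    self_harm: int     # Third Law: damage to the robot itself

def choose(actions):
    # Tuple comparison is lexicographic, so the First Law always
    # dominates the Second, and the Second always dominates the Third.
    return min(actions, key=lambda a: (a.human_harm, a.disobedience, a.self_harm))

options = [
    Action("obey order, destroy self", human_harm=0, disobedience=0, self_harm=1),
    Action("refuse order, stay safe",  human_harm=0, disobedience=1, self_harm=0),
    Action("obey order, hurt human",   human_harm=1, disobedience=0, self_harm=0),
]
print(choose(options).name)  # -> obey order, destroy self
```

The chosen action shows the hierarchy at work: the robot sacrifices itself rather than disobey a human order, but would refuse any order that harmed a human.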
Whether these rules are outdated is still debated in scientific circles. In 2007, the Singularity Institute for Artificial Intelligence issued a press release concerning Asimov’s Three Laws:
“The Singularity Institute’s Advocacy Director, Michael Anissimov: ‘It is essential that more considerate thinkers get involved in dialogues of AI ethics and strategy. Although AI as a discipline has a dubious history of false starts, the accelerating growth of computing power and brain science knowledge will very likely result in its creation at some point. In the past few years, technologists such as Ray Kurzweil and Bill Joy have been informing the public about this critical issue; but more awareness is now needed.’”