Robot ethics, sometimes known by the short expression "roboethics", concerns ethical problems that arise with robots, such as whether robots pose a threat to humans in the short or long term, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed so that they act 'ethically' (this last concern is also called machine ethics). Robot ethics is a sub-field of the ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns.
While the issues are as old as the word robot itself, serious academic discussion started around the year 2000. Robot ethics requires the combined commitment of experts from several disciplines, who must adapt laws and regulations to the problems resulting from scientific and technological achievements in robotics and AI. The main fields involved in robot ethics are: robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neuroscience, law, sociology, psychology, and industrial design.
Since antiquity, ethical questions about the treatment of non-human and even non-living things, and their potential "spirituality", have been discussed. With the development of machinery and eventually robots, this line of thought was extended to robotics. The first publication directly addressing robot ethics was Isaac Asimov's Three Laws of Robotics, formulated in 1942 in the context of his science fiction works. The short term "roboethics" was probably coined by Gianmarco Veruggio.