Who or what should be held responsible when an autonomous system malfunctions or harms humans? In the 1940s, the American writer Isaac Asimov formulated the Three Laws of Robotics, arguing that intelligent robots should be programmed so that, when facing a conflict, they defer to and obey the following three laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
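
The laws form a strict precedence ordering: each law applies only so long as the higher-ranked laws are satisfied. The sketch below is purely illustrative of that ordering; the `Action` class and its boolean flags are invented for this example and do not correspond to anything Asimov or any real system specifies.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action, flagged for each law's concern."""
    name: str
    harms_human: bool = False        # would injure a human or allow harm through inaction
    ordered_by_human: bool = False   # was explicitly ordered by a human
    endangers_self: bool = False     # would damage or destroy the robot

def permitted(action: Action) -> bool:
    """Evaluate a candidate action against the Three Laws in priority order."""
    # First Law: never harm a human (or allow harm through inaction).
    if action.harms_human:
        return False
    # Second Law: obey human orders, given that the First Law is already satisfied.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid actions that needlessly endanger the robot itself.
    return not action.endangers_self

if __name__ == "__main__":
    # An order that would harm a human is rejected by the First Law,
    # even though the Second Law would otherwise demand obedience.
    print(permitted(Action("push person", harms_human=True, ordered_by_human=True)))      # False
    # A harmless order is obeyed even if it puts the robot at risk (Second over Third).
    print(permitted(Action("enter burning room", ordered_by_human=True, endangers_self=True)))  # True
```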