“1: A robot may not injure a human being or, through inaction, allow a human being to come to harm;
2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law;”
– Isaac Asimov
For some reason I started thinking of the three laws of robotics last night. I must have been around fifteen years old when I read my first Isaac Asimov book, and I remember finding these laws extremely fascinating and spending a lot of time thinking about them.
How do you define harm? Is it only physical, or is it emotional harm as well? Are the robots allowed to go around being rude jerks, or do they have to be polite all the time? And if it includes emotional harm, then the robots must be extremely good at reading and understanding humans. Because one thing I have noticed is that things that seem extremely harmless or fun to me might really piss someone else off. It is impossible to be perfect all the time, and it is much more important to learn how to say ‘I’m sorry’ than to try tiptoeing around in life.
What I like about the laws is that they take into account both action and inaction. To say that it is equally wrong to do nothing is so important, and I think it ties in perfectly with the fourth law that he added later. I think it is self-explanatory and something to think about every day:
“The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”