There’s a fun article in the Winter 2007 AI Magazine about “Machine Ethics”. The basic argument is that as machines take on more and more control (e.g., planned army robots that would fire weapons), it becomes more and more important (to humans) that they behave in an ethical manner.

The article argues that there is a fundamental difference between implicit and explicit ethics. Implicit ethics would be programmed into a machine by its designer, much like Asimov’s imagined three laws of robotics. Explicit ethics would also be programmed in by a designer, but at a more fundamental level: the robot would be able to compute the ethics of new situations based on an underlying understanding of ethical principles. The authors argue that explicit ethics are necessary for several reasons:

1) So it could explain why a particular action is right or wrong by appealing to an ethical principle.
2) Because otherwise it would be lacking something essential to being accepted as an ethical agent. (Kant admired agents that work consciously from ethical principles more than those that work slavishly from rules.)
3) So it could adjust to new situations, evolving the appropriate ethics.

(1) is a red herring: explanation systems often appeal to principles they don’t understand in any principled way. For instance, in our work on explanations for recommender systems, some of the explanations most effective for humans were only loosely connected to the actual operation of the recommender.

(2) contradicts an argument the authors make later in the paper. They argue that even though computers won’t be conscious in the near term, they should be accepted as ethical agents if they act ethically. Agreed! So, then, all we need is that they act ethically.

(3) is intriguing. On the one hand, it would be remarkable if an AI agent could evolve new ethical patterns for situations it has never seen, based on core ethical principles. On the other hand, the results of that evolution might be very surprising. For instance, if a military robot were to decide, based on ethical principles, that it ought to prevent an attack on Iran that a general wishes to carry out, how would that be perceived by the military, by the loyal opposition, by the anti-war effort? What if the robot assassinates the general to prevent the war? Overall, given our track record in predicting the performance of complicated software systems, I have some doubts about this approach.

I liked a later quote in the article, which points out that ethical relativists cannot say that anything is absolutely good, not even tolerance.

John
