Wednesday, November 26, 2008

Moral Machines



It seems the military has embarked on codifying our morals. These moral codes would then be programmed into autonomous (or nearly autonomous) machines, which would act in accordance with their programming.


The article in the NYTimes, by Cornelia Dean, brings to light an interesting branch of inquiry. Namely, how do we determine these ethical or moral criteria? Whose expertise do we trust in the matter? Which culture's codes do we apply? If we're deploying these machines in foreign lands, what happens when cultures clash?

The article explains that the developers intend merely to spark discussion with their work, but I suspect that intent is fluid. Once something is executable, I doubt much opposition would remain, at least in the minds of the decision-makers. So what are the repercussions of such a technology?

To me, it illustrates the possible endgame of what we try to do when teaching the Hidden Curriculum in our classrooms. When the effort is carried to such an extent that every possible scenario is discussed and codified, with concrete outcomes and criteria, do we run the risk of destroying the human element of human interaction? Can it become so scripted that the chance for a clash, or a spark of creativity or invention, is eradicated?

I've purchased a copy of the book cited in the article, "Moral Machines: Teaching Robots Right from Wrong," by Wendell Wallach and Colin Allen. I plan to read it and revisit these questions. What I'm looking for is the language used to determine right from wrong, and whether cultural norms or stereotypes surface in it. How do the authors handle a culture clash or a moral dilemma?
