Tuesday, January 29, 2008

Can Robots Fight Humanely?


A fascinating new report from Georgia Tech explores the technical specs and moral conundrums involved in the DoD's gradual shift toward the use of AI on the battlefield.

It's not a new trend - the DoD has been deploying the equivalent of Imperial droids in Iraq and Afghanistan for years - but the idea that robots might actually be better than their "emotional" human counterparts at upholding the laws of war is now catching on in the popular consciousness.

Would robots be inherently more humane, less susceptible to groupthink and the kind of "passion" that supposedly led Frank Wuterich and his squad to massacre civilians at Haditha, Iraq? So argued Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, at a conference this week sponsored by Computer Professionals for Social Responsibility.
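Arkin's proposal, as I understand it, boils down to an "ethical governor": a software layer that vets each proposed lethal action against encoded laws-of-war constraints before permitting it. Here's a minimal sketch of the idea in Python; every name in it (Target, PROTECTED_CLASSES, the confidence threshold) is my own hypothetical illustration, not Arkin's actual design.

from dataclasses import dataclass

# Classifications the gate must never fire on, per the encoded rules.
PROTECTED_CLASSES = {"civilian", "medic", "surrendering"}

@dataclass
class Target:
    classification: str  # e.g. "combatant" or "civilian"
    confidence: float    # classifier confidence, 0.0 to 1.0

def engagement_permitted(target: Target, min_confidence: float = 0.95) -> bool:
    """Permit a lethal action only if every encoded constraint passes."""
    if target.classification in PROTECTED_CLASSES:
        return False  # hard laws-of-war prohibition
    if target.confidence < min_confidence:
        return False  # uncertainty defaults to holding fire
    return True

# Example: a high-confidence combatant passes; anything else is refused.
print(engagement_permitted(Target("combatant", 0.99)))  # True
print(engagement_permitted(Target("civilian", 0.99)))   # False

The telling design choice is the default: an uncertain machine holds fire. Then again, the gate is only as trustworthy as the rules and the classifier someone wrote into it.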

Of course, as Dave Grossman has detailed in his brilliant book On Killing, it's emotions (like empathy, honor, and compassion) that account for weapons-bearers' uncanny ability to hold atrocity at bay. Would robots instead simply be the perfect tools by which to carry out the unlawful orders of their programmers?

A great deal of faith is being placed here in the idea that the generals, civilian policymakers, and their minions in the R&D industries want the troops to behave well, and that it's just the bad apples who muck things up. A lot of history suggests otherwise. Maybe we need robot robot-programmers, as well...

1 comment:

hank_F_M said...

Three Laws of Robotics

1. A human may not injure a robot, or, through inaction, allow a robot to come to harm.

2. A human must obey the orders given it by a robot, except where such orders would conflict with the First Law.

3. A human must protect its own existence, except where such protection would conflict with the First or Second Law.

:- )


I suspect equipment failure and bad programming (or bad specs given to programmers) will be a bigger problem.
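The commenter's worry is easy to make concrete. Taking Asimov's original ordering (rather than the comment's inverted version), the Laws amount to a priority-ordered rule list, and a spec delivered with the order flipped yields a robot that dutifully obeys a harmful order. A toy Python sketch, with every encoding hypothetical:

# Asimov's Laws as a priority-ordered rule list: the first rule with an
# opinion on an action decides. All encodings here are hypothetical.
RULES = [
    ("first_law",  lambda a: False if a.get("harms_human") else None),
    ("second_law", lambda a: True  if a.get("ordered_by_human") else None),
    ("third_law",  lambda a: False if a.get("self_destructive") else None),
]

def decide(action: dict) -> bool:
    for name, rule in RULES:
        verdict = rule(action)
        if verdict is not None:  # first decisive rule wins; order matters
            return verdict
    return False                 # no rule applies: do nothing

# With the spec's ordering, a harmful order is refused:
print(decide({"harms_human": True, "ordered_by_human": True}))  # False

# Swap the first two entries of RULES - one line of "bad specs" - and the
# same call returns True: the robot now obeys the harmful order, with no
# compiler error to flag the change.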

 
"; urchinTracker();