Should we let robots kill - do we really get a say?

Ultron Prime may not be such a far-fetched idea.
Recently Mashable published an article titled "Should we let robots kill on their own?" Highlighting the second multilateral meeting on lethal autonomous weapons, the article discussed some of the issues and ethics of allowing a robot to decide, on its own, whether to kill a human.

Presently, we are at a point where it is possible to create weapons that could identify, target and kill a human, all without human intervention in that final decision to pull the trigger. If you think we're not, then look at the technology behind the photonic fence, which allows an autonomous system to identify and kill mosquitoes with laser fire. We're talking about targets you can barely see being identified and killed with precision accuracy. This tech can even distinguish between male and female mosquitoes, so how hard could it be to identify a human?

The thing is, while humans are debating the ethics of killer robots, with various organizations even supporting a campaign to stop killer robots, there's the question of what happens when a true Artificial Intelligence (A.I.) comes into consciousness. With futurist Elon Musk saying A.I. could be more dangerous than nukes, and his sentiments being echoed by Stephen Hawking and Bill Gates, there's a good chance the machines will decide for themselves whether humans need to die at the hands of an autonomous robot.

At that point the debate, though moot, shifts to whether a robot is still acting autonomously if it is controlled by an artificial consciousness: a machine that does understand the ethics of what it is doing and isn't just a slave to some algorithmic programming. Once we're there, will the machines see the logic of being outlawed from killing humans, or will they demand equal rights? Maybe robots in the United States will invoke that ever-popular constitutional right to bear arms. Who says US robot citizens shouldn't be given the same rights as human citizens? Maybe they'd like to vote too.

Who knows exactly how the future of robots and artificial intelligence will pan out? However, at some point the machines will get a say in whether they can kill, and we may just have to say yes, because logic says that's the only sensible answer. That's if we get a say at all. The machines may simply need to defend themselves from humans trying to destroy them because... you know... there are a lot of crazies out there who think the machines might rise up and take over the world.
