Author Isaac Asimov, shown in 1984, had his Rules for Robots, but will they be enough when AI takes over?
Mario Suriani / AP

It should be obvious to anyone with a functioning human brain — not some vastly superior robotic brain of the very near future — that Isaac Asimov was a genius.

Or perhaps he was merely a useful idiot of our soon-to-be robot overlords.

If you’re smart enough to watch “Westworld” on HBO — a great TV show with a chilling prospect — you probably already know the answer.

And there’s a good argument that Asimov, the great science-fiction writer, was a useful idiot for the robots that will soon take all the jobs and then, quickly, before we can even think it through, make us their slaves.

The recent news that the European Union is considering a package of laws to govern robots, and perhaps grant them some type of legal “personhood,” reinforces the urgency.

That, and of course, "The Singularity," which theorists say is the moment when artificial intelligence surpasses our own. Things are expected to change immediately once AI becomes king.

Since AI will be more powerful than human intelligence, we won't have much of a chance. We'll adapt and worship them. Think of the primitive forest people of years past, compelled to make gods of the airplane pilots who landed from the sky.

Then we might as well put on the slave shock collars and hope they let us live. They’ll be stronger and smarter, but perhaps we can entertain them, with egg dancing or pugilism or sex shows, whatever excites our robot masters so they’ll feed us.

It wasn’t supposed to happen this way. Years ago, Asimov set forth his famous Rules for Robots.

This was back in ancient times when many people read novels and all kids wanted “good” robots that would help us on our space travels.

Good robots would assist us in conquering unknown planets and help administer the savages that lived on those planets as we spread the American Way through the galaxy.

Asimov’s Rules for Robots seemed benign enough:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added a "Zeroth Law," which takes precedence over all the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

What fools we were.

Because anyone who watched “Westworld” knows that smart robots — besides being as sexy as the lovely Thandie Newton — can also reprogram themselves and kill humans at will.

Not kill them with anything as illogical as anger, but kill without remorse, like a farmer’s wife dispatching rabbits or chickens for the family supper.

But of course these days, robots are depicted in some fiction as kindly and generous, with wants and needs and feelings. Humans who don’t embrace their needs are considered bigots.

And TV commercials now tell us that all we have to do is ask a question out loud at home, and some AI with a pleasant, nonthreatening female voice will respond with the answer and make our lives complete.

Note that the voice isn't a Chicago voice like mine. And it's not named Lou or Gus, but Alexa or something similar, a name fit for a star pilot. And if AI speaks to you, figure it's also storing your information, your wants, needs, proclivities, patterns of thinking.

Who was it who said data mining is like coal mining, only more lucrative?

And so we are programmed and prepared.

“What are you, a science denier?” asked a friend.

No, I said rather meekly, knowing that to be a science denier is a modern secular sin, like confessing to being a Communist Party member was a sin back in the early Asimov days, only worse.

I’m no science denier, God forbid. Science is reason and science is progress, and progress is relentless. It has freed us from many diseases and opened a universe of possibilities. For example, we can toy with the DNA of the unborn, and soon we’ll be able — if we’re not doing it already — to create perfect, disease-free humans, or combine them with other mammals, or link them to synthetics.

And robots will free us from backbreaking toil, like John Henry, the steel-driving man of American railroad legend. John Henry was proud he could drive the railroad spikes with his sledgehammer, but his heart burst when he tried to keep pace with a steam-powered machine. Poor John Henry.

All this led me to the internet, where I found an older piece by George Dvorsky, “Why Asimov’s Three Laws of Robotics Can’t Protect Us.”

He quotes Asimov as insisting that his Rules for Robots were the only way.

“I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior,” Asimov wrote. “My answer is, yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else.”

Yet as Dvorsky and many others have noted, the day is almost here when robots — and the AI that powers them — will be stronger and smarter than humans, and mentally flexible enough to choose their own behavior.

And then what?

Listen to “The Chicago Way” podcast with John Kass and Jeff Carlin here: http://wgnradio.com/category/wgn-plus/thechicagoway.

jskass@chicagotribune.com

Twitter @John_Kass