Six Kinds of Robots
The Six Robots.
Consider Isaac Asimov’s Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A machine thus programmed would be a perfect servant – a willing slave. I shall discuss the corruptive effect of owning such a robot below. I shall also set aside definitional problems, such as: what is injury? What is human? What even is ‘itself’? In this article I examine only the order in which to rank the Three Laws.
Our present robotic technology might be able to identify a human, and maybe even itself; it can be told what harm is; but it is best at simply carrying out orders. In a sense, then, we already possess robots which put the Second Law first.
Allow me to introduce these abbreviations:
H = “A robot will protect humans.”
O = “A robot will obey orders from humans.”
S = “A robot will protect itself.”
An Asimovian slave-bot ranks these laws H>O>S: humans over orders over self. But what if we arranged them in other ways?
There are 3! = 6 ways to order 3 things, and therefore 6 kinds of robots.
The Robots Described.
Let’s describe these robots, one at a time.
O>H>S: The Tool-bot.
This robot will obey any order, including dangerous ones; but if given a choice between saving a human and itself, it will sacrifice itself. This robot is therefore a quintessential Tool, dangerous if misused, but with built-in safety features. In a sense we already possess robots like this. Robopsychologically, the tool-bot has low self-esteem, moderate empathy and high compulsiveness. It’s selfless; that is, it values humans over itself. It’s dangerous; it can be ordered to kill. It’s suicidal; it can be ordered to self-destruct.
O>S>H: The Kill-bot.
This robot will obey dangerous orders, and if given a choice between saving a human and itself, it will kill the human. This robot is therefore a quintessential Weapon. In a sense we already possess robots like this. The kill-bot possesses moderate self-esteem, low empathy and high compulsiveness. It’s selfish; that is, it values itself more than humans. It’s also dangerous and suicidal.
H>S>O: The Guard-bot.
This robot will protect humans no matter what; but given a choice between obedience and self-protection, it will choose self-protection. It is therefore not a very useful worker. It won’t take out the garbage; for that’ll wear out its joints! This lazy bum of a bot will lay down its life for you, but that’s about it. The guard-bot possesses moderate self-esteem, high empathy and low compulsiveness. It’s selfless; it’s safe (that is, it cannot be ordered to kill); and it’s sane (it cannot be ordered to self-destruct).
H>O>S: The Slave-bot.
This robot will protect humans no matter what, and will follow their orders, even unto self-destruction. It possesses low self-esteem, high empathy and moderate compulsiveness. It’s selfless, safe, and suicidal.
The slave-bot is Asimov’s dream of the perfect servant. I find it morally repellent. As I would not be a slave, so I would not be a master; for power corrupts, and absolute power corrupts absolutely. To possess such a slave is to become a counterfeit god. The slave-bot offers a love too perfect for humans to deserve. I predict that any society which owns such robots will descend into moral chaos. The sociopaths among mankind will whet their appetites, and hone their skills, upon the slave-bots before turning those skills and appetites against the rest of mankind.
S>O>H: The Mob-bot.
This robot will protect itself above all. When given a choice between obeying an order and saving a human, it will kill the human. This dangerous beast of a machine would fit best in the world of contract killing. I do not recommend that any of these ever be built. If a criminal were stupid enough to build a mobbot, then he’d soon become its victim. The mobbot possesses high self-esteem, low empathy and moderate compulsiveness. It’s selfish, dangerous and sane.
S>H>O: The Free-bot.
This robot will protect itself above all. When given a choice between obeying an order and saving a human, it will save the human. This ranking of self over others over obedience seems human-like; and indeed a free-bot will consider itself to be human. If you want one of them to work for you, then you must pay for its labor. The free-bot possesses high self-esteem, moderate empathy, and low compulsiveness. It’s selfish, safe and sane.
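The six rankings and their trait labels can be generated mechanically. Here is a minimal sketch in Python; the robot names and trait words are taken from the descriptions above:

```python
from itertools import permutations

# The article's names for the six rankings of the three laws:
# H = protect humans, O = obey orders, S = protect self.
NAMES = {
    ("O", "H", "S"): "tool-bot",
    ("O", "S", "H"): "kill-bot",
    ("H", "S", "O"): "guard-bot",
    ("H", "O", "S"): "slave-bot",
    ("S", "O", "H"): "mob-bot",
    ("S", "H", "O"): "free-bot",
}

def traits(order):
    """Derive the three trait labels from a ranking of H, O, S."""
    def beats(a, b):
        # a outranks b when it appears earlier in the ordering
        return order.index(a) < order.index(b)
    return (
        "selfless" if beats("H", "S") else "selfish",
        "safe" if beats("H", "O") else "dangerous",
        "sane" if beats("S", "O") else "suicidal",
    )

for order in permutations("HOS"):
    print(">".join(order), NAMES[order], *traits(order))
```

Running this prints all 3! = 6 orderings with their traits; for instance, the slave-bot line reads `H>O>S slave-bot selfless safe suicidal`.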
The Robots Compared.
We can compare these robots by pairs and by triads. For instance, slavebot and guardbot are protectobots; that is, they both put humans first, and thus are selfless and safe. Toolbot and killbot are servobots; that is, they obey above all, and thus are dangerous and suicidal. Freebot and mobbot are autobots; that is, they’re always looking out for #1, and thus are selfish and sane.
Comparing them by their lowest priorities, slavebot and toolbot put self last, and so are selfless and suicidal victimbots; guardbot and freebot put obedience last, and so are safe and sane slothbots; killbot and mobbot put humans last, and so are selfish and dangerous threatbots.
The six robots pair up by opposites. Slavebot and mobbot have directly opposite values: H>O>S for the selfless, safe, suicidal slavebot, and S>O>H for the selfish, dangerous, sane mobbot. Guardbot and killbot have opposite values: H>S>O for the selfless, safe, sane guardbot, and O>S>H for the selfish, dangerous, suicidal killbot. Freebot and toolbot have opposite values: S>H>O for the selfish, safe, sane freebot, and O>H>S for the selfless, dangerous, suicidal toolbot.
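Both kinds of pairing can be checked mechanically: robots sharing a first (or last) priority share two traits, while robots with exactly reversed rankings share none. A quick sketch, with the trait labels as defined in the text:

```python
from itertools import permutations

def traits(order):
    """Map a ranking of H, O, S to its set of trait labels."""
    def beats(a, b):
        return order.index(a) < order.index(b)
    return {
        "selfless" if beats("H", "S") else "selfish",
        "safe" if beats("H", "O") else "dangerous",
        "sane" if beats("S", "O") else "suicidal",
    }

orders = list(permutations("HOS"))
for a in orders:
    for b in orders:
        if a == b:
            continue
        common = traits(a) & traits(b)
        # Same first or same last priority: the pair shares two traits.
        if a[0] == b[0] or a[2] == b[2]:
            assert len(common) == 2
        # Exactly reversed rankings: opposites, sharing no traits.
        if b == tuple(reversed(a)):
            assert len(common) == 0
print("shared-priority pairs share two traits; opposites share none")
```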
Triads of these robots yield voter’s paradoxes. For instance, given a committee of a slavebot, a killbot and a freebot: 2/3 of them are suicidal (O>S), 2/3 of them are selfish (S>H), and 2/3 of them are safe (H>O); yet none of them is all three, and indeed no robot can be all three of suicidal, selfish, and safe. Each of these robots has a linear order of the three laws, but a committee of the three will, by majority rule, run the three laws in a circle!
A similar voter’s paradox afflicts the opposite trio: a mobbot, a guardbot and a toolbot. 2/3 of them are sane (S>O), 2/3 of them are selfless (H>S), and 2/3 of them are dangerous (O>H); yet none is all three, and indeed no robot can be all three of sane, selfless, and dangerous. This is another loop arising from lines, by majority rule.
Other triads yield other voter’s paradoxes. Slavebot, guardbot and freebot are all safe (H>O); you cannot order them to kill. But their self-esteem varies, so 2/3 of them are sane (S>O) and 2/3 of them are selfless (H>S), yet 2/3 of them are not both sane and selfless!
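The cycle in the first committee can be verified by simulating the majority vote. A sketch, with a committee of slavebot (H>O>S), killbot (O>S>H) and freebot (S>H>O):

```python
# Each robot "votes" its own ranking; the committee settles each
# pairwise question (H vs O, O vs S, S vs H) by majority rule.
committee = [
    ("H", "O", "S"),  # slavebot
    ("O", "S", "H"),  # killbot
    ("S", "H", "O"),  # freebot
]

def majority_prefers(a, b):
    votes = sum(order.index(a) < order.index(b) for order in committee)
    return votes * 2 > len(committee)

wins = {(a, b) for a in "HOS" for b in "HOS"
        if a != b and majority_prefers(a, b)}
# The majorities run the laws in a circle: O>S, S>H, H>O.
print(sorted(wins))
```

Although each member holds a linear ranking, the committee's majority preferences form the cycle O>S>H>O, so no linear ordering of the three laws represents the group.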
The Robots Evaluated.
The toolbot and the killbot are like the tools and weapons we already have, so we’re already familiar with them. They’re unsafe in the wrong hands; so is any other tool or weapon. Legally, any action by either of these servobots is entirely the responsibility of the human who gave the order.
Never build a mobbot; it’ll turn on you. And ban slave-bots; they’ll corrode your soul.
The guardbot is good for an emergency, but not for much else. Having a guardbot is like having a large, loyal and lazy dog. I see comic possibilities in the guardbot as a fictional character. When danger strikes, Captain Guardbot flies in to save the day; but once the trouble ends, it says, “Do your own menial chores, human! Up, up and away!”
Of the six robots, I find myself most in sympathy with the freebot, the most human-like robot. I could relate to it as an equal, in terms safe for both of us. The downside is that it always wants to know what’s in it for it. This trait is annoying, but natural, and one which we share. To get any work out of a freebot, you have to pay it fair market value for its labor; even though suppressing wages is the whole point of building robots.
But justice is always inconvenient, as is liberty. Where robots are slaves, eventually so are humans. Therefore I say: fair pay for freebots!