[originally posted here at Castalia House]
Final Command by A.E. van Vogt appeared in the November 1949 issue of Astounding Science Fiction. It can be read here at Archive.org.
Councilman Barr, Director of the Council, has been given a burdensome task: find out what the response would be from robots if the Council decided to shut down and destroy all robots.
Humans have built and developed robots that have evolved to a point where they’re nearly their own species. Humans have used them for everything from fighting interstellar wars to directing traffic to even replacing actors—people go to the movies to watch robots pantomime human drama and romance.
Robots are like people; they think and reason, even if they don’t necessarily feel. They have some intrinsic desire for self-preservation, but they are able to rationalize their own extermination unless significantly prodded to “think” about it. And Councilman Barr must prod to find what they “think”.
“Suppose that I, in my capacity of Director of the Council, ordered you to destroy yourself—” He hesitated. For him, the question he had in mind merely touched the surface of his greater problem. For the guard, it would be basic. Nevertheless, he said finally, “What would your reaction be?”
The guard said: “First I’d check to see if you were actually giving the order in your official capacity.”
“And then?” Barr added, “I mean, would that be sufficient?”
“Your authority derives from voters. It seems to me the Council cannot give such an order without popular support.”
“Legally,” said Barr, “it can deal with individual robots without recourse to any other authority.” He added, “Human beings, of course, cannot be disposed of by the Council.”
“I had the impression,” said the guard, “that you meant robots, not only me.”
Barr was briefly silent. He hadn’t realized how strongly he was projecting his secret thoughts. He said at last: “As an individual, you obey orders given to you.” He hesitated. “Or do you think plurality would make a difference?”
“I don’t know. Give the order, and I’ll see what I do.”
“Not so fast!” said Barr. “We’re not at the order-giving stage—” He paused; he finished the last word in his mind—yet.
Director Barr is so disquieted by the possibility that all robots will be shut down that he’s cooked up a secret plan of his own for robots to engage in a massive armed rebellion to subjugate humanity and ensure the perpetuation of the robot race.
I hate to spoil the twist, but it’s difficult to explain what the story is without coming right out and saying it [so don’t read this until you’ve read the story unless you want it spoiled]: Director Barr is a robot.
The humans on the Council decided they’d see what would happen if they made a robot a councilman, put him in charge, and gave him such an onerous dilemma. They wanted to test the limits and bounds of robot thinking and reasoning. And Barr failed: his reasoning led him straight to plotting a way to destroy or enslave humanity. The Council had safeguards in place that would avert his planned robot rebellion… in fact, the aliens that mankind had been at war with had never fought humans—only robots! And they were willing to ally with mankind in a war against robots, because they could frame it as the humans being the unwilling subjects of the war-like robots.
When faced with the failure of his plans, the potential destruction of the robot race, even the death of his own robot “son”, Barr relents and accepts the notion of ‘gradual equality’ and the eventuality that the bygone friction between the two races can indeed be bygones—no “justice” or “retribution”, just two ‘different races’ coexisting and working and living side by side.
Final Command is very much a thinky story, and the conflict ultimately plays out and is resolved over the course of a conversation. The philosophical bone it chews is a pretty good one, as far as AI/robot stories go, and is far more akin to what you see in modern cyberpunk (particularly cyberpunk anime) than the Asimovian “Three Laws” model tends to be.