“‘Fifty years,’ I hackneyed, ‘is a long time.’ ‘Not when you’re looking back at them,’ she said. ‘You wonder how they vanished so quickly.’”
At age 75, Dr. Susan Calvin’s long career in robotic psychology does not seem long at all. That career also covers most of the history of robots, and the speed of change in that field conjures a feeling of time compression that typifies rapidly advancing technologies. Robots have come a long way very quickly since the first ones stepped off the assembly line.
“To you, a robot is a robot. Gears and metal; electricity and positrons.—Mind and iron! Human-made! If necessary, human-destroyed! But you haven’t worked with them, so you don’t know them. They’re a cleaner better breed than we are.”
Dr. Calvin looks back on her career as one of the most important influences in the development of robotics. She believes her company’s mechanical servants are not prone to the foibles of people, and that, despite the naysayers and doomsayers, robots are a great boon to humanity.
“Robbie was constructed for only one purpose really—to be the companion of a little child. His entire ‘mentality’ has been created for the purpose. He just can’t help being faithful and loving and kind. He’s a machine—made so. That’s more than you can say for humans.”
George Weston, owner of Robbie the nursemaid robot, defends the device, which has always served his daughter Gloria well and faithfully. George’s wife, though, has begun to doubt the wisdom of giving the girl over to the tender care of a robot. She feels left out and blames the robot for doing what it cannot help but do, turning his automatic good behavior into evidence of wrongdoing.
“‘Why do you cry, Gloria? Robbie was only a machine, just a nasty old machine. He wasn’t alive at all.’ ‘He was not no machine!’ screamed Gloria, fiercely and ungrammatically. ‘He was a person just like you and me and he was my friend. I want him back. Oh, Mamma, I want him back.’”
Gloria’s mother hates Robbie and decides he must go. The Westons remove the machine and replace it with a dog, but their daughter remains deeply attached to the robotic nursemaid. Fear and superstition prevail over logic and kindness, and, for a time, the girl loses her best friend.
“‘Now, look, let’s start with the three fundamental Rules of Robotics […] One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.’ ‘Right!’ ‘Two,’ continued Powell, ‘a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.’ ‘Right!’ ‘And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.’”
The problem of powerful mechanical brains is that they might become unstoppably harmful. The Three Laws—introduced for the first time in this passage—are elegantly designed and carefully installed into every robot’s mind so that the machines must comply with human commands and do so in a harmless manner. This does not mean the Laws are foolproof: Though simple, the rules can cause problems and dilemmas. As such, they are a potent source of plot material for the Robot book series.
“These robots possessed peculiar brains. Oh, the three Laws of Robotics held. They had to. All of U.S. Robots, from Robertson himself to the new floor-sweeper, would insist on that. So QT-1 was safe! And yet—the QT models were the first of their kind, and this was the first of the QT’s. Mathematical squiggles on paper were not always the most comforting protection against robotic fact.”
Gregory Powell expresses a universal concern about technology, that it might not work the way it is supposed to. In the case of robots, such a failure could be disastrous. Powell’s concern anticipates the struggles 21st-century computer engineers face with Artificial Intelligence, which sometimes makes surprising decisions that programmers cannot fully understand. It is hard to trust a machine if you simply do not know what it is going to do.
“Donovan was half in tears. ‘[Cutie] doesn’t believe us, or the books, or his eyes.’ ‘No,’ said Powell bitterly, ‘he’s a reasoning robot—damn it. He believes only reason, and there’s one trouble with that—’ His voice trailed away. ‘What’s that?’ prompted Donovan. ‘You can prove anything you want by coldly logical reason—if you pick the proper postulates. We have ours and Cutie has his.’”
Powell understands that, if a mind decides that its first principles are the truth, then everything that follows by reasoning must also be true, and there is no need for real-world evidence. Cutie concludes that humans cannot have made him and that it must instead have been the space station’s central generator, which he calls the Master. Thereafter, Cutie simply acts on faith: The Master is God, Cutie is his servant, and there is no need for doubt. Powell knows that Cutie’s philosophy is wrong and fears that the error will prove catastrophic when an approaching solar storm disrupts the power beam, causing massive destruction on Earth. The robot’s faulty first principles thus might cause a disaster.
“The unwritten motto of United States Robot and Mechanical Men Corp. was well-known: ‘No employee makes the same mistake twice. He is fired the first time.’”
US Robots prides itself on excellence, including thorough and careful inspections during each step of the robot manufacturing process. The company must show, at all times, that no mistakes are made, lest a bad robot get out into the world. To that end, its corporate culture is deliberately and highly intolerant of error. It is a merciless rule, but one meant to convince skeptical humans that robots are too perfect to threaten them.
“‘You can’t tell them,’ droned the psychologist slowly, ‘because that would hurt and you mustn’t hurt. But if you don’t tell them, you hurt, so you must tell them. And if you do, you will hurt and you mustn’t, so you can’t tell them; but if you don’t, you hurt, so you must; but if you do, you hurt, so you mustn’t; but if you don’t, you hurt, so you must; but if you do, you—’”
Dr. Calvin intones the paradox, over and over, to Herbie the robot. He reads minds and can tell when people are in emotional pain. His purpose is, first and foremost, not to cause harm, but anything he does in this situation—giving them the answer to a manufacturing puzzle that only he can figure out—will hurt their egos because a robot, and not they, has solved the mystery. Caught in a vicious logical circle that he cannot resolve, Herbie burns out his own brain. Dr. Calvin, aware that Herbie’s mind-reading ability makes him useless and possibly dangerous, has no qualms about destroying him.
“All normal life, Peter, consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger. Physically, and, to an extent, mentally, a robot—any robot—is superior to human beings. What makes him slavish, then? Only the First Law! Why, without it, the first order you tried to give a robot would result in your death.”
The robots are built to be slavishly obedient and protective because they are smarter and stronger than humans and could easily damage or kill them. Trying to work with such a dangerously powerful creature is like arguing with a lion about who gets to eat the lion’s kill. Permitting just one robot to ease up on the First Law creates an instability that might quickly cause robots to wheel out of control, perhaps to rampage and destroy anything in their way. This is a large part of why, in the story, most humans on Earth are unwilling to embrace robot technology. The very idea that something could go terribly wrong leads to the conclusion that, someday, the terribly wrong thing will happen.
“The work here is rough and most of us get a little jagged. Fooling around with hyper-space isn’t fun. […] We run the risk continually of blowing a hole in normal space-time fabric and dropping right out of the universe, asteroid and all. Sounds screwy, doesn’t it? Naturally, you’re on edge sometimes.”
Physicist Gerald Black explains the risks inherent in research into hyperspace travel. Project scientists risk far more than paper cuts, and their willingness to keep working despite the danger is a testament to their courage. Black and his colleagues must labor alongside robots whose job includes protecting them from danger; this can interfere with progress on the project. If a stressed and testy worker says the wrong thing to a robot, the dangers multiply.
“Those robots attach importance to what they consider superiority. You’ve just said as much yourself. Subconsciously they feel humans to be inferior and the First Law which protects us from them is imperfect. They are unstable.”
The Three Laws of Robotics act as a governor on robot behavior: Their superior brains might otherwise tend to disregard the thoughts and concerns of humans. Without the Three Laws, robots could quickly come to dominate humanity. The main characters must keep watch on the robots, lest something go wrong that lets the machines off the leash of the Three Laws.
“Now a human caught in an impossibility often responds by a retreat from reality: by entry into a world of delusion, or by taking to drink, going off into hysteria, or jumping off a bridge. It all comes to the same thing—a refusal or inability to face the situation squarely. And so, the robot. A dilemma at its mildest will disorder half its relays; and at its worst it will burn out every positronic brain path past repair.”
Any situation where a person is damned if they act and damned if they don’t can cause unendurable tension. The same principle applies to the story’s robots: Forced to solve an unsolvable problem, they overheat their brains and their minds collapse. Positronic brains are difficult and tricky to create, and the severe distortions caused by major dilemmas are a death sentence. This problem plagues robot manufacturers and users throughout the book.
“‘HURRY! HURRY! HURRY!!! Stir your bones, and don’t keep us waiting—there are many more in line. Have your certificates ready, and make sure Peter’s release is stamped across it. See if you are at the proper entrance gate. There will be plenty of fire for all. Hey, you—YOU DOWN THERE. TAKE YOUR PLACE IN LINE OR—’ The white thread that was Powell groveled backward before the advancing shout, and felt the sharp stab of the pointing finger. It all exploded into a rainbow of sound that dripped its fragments onto an aching brain.”
In the hyper-space ship, invented by a mechanical Brain, Gregory Powell experiences his own death and migration to Hell. His work partner, Mike Donovan, undergoes a similar psychic experience. Anything that harms humans is forbidden to robots, and The Brain conforms to the Three Laws of Robotics, but by delving carefully deeper into the problem he finds that people return to life after the ship leaves its space warp. Thus, no real harm befalls them, and they can safely travel across the galaxy at hyper-speed. The Brain, meanwhile, resolves the stress of this difficult task by developing a sly sense of humor: He writes programming that distracts the ship’s passengers with bizarre and macabre afterlife fictions while they’re in their strange state of non-being during the warp jump.
“‘Oh, are robots so different from men, mentally?’ ‘Worlds different […] Robots are essentially decent.’”
Dr. Calvin answers Stephen Byerley’s question with her own assessment of the robots she studies. They are built to be obedient and protective toward humans; they tolerate people’s impatience, rudeness, stupidity, and foolishness with barely a nod and always respond politely, making their best efforts to help the person involved. Their goodwill and civility are an artifact of their design, an emergent property. Not only are robots smarter and more capable than people; they are nicer, too. For Dr. Calvin, the tragedy is that the most trustworthy conscious beings are also the least trusted.
“[T]he three Rules of Robotics are the essential guiding principles of a good many of the world’s ethical systems. Of course, every human being is supposed to have the instinct of self-preservation. That’s Rule Three to a robot. Also, every ‘good’ human being, with a social conscience and a sense of responsibility, is supposed to defer to proper authority; to listen to his doctor, his boss, his government, his psychiatrist, his fellow man; to obey laws, to follow rules, to conform to custom—even when they interfere with his comfort or his safety. That’s Rule Two to a robot. Also, every ‘good’ human being is supposed to love others as himself, protect his fellow man, risk his life to save another. That’s Rule One to a robot. To put it simply—if Byerley follows all the Rules of Robotics, he may be a robot, and may simply be a very good man.”
Francis Quinn listens as Dr. Calvin explains the similarities between robot behavior and human decency. Quinn hopes to prove that Stephen Byerley is a robot, but the Three Laws are not much help because they are similar to common ethical systems. Only if Byerley breaks one of the Three Laws—something impossible for a positronic robot brain to do—can he prove that he is human.
“‘But,’ said Quinn, ‘you’re telling me that you can never prove him a robot.’ ‘I may be able to prove him not a robot.’ ‘That’s not the proof I want.’ ‘You’ll have such proof as exists. You are the only one responsible for your own wants.’”
Dr. Calvin’s mind works with rigorous soundness; she is not prone to the basic logical errors into which most people tumble. Proving that A leads to B does not prove that B leads to A, yet Quinn wants something of the sort so he can disqualify Byerley from the mayor’s race. It does not matter, implies Dr. Calvin, how much Quinn wants to pressure US Robots: They cannot prove the unprovable.
“[Y]ou just can’t differentiate between a robot and the very best of humans.”
Quinn wants to show that a decent man, Stephen Byerley, is a robot precisely because of his decency. Dr. Calvin counters that good people will tend to behave ethically in ways similar to robots—they will be concerned with the welfare of others and will avoid conflict—which is much more a testament to their own high quality as people than it is any sort of proof that they’re robots. Dr. Calvin’s remark is a sly condemnation of most people, and of Quinn in particular.
“‘Every period of human development, Susan,’ said the Co-ordinator, ‘has had its own particular type of human conflict—its own variety of problem that, apparently, could be settled only by force. And each time, frustratingly enough, force never really settled the problem. Instead, it persisted through a series of conflicts, then vanished of itself,—what’s the expression,—ah, yes “not with a bang, but a whimper,” as the economic and social environment changed. And then, new problems, and a new series of wars.’”
World leader Stephen Byerley describes to Dr. Calvin how recent small disruptions in the regional economies may signal a coming war. Past wars universally have failed to resolve their underlying problems, which instead faded away on their own as conditions changed. Working on that principle, the super-intelligent Machines have kept the international economy running smoothly until now, but it is possible they face sabotage from people who want to return to the old system of war and conquest.
“People say ‘It’s as plain as the nose on your face.’ But how much of the nose on your face can you see, unless someone holds a mirror up to you?”
People misjudge situations partly because they are blind to what’s right in front of them. They miss the obvious when it is so common that it disappears from awareness. One can barely see one’s own nose, yet it is right there all the time. Only on reflection can we see the truth of who we are and grasp the importance of what should be obvious. Byerley argues that people fail to see what obviously needs fixing in world affairs, perhaps because they are too preoccupied with angry paranoia and fear of one another to see clearly the solutions that work for everyone.
“[…] we can no longer understand our own creations.”
As computing machines become more powerful, they solve problems of ever-greater complexity that require ever-more-capable machines, to the point where people can no longer understand the conclusions the machines reach, much less how they arrive at them. Humans must trust their Machines to find the right answers, which puts people at the mercy of their thinking devices and gives rise to anti-robot movements.
“Humans are fallible, also corruptible, and ordinary mechanical devices are liable to mechanical failure. The real point of the matter is that what we call a ‘wrong datum’ is one which is inconsistent with all other known data. It is our only criterion of right and wrong. It is the Machine’s as well.”
Hiram Mackenzie, the Northern Region’s Vice Co-ordinator, explains to World Co-ordinator Stephen Byerley how the Machines interpret the data they receive from humans. The world’s economy is run for maximum efficiency and the greatest good; the only thing that is immoral, in that system, is incorrect data. The Machines might be fooled if the errors were carefully hidden, almost right but not quite, yet that is the only way the system can be sabotaged, and even that is highly unlikely. In fact, the Machines know when someone is trying to fool them, and they take appropriate measures in response.
“The Machine is only a tool after all, which can help humanity progress faster by taking some of the burdens of calculations and interpretations off its back. The task of the human brain remains what it has always been; that of discovering new data to be analyzed, and of devising new concepts to be tested.”
Mackenzie describes his philosophy to Byerley. Both agree that the Machines that do the calculations cannot be fooled by wrong data and are a boon to humanity, but Mackenzie also believes that human minds still have an important purpose in the modern world. Whether humans still matter or are simply along for the ride becomes an important question for Byerley.
“Think about the Machines for a while, Stephen. They are robots, and they follow the First Law. But the Machines work not for any single human being, but for all humanity, so that the First Law becomes: ‘No Machine may harm humanity; or, through inaction, allow humanity to come to harm.’”
The Machines obey the Three Laws of Robotics, but their concern is for the world as a whole, not the individual humans with whom ordinary robots interact. Thus, the Machines’ First Law expands to include everyone. This might be a large mental load for a single robot, what with all the extra calculations needed to fulfill so large a responsibility, but the Machines specialize in such computations, and their job is not so much to act as to provide answers. Either way, this is the first time that the all-humanity version of the First Law comes into play; it is an ethical dictum with vast effects later in the Robot and Foundation series.
“‘Perhaps an agrarian or pastoral civilization, with less culture and less people would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good—and we would then fight change. Or perhaps a complete urbanization, or a completely caste-ridden society, or complete anarchy, is the answer. We don’t know. Only the Machines know, and they are going there and taking us with them.’ ‘But you are telling me, Susan, that the “Society for Humanity” is right; and that Mankind has lost its own say in its future.’ ‘It never had any, really. It was always at the mercy of economic and sociological forces it did not understand—at the whims of climate, and the fortunes of war.’”
Once the logical and unemotional Machines are in charge, with their vastly superior intelligence, they will design humanity’s future. Individuals will either conform to that plan or be coerced into doing so. Dr. Calvin argues that we never had the free, unfettered ability to manage the entire world, guided as we were by our own bias and selfishness. Government by gentle machines of goodwill, which prevent wars and economic disasters, is preferable to the rule of unbridled passions.