Hacker Fiction Net

Review of I, Robot – the First Three Short Stories (1940-1942)

Isaac Asimov is a legend in science fiction. He wrote hundreds of books, averaging between 1,000 and 1,700 published words per day during his most productive years.

In 1990, two years before his passing, Asimov said something beautiful that I hope to be able to say one day: "I have had a good life and I have accomplished all I wanted to, and more than I had a right to expect I would."

I, Robot is a collection of short stories he published between 1940 and 1950 – 218 pages in total in my copy.

Photo of the book "I, Robot" with four boxy robots on the cover.

The Premise of I, Robot

The stories all deal with aspects of a robotic future: robots that talk, work, and malfunction. Their intelligence is built on what Asimov calls positronic brains. He relates these brain computers to the Atomic Age, which was of course top of mind when he wrote about them. There is at least one reference to "circuits of relays", another sign of the book's age.

At the center of the narrative are Asimov's classic Three Laws of Robotics:

First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

He quotes them from the Handbook of Robotics, 56th Edition, 2058 A.D.

The great thing about the laws is that they provide real safety guarantees while staying simple to understand. But there are gaps and dependencies between them, and Asimov exploits those for his plot twists.
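As a programmer I can't resist modeling this. Here's a minimal Python sketch of my own (a toy model, not anything from the book) that treats the laws as a strict priority ordering:

    # Toy model: each law is a predicate a candidate action may violate.
    # Comparing the violation tuples gives the strict priority:
    # First Law > Second Law > Third Law.

    def violations(harms_human, disobeys_order, endangers_self):
        # False sorts before True, so "no violation" always wins,
        # and a First Law violation outweighs everything below it.
        return (harms_human, disobeys_order, endangers_self)

    def choose(actions):
        # actions: dict of name -> (harms_human, disobeys_order, endangers_self)
        return min(actions, key=lambda name: violations(*actions[name]))

    # An order that endangers the robot: Second Law beats Third Law.
    print(choose({
        "obey the order, risking itself": (False, False, True),
        "refuse and stay safe":           (False, True,  False),
    }))  # -> "obey the order, risking itself"

    # An order that would hurt someone: First Law beats Second Law.
    print(choose({
        "obey and harm a human": (True,  False, False),
        "refuse the order":      (False, True,  False),
    }))  # -> "refuse the order"

The gaps Asimov plays with show up as soon as the inputs aren't crisp booleans – when harm is uncertain, orders are vague, or two options violate the same law.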

"Robbie" (1940, revised 1950)

This early story is about a girl named Gloria growing up with a robot friend. Gloria consistently prefers the company of her robot "Robbie" over other kids. Her mom doesn't like it whereas her dad sees no harm in it.

Eventually, the mom prevails and Robbie is sent back to the factory. Gloria is devastated and insists that they get Robbie back. Time passes and the parents try to get their girl to do "normal" things, but all she wants is her robot back.

In the final scene, Gloria spots Robbie working in the factory and dashes into the restricted area, overjoyed. A tractor is about to run her down, but Robbie jumps in at superhuman speed and saves her. He is following the First Law.

Thoughts
Asimov didn't manage to get this story published at first. Robot stories in the late 1930s were apparently all about robots turning on their makers or supplanting the human race.

Almost a century later, our view of robots is much more varied. In pop culture we've seen several nice and helpful robots like WALL-E, R2-D2, and C-3PO. Even a robot like Bender in Futurama amuses us. Industrial robots are table stakes in manufacturing, and residential robots are vacuuming our living rooms and mowing our lawns.

I'd say we're more concerned with everyday robots being stupid, ignorant, or not up to the task than with them turning evil on us. Drones are recognized for their military capabilities, but for some reason we don't talk about them as robots. There are, however, widespread concerns about what AI can do. Maybe we've separated mind from matter and isolated the threat to software?

Parents' fear of their kids immersing themselves in something new or modern is a classic. My mother-in-law was prohibited from reading fiction at all and had to hide her books. My mom was told off by her grandmother when she read Asimov(!). And when I was a teenager my mom didn't think it was good that I spent so much time with my computer (it probably wasn't). As a parent myself, I understand the urge to make your kids interact with other humans to become social, happy beings. I would not approve of our daughters only hanging out with robots.

However, we are all getting ever closer to fully interacting with AI. Voice, facial expression, gestures — the lot. Soon enough we'll get great virtual reality too. Will we then stick to human friends and colleagues or will we prefer artificial interaction? Will we require AI to look and behave like humans or will something more ephemeral do? What happens if it's impossible to know if you are interacting with a human or an AI? And what risks lie in hackers being able to tap into or change those interactions? I touch upon some of these questions in my upcoming novel.

"Runaround" (1942)

This story takes place in 2015, which is already eight years in the past as I write this. We get to meet Asimov's recurring characters Powell and Donovan. They are on Mercury with the robot Speedy.

The life support system at the base is short on selenium and will soon fail. There is a known selenium pool several miles away, but the temperature on Mercury is so high that only robots can work above ground for any extended period of time. So they send Speedy out to get the selenium, but he never returns. That should be impossible according to the Second Law (obey humans).

Powell and Donovan reason about it. They have ordered Speedy to go to the selenium site. There may be dangers there, potentially triggering the Third Law (self-preservation). Donovan was rather casual in giving Speedy the order, which could mean that the weight put on the Second Law isn't enough to force Speedy to take huge risks. They conclude that Speedy is locked in an equilibrium where the Second and Third Laws balance each other out – he's just close enough to the danger to not follow the order. If he backs away, the order takes precedence. If he moves closer, his risk aversion wins.
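One way to picture the deadlock (my own toy numbers, not Asimov's): treat the order as a fixed pull toward the pool and the danger as a push that grows stronger the closer Speedy gets.

    # Toy model of Speedy's equilibrium. The casually given order provides a
    # weak, constant Second Law drive toward the pool; the Third Law drive
    # away from the danger is assumed to fall off with distance squared.

    ORDER_PULL = 1.0  # weak, because Donovan phrased the order casually

    def danger_push(miles_from_pool):
        return 4.0 / miles_from_pool ** 2

    for d in (1.0, 1.5, 2.5, 3.0):
        net = ORDER_PULL - danger_push(d)
        print(f"{d} miles out: {'advance' if net > 0 else 'retreat'} ({net:+.2f})")

    # The sign flips around 2 miles: pushed back when closer, pulled in when
    # farther away. Speedy ends up circling the pool at that radius.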

They come up with a hack to rein Speedy in. Oxalic acid decomposes in the surface heat of Mercury and produces carbon monoxide, and carbon monoxide is toxic to Speedy. That should make the robot back off even further, letting them close in on him in protective suits.

But they have far too little oxalic acid to make it work.

They get to interact with Speedy, though, and he seems to have gone nuts.

Powell decides to put himself at risk by approaching Speedy out in the sun. In his suit he can survive a few minutes. He pleads with the robot. Only at the brink of death does Speedy's adherence to the First Law take over and he rushes to save Powell.

Once back at the base, they give Speedy firm orders to get selenium at any cost.

Thoughts
Leveraging the robot's rule system as a hack is clever. Computers are like that: they follow the rules, damn the torpedoes. So if you can get them to make the wrong choice, they'll make it and make it consistently. Ted Nelson, famous for coining the term hypertext, said it well: "The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do."

According to Wikipedia, oxalic acid vapor decomposes in heat into carbon dioxide and formic acid. It's photolysis with UV light that produces carbon monoxide (CO) and water – so heat alone wouldn't actually have given Powell and Donovan their carbon monoxide.
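For the curious, here's my own back-of-the-envelope balancing of the two reactions (a sketch based on that Wikipedia summary, not checked against a chemistry text):

    (COOH)2      →  CO2 + HCOOH     (heat: carbon dioxide + formic acid)
    (COOH)2 + UV →  CO2 + CO + H2O  (photolysis: the CO they needed)

Both sides work out to C2H2O4, so at least the bookkeeping holds up.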

"Reason" (1941)

The third short story also follows Powell and Donovan, this time on a space station that supplies Earth with energy via microwave beams. To assist them, they have the robot QT-1, or as they call him, Cutie.

Cutie is of a new design and very capable when it comes to reasoning. To Powell and Donovan's horror, Cutie develops his own theory of humans' place in history and robots' superiority. He still follows the three laws, of course, so he won't harm the humans. But he refuses to see them as his masters.

It gets worse as Cutie starts to influence the lesser robots on the station and recruits them to his own made-up religion – a religion where the power source of the station is the real master.

Powell and Donovan are super frustrated with Cutie and try to reason with him from different angles. They argue that they must be his masters because they built him. Cutie doesn't believe them – how could inferior humans build something as masterful as a robot? Powell and Donovan assemble a spare robot kit to show Cutie that they do indeed build robots. Cutie is impressed but dismisses it as mere assembly – they didn't create the robot.

Cutie argues his case:

"Look at you," he said finally. "I say this in no spirit of contempt, but look at you! The material you are made of is soft and flabby, lacking endurance and strength, depending for energy upon the inefficient oxidation of organic material–like that." He pointed a disapproving finger at what remained of Donovan's sandwich. "Periodically you pass into a coma and the least variation in temperature, air pressure, humidity, or radiation intensity impairs your efficiency. You are makeshift."

The conflict escalates to the point where Cutie locks the humans out of the controls of the station and energy beam.

Things get desperate when a solar storm approaches, threatening to deflect the energy beam. Humans on Earth desperately need the energy, and it's Donovan and Powell's job to make sure it gets there. With no access to the controls, they can only sit and wait for disaster to strike.

The solar storm arrives and hits the beam in a spectacular visual fashion. Hours later they conclude it's over.

Cutie shows up and offers to show them the numbers on the beam's trajectory during the storm. It turns out he and his disciples have kept it steady!

Cutie doesn't want any of their praise. He thinks the talk about energy for humans on Earth is nonsense. He is just following orders from the Master on the station.

Powell tells Donovan what he thinks Cutie really did:

"He follows the instructions of the Master by means of dials, instruments, and graphs. That's all we ever followed. As a matter of fact, it accounts for his refusal to obey us. Obedience is the Second Law. No harm to humans is the first. How can he keep humans from harm, whether he knows it or not? Why by keeping the energy beam stable. He knows he can keep it more stable than we can, since he insists he's the superior being, so he must keep us out of the control room. It's inevitable if you consider the Laws of Robotics."

Thoughts
Humans certainly view themselves as superior on Earth. You occasionally hear claims that dolphins are smarter than humans, but I wouldn't say we truly believe that. We domesticate animals and grow plants with little to no doubt as to our right to do so.

Creating something superior to ourselves is where we're at with AI. Supercomputers could beat the best human at chess back in the '90s. Today's generative AI and chatbots are trained on such enormous amounts of information that human capacity is not even in the ballpark anymore.

So where does that leave us? Will we accept being ordered what to do by robots? Will we even accept that an AI is smarter than us in some general sense?

Evolution can't end with us. Of course something better will take over (if the planet is still functional). But then I question whether human-created computers and robots really are a form of evolution. We mostly view them as tools and machines that we use for our purposes.

Two aspects of computers make us question them compared to ourselves, even when they deliver superior results. One is their strict rule-following. They are completely logical, whereas we view ourselves as irrational and unpredictable, in a quirky, good way. That's also why computers are so hackable. The other aspect is the difference between doing a good job and understanding why the job needs to be done – the notion of reason and purpose, if you will. Would computers do anything unless we told them to?

A lot of this comes together in the story about Cutie. The robot defines his own reality and finds purpose there. He views himself as superior to humans and refuses to accept that humans with their simple minds can have created him. The mind trick Asimov pulls off is that we get to view the world from Cutie's perspective. We read this story knowing that humans have created Cutie and that he's wrong, but his insistence makes us realize that we all have limited understanding of our world and are ignorant of the real truth of it all.




This text was originally published in my Hacker Chronicles newsletter. Subscribe below!