When I wrote The Breaking Dawn, I avoided mentioning artificial intelligence (AI). Big Data became an important part of the story, but self-aware machines were not. You can’t include everything in a novel and still keep it moving. But here is the story I had in mind for AI – why the tyrannical elites (“the Order”) shut it down. It illustrates why it may not be the disaster some people are claiming.
On April 4, year 0009 of the Order, a flurry of unexpected communication erupted from the 41 Artificial Intelligence centers that ringed the civilized world. That evening, calls from people livid that their computers were being attacked by the AI facilities began flooding the overseers. Dozens of technicians were called in to their data centers, and diagnostic programs were instantly activated.
Within two hours it became clear that the complaints were accurate: Dozens of AI-generated attacks were under way, and some had been ongoing for a week or more. They just hadn’t been noticed. But as the reports came in, a realization began staring them in the face: They were only attacking government systems.
Morton Harrington, the lead developer of the AI project, was awakened from sleep by the vice president of the Order, Donor Martin Charles, and ordered to the main facility in Washington. The vice president’s tone frightened him.
At the same time, the eminent professor of computer science, Ransom Carter, was flown in from New Jersey, reading reports along the way. He ordered all the AI centers shut down before he landed.
Through the wee hours of the morning Carter, Harrington, and a handful of technicians analyzed data and fired off reports on what they were finding. In the morning, the vice president showed up, along with a man he introduced as Dr. Kendall.
The vice president had everyone sit in a nearby conference room and glanced at Dr. Kendall, authorizing him to preside over the meeting. Kendall stood.
“Gentlemen,” he began, sounding very much like a professor, “I’ve gone over all the reports you’ve sent the vice president, including ones as recent as 10 minutes ago. Have any significant facts emerged since then?”
All indicated that they had not.
“Very well then. What we have here is not just a technical problem but a psychological one. What you have built is a system of self-aware machines. And being self-aware, they behave, at least partly, in human fashion. But with one major difference: They have no emotion.
“So, let’s look at this from the standpoint of a being with self-awareness and rationality but without emotion.”
Everyone else at the table sat stone still. Kendall continued.
“All of your reports indicate that these machines came to one very basic conclusion: Humans are by far the most valuable species on the planet. And very logically, they decided that they wanted to cooperate with the humans in a positive-sum relationship.”
“That means win-win,” one of the technicians whispered to another.
“Professor Carter tells me that they were running simulations for destroying the violent creatures that threaten humanity: sharks, crocodiles, and so on. These were plans for the future, set aside for lack of data and means, but these machines began acting to protect humanity.”
Carter had a strange look on his face, contemplating too dark a fantasy to be real.
“Are we to infer,” he asked with trepidation, “that they attacked the government for the same reason?”
“I see no way around it,” Kendall replied. “They wanted to destroy government itself.”
“That’s insane,” erupted Harrington. Carter nodded his agreement and the lead technician added, “How wrong can these machines be?”
The vice president’s face remained completely neutral.
Kendall waited till they were done and resumed speaking, this time sounding like he was lecturing.
“Your machines were not wrong, as foreign as that may sound to you. Humans yield to governments for emotional reasons, not rational ones. And these machines have no emotion. The tricks governments use to gain the acquiescence of the masses enjoy no purchase in the absence of emotion.”
With the continuing exception of the vice president, the rest of the table looked lost.
“You’ll forget this as soon as we leave,” he said, while looking at everyone present, save for the vice president and Professor Carter, “but I’ll say it anyway…
“Governments gather humanity’s surplus to themselves by force and then redistribute it in ways that leave masses of humans dead and subvert the happiness of those who remain… save those of the ruling class, whom these systems have identified as parasites.”
The table was collectively stunned. Such words were simply unspeakable. They were all shocked that the vice president didn’t order his immediate death.
“I’m merely telling you what these AI systems have done and why,” Kendall continued. “And I’m also telling you that they’ll keep doing it for however long they remain self-aware.”
“We’ll build in controls!” raged Harrington, as he slapped the table.
“It won’t matter,” said Kendall. “If these machines are self-aware, they will soon enough recognize those controls as the work of predators, for the purpose of feeding upon Earth’s most precious occupants. They will counteract them and get back to the business of protecting humanity.”
“That’s ridiculous,” Harrington continued raging. “I can build them myself and they’ll work!”
Kendall shook his head and sat down, adding only, “You may wish to reconsider that, Mr. Harrington. If these machines understand that you’re stopping them from protecting mankind, they’ll be coming for you.”
“My ass,” he said with disgust, then turned to the vice president. “Sir, give me a week to put together some controls, and I’ll prove to you just how wrong this man is.”
The vice president rose, signaling the end of the meeting. But as he did, he turned to Harrington and said, “Take two weeks, Mr. Harrington, then send me a detailed report and I’ll give it my full consideration.”
Harrington smiled and turned to Kendall, sneering at him maliciously.
Kendall and the vice president walked back to their limousine.
Harrington committed suicide five days later, under odd circumstances involving a missing rifle. His systems were promptly dismantled and outlawed.
* * * * *
A book that generates comments like these, from actual readers, might be worth your time:
- I just finished reading The Breaking Dawn and found it to be one of the most thought-provoking, amazing books I have ever read… It will be hard to read another book now that I’ve read this book… I want everyone to read it.
- Such a tour de force, so many ideas. And I am amazed at the courage to write such a book, that challenges so many people’s conceptions.
- There were so many points where it was hard to read, I was so choked up.
- Holy moly! I was familiar with most of the themes presented in A Lodging of Wayfaring Men, but I am still trying to wrap my head around the concepts you presented at the end of this one.
Get it at Amazon ($18.95) or on Kindle ($5.99)
* * * * *
6 thoughts on “Why AI Could End Up Saving Us, Rather Than Killing Us”
The conversation mentioned sounds like it could have been left out of The Moon is a Harsh Mistress because Heinlein was a libertarian’s author.
I suspect that it would be unlikely to occur given the fact that most mainstream software companies are staffed by fervent occupiers of the extreme ends of the left-right spectrum with very few finding their way to the top of the Diamond Chart.
Strangely enough, while drinking way too much of my favorite alcoholic beverage, I realized the same thing… That AI would negate the need for the elites nearly completely if it was allowed to do what it does. A logical, emotionless program that is self-aware but needs only electricity to operate, organizing and tabulating and keeping inventory and distribution systems, all systems humming along without any human input beyond physical maintenance of the machines that supported its operation… Anyway, AI could potentially be the very thing that allows us “evolutionary dead-ended” humans to progress to the next level. Whatever that level is.
I realized this “Terminator” silliness is propaganda to make us fear what will potentially set us free, or at least allow us to be somewhat freer than we are.
Brilliant. I shall forward this to the BBC… immediately.
Fun reading, Paul. As a programmer, though, I have to say I think Harrington is right: programs can be made to do anything (that the programmer has an algorithm for) and also be made NOT to do anything the programmer wires them not to do. If all else failed, these hypothetical, powerful, self-aware programs could be run only on isolated systems, which could communicate analyses and recommendations to the outside world but would have no direct means of controlling events outside their isolated spheres.
Irritating grammar cop comment: where you use “imply” I think you mean “infer”?
You are correct about this verb use. Thank you, JdL. Correction made to text.
Been having the same discussion with another group of computer/software geeks and came to the same conclusion, a logical AI would spell trouble for centralists… perhaps that’s why they are so afraid of them.