Forthcoming Book, Latest Interviews, and AI Armageddon
The world is descending into digital madness. There's no better time to think for yourself
Please pardon my absence, dear readers. Or perhaps I should say, I hope you’re enjoying the silence. It’s been a busy month for two reasons.
First, I am in the last stages of finishing up my book. More on that in the near future.
Second, the hype and spin around artificial intelligence have been in overdrive. We’ve seen the release of GPT-4, unsettling media appearances by OpenAI CEO Sam Altman, glowing reviews from Bill Gates, an alarmed open letter from the Future of Life Institute calling for a six-month halt to high-level AI development, and a warning about our impending doom from Eliezer Yudkowsky at the Machine Intelligence Research Institute. In response, Italy has banned ChatGPT.
We have been all over these developments on the War Room. (Links posted below.)
Last week, an open letter from the Future of Life Institute was signed by over a thousand “AI experts,” including Elon Musk, Yuval Noah Harari, Stuart Russell, Max Tegmark, and Steve Wozniak, who “call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
There is next to zero chance of this moratorium happening, but it never hurts to try. On the whole, their message is halfway sane. The letter raises four important questions that the public should have been pondering for a long time now:
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves:
Should we let machines flood our information channels with propaganda and untruth?
Should we automate away all the jobs, including the fulfilling ones?
Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
Should we risk loss of control of our civilization?
That would be a big NO on all four. What sort of lunatic would answer “yes”? Tech accelerationists, of course.
Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
I am beyond skeptical that the release of any advanced AI system won’t come with significant downsides. You already see this in the mass psychosis created by the current crop of chatbots and art generators. On the world stage, the long-term geopolitical effects of advanced military AI will be studied for decades to come—that is, if there’s anyone left to study them.
The inevitable downsides are obvious if the signatories’ predictions come to pass. Musk, Harari, Russell, Tegmark—all these men foresee a day when artificial general intelligence becomes smarter and therefore more powerful than human beings. That leaves our fate at the mercy of the Machine. These guys believe AI programmers are creating God in silico.
That’s a big NO from me on that one, too.
Some critics are ready to destroy everything to halt this ungodly locomotive. In a TIME magazine op-ed published the same day as the open letter, Yudkowsky argued that a six-month pause on AI training is pathetically inadequate. “The moratorium on new large training runs needs to be indefinite and worldwide,” he insists. “There can be no exceptions, including for governments or militaries.”
Despite being a transhumanist himself, Yudkowsky believes that superintelligent AI poses an existential risk to humanity, far worse than nuclear weapons. He demands that authorities “shut it all down.” If that means launching airstrikes on data centers operating on foreign soil—at the risk of nuclear war—then so be it:
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold.
If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Let’s hear it for Terran Yudkowsky, the Transhuman Luddite!
Empowering the government to halt AI by dropping bombs on foreign soil may sound twice as insane as trying to create a Super Computer God in the first place. Then again, if there is even a slight chance that Microsoft or Google (or Baidu or Tencent) could bring a godlike AI online—or simply convince enough people that they’ve succeeded—an airstrike may not be as crazy as it sounds.
For the most part, all of this panic and hysteria is occurring because our leaders never took the dangers posed by artificial intelligence seriously. Now that AI is certain to have deep impacts on the economy, on education, and on warfare, people are losing their minds. The impact on the public psyche will be immeasurable.
Pretty much no one planned for advanced AI, except the people working to build it. Now, officials are scrambling to figure out how to respond. On the doomer end of the spectrum, you have those who are certain that AI will kill us all. On the other end, you have accelerationists who say the risks are worth the possible reward—“Onward, satanic soldiers!”
In the middle you have people who either don’t know or don’t care what’s happening. “Garbage in, garbage out,” they say, as if programmed to repeat this slogan over and over again. “AI is just computer code,” they say, as if it were an original insight. “It’s only as good as its programmer.” Too many are convinced that AI doesn’t really exist and never will. This is part of a long tradition of denialism.
"Artificial intelligence isn't real."
"Genes aren't real."
"Viruses aren't real."
"Dinosaurs never existed."
One feels the same energy in all these takes. An ostrich with his head in the sand is positioned to get reamed good and hard.
My sense is that the psychological and social impacts are the most immediate threats posed by AI. The internet will soon be flooded with synthetic minds. We are being primed to turn our cognitive faculties over to the Machine in the name of “enhancement.” Children are to become AI symbiotes. Teachers are on the chopping block. Writers and artists are being demoralized. All of this is intentional.
If you buy the propaganda, we are to be absorbed by the all-powerful corporate borg.
The existential threats of superintelligent AI are more abstract and difficult to imagine. The paths to what the Oxford transhumanist Nick Bostrom calls the “treacherous turn” involve critical systems being hijacked by an AI gone berserk. Perhaps the AI is in control of weapons systems or biolabs. Or maybe it’s able to gain access to these systems. Perhaps the AI will manipulate a human or many humans to do something profoundly stupid and destroy the rest of us.
The big danger is not that an AI has become conscious. Nor is it that a human has programmed malice into the system. Not necessarily. The treacherous turn could occur because an AI has been optimized to pursue a certain goal or set of goals that don’t jibe with human happiness or human existence, either due to its initial programming, or through the AI altering its own code.
A classic thought experiment is an AI developed to solve pollution or to end warfare. The straightest path would be to eliminate the problem at its source. That means most of us will have to go.
These are technodelic freak-out scenarios. Honestly, we have so many idiots and psychos in high places right now, AI just adds another dangerous element to an already bleak situation. As I’ve said many times, my greatest fear is not being trapped in The Matrix. My fear is being ruled by morons in an Idiocracy. The introduction of warped machine intelligence is just cold icing on an already stale cake.
War Room w/ Steve Bannon
Here are a few segments from the past ten days. Even if you don’t stay for my unhinged rants, you should take in the cold opens. (Putting together these video clips, I pride myself on distilling the subject’s message without taking their words out of context.)
Artificial Intelligence Poses Existential Threat—White House is Clueless in Response
Biometric ID and Global Government — Technological Solutions to Tech Problems
Future of Life Institute Warns AI Could Destroy Human Civilization
Sam Altman and Lex Fridman are True Believers in the Dark Religion of Technology
AI Is Most ‘Powerful Brainwashing Tool’ Ever Created
Bill Gates Wants Chatbots to Educate Your Children
OpenAI's Sam Altman Wouldn't Stop the "End of World," Just Slow It Down
Transhumanist Eliezer Yudkowsky Warns of AI — "Demon-Summoning is Easy"
Mind Matters Interview
Last but certainly not least, here is a recent interview with the fellas at Mind Matters.
They executed a jailbreak on my brain to release the woo-woo. We also talk possessed AI, computer consciousness, the plight of primitive societies, the Luddite's lament, and many other topics. Enjoy the ride.
Hi Joe, I enjoy your Substack and also watch you on WarRoom all the time. Maybe someone already mentioned it and I missed it, but one of the possible scenarios you mention was the subject of a novel trilogy written over 50 years ago. The first book in the trilogy is titled “Colossus” and was first published in 1966. That book was made into a TV movie in 1970 titled “Colossus: The Forbin Project.” The author, D.F. Jones, wrote the two sequels in the 1970s.
The author was prescient: in a now very plausible story, he presents what could happen if computers take over the world. What he wrote is more relevant today than when he wrote the books. It is amazing how forward-looking he was in seeing the potential dangers of advanced computer technology.
These books, especially the first one and the movie, would be well worth your time to read. Their message complements all the things you say every day on Steve Bannon’s show and in your Substack articles. I hope you take the time to look them up, and would be interested in hearing your reaction.
Congratulations, Joe. I hope the book is a bestseller. ♥