Elon: Artificial Intelligence May Be the End of Civilization
An endlessly benevolent artificial intelligence is a pipe dream, he warns.
One of the most fascinating off-the-cuff remarks made by Elon Musk, the multi-hyphenate CEO of Tesla, Twitter and SpaceX, during a wide-ranging interview aired this week was an anecdote about his old pal Larry Page, the co-founder of Google.
Musk said that he and Page were once close friends who would often debate artificial intelligence late into the night when Musk stayed at Page’s home in Palo Alto, Calif. Musk, who says he has thought deeply about potential A.I. threats since he was a teenager (it is not at all hard to imagine a surly, pubescent Elon stewing over robots), described his alarm at discovering that Page seemed completely indifferent to the fate of humanity when weighing the possible hazards of A.I.
“At least my perception was that Larry was not taking A.I. safety seriously enough,” Musk said in a chat with Fox News’s Tucker Carlson. “He really seemed to be for digital superintelligence. Basically, digital God, if you will, as soon as possible.”
Musk said he agreed with Page that A.I. had the potential to do good. “But there’s also potential for bad… it can’t just be helpful if you just go barreling forward and, you know, hope for the best.”
If A.I. is not well regulated, carefully tested and developed with clear guardrails, Musk said, it could be like a black hole or a “singularity” that, once unleashed, would be impossible to rein in. In tech-speak, a singularity is a hypothetical point at which technological growth becomes uncontrollable and irreversible, leading to unforeseeable changes to human civilization.
“It wouldn’t quite happen like ‘The Terminator,’ because the intelligence would be in data centers,” he said, referring to the 1984 Arnold Schwarzenegger film about a cyborg assassin, adding, “the robot’s just the end-effector.”
The ongoing debate with Page did not go well. Based on Musk’s account, it probably would have been better for him, Page and humanity if they had let the A.I. topics lie. “At one point, I said, what about, you know, you gotta make sure humanity’s okay here,” Musk said he told Page. In response, Page “called me a specist,” he said. “And so, I was like, okay, that’s it, yes, I’m a specist, okay, you got me – what are you?”
Specist – more commonly rendered “speciesist” – is a term typically used for people who place a higher value on humans than on animals, but in this instance the comparison was with A.I. For Musk, “that was the last straw.” He explained that at that time – it appears this was circa 2015 – Google had acquired the British artificial intelligence research lab DeepMind and amassed one of the world’s largest pools of A.I. talent, “a tremendous amount of money and more computers than anyone else, so I’m like, we are in a unipolar world here where there’s one company that has close to a monopoly on A.I. talent and computers…and the person who’s in charge does not seem to care about safety; this is not good.”
This revelation led Musk to become one of the original founders of OpenAI, the San Francisco-based artificial intelligence research lab that began as an open-source nonprofit and is behind A.I. applications such as ChatGPT.
“The reason OpenAI exists at all,” he said, is those arguments with Page. Musk sought to launch an organization that would be the opposite of Google: not-for-profit and fully transparent, “so people know what’s going on.”
Musk has since left OpenAI, which went on to launch a for-profit arm, and says he believes OpenAI engineers are now “training A.I. to lie.”
Late last month, Musk, Apple co-founder Steve Wozniak and 1,600 others, including some of the biggest names in tech, signed a letter exhorting the world’s top artificial intelligence labs to pause the training of their most powerful A.I. systems and allow more time to put safeguards in place, lest A.I. outpace human intelligence.
The letter, issued by the Future of Life Institute, a Cambridge, Mass., nonprofit that works to reduce catastrophic and existential risks facing humanity like those purportedly posed by A.I., insisted that “powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The letter called on A.I. labs to pause for at least six months the training of any A.I. system more powerful than GPT-4, the model underlying the latest version of ChatGPT, and urged governments to step in with a moratorium if the labs did not.
So far, the letter does not appear to have brought A.I. labs to a halt. But Musk appears bent on continuing his campaign, saying he fears that regulations won’t be put in place until “terrible things have happened.”
His efforts to create a safer environment for humans and A.I. are part of how ChatGPT came to be developed in the first place. Perhaps he feels culpable?
“We want pro-human,” he told Carlson. “Let’s make the future good for the humans. Because we’re humans.”
Even if Musk and Page turn out to be the ones to blame for hypothetical future killer-robot wars, at least Musk is working on a self-sustaining city on Mars as an escape hatch if it all goes awry.
*Power Corridor is the newest publication from The Daily Upside. Delivered twice weekly, Lead Editor Leah McGrath Goodman gives readers a unique view into the interplay between Wall Street and Washington. Sign up for free here.*
The views expressed in this op-ed are solely those of the author and do not necessarily reflect the opinions or policies of The Daily Upside, its editors, or any affiliated entities. Any information provided herein is for informational purposes only and should not be construed as professional advice. Readers are encouraged to seek independent advice or conduct their own research to form their own opinions.