
Another Threat to Human Existence

Humans are facing extinction. Because of humans. Again.


Innovators, activists, academics and even celebrities banded together this week to warn that the wipeout of the human race may be closer than it appears if urgent measures aren’t taken to rein in artificial intelligence.

A brief, 22-word statement, signed by a long list of tech luminaries, CEOs and executives, instantly set off alarm bells. “Mitigating the risk of extinction from AI [artificial intelligence] should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” it said. 

Among the signatories were Sam Altman, the chief executive of OpenAI, maker of the wildly popular AI app ChatGPT; Geoffrey Hinton, the so-called “godfather” of AI; Kevin Scott, Microsoft’s chief technology officer; and a growing list of top executives, researchers, cryptographers, environmentalists and academics from around the world, including from companies such as Google DeepMind, Asana and Anthropic.

The statement, posted on the website of the Center for AI Safety, a San Francisco nonprofit organization, was kept short for a reason: Many people still disagree on exactly how to talk about the dangers of AI.  

“AI experts, journalists, policymakers and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” the nonprofit wrote. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks.” In keeping the statement short, the signatories hoped to “overcome this obstacle and open up discussion.” 

Depending on who you’ve been listening to, the rise of artificial intelligence is either the greatest thing since the internet or the thing that will do us all in. It may well be both. Either way, as AI grows ever smarter, many are asking: What does that mean for the future of humans?

An earlier warning letter went out in March signed by more than 1,600 people, including Apple co-founder Steve Wozniak and Elon Musk, chief executive of Tesla, Twitter and SpaceX, who has long argued that AI could doom the human race. The letter urged a pause on the development of any AI system more powerful than GPT-4, the model behind the most advanced version of ChatGPT. But confusion has abounded over just how to ascertain the breadth and depth of AI’s potential dangers.

In an interview this spring, Musk said that if AI is not well regulated, tested and developed with clear guardrails, it could become like a black hole, a “singularity” that, once unleashed, is impossible to rein in. In tech-speak, a singularity is a hypothetical point in the future at which technological growth becomes uncontrollable and irreversible, leading to unforeseen changes to human civilization.

In conversations about AI specifically, the singularity is understood as the moment when AI surpasses human intelligence, potentially leading to the end of human existence.

In other words, if humans aren’t extremely careful with their innovations, they could annihilate themselves. (You’d think we would have learned all this by now with nuclear arms, climate change and, possibly, the pandemic.)

As Musk put it in the interview, “We want pro-human. Let’s make the future good for the humans. Because we’re humans.”