OpenAI: It’s Complicated
The first large-scale rebellion among artificial intelligence workers raises some serious questions.
It’s the classic tale of mutiny at a Silicon Valley artificial intelligence company that may (or may not) hold the future of humanity in its hands, with a nonprofit board that ousts its CEO for unknown reasons – then thinks better of it.
Alright, maybe not so classic.
After five days of pandemonium following the heave-ho of OpenAI’s Sam Altman – a firing that surprised many, coming on the heels of what seemed to be a fawning press tour this month – those closely tracking the fallout are beginning to get an inkling of what may have caused it.
What’s probably not at all surprising (but certainly very frightening) is that the turbulence appears to stem from a delicate debate inside the company over how to ensure that artificial intelligence – which could one day match, rival or outstrip human intelligence – is developed safely enough to protect the existence of the human race.
It seems some scientists inside OpenAI already believe the technology has reached that inflection point, where critical decisions need to be made now. One of them is OpenAI’s chief scientist, Ilya Sutskever, who, The Wall Street Journal reported this week, led a board coup that resulted in him personally firing Altman. According to the Journal:
“In recent years, Sutskever had become increasingly concerned with A.I. safety. At OpenAI, he headed up the company’s Superalignment team, which was set up to use 20 percent of the company’s computing power to make sure that the A.I. systems it built wouldn’t be harmful to humans.”
Sutskever, a Russian-born Israeli-Canadian, joined OpenAI in 2016 and became laser-focused on two serious objectives: One, to help A.I. systems achieve human-level thinking, known as “artificial general intelligence,” or AGI. Two, to ensure A.I. systems were not developed in such a way as to become dangerous to humans. (Privately, according to the Journal, Sutskever told employees he fears AGI systems could ultimately “treat humans the way humans currently treat animals.” Again, frightening.)
Sutskever worked with Geoffrey Hinton, the so-called “godfather” of A.I., at the University of Toronto. In a New Yorker story featured in last week’s Power Reads, Hinton said he believes it may already be too late to rein in the rapid development of A.I.
Fundamentally, the argument seems to be that if humanity has already passed the point of no return – and many smart people say they believe it has – there is good reason to insert some guardrails or speed bumps, so humans don’t, in the heat of this fervent A.I. competition, accidentally wipe themselves out. (Power Corridor pondered this very notion in its last issue.)
While it might actually be very fitting for humanity to meet its demise due to its voracious need to be the first to create AGI and mint billions, some are calling for a pause, or at least for scientists to slow their roll.
While Sutskever has not publicly disclosed the reason for Altman’s abrupt firing, people close to the board say the two differed on how to handle A.I. safety. Altman’s short-lived replacement, Emmett Shear, approved by Sutskever and OpenAI’s board, has said that if the speed of A.I. development is currently at a 10, it would be advisable to slow down and “aim for a 1-2 instead.”
While OpenAI’s board and executives have tried to quell the idea that a debate over A.I. safety is central to the recent turmoil, The New York Times noted this week that OpenAI’s directors had been clashing for more than a year over how to balance the rapid development of A.I. against safety concerns. According to the paper, Altman was on the side of accelerating the development of the technology.
As of this writing, the high-tensile drama is still playing out. But Altman has gained the upper hand in this battle, after the majority of OpenAI’s staff threatened to quit if the board did not resign and reinstate him.
Adding to the pressure, Microsoft offered to hire both Altman and the hundreds of departing OpenAI workers – a move that would have effectively handed it the A.I. project Sutskever sought to safeguard.
As a result, Sutskever and the board relented, with Sutskever posting on X (formerly known as Twitter) this week that he would “do everything I can to reunite the company.”
In response, Altman, who agreed to return to the company as of Wednesday, posted Sutskever’s mea culpa to his own social media feed, with a series of red hearts.
Saving humanity by emoji? Maybe that is fitting as well.