The US Government is Ready to Regulate AI
The US Artificial Intelligence Safety Institute announced that OpenAI and Anthropic have agreed to allow it to test and evaluate new models for safety.
In “Terminator 2,” Arnold Schwarzenegger’s T-800 cyborg cites Aug. 29 — seriously, we just checked! — as the day that the artificial intelligence military system Skynet goes rogue. US regulators must have known they were working on a tight deadline.
On Thursday — a.k.a. Aug. 29, a.a.k.a. Skynet Day — the US Artificial Intelligence Safety Institute announced that leading AI firms OpenAI and Anthropic had agreed to allow it to test and evaluate new AI models for safety. The news comes amid a hot fight over AI regulation in Silicon Valley’s home state of California.
RoboCops
Under the agreement, the AI Safety Institute — itself housed within the Department of Commerce’s National Institute of Standards and Technology (NIST) — will test major new models from OpenAI and the Amazon-backed Anthropic for both capabilities and risks ahead of release, NIST said Thursday in a press release. The agency will also collaborate with the UK’s AI Safety Institute, which previously tested a major Anthropic model ahead of its release.
The news may come as a bit of a surprise to lawmakers in California, some of whom are locked in a bitter fight with AI developers over a sweeping piece of AI legislation. That bill, dubbed SB 1047, passed nearly unanimously in the state Senate all the way back in May and was approved by the state Assembly on Wednesday, leaving one more procedural vote before it lands on the desk of Gov. Gavin Newsom. If signed into law, it would establish even more guardrails for the mostly California-based AI industry:
- The bill would require developers of major AI models to submit safety testing plans to the state’s attorney general, who could then sue the companies if said models were to cause harm or pose an imminent threat to public safety.
- Among other guardrails, the bill would also require AI developers to essentially create a “kill switch” to power down their AI systems if they go awry.
The bill has created some strange bedfellows. In support: Elon Musk, Anthropic, and state Sen. Scott Wiener, who represents San Francisco. In opposition: OpenAI, Google, Meta, Nancy Pelosi, Andreessen Horowitz, and SF Mayor London Breed.
Join ‘Em: The race to regulate comes as the AI industry continues to heat up. On Thursday, OpenAI boasted that ChatGPT’s weekly active user base has doubled in the past year, reaching 200 million. Meanwhile, Meta announced Thursday that its mostly free, mostly open-source Llama AI model has been downloaded around 350 million times in the past year. In perhaps the biggest news of the day, The Wall Street Journal and Bloomberg reported that both Apple and Nvidia were in talks to join Microsoft in OpenAI’s next funding round, which would value the firm at more than $100 billion. Safety regulators seem to be hard at work, but we’re sure the antitrust regulators are now paying attention, too. If a hulking man with a vaguely Austrian accent appears naked in Washington, D.C. via lightning storm and starts searching for Lina Khan, we’ll know something’s up.