OpenAI Whistleblowers File SEC Complaint over NDAs
OpenAI allegedly issued overly restrictive non-disclosure agreements that could penalize anyone who blows the whistle to government agencies.
Open discussion does not seem to be a cardinal virtue at OpenAI.
At least, that’s what a group of company whistleblowers allege in a complaint to the SEC. According to a seven-page letter sent to the agency and seen by The Washington Post, the artificial intelligence company issued employees overly restrictive non-disclosure agreements that could penalize anyone who blows the whistle to government agencies. In the rapidly developing Wild West of AI, what could go wrong?
Blow Out
The contents of the employees’ letter shouldn’t exactly be surprising. In June, a group of current and former employees of OpenAI and Google’s DeepMind AI unit published an open letter criticizing a culture of retaliation against and silencing of internal dissenters, and calling on the government to establish effective guardrails for the industry.
In the letter to the SEC, employees alleged that said culture is illegal and violates four specific SEC rules:
- The letter writers alleged OpenAI issued “[n]on-disparagement clauses that failed to exempt disclosures of securities violations to the SEC.” They also said OpenAI required “prior consent from the company to disclose confidential information to federal authorities.”
- Employees were also allegedly required to sign “[c]onfidentiality requirements with respect to agreements, that themselves contain securities violations” and had to “waive compensation that was intended by Congress to incentivize reporting and provide financial relief to whistleblowers.”
“These contracts sent a message that ‘we don’t want … employees talking to federal regulators,’” one of the whistleblowers, who spoke anonymously, told The Washington Post.
Stress Test: Of course, federal regulators have little oversight over the industry anyway. Congress has yet to pass comprehensive AI legislation, leaving an October executive order, which calls on AI firms to share some critical information with the government, as the strongest guardrail on the industry. It may not be strong enough. On Friday, The Washington Post published a report in which sources within OpenAI said employees were pushed to rush through safety-testing procedures to meet the May launch date for GPT-4 Omni, the firm’s latest model. “We basically failed at the process,” one source said.