What Enterprises Need to Know About Silicon Valley’s Pro-AI Super PACs
‘They’re trying to lock in certain rules that make building and selling AI easier, cheaper and faster.’

The regulatory landscape around AI may become even more uncertain as Silicon Valley heavyweights pour money into heading off new rules.
Industry leaders are putting more than $100 million into political action committees advocating against AI regulation, The Wall Street Journal reported earlier this week. VC firm Andreessen Horowitz and Greg Brockman, president of OpenAI, are helping launch an AI-focused super PAC network called Leading the Future that will back campaigns against candidates and policies that seek to regulate the technology.
Meta, meanwhile, is preparing to spend tens of millions on a similarly focused political arm of its own, called Mobilizing Economic Transformation Across California, which aims to support candidates who favor light-touch regulation, according to POLITICO.
“It’s a signal that companies want to be more involved in the AI regulatory conversation,” said Betsy Cooper, executive director of the Aspen Tech Policy Hub. “The divide between Silicon Valley and D.C. is starting to close.”
With investments in political action committees, tech leaders are throwing their weight behind deregulating AI “in parallel” with their stakes in major model providers, said Thomas Randall, research specialist at Info-Tech Research Group. They’re seeking to preempt the patchwork of state laws taking shape.
“They’re trying to lock in certain rules … that make building and selling AI easier, cheaper and faster,” said Randall.
However, for enterprises already navigating murky waters with their AI investments and deployments, loosening rules around development could complicate matters further, said Randall:
- While large model providers could see a shorter time to deployment and less red tape in scaling their AI infrastructure, a “strong majority” of organizations still lack robust governance and security protocols to internally regulate their use of AI, he noted.
- “If there are limited regulations about implementation of the models within the enterprises, and you have immature organizations trying to leverage these solutions … there will be security pieces that come into play,” he said.
- Additionally, large enterprises that operate internationally may be forced to comply with different rules based on the regions in which they’re doing business, said Randall.
Organizations may begin navigating the regulatory labyrinth by establishing a “governance baseline,” such as adopting the ethical guidance in the National Institute of Standards and Technology’s AI Risk Management Framework. Additionally, scrutinizing copyright and AI-output indemnity clauses in vendor contracts could help an organization avoid losing control of its data.
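For readers who want a concrete starting point, here is a minimal sketch of what such a governance baseline might look like as an internal pre-deployment checklist, loosely patterned on the four core functions of NIST’s AI Risk Management Framework (Govern, Map, Measure, Manage). The deployment fields, check names and the vendor-contract item are illustrative assumptions, not an official NIST tool or any vendor’s API.

```python
# Hypothetical governance-baseline checklist, loosely modeled on the four
# core functions of NIST's AI Risk Management Framework (Govern, Map,
# Measure, Manage). Field and check names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AIDeployment:
    name: str
    owner_assigned: bool = False            # Govern: accountable owner on record
    use_case_documented: bool = False       # Map: intended use and context written down
    eval_results_logged: bool = False       # Measure: accuracy/bias evaluations recorded
    incident_process_defined: bool = False  # Manage: rollback/incident plan exists
    vendor_indemnity_reviewed: bool = False # Contract: copyright/output indemnity checked


BASELINE_CHECKS = {
    "Govern: accountable owner assigned": lambda d: d.owner_assigned,
    "Map: use case and context documented": lambda d: d.use_case_documented,
    "Measure: evaluation results logged": lambda d: d.eval_results_logged,
    "Manage: incident/rollback process defined": lambda d: d.incident_process_defined,
    "Contract: vendor indemnity clauses reviewed": lambda d: d.vendor_indemnity_reviewed,
}


def audit(deployment: AIDeployment) -> list[str]:
    """Return the baseline checks this deployment currently fails."""
    return [name for name, check in BASELINE_CHECKS.items() if not check(deployment)]


if __name__ == "__main__":
    chatbot = AIDeployment(
        name="support-chatbot",
        owner_assigned=True,
        use_case_documented=True,
    )
    for gap in audit(chatbot):
        print(f"[{chatbot.name}] missing: {gap}")
```

The point of a baseline like this is less the code than the discipline: every AI deployment answers the same questions before launch, so an organization has a consistent internal floor even while external rules remain unsettled.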
Still, enterprises drafting their own rules may create a headache for the major model providers trying to sell to them, Randall added: “If you don’t have any (regulatory) floor and everyone’s creating something, then you still end up with patchwork – but not across states, just across organizations.”