Trump Administration Set to Ramp Up AI Development – Never Mind the Risks
With less safety regulation and more infrastructure, AI companies are ready to sprint.
Within hours of taking office, the Trump administration made no secret of its AI stance: Bring it on.
Among the flurry of new executive orders and announcements, several have the potential to shake up the tech industry. With fewer regulations and less oversight to worry about, the organizations building massive AI models are ready to stomp on the gas (which the new administration also happens to favor).
For starters, Trump on Monday revoked a 2023 executive order that required developers of AI systems posing potential risks to national security or public safety to share the results of safety tests with the U.S. government.
Additionally, on Tuesday, Trump trumpeted a joint venture between OpenAI, SoftBank, Oracle, and Abu Dhabi’s MGX called Stargate, in which the firms are expected to commit up to $500 billion over the next four years. Oracle CEO Larry Ellison said that 10 of the project’s data centers were already under construction in Texas.
- In a blog post, OpenAI said it would begin by “deploying $100 billion immediately,” and that it, along with Arm, Microsoft, Nvidia, and Oracle, would be the initial technology partners for the project. “This infrastructure will secure American leadership in AI,” the blog post said.
- Trump said in a press conference Tuesday that he would “help a lot through emergency declarations … we’ll make it possible for them to get that production done very easily.”
- Not every tech figurehead is all in, however: Anthropic CEO Dario Amodei called the plan “a bit chaotic,” and Elon Musk, the centibillionaire Trump chose to lead a government efficiency initiative, cast doubt in a post on X on whether the project has the funding it claims.
The support may be music to model developers’ ears, but what do these changes mean for enterprises? For one, they will likely shift far more of the security and safety responsibility onto the people using AI, rather than the ones building it, said Thomas Randall, advisory director at Info-Tech Research Group.
Pumping up a model’s size and capability doesn’t eliminate the data security issues that AI still faces. And the repeal of Biden’s executive order strips AI developers of their already-minimal federal safety obligations, meaning it will be entirely up to them whether to implement safeguards.
This may force organizations adopting AI to be even more diligent about responsible AI principles and risk frameworks, said Randall.
“The order’s repeal means that organizations must take on increased accountability for adopting responsible AI principles and implement an AI risk framework in their use and governance of AI technologies – especially regarding privacy, safety, and security,” said Randall. “There will be less federal support for organizations to have peace of mind with a safety baseline.”
Plus, while the infrastructure investment may open the door for big AI developers to make their models even bigger, it doesn’t mean that adoption will grow alongside it, said Randall. In fact, some enterprises may take adoption even slower, he said, “especially if organizations do not have a strong data management strategy, cannot form a business case to justify the investment, or are put off by safety risks.”
The impact of this administration’s approach may stretch beyond the U.S., said Randall: Compared with the EU’s “high standards for risk and privacy,” the U.S. already had lower regulatory barriers “to spur innovation,” he said.
“The next few years of the Trump administration, especially given the presence of technology leaders at the inauguration, will potentially exacerbate those regional differences including economic, social, and legal repercussions,” said Randall.