
White House Deregulation of AI Creates Fresh Pitfalls for Businesses

‘Part of the challenge is that (the executive orders) fundamentally misunderstand how the technology works’



The White House’s stated goal was to smooth the path of progress.  

Its AI Action Plan, a sweeping strategy document unveiled last week, includes more than 90 policy actions aimed at cutting regulations and red tape surrounding AI. 

Along with simplifying federal rules governing data center development and chip exports, the plan seeks to limit AI regulation at the state level by cutting federal funding to states with “burdensome AI regulations.” It also directs the National Institute of Standards and Technology to excise references to diversity, equity and inclusion, misinformation and climate change from its AI risk framework. 

Additionally, the administration announced an executive order meant to prevent “woke AI” in the federal government. The order states that the government is obligated “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.” 

Targeting diversity, equity and inclusion, the order states that LLMs should be “truthful” in responding to user prompts, and that developers “shall not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.” 

While the executive order applies only to AI used in the federal government, plans like these can create more regulatory confusion for enterprises, both those creating AI and those using it, said Brenda Leong, director of the AI division at law firm ZwillGen. “Part of the challenge is that they fundamentally misunderstand how the technology works,” said Leong:

  • There is no way to guarantee that a model will be completely accurate, she said, because these systems always run the risk of hallucinating. 
  • “Even when you put some of the controls around them … They are still predicting. They’re just predicting within narrower confines,” she said. “There is no way ever to make one of these systems truthful, because there’s no step in the process that checks for some kind of verification or validation.” (The sketch below makes this concrete.) 
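
To see Leong’s point, consider that a language model’s generation loop samples each next token from a probability distribution. The toy Python sketch below is purely illustrative; the bigram table and generate function are invented for this example and stand in for a real model. Note that nothing in the loop ever checks whether the output is true.

    import random

    # A hand-built bigram table standing in for an LLM's learned probabilities.
    # Real models are vastly larger, but the generation loop has the same shape:
    # sample the next token from a distribution, append it, repeat.
    BIGRAMS = {
        "the":  [("sky", 0.5), ("moon", 0.5)],
        "sky":  [("is", 1.0)],
        "moon": [("is", 1.0)],
        "is":   [("green", 0.6), ("blue", 0.4)],  # likelier word wins, true or not
    }

    def generate(token, steps=3):
        out = [token]
        for _ in range(steps):
            options = BIGRAMS.get(out[-1])
            if not options:
                break
            words, weights = zip(*options)
            # The only criterion is probability; no step asks "is this true?"
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # can print "the sky is green" -- fluent, yet false

Narrowing the distribution with guardrails, Leong’s “narrower confines,” changes the weights; it does not add the missing verification step.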

For companies building AI, the policies may create a compliance headache, subjecting them to different sets of laws in different countries that are “almost diametrically opposed,” she said. One option is for AI developers to create different versions of their models, such as one that’s US government-compliant and another that’s European Union-compliant.

But even within such boundaries, there are potential pitfalls. Companies that want to weigh diversity, equity and inclusion for reputational reasons may have to lean more heavily on self-governance, she said. “It’s going to be harder for them to know if a system is creating those imbalances,” said Leong, and harder “to actually rely on these and to integrate them in the same way that they have been, or maybe they were intending to.”
