
Murky AI Rules Leave Enterprises Grappling With Self-Regulation

“I think everybody would agree: It’s a free-for-all.”

Photo by Daniel Eledut via Unsplash


Thanks to the ever-shifting nature of AI, regulating the tech can be a moving target. 

The looming effective date for some EU AI Act rules offers a case in point. European tech firms have rallied in opposition, with a group of more than 45 companies — including Mistral AI, Airbus and ASML — writing a letter to the president of the European Commission, Ursula von der Leyen, calling for the postponement of rules that would rein in models in favor of a more “innovation-friendly regulatory approach.”

The signatories, calling themselves the EU AI Champions Initiative, claim that Europe’s competitiveness in the race for AI innovation will be “disrupted” by unclear, overlapping and increasingly complex regulations in the region:

  • “This puts Europe’s AI ambitions at risk, as it jeopardizes not only the development of European champions, but also the ability of all industries to deploy AI at the scale required by global competition,” the letter claims. 
  • The EU AI Act was passed last year with the goal of curbing risks posed by AI. Giants like OpenAI must be in compliance by August or face fines. 

Despite the fear that regulation will stymie innovation, rules like these simply aim to ensure that the technology is being developed in a way that won’t cause more harm than good, said Bill Wong, AI research fellow at Info-Tech Research Group. “It’s possible to innovate and still keep safety in mind,” said Wong. 

The EU is often a trendsetter in regulation, he added. Many governments followed the lead of the region’s General Data Protection Regulation, whose groundbreaking privacy rules took effect in 2018. AI regulation may follow a similar path, he noted – though likely not in “innovation-driven and very legislation-light” regions like the US. 

With tech that’s constantly evolving, regulation can be a tricky beast, said Wong, especially as many regulators operate “at a snail’s pace.” Instead of regulating the tech itself, “most folks around the world are going to measure the risk by the context,” he said. 

And demand for this regulation is high among enterprises and IT leaders, said Wong. Many companies don’t trust the handful of Big Tech vendors that pull the strings of the most powerful AI models to properly manage the risks, he said. 

As it stands, the only option enterprises have is to self-regulate, Wong said. Adopting risk management frameworks, like those released by the National Institute of Standards and Technology, the International Organization for Standardization, and the Organisation for Economic Co-operation and Development, could provide companies with at least some protection as regulation develops. 

Without risk management, the tech presents major dangers to both the companies deploying it and their users, whether or not an enterprise is beholden to regulation, he said. 

“I think everybody would agree: It’s a free-for-all,” said Wong. “It’s the Wild West right now.” 
