
With New Models Every Day, Enterprise AI Battle is ‘Fair Game’

“There is no one model that does everything.”



With tech giants pushing bigger and better AI models at a near-constant rate, it’s natural to feel a bit of model fatigue.

Google, Meta and OpenAI all added to their families of AI models last week, broadening a field of choices from AI firms big and small that have debuted in recent months. While the models perform differently, their developers are often chasing similar benchmarks and a similar market, experts told CIO Upside. 

So how do you choose which family will work best for your business? There are “five dimensions” that enterprises should consider, said Martin Bufi, research director at Info-Tech Research Group: intelligence, output, speed, latency and price.

Many enterprises weigh which model offers the best “bang for my buck” in terms of cost, operational speed and robustness for their specific use cases, said Bufi. And because some models are better suited to different tasks, “it’s not really just who has the biggest, the baddest and the most intelligent model,” he added.

“The other thing to consider is that there is no one model that does everything, when we think about what enterprise means,” said Bufi. For example, while one model may work best for front-end or chatbot operation, another may work better for workflow or task automation. 

And given the risks of autonomous AI, agentic processing capabilities are often top of mind for enterprises, said Brian Sathianathan, CTO and co-founder of Iterate.ai. “In order for an LLM to be successful within an enterprise, the model has to execute agentic processes really well.”

For a long time, OpenAI’s models, in partnership with Microsoft’s Azure cloud suite, were the “de facto standard” that enterprises leaned toward, said Bufi. But as enterprises have started running into limitations, said Bufi, some have sought alternatives. 

One of those limitations is OpenAI’s context window, or the maximum amount of text that a model can take in at once, Bufi said. Short windows “can be potentially problematic with dealing with long-winded conversations or background context”:

  • Google’s context window, meanwhile, is “unmatched” at up to 2 million tokens, Bufi noted. That, in combination with the company’s in-house hardware and search prowess, has given Google a boost that has made scaling in the market far easier, he said.
  • Anthropic’s Claude models are another contender for enterprise AI darling, said Bufi. They are particularly talented at technical tasks, such as coding, and are often preferred by software developers. 
  • Open-source models are also “catching up,” said Bufi. DeepSeek’s R1, Alibaba’s Qwen series, and Meta’s Llama models (despite the company fudging its AI benchmarks) could offer more affordable alternatives to their closed-source competitors.

With all these different factors to consider, “the enterprise battle is fair game,” said Sathianathan. “There is a lot of room left for algorithmic innovation … on the business side, from a cloud perspective, a few people will take over everything.” 

Who those winners will be, however, has yet to be revealed.
