Big Tech’s AGI Dreams Overshadowed by Smaller, In-Demand AI Advances
Though AGI could have some enterprise viability, most businesses are better off with small models.

Does your model really need to be able to do anything and everything?
Though AI firms have long chased the elusive concept of artificial general intelligence, or a model capable of matching human intelligence in virtually all domains, some are beginning to question that dream: OpenAI CEO Sam Altman told CNBC last week that “AGI” is “not a super useful term,” especially since its definition varies between companies and developers.
“The point of all of this is it doesn’t really matter and it’s just this continuing exponential of model capability that we’ll rely on for more and more things,” Altman told CNBC.
So why is OpenAI seemingly backing away from its long-time mission? Possibly because large language models might not be the way to achieve AGI. Yann LeCun, Meta’s chief AI scientist and widely regarded as one of the godfathers of AI, said in an interview with Big Technology’s Alex Kantrowitz that “human-level AI” won’t be achieved by simply scaling up large language models.
OpenAI, meanwhile, is a “great large language model factory,” said Bob Rogers, chief product and technology officer of Oii.ai and co-founder of BeeKeeper AI. “Their product is LLMs right now. Maybe they’re starting to think that it might be a longer time frame to get to that next set of capabilities, and so they need to be selling what they’ve got.”
As it stands, we’re far off from AGI becoming a reality, said Rogers, with our strongest systems still missing the mark on several metrics. “At least to the extent that we haven’t gotten to AGI yet, it’s a useful term — there’s a bar there,” he said.
- One key capability missing among large language models in particular is consistency in problem solving and decision-making, said Rogers. Because these models are unpredictable, the quality of output can differ wildly, even if input prompts are similar.
- Another is that these models don’t actually know facts, said Rogers, but are instead performing word association to provide the most likely answer. “That leads to the next gap, which is then reasoning, because you can’t really reason on facts until you know facts,” he said.
- Additionally, large language models don’t have a great grasp of human emotion, he said, and often lack an understanding of the “linguistic nuance” of things like sarcasm.
If developers can overcome those challenges, AGI may prove useful to enterprises, said Rogers. Currently, agentic AI is all the rage, with tech firms promising specialized agents to handle tasks throughout operations. But as those agentic use cases start to scale, they can quickly become difficult to manage. In theory, that’s where AGI could come in, acting as a singular, central multitool, Rogers said.
“I think there is a practical consideration that, even if I have 20 agents that can do the 20 things I need to have happen in my organization, that can be harder to manage than having a single system,” he added.
But setting aside the massive technical barriers still facing AGI, the biggest obstacle for enterprises would likely be cost, said Rogers, especially for small and mid-sized companies already struggling with the high price of AI. Most are probably better off using small, fine-tuned models that can run within their own infrastructure.
“Right now, tuned models give a lot of benefits in terms of understanding my context and my documents and my needs,” said Rogers. “Why do I want one giant multitool with a million things sticking out of it that’s heavy and awkward? Where that AGI multitool becomes really valuable is a couple generations away.”
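To make the small-model approach concrete, here is a minimal sketch of running a small open-weight model on local infrastructure using the Hugging Face transformers library; the model name is an illustrative assumption, not a recommendation from Rogers or the article, and a real deployment would add domain-specific fine-tuning on top.

```python
# Minimal sketch: serving a small open-weight model locally instead of
# calling a frontier-scale system. Assumes the Hugging Face `transformers`
# library is installed; the model name below is an illustrative choice.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # hypothetical ~0.5B-parameter model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A narrow, domain-specific task -- the kind a tuned small model handles
# well without needing a general-purpose "multitool."
prompt = "Summarize our refund policy in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```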