What Enterprises Need to Know About Agentic AI
Despite the buzz, the promises of this tech may be “underhyped,” one expert said.

When a buzzword starts to fly around, its meaning sometimes gets lost in translation. With all the talk about AI agents, what does it really mean for AI to be agentic?
Agentic AI is the concept of giving an AI model the freedom to handle tasks autonomously, rather than prompting it over and over until you get what you need. This movement, which has caught the attention of tech giants and startups alike, could put large language models to better use and upend business automation entirely.
The real value of agentic AI is in the name itself, said Mike Finley, co-founder and CTO of enterprise AI firm AnswerRocket: giving the model agency. That means giving models more control over how much time they’re allowed to spend on a task, which tools and information they use to complete it, and whether to leave it unfinished if they aren’t equipped to finish it, he said. “The reason that models hallucinate is because they’ve been forced to answer.”
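For readers who think in code, here is a minimal sketch of that idea, assuming a toy setup rather than any vendor’s actual framework: the agent gets a step budget, a fixed set of tools, and explicit permission to report a task as incomplete instead of being forced to answer. Every name in it (Agent, AgentResult, plan_next_tool) is hypothetical.

```python
# Toy agent loop illustrating "agency": a step budget, a set of allowed tools,
# and the option to stop without an answer instead of guessing.
# All names are hypothetical; a real system would have an LLM choose actions.

from dataclasses import dataclass
from typing import Callable


@dataclass
class AgentResult:
    status: str              # "complete" or "incomplete"
    answer: str | None = None
    steps_used: int = 0


@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]  # tools the agent is allowed to call
    max_steps: int = 5                      # time/effort budget

    def run(self, task: str) -> AgentResult:
        notes: list[str] = []
        for step in range(1, self.max_steps + 1):
            tool_name = self.plan_next_tool(task, notes)
            if tool_name is None:
                # Nothing left to try: report "incomplete" rather than
                # forcing an answer, which is the point Finley makes.
                return AgentResult(status="incomplete", steps_used=step)
            notes.append(self.tools[tool_name](task))
            if self.is_done(notes):
                return AgentResult(status="complete",
                                   answer=" / ".join(notes),
                                   steps_used=step)
        return AgentResult(status="incomplete", steps_used=self.max_steps)

    def plan_next_tool(self, task: str, notes: list[str]) -> str | None:
        # Stand-in planner: try each known tool once, in order.
        unused = [name for name in self.tools if not any(name in n for n in notes)]
        return unused[0] if unused else None

    def is_done(self, notes: list[str]) -> bool:
        return len(notes) >= 2  # placeholder stopping rule


if __name__ == "__main__":
    agent = Agent(tools={
        "search": lambda task: f"search: top result for '{task}'",
        "summarize": lambda task: f"summarize: short summary of '{task}'",
    })
    print(agent.run("Q4 revenue drivers"))
```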
Enterprises are quickly taking notice of this tech: In a November study, Deloitte predicted that 25% of companies that use generative AI will deploy agents in some form in 2025, with that share growing to 50% by 2027. And amid the buzz, a few big tech firms have made their commitments known.
- Nvidia’s Jensen Huang said in his CES keynote that “the age of AI agentics is here,” and announced Blueprints, Nvidia’s own platform for building AI agents.
- Upon releasing Gemini 2.0 in December, Google called it the “AI model for the agentic era,” and Microsoft has expanded its agentic AI portfolio several times in recent months.
- And Salesforce made “Agentforce,” its platform for building and deploying AI agents, the highlight of its annual Dreamforce conference in September.
Tech giants’ interest here makes sense: Many are likely looking for a return on investment from the billion- and trillion-parameter generative models that they’ve spent the last several years honing.
While the question-and-answer framework that those generative AI models have long followed helped the tech skyrocket with the dawn of ChatGPT, the capabilities of large language models – and people’s expectations of them – have grown rapidly in the last two years, said Brian Sathianathan, CTO of Iterate.AI. “Usefulness began to unravel itself … People are generally looking for the next level of problem solving.”
Agentic AI represents that next level, allowing users to spend less time being so-called “prompt engineers,” said Finley. And despite its quick rise to fame, agentic AI’s potential to reshape enterprise automation may still be “underhyped,” he said. Rather than simply making jobs easier, as many AI tools have promised, this tech could automate entire job functions.
“This thing can literally replace the way that you do business,” Finley said. “And the role that humans play is taking a step forward.”
Once agentic AI is further along, tech firms may even start offering “agentic employees for hire,” said Sathianathan. “Hypothetically, we can lease out an employee. It’s just like when a company’s outsourcing, there could be software, agentic leasing.”
But there are a few things enterprises need to remember before going all in on agentic AI. The first is knowing how to identify what is – and isn’t – agentic, said Finley. “If it’s not making decisions, if it’s not writing sequences of instructions, if it’s not using tools, if it’s not replacing workflows that people do, then it’s not agentic,” he said.
Next is to pick off the “low-hanging fruit,” said Sathianathan. If your enterprise is looking to adopt agentic AI, figure out which use cases are the easiest to implement and start there, he said. After a few of those implementations, put together a comprehensive AI strategy to see where the tech may fit into the rest of your organization.
Finally, remember that these models may still face the same problems as traditional generative AI models – data security among them, said Sathianathan. In fact, because these models operate autonomously, those data security problems risk being “amplified” by the reduced oversight, he said. “There will be more standards, agent scanning and integrity capabilities that will come into the picture very soon.”
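As a hypothetical illustration of the kind of scanning and integrity controls he is describing (not any existing standard or product), a guardrail can sit between an autonomous agent and its tools, checking each call against an allowlist and a crude sensitive-data filter before letting it run:

```python
# Hypothetical guardrail for an autonomous agent's tool calls: every call is
# checked against an allowlist and a simple keyword filter, then logged.
# Names and policies here are illustrative, not a real product or standard.

import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

ALLOWED_TOOLS = {"search", "summarize"}            # tools this agent may call
BLOCKED_KEYWORDS = {"ssn", "password", "payroll"}  # crude data-security filter


def guarded_call(tool_name: str, tool: Callable[[str], str], payload: str) -> str:
    """Run a tool call only if it passes the allowlist and content checks."""
    if tool_name not in ALLOWED_TOOLS:
        log.warning("blocked: tool %r is not on the allowlist", tool_name)
        return "blocked"
    if any(word in payload.lower() for word in BLOCKED_KEYWORDS):
        log.warning("blocked: payload to %r looks like sensitive data", tool_name)
        return "blocked"
    log.info("allowed: %s(%r)", tool_name, payload)
    return tool(payload)


if __name__ == "__main__":
    search = lambda q: f"results for {q!r}"
    print(guarded_call("search", search, "Q4 revenue drivers"))  # allowed
    print(guarded_call("search", search, "employee SSN list"))   # blocked
    print(guarded_call("delete_db", search, "anything"))         # blocked
```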