Who’s Who? Identity Questions Undermine Model Context Protocol
‘There cannot be room for hallucination in that question.’
![SUQIAN, CHINA - MAY 23: In this photo illustration, the logo of Anthropic is displayed on a smartphone screen on May 23, 2025 in Suqian, Jiangsu Province of China. Anthropic on May 22 said it activated a tighter artificial intelligence control for Claude Opus 4, its latest AI model. (Photo by VCG via Newscom)](https://www.thedailyupside.com/wp-content/uploads/2025/09/vcgphotos221514-scaled.jpg)
As much as enterprises want agents and systems to work together, getting them to play nice may be harder than anticipated.
In November 2024, Anthropic introduced a standard called Model Context Protocol, which aimed to connect AI models to the “systems where data lives.” As talk of agentic AI reached a fever pitch in the tech industry, OpenAI quickly followed in its rival’s footsteps, launching connector offerings between its AI models and enterprises’ internal systems.
While giving agents more context to work with can help users better reap the benefits of the tech, doing so presents some cybersecurity pitfalls: The more access you give to these agents, the larger the attack surface.
One of the biggest security concerns that these connected agents present is identity, said Alex Salazar, co-founder and CEO of Arcade.dev. When agents are performing actions autonomously or retrieving information on behalf of a person, access controls and identity can easily gum up the works.
- For example, if you ask an agent to pull up salary information for one executive, it could hallucinate and retrieve data about a different person whose information you may not be authorized to see.
- The issue of hallucination becomes “even more dangerous” when you ask an agent to do more than retrieve information, such as automating tasks like scheduling or responding to emails, said Salazar.
- “If I’m a CIO, I need to be able to have confidence that this agent, on behalf of this user, can perform this action on this resource,” said Salazar. “There cannot be room for hallucination in that question.”
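Salazar’s framing maps to a simple authorization tuple: this agent, on behalf of this user, performing this action on this resource. A minimal sketch of what a deterministic pre-flight check on that tuple could look like follows; the names and grants are hypothetical, and this is not Arcade.dev’s implementation.

```python
# Illustrative only: answer "can this agent, on behalf of this user,
# perform this action on this resource?" before the agent acts.
# Grant, is_allowed, and the sample grants are made-up names.
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    agent_id: str    # which agent is acting
    user_id: str     # the human it acts on behalf of
    action: str      # e.g. "read", "send"
    resource: str    # e.g. "hr/salaries/alice", "email/outbox"

# Explicit allow-list; in practice this would come from an identity
# provider or policy engine, not a hard-coded set.
GRANTS = {
    Grant("expense-agent", "alice", "read", "hr/salaries/alice"),
    Grant("scheduler-agent", "alice", "send", "email/outbox"),
}

def is_allowed(agent_id: str, user_id: str, action: str, resource: str) -> bool:
    """Decided by policy data, never by model output, so a hallucinated
    resource name simply fails the lookup instead of leaking data."""
    return Grant(agent_id, user_id, action, resource) in GRANTS

# The agent asks for another executive's salary on Alice's behalf: denied.
assert not is_allowed("expense-agent", "alice", "read", "hr/salaries/cfo")
# The same agent reading Alice's own record: allowed.
assert is_allowed("expense-agent", "alice", "read", "hr/salaries/alice")
```

The design choice that matters here is that the allow-or-deny decision comes from policy data, not from whatever the model generates.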
The problem is that these agents are treated as a new kind of digital identity rather than as applications. Applications have OAuth, a protocol that lets a user grant them access without handing over credentials, but Model Context Protocol and other connectors don’t currently support that capability (though Arcade.dev is working on a solution, said Salazar).
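For context, here is a rough sketch of what OAuth-style delegation could look like when applied to an agent’s tool call: the agent never holds the user’s password, only a short-lived token scoped to that user, and the connector checks the token before touching the downstream system. The token shape, scope names, and `call_tool` wrapper are illustrative assumptions, not part of the MCP spec or any vendor’s API.

```python
# Hypothetical sketch of OAuth-style delegation for a tool call.
import time
from dataclasses import dataclass

@dataclass
class AccessToken:
    subject: str       # the user who authorized the agent
    scopes: set[str]   # what the user consented to, e.g. {"calendar:read"}
    expires_at: float  # short-lived by design

    def valid_for(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at

def call_tool(tool: str, required_scope: str, token: AccessToken, **kwargs):
    """Gate every connector call on the delegated token, not the agent's say-so."""
    if not token.valid_for(required_scope):
        raise PermissionError(f"{tool}: token for {token.subject} lacks {required_scope}")
    # ... forward the request downstream with the token attached ...
    return {"tool": tool, "as_user": token.subject, "args": kwargs}

# The user consented only to reading their calendar, not to sending mail.
token = AccessToken("alice", {"calendar:read"}, time.time() + 900)
print(call_tool("calendar.list_events", "calendar:read", token, day="2025-09-10"))

try:
    call_tool("email.send", "email:send", token, to="cfo@example.com")
except PermissionError as err:
    print(err)  # blocked: the user never granted email:send
```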
Put simply, identity is still a source of friction in getting the most out of AI systems. “Despite how far into agents we are, they still can’t send an email,” Salazar said.
For enterprises looking to adopt AI more broadly, the inability to trust and use agents could lead to one of two outcomes: a cybersecurity domino effect when AI systems go haywire, or a massive investment getting shut down before the agents ever get off the ground, Salazar said.
“If I can’t prove that an AI agent is only going to access my information and adhere to the permissions that I am held to … then I can’t ever really use it in production on a sensitive system or a multi-user system,” he added.
One solution may be thinking smaller: Among the biggest mistakes that enterprises make as they build agents is “overscoping” them, or making them do too much, Salazar said. The more you ask of an agent, the more access it needs to have, and the higher the likelihood it will make a mistake.
“When teams pick a narrow use case, they’re much more likely to succeed,” he said. “When you give an AI more information, it makes everything worse.”
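In practice, that scoping advice amounts to granting an agent only the permissions one workflow needs. A hypothetical contrast, with made-up scope names:

```python
# Illustrative only: the difference between an overscoped agent and a
# narrowly scoped one is the size of the grant, not the model.
OVERSCOPED_AGENT = {
    "use_case": "do anything for anyone",
    "scopes": ["email:*", "files:*", "hr:*", "finance:*", "calendar:*"],
}

EXPENSE_REPORT_AGENT = {
    "use_case": "file my expense reports",
    # Only what this one workflow needs; everything else is denied by default.
    "scopes": ["expenses:create", "receipts:read"],
}
```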