|

Reducing Data Security Risks from Model Context Protocol

“If you compromise that AI, you compromise everything that it’s connected to.”

Photo of Claude and Anthropic logos
Photo via VCG/Newscom

Sign up to get cutting-edge insights and deep dives into innovation and technology trends impacting CIOs and IT leaders.

The more trust we give AI, the more opportunity it has to go haywire. 

As agents take center stage in enterprise conversations around AI adoption, so has the idea of connecting autonomous assistants to the core of a business. Last November, Anthropic introduced a way to do so with Model Context Protocol, otherwise known as MCP, an open source framework that allows for just that: Connecting agents to “the systems where data lives.”

Since then, OpenAI has followed suit, launching its own connector offerings between its AI models and enterprises’ internal systems.

Creating a connective tissue between agents and internal systems has significant potential to help enterprises get the most out of their AI investments and deployments, said Greg Anderson, CEO and founder of vulnerability management firm DefectDojo. “They essentially make AI as smart as the tools they’re connected to,” said Anderson. “I think it really helps to bridge the gap where AI has struggled previously.” 

The problem, however, is that AI still comes with tons of fundamental risks – one of which is data security:

  • Even the major, most commonly used models face data security threats. One study by Cybernews from late May found that OpenAI has suffered 1,140 data breaches. 
  • By giving these agents access to an enterprise’s data and systems, businesses may create an “increased attack surface area,” said Anderson. That makes things like prompt injection attacks or manipulation of models by attackers all the more risky, he said. 

“It’s about exposure,” said Anderson. “With MCP, we’re essentially saying now we can connect these AIs to anything that supports it. And so by proxy, if you compromise that AI, you compromise everything that it’s connected to.” 

But it is possible for enterprises to mitigate such risks, said Anderson. That starts with the data that you let the agent access. Start with small, lower-stakes use cases before allowing agents access to deeper systems, he said.  

The next step is making sure agents perform at a sufficiently high standard that the risk is limited once access is increased, he said. “I recommend a crawl, walk, run approach,” said Anderson. “How do we roll these things out in increments to not create additional risk to the enterprise while also accomplishing the goal of actually getting these things out the door?” 

In the “mad dash rush” to adopt agents and get value out of them, however, enterprises often aren’t thinking through what they’re doing and why they’re doing it, he said. 
“Nobody wants to be late,” said Anderson “No one is stopping to say, ‘What does that actually mean? What are the limitations? What do you want to expose? What makes sense to not expose?”

Sign Up for CIO Upside to Unlock This Article
Cutting-edge insights into technology trends impacting CIOs and IT leaders.