Why Access Controls May Be AI’s Biggest Security Vulnerability
As model providers court government agencies, humans may present a major security threat.

Major model providers are fighting for the affections of the US government.
In recent weeks, both OpenAI and Anthropic have offered access to their models to government agencies practically for free, with OpenAI proposing to provide ChatGPT Enterprise to the executive branch for $1 a year and Anthropic matching that price to supply its Claude models to all three branches of government.
As these organizations seek to weave their way into government agencies’ tech stacks, however, whether their models can safely handle sensitive private data at scale remains uncertain.
“By now, model providers are well aware of security being a big impediment to adoption,” said Arti Raman, founder and CEO of AI data security company Portal26.
Though AI models still have their kinks, the bigger security problem isn’t the tech itself but the people who are using it, she said. “The bigger risks are on the human side.”
The risks are already showing up in the data:
- According to IBM’s recent Cost of a Data Breach report, 97% of surveyed organizations that experienced AI-related security incidents reported not having access controls in place.
- Of those incidents, 60% led to compromised data, and 31% led to operational disruption.
- “Who gets access and access control becomes a bigger problem than data leaking from a model that may not be connected to the outside world … Data security risks are from person to person,” said Raman.
And in government agencies, especially those handling large amounts of sensitive information about civilians, the stakes are high. These agencies are already frequent targets of threat actors, and with the Cybersecurity and Infrastructure Security Agency facing persistent cuts under the Trump administration, the layers of protection may become even thinner.
While government workers are often “trained and conditioned to worry about security,” said Raman, the nascent, fast-evolving nature of AI means that training and governance can’t be static.
“Training and education are incredibly important,” said Raman. “It can’t be in the form of a manual or something that you do once a year. It has to be done in real time.”
Education may be only part of the solution. The “white space” of AI access and identity control could represent a major opportunity in the market, said Raman. “We need some innovation on complex identity and entitlement … somebody needs to really understand how to connect the dots between what a model was trained on and has access to versus what it is and isn’t allowed to answer.”
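The gap Raman describes can be pictured concretely: a model (or the retrieval layer in front of it) may have access to far more data than any individual user is cleared to see, so each request has to be filtered against that user's entitlements before anything reaches the prompt. The sketch below is a minimal illustration of that idea, not any vendor's product or API; the names (`User`, `Document`, `authorized_context`) and clearance labels are hypothetical.

```python
# Minimal sketch of an entitlement check in front of a model's retrieval layer.
# All names and labels here are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class User:
    name: str
    clearances: frozenset  # e.g. {"public", "internal", "pii"}


@dataclass(frozen=True)
class Document:
    doc_id: str
    classification: str  # label applied when the document was ingested
    text: str


def authorized_context(user: User, candidates: list) -> list:
    """Keep only documents the caller is entitled to see.

    The index behind the model may contain everything; the user's prompt
    context should not. Denied documents are excluded outright, so the
    model cannot leak what it never receives.
    """
    return [d for d in candidates if d.classification in user.clearances]


if __name__ == "__main__":
    index = [
        Document("d1", "public", "Published agency guidance."),
        Document("d2", "pii", "Case file containing citizen records."),
    ]
    analyst = User("analyst", frozenset({"public"}))
    print([d.doc_id for d in authorized_context(analyst, index)])  # ['d1']
```

The harder problem Raman points to sits underneath a sketch like this: deciding how those classification labels and per-user entitlements get assigned and kept current across everything a model was trained on or can retrieve.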