Fighting Fire with Fire: Can AI Be a Cyber-Defense Tool as Well as a Threat?
There are dos and don’ts of deploying AI for security.

AI is making cybersecurity tougher, but it can be an ally as well as a dangerous opponent.
As with most technological advances, AI’s capabilities are boons to both legitimate businesses and so-called black hats, threat actors who infiltrate proprietary databases and networks for nefarious reasons, said Joshua McKenty, co-founder and CEO of Polyguard and former chief cloud architect at NASA.
“There are two major effects to AI in cyberattacks right now,” said McKenty. “One, it’s lowered the bar for who can attack … Two, it’s increased the amount of attacks – the surface area, the number of attacks per minute – because of automation.”
In a report published Monday, CrowdStrike found that generative models are helping threat actors with tasks like sophisticated social engineering, vulnerability exploitation and reconnaissance. Large language models are now even capable of carrying out cyberattacks on their own: Recent research out of Carnegie Mellon University found that AI could replicate the 2017 cyberattack on Equifax, exploiting vulnerabilities, installing malware and stealing data autonomously.
With AI increasing the pace and surface area of attacks, “the time to respond with a human in the loop on the enterprise side is disappearing,” said McKenty. “The window of opportunity doesn’t need to be very large for AI systems to be taking advantage of it … everything is being tried at once.”
But enterprises can fight fire with fire: There are a few methods that work and one common one that doesn’t, said McKenty.
- One tactic is doing as the attackers do, he said: automating. Enterprises can use AI to collect and synthesize the data that needs to be managed, monitor the dark web for breaches of company data, and track in real time whether anyone is registering lookalike domain names (a minimal version of that check is sketched after this list). “Everything that you periodically do for good hygiene … you can now do continuously,” he said.
- Another is triaging and prioritizing what needs the most attention. While AI might be capable of automating software patching, “most enterprises are not in a position to (do so),” he said. But the tech can also automate the review of vulnerabilities, picking out which ones should be addressed first (also sketched below), he said.
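To make the lookalike-domain point concrete, here is a minimal, hypothetical sketch of the kind of check that could run continuously rather than as a periodic hygiene task. It tries only a handful of simple permutations and treats “resolves in DNS” as a rough proxy for “registered”; the brand name and TLD list are placeholder assumptions, and production monitoring draws on far richer sources (certificate-transparency logs, WHOIS/RDAP feeds and so on).

```python
# Minimal sketch: continuous lookalike ("typosquat") domain monitoring.
# Hypothetical example only; real tools cover many more permutation types
# and data sources than simple DNS resolution.
import itertools
import socket

BRAND = "examplecorp"                      # placeholder brand to protect
TLDS = ["com", "net", "org", "co", "io"]   # placeholder TLDs worth watching


def lookalike_candidates(name: str):
    """Yield a few simple permutations: dropped letters, doubled letters,
    and common character swaps (o->0, l->1, e->3, i->1)."""
    swaps = {"o": "0", "l": "1", "e": "3", "i": "1"}
    for i in range(len(name)):
        yield name[:i] + name[i + 1:]            # character omission
        yield name[:i] + name[i] + name[i:]      # character repetition
        if name[i] in swaps:
            yield name[:i] + swaps[name[i]] + name[i + 1:]  # homoglyph swap


def registered(domain: str) -> bool:
    """Rough proxy: does the domain resolve in DNS? (A registered domain
    with no records would be missed; a real monitor would also query
    WHOIS/RDAP and certificate logs.)"""
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False


if __name__ == "__main__":
    candidates = set(lookalike_candidates(BRAND)) - {BRAND}
    for name, tld in itertools.product(sorted(candidates), TLDS):
        domain = f"{name}.{tld}"
        if registered(domain):
            print(f"ALERT: possible lookalike domain live: {domain}")
```

Run on a schedule (or in a loop), a check like this turns a quarterly hygiene review into a continuous alert stream, which is the shift McKenty describes.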
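The triage step can likewise be illustrated with a small, rule-based sketch (not an AI model) that ranks a vulnerability backlog by severity, exploit availability and business context. The fields, weights and placeholder identifiers below are illustrative assumptions only; real programs typically blend CVSS with exploit-prediction data and an asset inventory, and an LLM-based reviewer might sit on top of a ranking like this rather than replace it.

```python
# Minimal sketch: rule-based triage of a vulnerability backlog.
# Weights and example entries are illustrative assumptions, not a standard.
from dataclasses import dataclass


@dataclass
class Finding:
    cve_id: str              # placeholder identifier
    cvss: float              # base severity, 0-10
    exploit_public: bool     # proof-of-concept or active exploitation known
    internet_facing: bool    # asset reachable from the internet
    asset_criticality: int   # 1 (low) to 5 (crown jewels)


def priority(f: Finding) -> float:
    """Blend raw severity with exploitability and business context."""
    score = f.cvss
    score += 3.0 if f.exploit_public else 0.0
    score += 2.0 if f.internet_facing else 0.0
    score += 0.5 * f.asset_criticality
    return score


backlog = [
    Finding("CVE-0000-0001", 9.8, False, False, 2),   # placeholder data
    Finding("CVE-0000-0002", 7.5, True, True, 5),
    Finding("CVE-0000-0003", 5.3, False, True, 3),
]

# Address the highest-priority findings first.
for f in sorted(backlog, key=priority, reverse=True):
    print(f"{f.cve_id}: priority {priority(f):.1f}")
```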
The thing that generally doesn’t work, however, is detecting AI-generated content, he said. Security is always chasing the newest threat, he said: As soon as you build a 10-foot wall, a 12-foot ladder shows up to clear it. And the walls always take far longer to build than the ladders.
That’s why using AI to detect deepfakes doesn’t always work, he said. As soon as a model can detect that content is AI-generated, a threat actor can create new AI-generated content that tricks it, he said. “Every time you make a detector, you might spend a year on it,” he said. “The attackers can make a new deep fake using your detector in a day, not a year.”
“Anything you can build as an AI can be used by a different AI,” said McKenty. “Every tool you build today, you’re putting into the hands of an attacker.”