Happy Thursday, and welcome to CIO Upside.
Today: We look into the new frontier of bionic hacking. Plus: Why AI use is up and ethics investment isn’t keeping pace; and how IBM’s patent attempts to strengthen cybersecurity team defenses.
Let’s jump in.
The Rise of the ‘Bionic Hacker’

Imagine all the hacking power of AI, but backed by human intelligence. That’s the threat of the bionic hacker.
Some 70% of hackers are now using AI tools to “accelerate reconnaissance, triage vulnerabilities, detect patterns and probe complex systems faster than ever,” according to Naz Bodemir, lead product researcher at HackerOne.
As AI adoption accelerates across global industries, the danger for enterprises is clear: “Attackers are exploiting AI-enabled systems faster than traditional defenses can adapt, leaving critical industries exposed,” Bodemir said.
Chain of Tools
That’s a huge change from even five years ago, when cyberattacks mostly entailed rule-based automation and manual exploitation. Today, there’s a whole new class of risks: Agentic AI systems and hackbots are able to chain tools together, adapt and act autonomously. Reports of prompt-injection attacks increased 540% in 2025, and sensitive information disclosures more than doubled, up 152% year-over-year.
“AI has turned cybersecurity into a game of speed and adaptation,” Bodemir said. HackerOne’s research shows that AI-related vulnerability reports increased by 210% over the past year, and the payouts for those findings rose by 339%. Weaknesses are being found faster than ever.
Bionic hackers are relying on AI to power several different attack methods, Bodemir said:
- They employ the usual approaches, of course: prompt injection, model manipulation and data poisoning at machine speed.
- They are also deploying AI to enhance their phishing emails, generate deepfake content and craft identities that can bypass an enterprise’s fraud controls.
In 2025, $81 million in bounties was paid. At the same time, companies avoided $2.9B in breach losses.
While bionic hackers are benefiting from using better AI to break in, enterprises can benefit by turning the same tech to their own advantage.
Deploying AI agents, which offer efficiency and speed at scale, can help enterprises boost their cybersecurity measures by allowing personnel to concentrate on complex, high-impact risks. Bodemir said enterprises that embrace the concept of a bionic hacker can benefit from leveraging the human-AI combo themselves.
“Attackers are using AI to move faster and smarter, while defenders are racing to harness the same tools,” said Bodemir. “The contest ahead isn’t humans versus machines, but which side can combine them more effectively.”
Give AI Tools Access Without Compromising Security

The Model Context Protocol, or MCP, standardizes how AI agents connect with tools, and the MCP Registry makes those tools easy to discover. But connection and discovery alone aren’t enough. Tools also need to act on behalf of users with secure, delegated access.
API keys are the typical way agents connect to tools, but this method falls short:
- They are harder to scope and revoke.
- They make setup and integration with MCP servers difficult.
- They’re less secure than an OAuth flow.
WorkOS Connect fixes all this with a fully compliant OAuth 2.1 flow for MCP. It requires user consent, enforces scoped permissions, and secures every connection with token-based access. You control what AI agents can see, you control what they can do and you improve efficiency and security in the process.
Industry Is Putting AI to Work Before Building Ethical Safeguards

Traditional machine learning is taking a backseat to GenAI, but that reversal of fortune isn’t risk-free: Ethical oversight is lagging behind adoption.
Around 78% of organizations say they fully trust AI, according to a recent study from analytics, data and AI solutions company SAS, but only 40% of them make any investment in ethical safeguards.
Udo Sglavo, vice president of applied AI and modeling research and development at SAS, said there are two reasons why enterprises aren’t investing in ethical AI:
- They’re still at the “conceptual level” of implementing AI and haven’t considered the dos and don’ts or the potential impact.
- Implementation is complicated. It requires significant experience and insight, as well as the right tools.
The organizations that invest the least in their ethical oversight trust GenAI the most, 200% more than traditional machine learning.
The crux of the problem is that large language models are a somewhat unknown entity, according to Sglavo: “They may provide you with answers, but they never give you a full understanding of ‘How did it come up with this answer, and why is it saying this?’ Adding the ethical layer is a little bit more challenging for these kinds of models.”
It’s a bit easier for machine-learning models because there are established methodologies that help users understand outputs, which inform ethical and responsible decisions.
Front-End Ethics
Creating a framework for ethical oversight “needs to happen even before you write the first line of code,” Sglavo said.
“The first thing you need to say is, ‘All right, let’s talk about the regulatory [issues], and the ethical impact of that as well, and make this a part of the journey,’” said Sglavo. Building that framework from the beginning ensures that when companies have to adjust later on to new frameworks or needs, they are prepared instead of building on the fly.
And it can be a boon for them too: AI guardrails actually boost return on investment for companies. SAS reported that trustworthy AI “leaders” were 160% more likely to report double or greater returns on their AI projects.
“No matter how technology evolves, the questions we have to ask about its trustworthiness remain the same,” said Sglavo. “Any new technology must be implemented in a way that centers humans. It’s not just the right thing to do, it’s the business-savvy thing to do.”
IBM Patent Could Automate Cybersecurity Defenses

AI has made threat actors stronger, but it’s also enabling cybersecurity teams to build sturdier defenses.
That’s something IBM might be tackling: The company is seeking to patent “cybersecurity incident investigation automation,” relying on machine learning models to automate the way cybersecurity teams respond to and investigate potential cyber threats.
The proposed system would allow a machine learning model to study under a security analyst using tools that let the analyst triage and determine how to handle security threats, while the analyst narrates what they are doing and why, according to the patent application.
First, the machine-learning model detects suspicious activity or identifies a hacking attempt, then categorizes the threat as a phishing attempt, malware or unauthorized access, for example.
It records what systems were affected and any other relevant data as it judges the severity of the threat and the situation at hand. Then, the model passes a recommendation to a human member of the cybersecurity team such as blocking an IP address or isolating a device. The member responds to that advice, then lets the AI complete the action.
IBM’s patent isn’t the first time we’ve seen tech firms meld AI with their cybersecurity strategies. Wells Fargo has sought to patent AI-powered deepfake detection tech, Booz Allen Hamilton filed a patent for tech that finds and fixes software vulnerabilities and a recent Microsoft patent weeds out false alarms of attacks on cloud environments.
With cyber threats posing an immediate, ever-present threat, this system would allow teams to respond quickly and efficiently to potential issues with a mixture of automation and human input.
Extra Upside
- Mass Hacks: A hacker group is trying to extort victims after allegedly gaining access to about a billion customer records stored in companies’ cloud databases.
- Sora-ing: OpenAI’s video generation app is blowing up in the App Store.
- Get AI Tools Working Fast And Control What They Do. MCP makes it easy for AI agents to connect to tools, but security is still a problem. API keys break the user’s flow, offer broad access, and lack controls. WorkOS Connect replaces them with a secure, scoped OAuth 2.1 flow. Try WorkOS Connect.*
* Partner
CIO Upside is a publication of The Daily Upside. For any questions or comments, feel free to contact us at team@cio.thedailyupside.com.

