Happy Thursday, and welcome to CIO Upside.
Today: Autonomous AI agents may give enterprises the capability to do more, but riding the line between risk and innovation continues to be tricky. Plus: How ransomware gangs are getting smarter; and Nvidia’s recent patent goes after robotic safety.
Let’s jump in.
Does Agentic AI Have Staying Power?

Agentic AI may be more than just a fad.
The question of whether AI has staying power has long depended on people’s ability to gain tangible value from it. Autonomous agents could allow enterprises to unlock that value, making it a “super trend,” experts told CIO Upside at the Info-Tech Live conference in Las Vegas this week.
“Looking at agentic AI and seeing that it has potential to automate across knowledge worker processes in all different industries … we’re talking about automating intelligence,” said Brian Jackson, principal research director at Info-Tech. “Humanity’s never automated intelligence before in the way that we’re doing it now.”
Supertrends in tech aren’t a common occurrence, Tom Zehren, CEO of Info-Tech, told CIO Upside. Innovations that have the capability of sticking around tend to come every 15 to 20 years, he said, citing PCs, the internet and cloud computing.
Such innovations often build on each other, he noted, pushing the bar for technological innovation higher and higher. “The number of mature technologies that had to come together in former supertrends, over time, keeps going up,” Zehren said. “It’s getting more complex. It’s going to be more intertwined.”
And while conventional generative AI laid the groundwork, the automation capabilities of AI agents that can work together to solve problems are “going to be able to disrupt almost all roles, all functions, everything we’re doing today,” he said.
Playing in the AI ‘Sandbox’
More autonomy, however, opens the door to more risk — at a time when even generative AI’s security, governance and ethical hazards haven’t been effectively addressed. Giving these models more room to roam could present even more dangers, said Valence Howden, principal advisory director at Info-Tech:
- “Ethics is a bigger animal than just the bias aspect of it,” Howden said. “I suspect they’re nowhere near where they need to be in terms of understanding how to deal with ethics.”
- And because these systems are capable of interacting with multiple other systems, departments and people within an enterprise, governing risk and security gets even trickier, said Thomas Randall, advisory director at Info-Tech. “It’s very difficult to get that visibility if it’s integrated across all these different tools,” he noted.
Still, risk-first thinking may prevent your enterprise from taking the first step, said Zehren. While it’s important to understand the risks that your business is undertaking by implementing agentic AI, he added, “it’s a mistake to think too much about it from a risk perspective.”
For one, risk isn’t static, said Zehren. The risks, whether ethical, governance or security-related, evolve as the tech itself scales. Discouraging developers and employees in the “sandbox” phase of building because of risk concerns will only serve to cut your enterprise off at the knees.
“Risk evolves as you build something. It’s a journey,” said Zehren. “I don’t think enterprises provide enough sandbox environments, don’t empower enough people to actually work with AI.”
Ransomware Groups Use AI to Level Up

A new wave of AI-powered threats is on the loose.
A recent Cisco Talos report found that ransomware gangs are leveraging AI hype, luring enterprises with fake AI business-to-business software while pressuring victims with psychological manipulation.
Ransomware groups like CyberLock and Lucky_Gh0$t, along with a newly discovered malware strain dubbed “Numero,” are impersonating legitimate AI software such as Novaleads, a multinational lead monetization platform. Kiran Chinnagangannagari, co-founder and chief product and technology officer at global cybersecurity firm Securin, told CIO Upside that this tactic is not niche.
“It is part of a growing trend where cybercriminals often use malicious social media ads or SEO poisoning to push these fake tools, targeting businesses eager to adopt AI but unaware of the risks,” Chinnagangannagari said.
Mandiant, the cybersecurity arm of Google, recently reported a similar campaign running malicious ads on Facebook and LinkedIn, redirecting users to fake AI video-generator tools imitating Luma AI, Canva Dream Lab and Kling AI.
AI Gaslighting
Ransomware gangs are also using psychological manipulation to increase the success rate of their attacks. For example, CyberLock is leaving victims notes asking them to pay $50,000, an unusually low demand compared with the industry average. The notes claim the ransom payment will be used for “humanitarian aid” in various regions, including Palestine, Ukraine, Africa and Asia.
- The $50,000 demand pressures smaller businesses into paying quickly while avoiding the scrutiny that comes with multi-million dollar ransoms, Chinnagangannagari said.
- Organizations should never pay the ransom, as payment offers no guarantee of results, Chinnagangannagari said. “Companies should focus on robust backups and incident response plans to recover without negotiating,” he added.
- Security leaders also need to prepare their teams for psychological manipulation, not just technical defenses, said Mike Logan, CEO of C2 Data Technology. “These ransomware attacks are not just technical threats but psychological weapons.”
In certain industries, these smaller-scale ransomware attacks can have more serious impacts. “There are edge cases, healthcare for example, where human lives are at stake,” Logan said. However, even in those cases, the goal should be to have preventive controls in place so that paying never becomes the only option, he said.
Companies should report the incident, work with authorities, and treat the breach as a catalyst to modernize their security posture, he said.
The new wave of AI business-targeting ransomware demands a paradigm shift in defense strategies. Cybersecurity experts now consider AI tools high-risk assets, Chinnagangannagari said. Training staff to spot fake, malicious and suspicious online activity, especially when downloading unverified AI apps, is essential.
Nvidia Safety Patent Signals Physical AI Push

Nvidia wants to make sure its robots can see straight.
The company filed a patent application for “multimodal object detection for autonomous systems and applications,” a hazard-identification system that collects data from multiple sensors over time to enable more robust risk awareness.
Continuously fusing data from sensors including lidar, radar and cameras would enable the system to estimate the probability that a hazard exists in a given location. When a hazard is recognized, the system creates a “bounding shape,” essentially a digital box, around that object. Over time, that bounding shape is refined with more sensor data to help the system understand the exact dimensions of the potential hazard.
The goal of Nvidia’s tech is to help give autonomous systems more reliable memory, hazard detection and depth perception. “While some conventional systems are configured for hazard detection, these systems often provide an inadequate confidence level for the detection,” Nvidia said in the filing.
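To make the idea concrete, here is a minimal sketch of that fuse-and-refine loop. The patent filing does not disclose its actual math, so the update rules below — a running average of box extents and a simple log-odds confidence accumulator — are illustrative assumptions, not Nvidia’s implementation; all names are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's estimate of a hazard's box, with a reliability weight."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    weight: float  # higher for more reliable sensors (e.g. lidar over camera)

@dataclass
class BoundingShape:
    """Digital box around a suspected hazard, refined as detections arrive."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    log_odds: float = 0.0  # accumulated evidence that the hazard is real
    n_obs: int = 1

    def update(self, det: Detection) -> None:
        # Blend the new box into the running estimate; each additional
        # observation shifts the shape less, so the dimensions converge.
        a = 1.0 / (self.n_obs + 1)
        self.x_min += a * (det.x_min - self.x_min)
        self.y_min += a * (det.y_min - self.y_min)
        self.x_max += a * (det.x_max - self.x_max)
        self.y_max += a * (det.y_max - self.y_max)
        # Independent sensors agreeing raises the hazard probability.
        self.log_odds += det.weight
        self.n_obs += 1

    @property
    def probability(self) -> float:
        # Convert accumulated log-odds into a probability in (0, 1).
        return 1.0 / (1.0 + math.exp(-self.log_odds))

# A camera first spots something; lidar and radar then confirm it.
shape = BoundingShape(0.0, 0.0, 2.0, 2.0, log_odds=0.5)
shape.update(Detection(0.2, 0.1, 1.9, 2.1, weight=1.5))  # lidar
shape.update(Detection(0.1, 0.0, 2.0, 2.0, weight=1.0))  # radar
print(round(shape.probability, 2))  # ≈ 0.95
```

This captures the two behaviors the filing emphasizes: confidence rises as independent sensors corroborate a detection, and the box dimensions stabilize with each new observation.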
Nvidia has a keen interest in autonomous machines. The company inked a partnership with Toyota earlier this year to implement self-driving tech in vehicles, and showcased collaborations with General Motors, Volvo and several freight companies at its GTC conference this year. The company also debuted models there that would support and accelerate the development of humanoid robots.
This patent isn’t the first time we’ve seen Nvidia tackle the safety problems facing AI-powered robotics. The company has also sought patents for proactive safety measures for robots, autonomous machine prompt generation, and reactive interactions for robots.
The filings signal that safety remains a major barrier to deploying autonomous machines. And as the AI robotics race heats up alongside the broader AI push, surmounting that barrier, or at least mitigating as many incidents as possible, is vital to getting people to trust these machines.
Extra Upside
- Meta Amps Up: Meta hired engineers from Google and Sesame AI for a new team focused on artificial general intelligence.
- Quantum Hopes: Nvidia CEO Jensen Huang said quantum computing is reaching an “inflection point” at the GTC Paris developer conference.
- Robotaxi Rumble: Tesla is “tentatively” set to launch its robotaxi service in Austin, Texas, on June 22.
CIO Upside is written by Nat Rubio-Licht. You can find them on X @natrubio__.
CIO Upside is a publication of The Daily Upside. For any questions or comments, feel free to contact us at team@cio.thedailyupside.com.