
Increasing Your Enterprise’s ‘Adaptability Quotient’

Staying flexible and informed can help your enterprise get the most out of AI agents.

Photo of Shibani Ahuja, SVP of Enterprise IT Strategy at Salesforce


When Shibani Ahuja landed in San Francisco for her onboarding at Salesforce seven months ago, she was struck by the number of billboards focused on AI and tech.

They stood in stark contrast to those in her home city of Toronto, which mostly displayed ads for personal injury lawyers, she said. The signs made her realize that most CIOs don’t live in “hot hubs” of technology: While many are seen as their enterprise’s resident tech expert, they themselves are still playing catch-up as tech evolves at a rapid clip. 

“They are all at different and varying degrees of maturity,” Ahuja, Salesforce’s SVP of enterprise IT strategy, told CIO Upside. “And what I’ve really learned is we’ve got a responsibility to help educate CIOs along the way.” 

CIO Upside sat down with Ahuja to discuss the importance of executive AI education in change management, how to stay adaptable and how enterprises can find real value in their tech investments. This interview has been edited for clarity and brevity. 

Generally, what do you think enterprises are getting right – or getting wrong – in their approaches to AI? 

I think what they’re getting right is the enthusiasm: the recognition that this is not a fad, and that more and more CIOs are considering how to embed it within their strategy. Interestingly, it’s not just CIOs engaging on this topic; I’ve actually been speaking to a number of CFOs and CEOs. I’ve never seen a technical advancement where the board is putting pressure on the CIO, and that can be seen as a positive or a negative. 

Where I’m seeing opportunities for organizations to do a little better is managing this panic versus paralysis. The panic … is leading to proof of concepts. It’s leading to singular use cases. The challenge is, when they’re trying to scale it, they’re recognizing, “Oh wait, this is very expensive to scale,” or “This is not embedded within our workflow. It’s something that’s been built that sits on the side.” 

Largely, the industry has been moving beyond conventional generative AI toward agents. What challenges do you see enterprises face in the adoption and deployment of this tech? 

My day job is literally just to meet with CIOs. And what I’m discovering is the bigger the organization, the harder it is for a CIO to know everything about everything in the tech space. What they’re hearing around autonomous and agentic is what they’re reading in articles. So I think that education is the starting point … Educating CIOs in such a way that you are not hurting their egos is, I think, very important. 

I wonder how many CIOs are now having to wear this hat of not just understanding and deploying this technology as fast as they can because their boards are telling them to, but how many of them are also having to get educated on it themselves and educate the organization? I think that a big component of organizational readiness starts with education and a baseline of knowledge. 

Upfront Cost

Why is scaling beyond single use cases so difficult? 

You can either say, “I’m going to carve out a team over here, and they’re just going to be focused on this proof of concept,” or alternatively, you could say, “I want to build a proof of concept that is embedded within my production environment.” That’s often a bit more expensive up front, but less expensive on day two, because it’s not something that you built outside and have to embed. With the proof of concept, it’s cheaper to stand it up and do it, but then to implement it, you almost have to break it apart again to reimplement it. I think that’s what’s making it tricky. 

Let’s talk a bit about Salesforce’s Agentic Maturity Model, which suggests four levels of autonomy and responsibility. What key steps do you think enterprises need to take to graduate from one step to the next? 

Ironically, as a tech leader, I go back to the business outcomes that we are driving, not the shiny object that we are deploying. Deploying something in production doesn’t make it successful; the adoption, the consumption and the outcomes are what make it successful. To jump from one level to another, you want to look at where you’re getting incremental business value. The second one is around data readiness. To move from one level to another, there’s a dependency on having the data available, clean, safe and trusted to be able to shift. And third is really just around having the technologies available. 

Architect for your single use case now, but know that it’s a gradual build. You don’t have to invest in all that technology immediately. Start with something that is adding value, then build and grow gradually over time. 

Agentic Coworkers

With the increased autonomy and access that agentic AI calls for, how does Salesforce approach data security and governance? How should enterprises approach this? 

They should approach it as their starting point. Data security, governance and, another one, data quality. I think folks are recognizing the importance of clean, trusted, secure, high-quality data. It’s the underlying machine to absolutely everything. Agents, yes, they’re in the limelight, but Data Cloud, as far as I’m concerned, is our Trojan horse. 

When people start to recognize or think about agents as humans, as our coworkers … they provision (data) access. Just because I’m smart enough to do every job under the sun, I shouldn’t have access to all of that data. I think that we have to start thinking about agents in the same way. Just because you can code an agent to do everything you want to under the sun, you still need to be mindful of separation of duties.

I think having comfort and confidence that you are provisioning data to an agent, through guardrails, through security, through governance, that is embedded within whatever solution you choose … I think that becomes paramount.
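Ahuja’s coworker analogy — grant each agent only the data access its job requires, and deny everything else — maps onto a familiar least-privilege pattern. The sketch below is purely illustrative and not any Salesforce API; the class and scope names are hypothetical, and a real deployment would enforce this inside the platform’s own governance layer.

```python
# Illustrative sketch: "separation of duties" for AI agents, treating data
# access like permissions granted to a human coworker. All names here
# (AgentDataGateway, scope strings) are hypothetical.

class AgentDataGateway:
    """Mediates every data request an agent makes against its provisioned scopes."""

    def __init__(self):
        self._grants = {}  # agent_id -> set of allowed data scopes

    def provision(self, agent_id, scopes):
        # Grant only the scopes this agent's job actually requires.
        self._grants[agent_id] = set(scopes)

    def can_access(self, agent_id, scope):
        return scope in self._grants.get(agent_id, set())

    def fetch(self, agent_id, scope, loader):
        # Guardrail: deny by default. An agent capable of doing
        # "every job under the sun" still only sees its own scopes.
        if not self.can_access(agent_id, scope):
            raise PermissionError(f"{agent_id} is not provisioned for {scope}")
        return loader(scope)


gateway = AgentDataGateway()
gateway.provision("support-agent", ["cases", "knowledge_articles"])

print(gateway.can_access("support-agent", "cases"))    # True
print(gateway.can_access("support-agent", "payroll"))  # False
```

The design choice being sketched is the one Ahuja describes: access is provisioned per agent up front, and the guardrail sits between the agent and the data rather than inside the agent’s own logic.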

Where should the line be drawn with AI agent implementation in the enterprise? What tasks shouldn’t AI be responsible for? 

I came from banking, so the idea of getting a single agent into production was a daunting task, because you needed legal, privacy, compliance, my aunt and my uncle to sign off. It was wild. I think in that case, there’s almost nothing that you’d have an agent do. So I think the degree of comfort of what an agent should or should not do, it still comes down to the industry that you are in and the degree of regulatory oversight and the governance of it. I think that there isn’t a one-answer-fits-all to this question. 

What you need to feel comfortable when deploying agents is, ironically, the human side. It’s a greater amount of empathy. Technology, I think, is forcing roles to change. It’s changing the way we look at how our colleagues are engaged, how we are engaging our customers … We need to be more mindful of how we are designing agents, to be mindful of the human experience.

A common issue that enterprises face as they adopt AI is getting a return on their investments. How can organizations start to see real value, and are agents a part of that picture? 

When I was in my old seat seven months ago, it was hard for me to fathom saying, “Hey, CFO, trust me. I need a bunch of money to upgrade my tech stack, and I know we’re gonna get returns.” The tech approach is to say, “In order to achieve the target state of the future, which sometimes is hard to quantify, we need all of this technology.” So I changed that entirely. I said, “I’m going to start with the ultimate outcomes of the business.” What has my CEO outlined as our strategic imperatives as a company? 

Going down to every single line of business, we broke it down into business use cases that we could quantify. Instead of technology, we focused on the outcomes that we want to drive, and we broke those outcomes down into business use cases. We invested in technology, and in the consumption of that technology, anchored specifically to a use case. And we deemed a use case good if you could actually quantify it. 

What does the future of agentic AI look like? 

A customer once asked me a somewhat similar question, but in the context of, “What tech should I be investing in today?” Instead of spending time predicting what the future is going to look like in a technology that is so big and so real and so new, I would focus energy on how you can increase your AQ. There’s IQ, there’s EQ, and there’s AQ – adaptability quotient. How are you building technology stacks that will allow you to pivot? How are you building modularly? How are you investing in technology that will allow you to be interoperable, that will allow you to adjust and move with the technology that I can’t predict? You’ll get caught up in not making a decision if you’re trying to anticipate what the future is. 
