Industry Is Putting AI to Work Before Building Ethical Safeguards

While 78% of organizations say they fully trust AI, only 40% of them invest in ethics frameworks, according to SAS research.

Photo of a person opening an empty wallet (Frank van Hulst via Unsplash)

Traditional machine learning is taking a backseat to GenAI, but that reversal of fortune isn’t risk-free: Ethical oversight is lagging behind adoption.

Around 78% of organizations say they fully trust AI, according to a recent study from analytics, data and AI solutions company SAS, but only 40% of them make any investment in ethical safeguards. 

Udo Sglavo, vice president of applied AI and modeling research and development at SAS, said there are two reasons why enterprises aren’t investing in ethical AI: 

  • They’re still at the “conceptual level” of implementing AI and haven’t considered the dos and don’ts or the potential impact. 
  • Implementation is complicated. It requires significant experience and insight, as well as the right tools.

The organizations that invest the least in ethical oversight also trust GenAI the most, reporting roughly 200% more trust in it than in traditional machine learning.

The crux of the problem is that large language models are a somewhat unknown entity, according to Sglavo: “They may provide you with answers, but they never give you a full understanding of ‘How did it come up with this answer, and why is it saying this?’ Adding the ethical layer is a little bit more challenging for these kinds of models.” 

Adding that layer is easier for traditional machine-learning models because established methodologies help users understand how outputs are produced, which in turn informs ethical and responsible decisions.

Front-End Ethics

Creating a framework for ethical oversight “needs to happen even before you write the first line of code,” Sglavo said.

“The first thing you need to say is, ‘All right, let’s talk about the regulatory [issues], and the ethical impact of that as well, and make this a part of the journey,’” said Sglavo. Building that framework from the beginning means that when companies later have to adapt to new rules or requirements, they are prepared rather than scrambling to retrofit safeguards on the fly.

And it can be a boon for companies, too: AI guardrails actually boost return on investment. SAS reported that “leaders” in trustworthy AI were 160% more likely to report double or greater returns on their AI projects.

“No matter how technology evolves, the questions we have to ask about its trustworthiness remain the same,” said Sglavo. “Any new technology must be implemented in a way that centers humans. It’s not just the right thing to do, it’s the business-savvy thing to do.”
