
Why Your Enterprise Should Be Thinking About Ethical AI

‘There needs to be someone held responsible.’



While much of the AI conversation centers on what we can do with it, the bigger question may be what we shouldn’t.

With the rapid pace of development in the AI space, enterprises are grappling with how to weigh the ethics of the technology as they deploy it. And failing to prioritize ethical standards can lead to more than just bad PR, Walter Sun, SVP and global head of AI at SAP, told CIO Upside.

“How does this AI help the customer, help the user, help the employee of my company, versus it being a replacement?” said Sun. “You can’t afford to just let everything be automated, because things can go off the rails. There needs to be someone held responsible.”

Testing out new technology can be an exciting endeavor for an enterprise, said Sun. But with any new deployment, ethical standards need to be taken into consideration right from the start. Sometimes, that starts with the basics. 

“I hope most companies … have this idea of a moral compass,” he said. “Saying, ‘Hey, we have these non-negotiables. We’re not using AI to do X, Y and Z.’ Whether it be harming the environment or doing things that are negative for society.”

Beyond the red flags, enterprises may have a harder time deciphering the “shades of yellow to green,” he said. The answer often depends on the sector a business operates in: government and other “high-risk” sectors may be held to higher ethical standards.

So how can an enterprise start to think about ethics? It all starts with the “Three R’s,” said Sun: “relevant, reliable and responsible.”

  • Relevant AI is about bringing actual value to customers, said Sun, optimizing business outcomes “based on real data.”
  • Reliable AI, meanwhile, is about having the “right data for the right models.” This means ensuring that datasets are clean, unbiased and up to security standards.
  • Responsible AI is about questioning everything that can go wrong with an AI deployment. That includes considering data security issues, biases or use cases that turn out to be harmful to the operator.
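The three R’s read less like abstractions than like a gating checklist. As a rough illustration only, not SAP’s framework or Sun’s wording, the questions could be encoded as a pre-deployment review that blocks a release if any item is unchecked; every field name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ThreeRsReview:
    """Hypothetical pre-deployment checklist loosely based on the 'Three R's'."""
    # Relevant: does the model bring measurable value, grounded in real data?
    business_value_demonstrated: bool
    validated_on_real_data: bool
    # Reliable: is it the right data for the right model?
    dataset_clean_and_documented: bool
    bias_audit_completed: bool
    meets_security_standards: bool
    # Responsible: what can go wrong, and who answers for it?
    failure_modes_reviewed: bool
    accountable_owner_assigned: bool

    def ready_to_deploy(self) -> bool:
        # Every item is treated as a non-negotiable: one "no" blocks the release.
        return all(vars(self).values())

review = ThreeRsReview(
    business_value_demonstrated=True,
    validated_on_real_data=True,
    dataset_clean_and_documented=True,
    bias_audit_completed=False,   # audit still pending, so the gate stays closed
    meets_security_standards=True,
    failure_modes_reviewed=True,
    accountable_owner_assigned=True,
)
print(review.ready_to_deploy())  # False
```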

Beyond this, a good place to start is with public guidelines put out by nonprofit organizations, said Sun. UNESCO, for example, offers 10 guiding principles for the ethical adoption of AI, which SAP aligns with, Sun noted. Enterprises can also combine adherence to public principles with standards of their own.

If these considerations are forgone in the name of progress, the consequences can be dangerous. AI tends to absorb the biases present in its training data and reinforce them as it learns. Without monitoring and questioning of a model’s decisions, those biases can go unchecked, leading to discrimination in business outcomes, said Sun.
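In practice, that kind of monitoring can be as simple as tracking outcomes by group and flagging gaps. The sketch below is illustrative only: the data is made up and the 0.8 cutoff borrows the common four-fifths heuristic, not anything Sun or SAP prescribes.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved: bool) pairs taken from a model's outputs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` of the best group's."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Toy example: group B is approved far less often than group A.
sample = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 45 + [("B", False)] * 55
print(flag_disparity(sample))  # {'B': 0.45}
```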

But with an ethics strategy in place, even if something does go haywire, enterprises can retrace their steps, tracking how the model was trained, how it was used and how it reached a given decision to figure out where things went wrong, Sun said.
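Retracing steps like that presupposes each decision was recorded in the first place. A minimal audit-trail sketch, with hypothetical field names and no claim about SAP’s actual tooling, might log the model version, training-data reference and inputs alongside every prediction:

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_name, model_version, training_data_ref,
                 features, prediction, path="audit_log.jsonl"):
    """Append one model decision to a JSON-lines audit log so it can be retraced later."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,          # which model made the call
        "training_data_ref": training_data_ref,  # how that model was trained
        "features": features,                    # what the model saw
        "prediction": prediction,                # what it decided
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Example: record a loan-approval decision so it can be audited later.
log_decision("credit_scorer", "2024.06.1", "dataset-snapshot-2024-05-30",
             {"income": 54000, "tenure_months": 18}, "approved")
```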

“All consumers want to feel trust in our businesses,” said Sun. “If we have an explanation saying ‘We actually do have this AI ethics assessment process, and this is why something may have fallen through the cracks,’ that’s better than saying ‘We were trying to be aggressive and things happen.’” 

While AI is the current leading edge of the tech world, an ethical framework can and should apply to any technological transformation, said Sun.

“Besides AI, there’s the whole idea generally of explainability and transparency,” said Sun. “I think those are things that are important in all areas of business.” 
