
Nvidia Eyes AI Guardrails Patent as Chip Sales Soar

It’s hard to place exactly how much responsibility chipmakers have in ensuring that their tech is used the right way.

Photo of an Nvidia patent, via the U.S. Patent and Trademark Office


As Nvidia’s ever-popular chips become the backbone of AI development, the company may want to make sure its tech is being used for good. 

The company is seeking to patent “runtime alignment of language models in conversational AI systems.” Nvidia’s tech would put a muzzle on large language models, constraining bad output before it gets to users. 

Nvidia’s tech relies on a “formal modeling language” to define guardrails, the rules that keep a model from generating irrelevant, inappropriate, or inaccurate responses.
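The filing doesn’t publish the language’s syntax, but as a rough sketch of the idea, a guardrail spec might be expressed declaratively. The hypothetical Python structure below is purely illustrative; every name in it is an assumption, not something taken from the patent.

```python
# Hypothetical, declarative guardrail spec. The patent describes a
# "formal modeling language" but does not publish its syntax; this
# structure only illustrates the general shape of the idea.
GUARDRAILS = {
    # Topics the model should refuse regardless of phrasing.
    "blocked_topics": ["violence", "self-harm", "illegal activity"],
    # Keep answers on-topic for the deployment (an imaginary support bot).
    "allowed_scope": "customer support for Acme products",
    # Checks applied to every candidate response before it reaches the
    # user; these map to the "irrelevant, inappropriate, or inaccurate"
    # failure modes the filing calls out.
    "output_checks": ["on_topic", "appropriate", "grounded"],
}
```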

First, Nvidia’s tech converts a user’s input into a short, summarized description and tries to match it against a “dialog flow,” a predefined guided path for a conversation. If the user’s input matches a predefined dialog flow, that flow is used; if not, the system generates a new dialog flow that fits within its guardrails.
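To make that match-or-generate step concrete, here is a minimal, hedged sketch in Python. The function names (summarize, generate_guarded_flow) and the exact-string matching are illustrative assumptions; a real system would presumably use an LLM or embedding model where this sketch uses simple string comparison.

```python
# Illustrative sketch of the routing step described above. None of
# these names come from Nvidia's filing.

DIALOG_FLOWS = {
    "user asks about pricing": ["bot explains pricing tiers"],
    "user requests a refund": ["bot collects order id", "bot files refund"],
}

def summarize(user_input: str) -> str:
    """Stand-in for the 'short, summarized description' step.
    A production system would call a model to canonicalize the input."""
    return user_input.lower().strip()

def generate_guarded_flow(intent: str) -> list[str]:
    """Stand-in for generating a new flow within the guardrails.
    A real system would have the LLM propose steps, then validate each
    one against the formal guardrail spec before using it."""
    return [f"bot gives a safe, on-topic response to: {intent}"]

def route(user_input: str) -> list[str]:
    intent = summarize(user_input)
    if intent in DIALOG_FLOWS:             # matched a predefined flow
        return DIALOG_FLOWS[intent]
    return generate_guarded_flow(intent)   # fall back, guardrails applied

print(route("User requests a refund"))
# -> ['bot collects order id', 'bot files refund']
```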

By relying on these predefined dialog flows rather than improvising each response on the fly, Nvidia could exert more control over a language model’s outputs, making it easier to keep the model from veering into inappropriate territory.

This approach sidesteps the problem with fixed rules for keeping language models in line, which the filing says are often “ineffective, unreliable, and/or not suitable for the endless possibilities of user inputs or user queries.”

It’s no secret that Nvidia dominates the AI chips market. The company has seen a continued hot streak with its data center sales: It reported $26.3 billion in revenue for the unit in the most recent quarter, largely driven by the AI frenzy. But with great power comes at least a little bit of responsibility. 

“[Nvidia] is showing that they care about the ethical use of the technology,” said Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University. 

“They’re also providing resources to people who are producing large language models.”

However, it’s hard to place exactly how much responsibility Nvidia and other chipmakers have in ensuring that their tech is used the right way, said Green. That’s because, while AI can be utilized by bad actors, “it’s not something that clearly has a destructive side to it,” he said. Plus, while the risk of an AI scandal impacting Nvidia’s reputation is low, Green added, “it’s not zero.”

“Chip companies overall would want to try to make sure that the technology is being directed towards good uses,” said Green, “because the more good uses there are, the more chips people will need.”