
Oracle’s Patent Could Keep AI From Spilling the Beans

Oracle wants to protect its AI models from being asked the wrong questions with a patent for a “machine learning model attack guard.”



Oracle wants to protect its AI models from being asked the wrong questions. 

The tech firm is seeking to patent a “machine learning model attack guard” for models hosted in cloud environments. Oracle’s system prevents a user from reverse-engineering sensitive training data out of a machine learning model by faking out the attacker with “a shadow model that is similar to the (machine learning) model.” 

To break it down: This system first identifies a user that may be attempting to attack a model, based on the prompts or requests they send it. For example, if a user asks a language model just the right questions, it may reveal data it isn’t meant to. In response, the system generates a shadow model that mimics the original machine learning model, trained not on the original’s authentic data but on “feature vectors” instead. 
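The filing doesn’t include code, but one way to picture the shadow-model step is a small classifier distilled from the original’s outputs rather than its private training set. Below is a minimal Python sketch; the names (`original_model`, `make_shadow_model`) and the use of logistic regression are illustrative assumptions, not details from Oracle’s patent.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for the deployed model, trained on sensitive data
# (hypothetical; any fitted model would do here).
X_private = rng.normal(size=(500, 8))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
original_model = LogisticRegression().fit(X_private, y_private)

def make_shadow_model(model, n_samples=500, n_features=8):
    """Train a mimic on synthetic feature vectors labeled by the
    original model, so the shadow never touches the private data."""
    X_synthetic = rng.normal(size=(n_samples, n_features))
    y_mimic = model.predict(X_synthetic)  # distill behavior only
    return LogisticRegression().fit(X_synthetic, y_mimic)

shadow_model = make_shadow_model(original_model)
```

The point of the design is that the shadow reproduces the original’s input-output behavior closely enough to stand in for it, while containing no representation of the sensitive records an attacker is fishing for.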

When the suspected attacker sends a second set of questions or requests, the system generates two responses, one from the original model and one from the shadow, and compares them. Based on the comparison, the system determines whether the user is actually an attacker. If the person is identified as an attacker, the system triggers a model guard service, blocks and reports the attacker, or modifies its responses. 
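Continuing the sketch above (and reusing `original_model`, `shadow_model`, and `rng` from it), here is one plausible reading of that comparison step: if the two models diverge sharply on the follow-up queries, the queries may be probing behavior specific to the private training data, so the user is flagged. The threshold, divergence measure, and guard actions are all assumptions for illustration.

```python
import numpy as np

def classify_and_guard(queries, original_model, shadow_model, threshold=0.3):
    """Hypothetical guard step: answer follow-up queries with both
    models, compare, and decide whether to block, report, or swap in
    the shadow's (data-free) responses."""
    orig = original_model.predict_proba(queries)[:, 1]
    shad = shadow_model.predict_proba(queries)[:, 1]
    divergence = float(np.mean(np.abs(orig - shad)))

    if divergence > threshold:
        # Large disagreement suggests the queries target regions where
        # only the real model's private training data shows through.
        return {"attacker": True, "action": "block_and_report"}
    # Otherwise, serve answers derived from the shadow instead.
    return {"attacker": False, "responses": shad.round().tolist()}

suspect_queries = rng.normal(size=(20, 8))
print(classify_and_guard(suspect_queries, original_model, shadow_model))
```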

Oracle noted that this could be deployed specifically in cloud contexts where machine learning is offered as a service, which often “suffers from the problem that, since the model is exposed to the public, users may attempt to attack the model.” 

“ML models are … potentially valuable targets of attacks because they contain a representation of the training dataset they were trained on,” the filing stated. “Machine learning models can be the target of attacks where the confidentiality or the privacy of the model and data can be compromised.” 

Photo via the U.S. Patent and Trademark Office.

AI models are like children: if someone asks questions in just the right way, they “can very easily get them to spill the beans,” said Arti Raman, founder and CEO of data protection company Portal26.

“I think that’s how we have to think about AI models right now,” said Raman. “They’re young, and so we don’t even know all the ways that they can be attacked. And building those guardrails is going to take some time.” 

Protecting an original model, which often contains a treasure trove of training data, is important because there are multiple ways an AI model can be attacked, said Raman. While the algorithms themselves can be attacked, Oracle’s patent attempts to mitigate vulnerabilities exposed through the prompts themselves. And as AI adoption continues at its current pace, this problem will only worsen, she said. 

While Oracle isn’t necessarily an AI powerhouse, the company offers AI infrastructure embedded into its core cloud and data services. Plus, the company has sought a number of patents for AI tech, including a chatbot-generating chatbot and an AI explainability tool. Making tech like this proprietary certainly wouldn’t hurt if the company wants to grow its power in the space. 

The only potential problem? Getting it to work, said Raman. While a system like this can make sense on paper, spinning up copies of machine learning models on demand could consume a significant amount of computing resources. It also may run into latency issues, which would undercut the productivity gains AI is supposed to deliver in the first place. 

“Those are the two questions to see how well (this system) can guard against attacks like that,” Raman said. “Those are the two things that take a technology from concept to practical application.”