
Google’s ego breaker


Plus: Mastercard’s small business chatbot; Adobe’s Waldo-finder

Happy Thursday and welcome to Patent Drop! 

Today’s patents are (surprise, surprise!) all AI-based. Google wants to patent tech to make its AI a little more humble; Mastercard wants to help small businesses understand themselves better; and Adobe is breaking down the bigger picture. 

But before we get into that, a quick word from today’s sponsor, ButcherBox. As a Patent Drop reader, you make sure to source your news from only the most high-quality, dependable, and witty of sources (that’s us). But what about when it comes to sourcing your food?

Like your brain, your body needs high-quality fuel – and that means it’s time to stop settling for gas station snacks and Subway runs and start opting for ButcherBox. ButcherBox delivers the highest-quality beef, chicken, pork, and seafood directly to your doorstep. We’re talking the full-blown, grass-fed, free-range, wild-caught good stuff, complete with recipes to make the most of every piece (think brown butter scallops, grilled steak bruschetta, and citrusy herbed lamb). 

Ready to up your meat game? New customers can get $100 off with code MARCH100 plus free chicken nuggets in every box for 1 year when they sign up right here. 

Anyways, let’s take a peek. 

#1. Google’s course corrector

Sometimes AI can be like that one guy in your freshman year politics class who always had something to say: overconfident in responses that weren’t entirely accurate. Google wants to program a little self-doubt. 

The company filed a patent application for “regularizing machine learning models,” a technique meant to improve the performance of a trained neural network. With it, Google is trying to solve the longstanding problem of “overfitting,” the term for when a neural network hews too closely to its training data, becomes overconfident, and ends up performing poorly on inputs that data doesn’t cover.

Overfitting is a fundamental problem in AI development, but if you didn’t take Machine Learning 101 in undergrad, think of it like this: An AI is trained to identify birds by looking at thousands upon thousands of images of birds flying in the sky. If it’s then shown a photo of a plane in the sky, it might classify that as a bird. 

Google’s tech tackles this by using a “regularizing training data set”: one seeded with a predetermined amount of “noise,” or meaningless data. These sets are created by modifying the labels in a dataset, for example, changing a label that “correctly describes the training item to a label that incorrectly describes the training item.” Trained this way, a neural network learns not to lean too heavily on its training data, and produces more accurate answers on unfamiliar inputs.
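For the curious, the label-flipping step the filing describes resembles a standard label-noise regularization trick. A minimal sketch in Python (the function name, parameters, and data shape here are illustrative assumptions, not from Google’s filing):

```python
import random

def add_label_noise(dataset, noise_fraction=0.1, num_classes=10, seed=0):
    """Return a copy of (features, label) pairs in which a fixed fraction
    of labels has been swapped for a deliberately incorrect one.

    This mimics the patent's idea of a "regularizing training data set":
    a dataset containing a predetermined amount of label noise.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    noisy = list(dataset)
    num_to_flip = int(len(noisy) * noise_fraction)
    for i in rng.sample(range(len(noisy)), num_to_flip):
        features, label = noisy[i]
        # Replace the correct label with any *other* class label.
        wrong = rng.choice([c for c in range(num_classes) if c != label])
        noisy[i] = (features, wrong)
    return noisy

# Example: flip 20% of labels in a toy 100-item dataset.
data = [([float(i)], i % 10) for i in range(100)]
noisy = add_label_noise(data, noise_fraction=0.2)
changed = sum(1 for a, b in zip(data, noisy) if a[1] != b[1])
print(changed)  # → 20
```

A model trained on the noisy copy can no longer drive its training loss to zero by memorizing labels, which is one simple way to discourage the overconfidence the patent targets.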

“Overfitting may be described as the neural network becoming overly confident in view of a particular set of training data,” Google said in its filing. “When a neural network is overfitted, it may begin to make poor generalizations with respect to items that are not in the training data.”