Nvidia Wants to Balance Workloads Between GPUs
The system Nvidia wants to patent would spread out and balance GPUs’ capacity to help avoid problems as demand ramps up in AI data centers.

Blowing a household outlet is pretty bad news. Blowing out a chip or server rack can be a huge problem.
That’s why Nvidia is thinking about spreading out and balancing GPUs’ capacity. As demand on AI data centers continues to ramp up, avoiding such failures becomes increasingly important.
The company wants to patent “apparatuses, systems and techniques to power balance multiple chips” and ensure energy is distributed evenly across the board.
In the proposed system, several processors all perform similar work, but each chip draws a different amount of power when running: Some need more power, while others are more power-efficient.
When all of the chips are running at peak performance, the system kicks in to ensure that their total power draw won’t exceed a specific limit, the “system power threshold.”
It’s all designed to keep things cool, so the system doesn’t draw too much power or overheat.
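The patent filing doesn’t spell out the balancing algorithm, but the idea of capping combined draw under a system power threshold can be illustrated with a simple proportional-scaling sketch. This is a hypothetical example, not Nvidia’s actual method; the function name, inputs, and scaling strategy are all assumptions for illustration.

```python
def balance_power(requested_watts, system_threshold):
    """Scale per-chip power budgets so their sum stays within a system-wide cap.

    Hypothetical sketch: if the chips' combined requested power exceeds the
    system power threshold, each chip's budget is scaled down proportionally.
    Real schemes could instead prioritize certain chips or throttle clocks.
    """
    total = sum(requested_watts)
    if total <= system_threshold:
        # Under the cap: every chip gets what it asked for.
        return list(requested_watts)
    # Over the cap: shrink every budget by the same ratio.
    scale = system_threshold / total
    return [w * scale for w in requested_watts]


# Three chips request 1,000 W combined against an 800 W threshold,
# so each budget is scaled by 0.8.
budgets = balance_power([300, 250, 450], system_threshold=800)
print(budgets)  # [240.0, 200.0, 360.0]
```

In practice, a scaled-down power budget would translate into lower clock speeds or voltage for that chip, which is how the total draw is kept from tripping the limit.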
Given Nvidia’s massive footprint in the chip industry, it’s no surprise the company is trying to mitigate shutdowns and surges. Nvidia has already sought to patent a system for improving energy usage at data centers, and continues to pursue patents (covering areas from robotics to self-driving) that could bolster sales of its chips.