
Is Meta Chasing a Superintelligence Pipe Dream?

The risks may be greater than the payoff.



The path to superintelligence is unclear. But Meta is chasing it anyway. 

The tech giant has jumped headfirst into building a superintelligence lab, seeking to create an AI system that can far outperform human intelligence in all domains. CEO Mark Zuckerberg has attempted to court hundreds of talented engineers and researchers for the lab with eye-popping pay packages, according to The Wall Street Journal.

With Meta facing powerful competition in the red-hot AI industry, the company’s commitment to superintelligence may be a bid to play catch-up and a “case of one-upmanship,” said Bob Rogers, chief product and technology officer of Oii.ai and co-founder of BeeKeeper AI. “They probably have a fear that someone’s going to end up with the ultimate model that wipes out everybody else, and they’ll end up being beholden to one organization,” he said.

But without a precise vision, chasing superintelligence may be a pipe dream, he said. “Going after a vague, undefined goal is not usually the path to success,” said Rogers. “To immediately jump from where we are now – which is (large language models) being really cool language factories – and skipping (artificial general intelligence) to go straight to superintelligence is bold.” 

Superintelligence, not ‘Superwisdom’ 

As it stands, superintelligence is only theoretical. There are a number of hurdles in the way of creating an all-knowing AI model:

  • Researchers haven’t figured out how to make AI stop hallucinating, a problem that grows as models take on harder reasoning tasks. The harder the task, the more likely a model is to make a mistake, said Rogers. For an AI model to be considered superintelligent, “It’s got to be really good at not hallucinating,” he said. 
  • In addition to the technical obstacles, creating a system that’s that intelligent is incredibly risky, said Rogers, and could be riskier depending on who is in charge of it. The larger models get, the more unpredictable they become, he said. Plus, even if developers create kill switches, the models may be powerful enough to work around them. 

Beyond the barriers and risks, the pot of gold at the end of the superintelligence rainbow may not be worth it, said Rogers. Enterprises are already struggling to manage today’s large language models, and often find more success implementing small, niche models and agents, something Meta’s Llama family of models happens to be well-suited for, he noted. 

“I don’t think there’s a huge amount of merit in having an agent also happen to be the smartest being that has ever existed,” Rogers said. “It’s not clear to me that superintelligence solves a problem that we need to solve. They’re not building superwisdom.”
