
Apple Still Won’t Ride the AI Hype Train

At its annual Worldwide Developers Conference on Monday, Apple announced a slew of incremental updates to its AI offerings.

Photo via Andrej Sokolow/dpa/picture-alliance/Newscom


For a Big Tech firm, Apple is suddenly thinking small. At least when it comes to artificial intelligence.

On Monday, the consumer hardware giant kicked off its annual Worldwide Developers Conference with what felt like typical reveals for the company, including updates to its iOS, iPadOS and macOS platforms. And, in what’s beginning to feel similarly familiar for the often cutting-edge tech firm, Apple once again refused to go all-in on the AI hype train, announcing only incremental updates to its current offerings and highlighting its “local is better” approach to AI. Which raises the question: Just how skeptical is Apple of the AI revolution?

Within Reason

The biggest AI announcement on Monday was likely the opening of the company’s Foundation Models framework to outside developers, which lets apps tap into the AI models that run directly on Apple’s devices. Running AI on-device typically means using models that are less capable and trained on smaller datasets than those that draw on the massive power of cloud computing farms. Apple touts the local models as more efficient and more secure than bigger ones, while letting users reach AI-powered apps and tools even without an internet connection. For developers, it’s a little like getting access to a motorcycle with a speed regulator. “We don’t really know what the quality is going to be,” Roman Khaves, CEO of AI-powered dating advice app RIZZ, told The Wall Street Journal. “The expectation is not that high.”
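For developers who want to kick the tires, Apple pitches the on-device call as a few lines of Swift. The sketch below is purely illustrative and assumes the API surface previewed at WWDC (a LanguageModelSession object with a respond(to:) method); treat the names as placeholders and check Apple’s developer documentation for the shipping interface.

```swift
import FoundationModels

// Illustrative sketch: asking Apple's on-device model for a suggestion.
// Assumes the previewed FoundationModels API (LanguageModelSession, respond(to:));
// the final interface may differ.
func suggestOpener(for bio: String) async throws -> String {
    let session = LanguageModelSession()          // runs locally, no cloud call
    let response = try await session.respond(
        to: "Write a short, friendly opening message for someone whose bio reads: \(bio)"
    )
    return response.content                       // plain-text model output
}
```

Because the model lives on the device, a call like this works with no network connection, which is exactly the trade-off Apple is selling: smaller and less capable, but private and available offline.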

The skepticism may be fitting — it seems to be almost exactly how Apple feels about the potential of AI in general. At least, that’s what a team of researchers indicated in a white paper published on the company’s website last week, titled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity”:

  • In the report, researchers outlined how they tested certain leading Large Reasoning Models (LRMs, supposedly a step beyond Large Language Models) made by OpenAI, Anthropic (maker of Claude), and DeepSeek against classic puzzles and tests (such as the Tower of Hanoi game, which early AI systems solved as far back as 1957).
  • In sum, they found that the LRMs “still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero beyond certain complexities across different environments.” To laypeople: even the AI models that are supposed to be good at thinking are still mostly just good at predictive text, a la chatbots. Consider it a hedge against the imminent arrival of super-genius, problem-solving Artificial General Intelligence (a quick sketch of how fast those complexities escalate follows below).
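To see why “certain complexities” arrive so quickly, consider the Tower of Hanoi itself: the minimal solution for n disks takes 2^n - 1 moves, so the puzzle balloons exponentially as disks are added. The Swift sketch below (a quick illustration, not the paper’s actual test harness) makes that scaling concrete:

```swift
// Classic recursive Tower of Hanoi: move n disks from one peg to another
// using a spare peg. The minimal solution always takes 2^n - 1 moves.
func hanoi(_ n: Int, from: String, to: String, via: String, moves: inout [String]) {
    guard n > 0 else { return }
    hanoi(n - 1, from: from, to: via, via: to, moves: &moves)   // shift the smaller stack aside
    moves.append("move disk \(n): \(from) -> \(to)")            // move the largest disk
    hanoi(n - 1, from: via, to: to, via: from, moves: &moves)   // restack on top of it
}

for n in [3, 7, 10, 15] {
    var moves: [String] = []
    hanoi(n, from: "A", to: "C", via: "B", moves: &moves)
    print("\(n) disks: \(moves.count) moves")   // 7, 127, 1023, 32767
}
```

Ten disks already require 1,023 moves and 15 require 32,767, the kind of escalation the researchers found the reasoning models couldn’t keep up with.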

All In: Meta, on the other hand, keeps betting big on AI. On Sunday, Bloomberg reported that Mark Zuckerberg’s empire is in talks to invest $10 billion or more in Scale AI, a data-labeling startup that helps clients accelerate the development of AI programs. The company’s client list includes General Motors and Uber, and it has increasingly worked as a defense contractor for the US government.
