Why Open Source AI May Be a ‘Marginal Player’
It may be better to think small when considering open source.

How useful is open source AI?
As Chinese companies like DeepSeek, Baidu and Alibaba open their models to all, other firms have made their own open source plays in recent months. Elon Musk-owned xAI made the base model for Grok open source in March. OpenAI announced, then delayed, the release of its open weights model in June, with CEO Sam Altman instead promising something “unexpected and quite amazing.” And Meta’s Llama family of models has been open source from the jump.
While the concept of open source AI may seem rosy and altruistic, enterprises may struggle to get real value out of the technology, said Aaron Bockelie, senior principal consultant and AI solutions lead at Cprime.
Without the computing power necessary to run and scale open source models, the average enterprise is not likely to reap as many benefits as it might expect, Bockelie said. The “open source model is only one piece of the puzzle,” he said, requiring a proper framework and infrastructure to implement.
“The more you get into it, the more you realize that you need a full engineering team and development team to be able to really leverage that investment, which turns you towards looking at commercial offerings instead because they’re all just turnkey,” said Bockelie.
Open source AI does have its benefits, said Bockelie, including democratized access that opens the door to faster and more collaborative innovation. But those benefits come with risks:
- It’s harder to vet the data security and ethics of these models because their open nature creates a lack of traceability, he said. Additionally, closed commercial models are often better funded and “have way deeper pockets for being able to produce the training data set.”
- “Who’s validating the safety of the models? Who’s validating alignment, who’s understanding what its capabilities are, what’s the training set?” Bockelie said. “In some cases, you’re kind of just going on a guess or a hedge that an open model was based on this or was based on that.”
As it stands, these difficulties may leave open source as just a “marginal player” in the broader AI space, he said. “You don’t have access to the type of scale of compute that’s necessary to carry the big commercial models.”
But that doesn’t mean open source is a complete loss, he said. The best way for enterprises to implement open source is to think smaller: using open models with fewer parameters that can run locally on a device and serve specific purposes.
“Local models are going to end up being smaller by their nature, and therefore way more accessible to average people,” said Bockelie.
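To make the “think smaller” approach concrete, here is a minimal sketch of running a compact open-weights model locally, assuming the Hugging Face transformers library is installed with a PyTorch backend. The model name is illustrative only; neither it nor this code comes from Bockelie, and any similarly small open model could be swapped in.

```python
# Minimal sketch: running a small open-weights model locally for a narrow task.
# Assumes Hugging Face transformers with a PyTorch backend; the model name is
# an illustrative example, not one named in the article.
from transformers import pipeline

# A sub-1B-parameter instruction-tuned model that can run on laptop-class
# hardware, CPU included -- the kind of "smaller" open model discussed above.
generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",
)

# Use it for a specific, bounded purpose rather than as a general assistant.
prompt = "Summarize the main risks of adopting unvetted AI models in one sentence."
output = generator(prompt, max_new_tokens=60)
print(output[0]["generated_text"])
```

Running entirely on local hardware keeps data in-house and sidesteps the scale-of-compute problem Bockelie describes, at the cost of the broader capabilities of the big commercial models.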