IBM thinks Groq’s chips could make all the difference for enterprise AI

  • IBM's latest AI partnership is with AI infrastructure and chip provider Groq
  • Groq's Language Processing Units (LPUs) have been built specifically for inferencing workloads
  • The combination of IBM's enterprise know-how with Groq's speedy LPUs could help take more AI pilots to production

IBM is big on AI partnerships. Really big. From models to infrastructure, the company has been teaming up with everyone from Oracle and AMD to Anthropic to bring enterprises the tools they need. But its latest collaboration – this one with rising AI star Groq – could be uniquely powerful.

Why? Because Groq is bringing its Language Processing Units (LPUs) to the table.

Whereas GPUs kind of stumbled into their role as the AI processor of choice, Groq’s LPUs were designed specifically with large language model inferencing in mind. The company’s very nerdy explanation of how it all works is here, but for IBM and its customers, the two results that really matter are speed and cost.

“From what we are seeing, Groq can deliver in many cases five times faster inferencing at a lower cost with consistent performance and results,” Nick Holda, IBM’s VP of AI Technology Partnerships, told Fierce. “What that means is enterprises that have struggled to take experiments to production because they face latency issues or the cost was astronomical, they can now do that.”

Groq who?

Founded in 2016, Groq (no, not that Grok) debuted its first LPU in 2019. But it’s not just designing chips. Groq has also been working with data center co-location providers to build out its own cloud infrastructure.

It already has a presence in the U.S., Canada, Saudi Arabia and Finland, having reportedly added 12 data center locations thus far in 2025. The Wall Street Journal reported the company is planning to rapidly expand its data center footprint to Asia.

To fuel this growth, Groq has also been raising gobs of money from investment firms. Most recently, it bagged $750 million from the likes of Disruptive, BlackRock, Samsung and Cisco.

In short, it’s quickly becoming one of the big names to know in the AI space.

IBM collab

According to IBM’s Holda, combining Groq’s inferencing muscle with IBM’s enterprise know-how, agents, security and orchestration could enable a “fundamentally different outcome” for businesses whose AI rollouts have been hung up on price and performance issues.

Think of it like the transition from DVDs to streaming, he explained. When that shift first happened, buffering was a huge headache. If you wanted to watch a video, you’d often just sit and wait for it to download first. “Now we don’t even think about the fact that streaming has gotten good enough that I can just click and then it goes on and it immediately starts streaming,” he said.

That is the kind of change IBM is hoping its collaboration with Groq can deliver.

If – and it’s a big if – they can deliver on this goal, it seems only natural that demand for Groq’s services would skyrocket. But does Groq have enough capacity to handle such a wave?

Holda pointed to Groq’s publicly stated expansion plans and told Fierce IBM is working with the company to align demand so that Groq can scale up in locations where the pair expects the most customer adoption in the near term.