- OCP and partners, including Google, Nvidia and Oracle, are pushing “fungible” AI data centers with open, modular standards for power, cooling, networking, telemetry and security.
- Executives from Google, Nvidia and Oracle advocated open ecosystems to scale AI while tackling power, density and TCO.
- Fungible architectures can support gigawatt-scale builds as AI demand explodes.
2025 OCP GLOBAL SUMMIT, SAN JOSE, CA – The rain was coming down hard Monday night, but that didn't dampen enthusiasm for open source data center design.
"We are the organization that takes hyperscaler innovation to all," said George Tchaparian, CEO of the Open Compute Project, kicking off a series of keynotes Monday evening, featuring a rogue's gallery of leaders from top data center operators and suppliers, including Google, Nvidia and Oracle.
      
Some 11,000 slightly soggy but hardy attendees filled the conference, up from 7,000 last year and blowing past the 9,000 expected to show up, Tchaparian said.

Partha Ranganathan, Google Cloud VP and Technical Fellow for AI and Infrastructure, was a font of pithy observations.
"We often joke in the valley that last year was a phenomenal decade for AI," he said.
      
He added: "If AI researchers are space explorers discovering new worlds, we are the ones building the rockets!" (That slogan was on one of his slides, with the exclamation mark to add oomph.)
Demand for Google Cloud resources is taking off like a rocket, according to statistics Ranganathan shared: a 15x increase in AI accelerator usage over the past 24 months; a 37x increase in data usage for Hyperdisk ML, Google's block storage offering for AI/ML workloads, since its July 2024 launch; and a 50x increase in tokens processed per month, now a quadrillion monthly tokens across all Gemini surfaces.
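To put the quadrillion-token figure in perspective, here is a quick back-of-the-envelope calculation. The monthly total and the 50x multiplier are from the keynote; the 30-day month and the derived rates are our own arithmetic:

```python
# Back-of-the-envelope arithmetic on the token figures above. The
# quadrillion-tokens-per-month and 50x numbers are from the keynote;
# the month length and derived rates are our own assumptions.

TOKENS_PER_MONTH = 1e15             # ~1 quadrillion tokens/month (cited)
GROWTH = 50                         # 50x increase (cited)
SECONDS_PER_MONTH = 30 * 24 * 3600  # assumed 30-day month

implied_baseline = TOKENS_PER_MONTH / GROWTH              # ~20 trillion/month
tokens_per_second = TOKENS_PER_MONTH / SECONDS_PER_MONTH  # ~386 million/s

print(f"Implied volume 50x ago: {implied_baseline:.1e} tokens/month")
print(f"Sustained rate now:     {tokens_per_second:,.0f} tokens/second")
```

In other words, Gemini is chewing through on the order of 400 million tokens every second, around the clock.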
The fungible AI data center
Google, OCP, hyperscalers and industry partners are collaborating to standardize architecture for "fungible" AI data centers. "Fungible" means interchangeable or replaceable.
Money is fungible because one dollar bill can be exchanged for another without any difference in value. (The opposite is non-fungible — you may remember the "non-fungible tokens" fad of 2020-22 — or maybe you've successfully been able to forget it, in which case, congratulations.)
Fungible data centers use modular, interoperable and interchangeable components for compute, power, cooling, networking and sustainability. For power, Project Mount Diablo comprises designs for 400V disaggregated power delivery, with solid-state transformers and microgrid standards that let data centers act as both consumers and suppliers of grid power. For cooling, Google led development of Project Deschutes, its coolant distribution unit (CDU) design for liquid cooling.
Fungible data centers also require standardized layouts and telemetry across server halls, third-party colocation facilities and hyperscaler environments, with security anchored by the OpenTitan and Caliptra roots of trust.
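Taken together, the pieces form a layered stack. The sketch below is purely illustrative shorthand for that stack; the project names come from the keynote, but the structure and key names are ours, not an OCP schema:

```python
# A purely illustrative sketch of the "fungible" stack described above.
# Project names come from the keynote; the structure and key names are
# our own shorthand, not an OCP schema.
from pprint import pprint

FUNGIBLE_STACK = {
    "power": {
        "project": "Mount Diablo",
        "design": "400V disaggregated power delivery",
        "components": ["solid-state transformers", "microgrid standards"],
        "goal": "data centers as both consumers and suppliers of grid power",
    },
    "cooling": {
        "project": "Deschutes",  # Google-led CDU design for liquid cooling
    },
    "security": {
        "roots_of_trust": ["OpenTitan", "Caliptra"],
    },
    "interoperability": [
        "standardized layouts",
        "standardized telemetry",
        "server halls, third-party colocation and hyperscaler environments",
    ],
}

pprint(FUNGIBLE_STACK)
```

The point of fungibility is that any layer can be swapped for a standards-compliant alternative without redesigning the rest.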
Ranganathan concluded with a call for "AI for AI," using artificial intelligence in systems design to accelerate development, exemplified by Google's own AlphaChip.
"We're basically in the gigascale era of AI," Ian Buck, Nvidia VP of HPC and Hyperscale, said. "Every day you hear about more and more gigawatts of AI being built into data centers that are increasingly dense and incredibly powerful. All of this requires a level of technology, complexity, innovation and invention to build what are some of the engineering marvels of the world." These marvels include compute, networking, mechanical, power and cooling.
"Data centers not only can achieve amazing things in terms of AI, but they're appreciating assets. They actually get smarter, more intelligent, and increasingly add more value and lower cost," Buck said.
That was an intriguing statement, because it runs counter to recent warnings by economists and investors that AI is an economic bubble, like the dotcom and fiber bubbles of the 1990s and 2000s and the 19th-century railroad boom. Investor and author Paul Kedrosky argues that chips are the primary asset in AI data centers and — while the fiber boom resulted in bandwidth and the railroad boom resulted in railroads, both of which remained useful for decades — chips will depreciate in a few short years. Buck, apparently, disagrees, arguing that data centers get more valuable over time.
'A complete redesign of everything'
"We are in an age where we are completely redefining cloud infrastructure," said Pradeep Vincent, Oracle Cloud Infrastructure SVP and chief technical architect, who said he has been working on cloud infrastructure for about 20 years. "I've never seen such a rapid change in such a short amount of time," he said. AI, of course, has been driving the change, generating new requirements for power, cooling, design, networking, compute density and software. "It is a complete redesign of everything."
By way of comparison: Total power consumption for the city of San Jose is a bit less than one gigawatt, Vincent said. (San Jose is the 12th most populous city in the U.S., with a city population just shy of 1 million and nearly 2 million in the metropolitan area.) Many data center campuses exceed that power consumption.
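To make the comparison concrete, here is a rough sense of what a one-gigawatt campus buys. The per-rack power and PUE figures below are our own assumptions (roughly what a dense liquid-cooled AI rack draws today), not numbers from the keynote:

```python
# Rough sense of scale for a one-gigawatt campus. The per-rack power
# and PUE figures are illustrative assumptions, not keynote numbers.

CAMPUS_POWER_W = 1e9   # 1 GW, about San Jose's total consumption
RACK_POWER_W = 120e3   # assumed ~120 kW per dense AI rack
PUE = 1.2              # assumed power usage effectiveness

it_power_w = CAMPUS_POWER_W / PUE   # power left for IT after cooling overhead
racks = it_power_w / RACK_POWER_W
print(f"A 1 GW campus supports roughly {racks:,.0f} such racks")  # ~6,900
```

Under those assumptions, a single campus runs on the order of 7,000 AI racks, which is roughly the power draw of a million-person city devoted entirely to compute.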
Enterprises need rapid access to GPU clusters to drive market leadership, Vincent said, and open standards enable building at high scale and speed. Open standards let a data center operator source from multiple vendors and manufacturing pipelines, with far more options than proprietary designs allow. "Open standards are a huge enabler for the scale and speed at which we need to build AI infrastructure for our customers," Vincent said.
In other words, open standards enable fungible data centers.
Vincent called for standards development in cooling, rack design and power storage systems that can accommodate the abrupt load oscillations typical of AI training, along with expanded networking standards.
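Those load oscillations arise because synchronized training steps swing a campus between compute-heavy and communication-heavy phases. The toy model below shows the idea behind the storage standards Vincent called for: a battery or capacitor bank absorbs the swing so the grid sees a flat draw. Every number here is an illustrative assumption, not a spec:

```python
# Toy model of storage smoothing the "abrupt load oscillations" Vincent
# described. Synchronized AI training steps swing the load between a
# compute-heavy and a communication-heavy phase; a battery buffers the
# gap so the grid sees a flat draw. All numbers are assumptions.

HIGH_MW, LOW_MW = 100.0, 60.0  # assumed per-phase campus load
PHASE_S = 1.0                  # assumed phase duration, seconds

def smooth(load_mw, target_mw, battery_mwh, dt_s=PHASE_S):
    """Battery absorbs or supplies the gap so the grid sees target_mw."""
    delta_mwh = (load_mw - target_mw) * dt_s / 3600.0
    return target_mw, battery_mwh - delta_mwh  # discharge if load > target

battery_mwh = 5.0                   # assumed storage capacity
target_mw = (HIGH_MW + LOW_MW) / 2  # flat draw presented to the grid
for step in range(6):
    load = HIGH_MW if step % 2 == 0 else LOW_MW
    seen, battery_mwh = smooth(load, target_mw, battery_mwh)
    print(f"step {step}: load {load:5.1f} MW -> grid {seen:5.1f} MW, "
          f"battery {battery_mwh:.4f} MWh")
```

Without that buffer, a campus oscillating by tens of megawatts every second would hammer the local grid, which is exactly why storage made Vincent's standards wish list.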