Red Hat's CTO keeps an 'open' mind on AI

  • Red Hat is sticking to its open ethos despite operating in a world full of proprietary AI tech
  • CTO Chris Wright acknowledged that an open approach comes with challenges, but said AI can help ease the burden on contributors
  • AI also means that the cloud vs. on-prem question now has a workload-specific answer, he said

In a world where it feels like AI is changing everything, there’s one thing the technology hasn’t shifted: Red Hat’s belief in an open approach. If anything, AI seems to have strengthened its conviction.

“Red Hat has always and will continue to ‘default to open,’” Red Hat CTO and SVP of Global Engineering Chris Wright told Fierce.

Though a commitment to open source is exactly what you’d expect to hear from Red Hat, this doubling down is notable in an era where the largest companies and early leaders like AWS and Google have leaned into proprietary products to cash in on AI. Even Meta, which had initially open sourced its Llama family of models, is now apparently pivoting away from an open approach to focus on proprietary tech.

For his part, Wright thinks a proprietary approach is the wrong move in the long term. And, it seems, he’s not alone.

“Proprietary AI models may have taken an early lead, but open ecosystems are taking over, especially in the software that supports these models,” he said.

Indeed, AI giant Anthropic open sourced its Model Context Protocol (MCP) last year and just last week donated it to The Linux Foundation to make it the official open standard for agentic AI. Meanwhile, OpenAI donated AGENTS.md, an open format for providing agents with task instructions and context, and Block donated goose, an on-machine AI agent.

This trio of contributions became the basis for the newly formed Agentic AI Foundation.

AI balancing act

Wright argued that the collaboration and transparency that come with an open approach are the best way to address “concerns constructively and ensure the responsible use of AI technologies.” But he acknowledged that an open approach does come with its own set of challenges.

“As a project grows, the expectations on project leaders, from faster releases to quicker security updates and secure supply chains, can lead to maintainer burnout. This can be magnified by AI generated code contributions of varying quality and high volume,” he explained.

Wright said Red Hat is focused on supporting projects that use AI not only for code creation but also for review, test creation and documentation. Put another way, it wants to use AI, “grounded in solid engineering principles, as an accelerator and force multiplier for maintainers, rather than ignoring these concerns or treating AI as a replacement for human work.”

Data dominance

For years, Wright said, Red Hat has focused on infrastructure enablement. Now, though, its attention is turning toward the data layer. This shift is also apparent in recent acquisition moves made by Red Hat’s parent company IBM, most notably its purchase of open source data platform provider Confluent earlier this month.

“It’s not just about how to run the software; it’s about feeding, managing and securing the data that makes the AI intelligent,” he explained. “We provide a complete AI platform that can orchestrate the entire data flow, allowing you to align smaller and optimized AI models with your data, wherever it lives.”

For the record, the question of where data lives and where workloads run is thornier than you might think.

After years of a blanket push to migrate to the cloud followed by a brief wave of repatriation efforts, “AI has made the decision of where to run a workload a highly strategic, workload-specific calculation,” Wright said.

“AI hasn't picked a side; it has simply amplified the constraints and advantages of each environment, cementing the hybrid cloud as the ultimate architectural default,” he added.

That’s very good news for Red Hat, which has made hybrid cloud enablement its bread and butter.