
Roche: Bring cloud and edge together, but keep it simple
Building a straightforward infrastructure to connect the edge to the cloud? That’s not exactly rocket science. But how do you do it at a global biotech company, where software decisions can literally impact lives? At Edgecase 2025, Niyazi Erdoğan and Dan Acristinii from Roche shared the journey they’ve been on over the past few years.
Niyazi Erdoğan is Experience Lead Edge Foundations at Roche, and Dan Acristinii is Product Manager Edge Foundations. Niyazi joined the company back in 2018, around the time of Kubernetes 1.11. “It always felt like a love-hate relationship,” he joked. Dan came on board a bit later, during the Kubernetes 1.13 era. Roche itself has been around for 129 years and today runs two divisions: Roche Diagnostics and Roche Pharma. Its IT strategy closely mirrors the shift in healthcare overall—from one-size-fits-all treatments, to targeted therapies, and now toward personalized, individual care.

A beautiful mess
The two showed slides of a typical lab environment: a jumble of hardware and software stacked together, patched with connectors, and cables dangling everywhere. For them, it was the perfect metaphor for the state of Roche’s own IT. Niyazi called it “a beautiful mess of different stacks and solutions.” Globally, every region and country worked differently. Dozens of platforms and systems, and none of them spoke to each other. “That level of fragmentation just didn’t make sense,” Dan recalled.
Knock knock
Then came the knock-knock moment. Roche had to answer a tough question: how do you build a reliable, scalable infrastructure to tame the chaos at the edge? The guiding principle was clear: keep it simple enough for end users, yet powerful enough for developers. Whatever they built had to be easy to install and easy to maintain.

Fleet management tames the chaos
Their idea was to position an edge solution between lab instruments and the firewall, managed by Fleet Management. Niyazi and Dan explained that this rollout came in phases. First, they assembled a toolkit. With hundreds of thousands of clusters, GitOps was a must—but they had to adapt it. “We had to build something ourselves,” Dan said. That led Roche to Gitless GitOps and OCI packages.
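The talk stayed at the architecture level, but one way to picture “Gitless GitOps” is a cluster that reconciles its workloads straight from an OCI artifact in a registry instead of from a Git repository. Below is a minimal sketch using Flux’s OCIRepository source; the registry URL, names, and intervals are placeholders, not Roche’s actual setup:

```yaml
# Sketch only: pull manifests from an OCI artifact instead of a Git repo.
# Registry URL, names, and intervals are illustrative placeholders.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: edge-workloads
  namespace: flux-system
spec:
  interval: 10m
  url: oci://registry.example.com/fleet/edge-workloads  # hypothetical registry
  ref:
    tag: stable
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: edge-workloads
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: OCIRepository
    name: edge-workloads
  path: ./
  prune: true
```

The appeal for a very large fleet is that every edge cluster only needs pull access to a registry, rather than credentials and connectivity to a central Git server.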
Next came connectivity. “The bottleneck was this,” Niyazi explained. “Inside Roche, we had thousands of IP addresses. How do you move from that chaos to a single IP range?” Their answer: Cilium. Working with partners, they built a solution that bridges the edge to the cloud.
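Niyazi and Dan didn’t detail which Cilium features carry that traffic, but an egress gateway policy is a simple way to illustrate collapsing many workload IPs behind one predictable address. The labels, CIDRs, and addresses below are made up:

```yaml
# Sketch only: route selected edge traffic through one gateway node,
# so the cloud side sees a single, predictable source IP.
# Labels, CIDRs, and addresses are illustrative placeholders.
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: edge-to-cloud
spec:
  selectors:
    - podSelector:
        matchLabels:
          app: lab-connector                     # hypothetical workload label
  destinationCIDRs:
    - 10.100.0.0/16                              # hypothetical cloud range
  egressGateway:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/egress: "true"
    egressIP: 192.0.2.10                         # the one address the cloud side sees
```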
From traditional to Kubernetes-native OS
Roche also explored moving toward a more Kubernetes-native OS. “The OS we had was fine,” Niyazi said, “but we weren’t using half of what it could do.” They found a good match with Talos, but they couldn’t just drop in a new OS across the company. Roche needed extra features, especially around security, so they worked directly with the Talos team to co-develop them. Step by step, Roche shifted from a traditional OS to a lightweight OS, then to an API-driven OS, and finally to an API-driven, Kubernetes-native OS.
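To make “API-driven” concrete: on Talos there is no SSH and no shell, only a declarative machine configuration applied over an API, for example with `talosctl apply-config`. The fragment below is a hypothetical sketch; the field names follow the Talos machine config schema, but the values are placeholders:

```yaml
# Sketch only: a fragment of a Talos machine configuration.
# There is no shell on the node; changes go through the API,
# e.g. talosctl apply-config --nodes <node-ip> --file machineconfig.yaml
machine:
  install:
    disk: /dev/sda               # target disk for the immutable OS image
  systemDiskEncryption:
    ephemeral:
      provider: luks2            # encrypt the ephemeral partition
      keys:
        - nodeID: {}             # key derived from the node's identity
          slot: 0
cluster:
  controlPlane:
    endpoint: https://edge-cluster.example.com:6443   # hypothetical endpoint
```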
Key lessons: keep it simple
So what did Roche learn along the way? Bundle everything into Fleet Management. Roll out globally in small steps. And distribute solutions via Roche’s own platform, navify Algorithm Suite—sometimes fully cloud-based, sometimes hybrid, and sometimes through APIs when nothing else works.
But the biggest lesson, Niyazi and Dan stressed, is simple: don’t reinvent the wheel, adapt what exists. And above all: keep it simple. “The most important user is not the developer,” they emphasized. “Always take a step back and look at things from the user’s perspective. They don’t care about the fancy systems or components you’re using—they just want a reliable, easy-to-use platform.”
Their closing thought? “The stack is an evolution, not a dogma. So keep evolving!”