Switch and SUSE Deploy Digital Twin Platform for AI Factory Data Centers Using NVIDIA Infrastructure
Switch, a technology infrastructure company, and SUSE, an independent open-source software company, have announced new milestones in a joint initiative to build digital twin environments for large-scale data centers, integrating SUSE's open-source infrastructure software with NVIDIA's Omniverse simulation tools and DGX computing systems. The announcement was made at SUSECON 2026, SUSE's global flagship conference, held in Prague on April 22, 2026.
What the Partnership Covers
The collaboration centers on Switch's Digital Twin initiative, which is designed to create real-time simulation environments of the company's data center operations. Switch, which describes itself as the premier provider of AI, cloud, and enterprise data centers, is using SUSE AI, built on SUSE Rancher Prime and SUSE Linux Enterprise Server, alongside NVIDIA Omniverse libraries to deliver what the companies call highly accurate digital twins of Switch's facilities.
The platform is intended to allow Switch to simulate power usage, thermal dynamics, and infrastructure performance before changes are made in the physical world.
To accelerate the development of AI Factory digital twins specifically, Switch adopted the NVIDIA Omniverse DSX Blueprint. The arrangement also involves the NVIDIA DGX Platform, which serves as the shared hardware layer on which Omniverse libraries enable physically accurate simulation alongside AI and machine learning processing.
Converging Workloads on Shared Infrastructure
A central technical claim of the partnership is the elimination of siloed infrastructure that has traditionally separated high-end 3D graphics workloads from complex AI programs.
Under the architecture described by the two companies, language models, simulation, and real-time rendering can run on the same shared infrastructure simultaneously rather than across disconnected systems.
Zia Syed, Chief Technology Officer at Switch, described the approach as defining the operational foundation for a new era of computing. "A new class of enterprise applications now requires language models, simulation, and rendering to converge within a single system rather than across disconnected silos," Syed said.
"By integrating SUSE AI with NVIDIA DGX systems and Omniverse platforms, we enable these workloads to run on shared infrastructure, maximizing utilization while simplifying exascale operations."
Switch has framed this effort under its EVO AI Factory software systems and Living Data Center EVO platform, which it describes as the operating plane for unifying AI, simulation, and real-time operations.
Security and Reliability Provisions
The companies say the architecture is designed to function in air-gapped environments, meaning the system can remain secure without being connected to the open internet.
This is positioned as a feature for enterprise customers with strict security requirements. Reliability is addressed through automated software update and management processes, which the companies say reduce the risk of human error.
The platform is also described as providing the necessary integration for large language models within a secure and manageable environment. Switch is additionally using the platform to run its own internal AI models, which the company says helps automate routine tasks and improve how it serves its clients.
SUSE's Role in the Stack
SUSE AI is described as a fully governed, GPU-optimized enterprise AI platform, serving as the execution engine for deploying and orchestrating mission-critical AI applications across any infrastructure runtime.
The company positions its open source foundation as giving customers the flexibility to integrate third-party technologies without being locked into proprietary systems. Rhys Oxenham, General Manager of AI at SUSE, said the goal is to move customers from experimentation to execution.
"What we're enabling with Switch is the shift from experimentation to execution, where AI, simulation, and real-time rendering run side-by-side on the same infrastructure," Oxenham said.
"By providing a resilient, open source foundation, SUSE gives leaders the flexibility to integrate best-in-class technologies, like NVIDIA AI Enterprise and accelerated computing, on their own terms."
Oxenham characterized SUSE's contribution as providing what he called a digital floor that ensures large AI workloads remain secure, manageable, and continuously available.
Switch's Infrastructure Scale
Switch operates what it describes as massive AI Factories and high-performance data centers. The company markets itself as powering what it calls the world's most sophisticated AI pioneers.
The digital twin initiative is part of Switch's broader effort to manage and optimize the complexity of operating infrastructure at that scale. Digital twins in the data center context involve continuously ingesting operational data to model performance, predict outcomes, and optimize physical infrastructure before changes are deployed.
For an operator of Switch's scale, the ability to run such simulations could affect decisions around power distribution, cooling systems, and hardware configurations without requiring physical intervention.
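To make the concept concrete, the loop a data center digital twin follows can be sketched in a few lines: ingest live telemetry, keep a model of the facility in sync, and test a proposed change in simulation before touching hardware. The sketch below is purely illustrative, with an assumed toy thermal model and made-up coefficients; it is not Switch's, SUSE's, or NVIDIA's actual software.

```python
# Illustrative digital-twin feedback loop (hypothetical toy model, not
# Switch's actual system): ingest telemetry, update a simple thermal
# model, and evaluate a proposed change in simulation before deploying it.

from dataclasses import dataclass


@dataclass
class RackTelemetry:
    power_kw: float      # measured IT load on the rack
    inlet_temp_c: float  # cold-aisle inlet temperature


class ThermalTwin:
    """Toy linear model: outlet temperature rises with load over airflow."""

    def __init__(self, airflow_cmh: float):
        self.airflow_cmh = airflow_cmh  # cooling airflow, cubic metres/hour
        self.k = 2.9                    # heat-rise coefficient (assumed)

    def ingest(self, t: RackTelemetry) -> None:
        # In a real twin this would be a continuous stream of sensor data.
        self.state = t

    def predict_outlet_c(self, extra_load_kw: float = 0.0) -> float:
        # Estimate outlet temperature if extra load were added.
        load = self.state.power_kw + extra_load_kw
        return self.state.inlet_temp_c + self.k * load / (self.airflow_cmh / 1000)


twin = ThermalTwin(airflow_cmh=5000)
twin.ingest(RackTelemetry(power_kw=12.0, inlet_temp_c=22.0))

# Simulate adding 4 kW of GPU load before racking any hardware.
projected = twin.predict_outlet_c(extra_load_kw=4.0)
if projected > 35.0:
    print(f"reject change: projected outlet {projected:.1f} C")
else:
    print(f"safe to deploy: projected outlet {projected:.1f} C")
```

A production twin replaces the toy linear formula with physics-based simulation (the role Omniverse plays in the stack described above), but the decision pattern is the same: the change is evaluated against the model first, and only applied physically if the projection stays within limits.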
Announcement Timing and Venue
The partnership milestones were announced at SUSECON 2026, SUSE's annual customer and partner conference, held in Prague, Czech Republic.
The announcement highlights SUSE's positioning of its AI and infrastructure platform in the context of large-scale enterprise data center operations, with Switch serving as a flagship customer use case for the SUSE AI stack running on NVIDIA hardware.