
Global Rack Server Solutions for the NVIDIA Blackwell Platform

NVIDIA Blackwell GB200 Microsoft Azure Google Cloud Meta

In recent weeks, major cloud and AI infrastructure providers — Microsoft, Google, and Meta — have each revealed rack-level server designs built around the NVIDIA Blackwell platform. Although all three solutions use the GB200 architecture, their approaches to cooling, networking, and rack layout differ significantly. Here is an overview of these emerging Blackwell-class systems.


💻 Microsoft Azure

On October 8, Microsoft announced via X that Azure is the first cloud platform to run full NVIDIA Blackwell systems, deploying AI servers powered by the GB200. Azure integrates InfiniBand networking and a customized closed-loop liquid-cooling system, enabling support for the most advanced large-scale AI workloads.

Early photos reveal that roughly two-thirds of the right side of the rack is dedicated to cooling infrastructure, emphasizing Microsoft’s investment in high-density thermal management. More architectural details are expected at the upcoming Microsoft Ignite conference.

[Image: NVIDIA Blackwell rack server — Microsoft]


🌐 Google

Google also shared an image on X of its custom GB200 NVL rack, currently undergoing testing in its development labs. More information will be provided at the Google Cloud Application Development and Infrastructure Summit on October 30.

Google did not specify which networking technology is being used — and it may not be InfiniBand — but the design is notably compact. Unlike Microsoft’s solution, Google’s setup occupies just two rack units, suggesting a different balance between compute density, cooling needs, and lab-side evaluation workflows.

[Image: NVIDIA Blackwell rack server — Google]


📘 Meta

During the 2024 Open Compute Project (OCP) Global Summit, Meta unveiled its full-rack Blackwell solution, codenamed Catalina. Designed for modularity, scalability, and high-density AI computing, Catalina supports the NVIDIA GB200 Grace Blackwell Superchip, positioning it for next-generation training and inference demands.

Key characteristics include:

  • Support for up to 140 kW per rack
  • Fully liquid-cooled architecture
  • Modular power shelves, compute trays, and switch trays
  • Integrated ORv3 HPR (Open Rack v3 high-power rack) power, Wedge 400 fabric switches, a management switch, a battery backup unit (BBU), and a rack-management controller

Despite its impressive power capacity, Catalina’s compute module occupies only one rack unit, reflecting Meta’s emphasis on compact modular scaling.
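To put the 140 kW figure in perspective, a quick back-of-the-envelope calculation gives the average power density per rack unit. This is only a sketch: the rack height of 48 OpenU is an assumption about the ORv3 frame, not a figure from Meta's announcement, and it assumes the power budget is spread evenly across the rack.

```python
# Rough power-density arithmetic for a Catalina-class rack (sketch).
# Assumptions (not from Meta's announcement): a 48-OpenU ORv3 frame,
# with the full 140 kW budget averaged evenly over the rack.
RACK_POWER_KW = 140   # per-rack power support cited by Meta
RACK_UNITS = 48       # assumed ORv3 rack height in OpenU

kw_per_unit = RACK_POWER_KW / RACK_UNITS
print(f"~{kw_per_unit:.2f} kW per rack unit")
```

In practice the density is far from uniform: compute trays draw most of the budget, while switch and power shelves draw little, so the per-tray figure for the GB200 compute modules is considerably higher than this average.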

Front and rear views of the rack are shown below:

[Image: NVIDIA Blackwell rack server — Meta, front and rear views]
