Meta FBNIC 4x100G: Custom Network Silicon Enters Full Deployment

In 2025, the data center industry is undergoing a decisive shift toward hardware autonomy, as hyperscalers increasingly replace merchant silicon with in-house designs. Meta (formerly Facebook) has now crossed a major threshold with the FBNIC 4x100G, a custom network interface controller co-developed with Marvell and deployed at scale across its Prometheus and Yosemite v4 server platforms.

This marks Meta’s transition from optimizing at the software layer to end-to-end control of the server networking boundary.


🧩 Meta’s First Network ASIC Milestone
#

The FBNIC (Facebook Network Interface Controller) is Meta’s first fully custom network adapter ASIC, purpose-built for hyperscale workloads.

Meta FBNIC 4x100G

  • 5nm Custom Silicon: Fabricated using Marvell’s 5nm accelerated infrastructure platform, the chip is optimized for bandwidth density, power efficiency, and predictable latency.
  • Firmware Sovereignty: Meta owns the entire firmware stack, enabling:
    • Custom telemetry hooks
    • Faster root-cause analysis of network issues
    • Elimination of vendor patch wait cycles
  • Hyperscale Optimization: Unlike merchant NICs designed for broad markets, FBNIC is tightly aligned with Meta’s internal rack, switch, and software architecture.

This “deep control” approach allows Meta to tune networking behavior with the same precision it applies to CPUs and accelerators.
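
As a minimal illustration of the kind of host-side telemetry this control enables, the sketch below samples per-interface counters through the standard ethtool -S interface and reports their deltas. The interface name and the counter names are placeholders; FBNIC’s actual counter set is defined by Meta’s driver and firmware, not by this sketch.

```python
import subprocess
import time

IFACE = "eth0"  # placeholder; substitute the FBNIC-backed interface name


def read_nic_counters(iface):
    """Parse `ethtool -S <iface>` output into a {counter_name: value} dict."""
    out = subprocess.run(
        ["ethtool", "-S", iface], capture_output=True, text=True, check=True
    ).stdout
    counters = {}
    for line in out.splitlines():
        name, sep, value = line.partition(":")
        if sep:
            try:
                counters[name.strip()] = int(value.strip())
            except ValueError:
                pass  # skip non-numeric entries (e.g. the header line)
    return counters


# Sample counters twice and print the deltas -- a crude rate view that
# dedicated telemetry hooks would normally stream continuously.
before = read_nic_counters(IFACE)
time.sleep(5)
after = read_nic_counters(IFACE)
for name in sorted(after):
    delta = after[name] - before.get(name, 0)
    if delta:
        print(f"{name}: +{delta} over 5s")
```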


⚙️ Multi-Host Architecture: Efficiency at Rack Scale
#

The defining feature of FBNIC is its multi-host design, aimed at reducing rack-level complexity and total cost of ownership.

4×100G in a Single Adapter
#

  • A single FBNIC connects four independent server hosts
  • Each host receives a dedicated 100Gbps Ethernet link
  • Connectivity is delivered through a single QSFP-DD optical interface

PCIe Gen5 4× x4 Topology
#

  • The NIC presents four isolated PCIe Gen5 x4 endpoints, one per host
  • Each host gets dedicated lanes, so there is no contention for a shared uplink (a quick bandwidth check follows this list)
  • This avoids the oversubscription issues common with PCIe switch-based multi-host designs
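
A back-of-the-envelope check (my own arithmetic, not a figure from Meta or Marvell) shows why a dedicated Gen5 x4 endpoint is sufficient for a 100GbE port:

```python
# PCIe Gen5 runs at 32 GT/s per lane with 128b/130b encoding.
GEN5_GTS_PER_LANE = 32e9         # transfers per second, per lane
ENCODING_EFFICIENCY = 128 / 130  # usable payload fraction after encoding
LANES_PER_HOST = 4               # one x4 endpoint per host

payload_bps = GEN5_GTS_PER_LANE * ENCODING_EFFICIENCY * LANES_PER_HOST
print(f"Gen5 x4 payload bandwidth: {payload_bps / 1e9:.0f} Gb/s")   # ~126 Gb/s

# A 100GbE link needs 100 Gb/s of line rate, so each host has headroom
# even before PCIe transaction-layer overheads are taken into account.
print(f"Headroom over 100GbE: {payload_bps / 1e9 - 100:.0f} Gb/s")  # ~26 Gb/s
```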

Lower TCO by Design
#

By consolidating four NICs into one physical card, Meta achieves:

  • Reduced cable and optics count (roughly quantified in the sketch below)
  • Simplified top-of-rack switch layouts
  • Lower per-server power consumption
  • Improved serviceability in dense racks
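
To make the consolidation concrete, here is a rough per-rack count. The 48-host rack size is an assumption chosen for illustration; only the four-hosts-per-card ratio comes from the FBNIC design itself.

```python
HOSTS_PER_RACK = 48   # assumed rack size, for illustration only
HOSTS_PER_FBNIC = 4   # four hosts share one card and one QSFP-DD optic

discrete_nics = HOSTS_PER_RACK                   # one NIC, cable, and optic per host
fbnic_cards = HOSTS_PER_RACK // HOSTS_PER_FBNIC  # consolidated layout

print(f"Discrete NICs / cables / optics per rack: {discrete_nics}")
print(f"FBNIC cards / cables / optics per rack:   {fbnic_cards}")
print(f"Consolidation factor:                     {discrete_nics // fbnic_cards}x")
```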

🧱 Open Standards, Not Closed Silicon
#

Despite being a custom ASIC, the FBNIC fully embraces the Open Compute Project (OCP) ecosystem.

OCP NIC 3.0 Compliance
#

  • Uses the OCP NIC 3.0 Small Form Factor (SFF) design
  • Includes a specialized ejector latch for front-access hot-swapping
  • Technicians can replace NICs without opening the chassis, minimizing downtime

Open Hardware Contribution
#

  • Marvell has contributed the FBNIC board layout to OCP
  • Other hyperscalers can reuse the mechanical and electrical design
  • This reinforces OCP’s role as the industry’s de facto hyperscale hardware standard

❄️ Thermal and Reliability Engineering
#

Running four simultaneous 100G links introduces significant thermal challenges, especially in AI-heavy data centers.

Optical-Centric Cooling
#

  • A large heatsink is mounted directly over the optical cage
  • Optics are among the most heat-sensitive components in dense racks
  • Direct cooling significantly improves transceiver longevity

Always-On Networking
#

  • Unlike CPUs or accelerators, NICs cannot throttle or sleep
  • The FBNIC maintains dedicated connections to the BMCs of all four hosts (an out-of-band monitoring sketch follows this list)
  • This ensures:
    • Continuous monitoring
    • Out-of-band management
    • Persistent network availability for control-plane traffic
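
As an illustration of out-of-band monitoring through a host BMC, the sketch below walks a BMC’s standard Redfish resource tree and reports network-adapter port health. The BMC address, credentials, and exact resource layout are placeholders; they depend on the BMC’s Redfish implementation and are not something Meta has documented for FBNIC.

```python
import requests

BMC_URL = "https://10.0.0.10"   # placeholder BMC address
AUTH = ("monitor", "password")  # placeholder credentials


def get(path):
    """Fetch one Redfish resource from the BMC (TLS verification skipped for brevity)."""
    resp = requests.get(f"{BMC_URL}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()


# Walk Chassis -> NetworkAdapters -> Ports and report link health.
for chassis in get("/redfish/v1/Chassis")["Members"]:
    adapters_link = get(chassis["@odata.id"]).get("NetworkAdapters")
    if not adapters_link:
        continue
    for adapter in get(adapters_link["@odata.id"])["Members"]:
        adapter_res = get(adapter["@odata.id"])
        ports_link = adapter_res.get("Ports") or adapter_res.get("NetworkPorts")
        if not ports_link:
            continue
        for port in get(ports_link["@odata.id"])["Members"]:
            port_res = get(port["@odata.id"])
            health = port_res.get("Status", {}).get("Health", "Unknown")
            print(f"{adapter_res.get('Model', 'adapter')} {port_res.get('Id')}: {health}")
```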

🚀 Looking Ahead: Beyond 400G
#

With FBNIC now deployed at scale, Meta’s roadmap is expanding toward next-generation fabrics.

  • Disaggregated Scheduled Fabrics (DSF) and Non-Scheduled Fabrics (NSF) are central to Meta’s 2025–2026 strategy
  • For scale-up workloads, Meta is collaborating on ESUN (Ethernet for Scale-Up Networking) to address GPU-to-GPU communication
  • Marvell’s role as a custom silicon partner has been firmly established

The success of FBNIC demonstrates a broader industry reality:
for hyperscalers at Meta’s scale, building your own NIC is no longer optional—it is foundational infrastructure.
