NVIDIA MQM9700-NS2F Quantum-2 NDR InfiniBand Switch, 64 x 400Gb/s Ports, 32 OSFP Cages, Managed, Power-to-connector(P2C) Airflow(forward), with 3-year Service

#102403
Model: MQM9700-NS2F | 790-SN7N0Z+P2CMI36 | SKU: 920-9B210-00FN-0M0
Sold: 7
In Stock: Available
$39,999.00

Item Spotlights

  • 64 x 400Gb/s non-blocking ports with aggregate data throughput up to 51.2Tb/s.
  • Supports Remote Direct Memory Access (RDMA), adaptive routing, and NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™.
  • Supports Fat Tree, SlimFly, DragonFly+, multi-dimensional Torus, and other topologies.
  • 1+1 redundant and hot-swappable power supplies, 6+1 hot-swappable fan units.
  • Supports CLI, WebUI, SNMP, JSON interface, or the UFM® platform for flexible operation.
QM9700-NS2F 64-port Non-blocking Managed NDR 400Gb/s InfiniBand Smart Switch
Description


NVIDIA QM9700 switches come with 64 400Gb/s ports on 32 physical octal small form-factor pluggable (OSFP) connectors, which can be split to deliver up to 128 200Gb/s ports. The compact, 1U, fixed-configuration switch is offered in internally managed and externally managed (unmanaged) versions. It carries an aggregate bidirectional throughput of 51.2 terabits per second (Tb/s), with a landmark capacity of more than 66.5 billion packets per second. As an ideal rack-mounted InfiniBand solution, the NVIDIA Quantum-2 switches allow maximum flexibility, enabling a variety of topologies, including Fat Tree, DragonFly+, multi-dimensional Torus, and more.
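As a quick sanity check, the headline figures follow directly from the port count. The sketch below (plain Python, with numbers taken from this page's description) reproduces the 51.2Tb/s aggregate throughput and the 128-port split configuration:

```python
# Back-of-the-envelope check of the headline figures above.
ports = 64                      # NDR 400Gb/s ports on 32 OSFP cages
port_speed_gbps = 400           # per-port line rate
split_speed_gbps = 200          # each NDR port can split into 2 x NDR200

# Aggregate bidirectional throughput: 64 ports x 400 Gb/s x 2 directions.
aggregate_tbps = ports * port_speed_gbps * 2 / 1000
print(aggregate_tbps)           # 51.2 Tb/s, matching the spec

# Splitting every port doubles the port count at half the speed.
split_ports = ports * (port_speed_gbps // split_speed_gbps)
print(split_ports)              # 128 x 200Gb/s ports
```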

Specifications

Part Number: MQM9700-NS2F
Rack Mount: 1U rack mount
Ports: 32 x OSFP (2 x 400Gb/s each)
System Power Usage: 747W
Switching Capacity: 51.2Tb/s
Latency: 130ns
CPU: x86 Coffee Lake i3
System Memory: 8GB
Software: MLNX-OS
Power Supply: 1+1 redundant and hot-swappable
Dimensions (H x W x D): 1.7" x 17" x 23.2" / 43.6mm x 433.2mm x 590.6mm
Connectivity Solutions
Compute Fabric Topology for 127-node DGX SuperPOD

Each DGX H100 system has eight NDR400 connections to the compute fabric. The fabric design maximizes performance for AI workloads while also providing some redundancy in the event of hardware failures.
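To make that scale concrete, here is an illustrative two-tier fat-tree sizing for such a fabric built from 64-port switches. This is a back-of-the-envelope sketch, not NVIDIA's reference SuperPOD wiring plan; the half-down/half-up leaf split is an assumption for a non-blocking design:

```python
import math

# Illustrative two-tier fat-tree sizing for the compute fabric described
# above (a sketch, not the official SuperPOD reference architecture).
nodes = 127                     # DGX H100 systems
links_per_node = 8              # NDR400 connections per system
switch_radix = 64               # 400Gb/s ports per QM9700 switch

endpoint_links = nodes * links_per_node            # host-facing links
down_per_leaf = switch_radix // 2                  # non-blocking: half down, half up
leaves = math.ceil(endpoint_links / down_per_leaf)
uplinks = leaves * (switch_radix - down_per_leaf)  # leaf-to-spine links
spines = math.ceil(uplinks / switch_radix)

print(endpoint_links, leaves, spines)
```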

Applications
Product Highlights
SHARP Technology: Low Latency Data Reduction and Streaming Aggregation
Adaptive Routing

Intelligent selection of optimal network paths to reduce latency, alleviate congestion, and achieve dynamic load balancing.
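The core idea can be shown with a minimal sketch: rather than forwarding on a single fixed output port per destination, the switch picks the least-loaded of several valid ports toward the destination. The port names and queue depths below are purely illustrative, not switch internals:

```python
# Minimal sketch of adaptive routing's port-selection idea: choose the
# candidate egress port with the shallowest queue (illustrative only).
def pick_port(candidate_ports, queue_depth):
    """Return the candidate output port with the least-loaded egress queue."""
    return min(candidate_ports, key=lambda p: queue_depth[p])

# Hypothetical queue occupancy per port at the moment of forwarding.
queue_depth = {"p1": 7, "p2": 2, "p3": 5}
print(pick_port(["p1", "p2", "p3"], queue_depth))  # p2
```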

SHIELD: Fast Network Link Self-Healing for Enhanced Availability
GPU Direct RDMA: Optimizing CPU Efficiency and Accelerating Data Transfer Performance

Reduces CPU load, lowers latency, and boosts data transfer speed and bandwidth utilization, enhancing HPC and deep learning performance.

NCCL: Library for Accelerating Multi-GPU Communication

Facilitates collective and point-to-point communication between multiple GPUs, improving data transfer efficiency.
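To illustrate the kind of collective NCCL accelerates, the sketch below simulates a ring all-reduce in pure Python (a reduce-scatter pass followed by an all-gather pass). It shows the communication pattern only; NCCL's actual implementation runs on GPUs over the fabric:

```python
# Pure-Python simulation of the ring all-reduce pattern used for
# multi-GPU collectives. Each of n ranks owns a buffer of n chunks
# (one value per chunk here); after the call, every rank holds the
# element-wise sum across all ranks.
def ring_allreduce(buffers):
    n = len(buffers)
    # Reduce-scatter: over n-1 steps, each chunk accumulates around the ring.
    for step in range(n - 1):
        sends = [(r, (r - step) % n) for r in range(n)]
        vals = [buffers[r][c] for r, c in sends]       # snapshot before writes
        for (r, c), v in zip(sends, vals):
            buffers[(r + 1) % n][c] += v
    # All-gather: circulate each fully reduced chunk to every rank.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n) for r in range(n)]
        vals = [buffers[r][c] for r, c in sends]
        for (r, c), v in zip(sends, vals):
            buffers[(r + 1) % n][c] = v
    return buffers

ranks = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
print(ring_allreduce(ranks))   # every rank ends with [111, 222, 333]
```

Each rank exchanges only one chunk per step with its ring neighbor, so per-rank traffic stays constant as the number of ranks grows, which is what makes the ring pattern bandwidth-efficient.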

Questions & Answers
Q:
Can the same module on an NDR switch have one port connected to an NDR cable and another port connected to an NDR200 1-to-2 cable?
A:
Yes, this is possible, but port splitting must be configured on the switch side for the NDR200 port.
Q:
Is there a difference in the number of management nodes between the switch management function, the openSM subnet manager of the network card, and UFM? Which option is more suitable for customers during deployment?
A:
Managed switches are suitable for managing up to 2,000 nodes, while UFM and OFED's openSM have unlimited node management capabilities, depending on the CPU and hardware processing capacity of the management nodes.
Q:
Is the NDR switch backward compatible with slower speeds?
A:
Yes, backward compatibility is achieved by reducing the port speed. The 400G NDR ports on the NDR switch can be downshifted to 200G to connect to CX6 VPI 200G HDR network cards.
Quality Certification
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE