NVIDIA MQM9700-NS2R Quantum-2 NDR InfiniBand Switch, 64-ports NDR 400Gb/s, 32 OSFP Ports, Managed, Connector-to-power (C2P) Airflow (reverse), with 1-year Service

#102408
Model: MQM9700-NS2R | 790-SN7N0Z+P2CMI12 | SKU: 920-9B210-00RN-0M2
Sold: 0
In Stock: Available
$ 34898.00

Item Spotlights

  • 64 ports of 400Gb/s (NDR) over 32 OSFP cages.
  • 51.2Tb/s aggregate bandwidth.
  • Internally managed with on-board subnet manager.
  • Connector-to-power (C2P) airflow (reverse).
  • 1+1 redundant, hot-swappable power supplies.
  • Supports RDMA, adaptive routing, SHARPv3, congestion control, and self-healing networking.
  • Supports Fat Tree, SlimFly, DragonFly+, multi-dimensional Torus, and other topologies.
MQM9700-NS2R 64-port Non-blocking Managed NDR 400Gb/s InfiniBand Smart Switch
Specifications
Applications
Product Highlights
Questions & Answers
Resources
Description

NVIDIA MQM9700-NS2R Quantum-2 NDR InfiniBand Switch, 64-ports NDR 400Gb/s, 32 OSFP ports, managed, connector-to-power (C2P) airflow (reverse)

The NVIDIA Quantum-2-based QM9700 switch systems deliver 64 ports of NDR 400Gb/s InfiniBand in a standard 1U chassis. A single switch carries an aggregate bidirectional throughput of 51.2 terabits per second (Tb/s), with a capacity of more than 66.5 billion packets per second (BPPS). The internally managed QM9700 features an on-board subnet manager that enables simple, out-of-the-box bring-up of fabrics with up to 2,000 nodes.
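The headline figures are easy to sanity-check: 64 ports at 400Gb/s per direction, counted bidirectionally, give the quoted 51.2Tb/s aggregate. A quick check in Python:

```python
# Sanity-check the headline switch figures quoted above.
ports = 64
port_rate_gbps = 400  # NDR, per port, per direction

# Aggregate bidirectional throughput: every port sends and receives at line rate.
aggregate_tbps = ports * port_rate_gbps * 2 / 1000
print(aggregate_tbps)  # 51.2, matching the quoted capacity
```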

Specifications
Part Number: MQM9700-NS2R
Rack Mount: 1U rack mount
Ports: 32x OSFP (64x 400Gb/s)
Management: Managed
CPU: Intel® Core™ i3 (Coffee Lake)
Switching Capacity: 51.2Tb/s
Airflow: Connector-to-power (C2P), reverse
Software: MLNX-OS
System Memory: Single 8GB
EMC (Emissions): CE, FCC, VCCI, ICES, RCM
Product Safety Compliant/Certified: RoHS, CB, cTUVus, CE, and CU
Storage: M.2 SATA SSD, 16GB, 2242 form factor
Temperature: Operational: 0°C to 40°C; Non-operational: -40°C to 70°C
Dimensions (HxWxD): 1.7" (H) x 17.2" (W) x 26" (D); 43.6mm (H) x 438mm (W) x 660mm (D)
Connectivity Solutions
Compute Fabric Topology for 127-node DGX SuperPOD

Each DGX H100 system has eight NDR400 connections to the compute fabric. The fabric design maximizes performance for AI workloads while also providing redundancy in the event of hardware failures.

Applications
Product Highlights
SHARP Technology: Low Latency Data Reduction and Streaming Aggregation
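As a rough illustration of what SHARP's in-network aggregation buys: instead of every host sending its data to a single root for reduction, each switch in the tree sums its children's contributions and forwards one result upward, so the root receives a single aggregate rather than one message per node. A toy Python sketch (the tree shape and node names here are invented for illustration, not SHARP's actual protocol):

```python
def switch_reduce(tree, contributions):
    """Toy in-network tree reduction (illustrative only).
    `tree` maps a switch name to its children; names absent from
    `tree` are hosts, which contribute a value from `contributions`."""
    def reduce_at(node):
        if node not in tree:  # leaf: a host contributes its own value
            return contributions[node]
        # interior switch: sum the children, forward a single result up
        return sum(reduce_at(child) for child in tree[node])
    return reduce_at("root")

tree = {"root": ["sw0", "sw1"], "sw0": ["h0", "h1"], "sw1": ["h2", "h3"]}
vals = {"h0": 1, "h1": 2, "h2": 3, "h3": 4}
print(switch_reduce(tree, vals))  # 10
```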
Adaptive Routing

Intelligent selection of optimal network paths to reduce latency, alleviate congestion, and achieve dynamic load balancing.
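The selection policy can be pictured as picking the least-congested of several equal-cost output ports. The sketch below is an illustrative model of that idea only, not the switch's actual algorithm (real hardware draws on congestion telemetry this toy does not have):

```python
import random

def pick_output_port(equal_cost_ports, queue_depth):
    """Illustrative adaptive-routing decision: among equal-cost output
    ports, prefer the one with the shallowest queue; break ties at
    random to spread load across equally good paths."""
    shallowest = min(queue_depth[p] for p in equal_cost_ports)
    candidates = [p for p in equal_cost_ports if queue_depth[p] == shallowest]
    return random.choice(candidates)

# Port 2 has the shallowest queue, so new traffic shifts toward it.
depths = {0: 7, 1: 3, 2: 1, 3: 3}
print(pick_output_port([0, 1, 2, 3], depths))  # -> 2
```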

SHIELD: Fast Network Link Self-Healing for Enhanced Availability
GPUDirect RDMA: Optimizing CPU Efficiency and Accelerating Data Transfer Performance

Reduces CPU load, lowers latency, and boosts data transfer speed and bandwidth utilization, enhancing HPC and deep learning performance.

NCCL: Library for Accelerating Multi-GPU Communication

Facilitates collective and point-to-point communication between multiple GPUs, improving data transfer efficiency.
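To illustrate the kind of collective NCCL accelerates, here is a toy ring all-reduce in plain Python: each "rank" repeatedly forwards its running sum to its ring neighbor until every rank holds the global total. This is a scalar-per-rank sketch of the ring idea behind NCCL-style collectives, not NCCL's actual implementation:

```python
def ring_allreduce(values):
    """Toy ring all-reduce, one scalar per rank (illustrative only).
    After n-1 steps of neighbor-to-neighbor exchange, every rank
    holds the sum of all contributions."""
    n = len(values)
    acc = list(values)  # each rank starts with its own value
    for _ in range(n - 1):
        # every rank receives its predecessor's running sum
        # and adds its own original contribution
        acc = [acc[(i - 1) % n] + values[i] for i in range(n)]
    return acc

print(ring_allreduce([1, 2, 3, 4]))  # every rank ends with 10
```

The point of the ring layout is that each rank only ever talks to its neighbors, so per-rank traffic stays constant as the number of GPUs grows.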

Questions & Answers
Q:
Can the same module on an NDR switch have one port connected to an NDR cable and another port connected to an NDR200 1-to-2 cable?
A:
Yes, this is possible, but the switch side needs to configure port splitting for the NDR port.
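As an illustrative (unverified) example of what such a split configuration might look like in the MLNX-OS CLI — the exact interface naming and split syntax vary by switch model and software release, so treat the line below as an assumption and consult the switch documentation before use:

```
switch (config) # interface ib 1/1/1 module-type qsfp-split-2 force
```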
Q:
Is there a difference in the number of management nodes between the switch management function, the openSM subnet manager of the network card, and UFM? Which option is more suitable for customers during deployment?
A:
Managed switches are suitable for managing up to 2,000 nodes, while UFM and OFED's openSM have unlimited node management capabilities, depending on the CPU and hardware processing capacity of the management nodes.
Q:
Is the NDR switch backward compatible with lower-speed devices?
A:
Yes, backward compatibility is achieved by reducing the port speed. The 400G NDR ports on the NDR switch can be downshifted to 200G to connect to ConnectX-6 VPI 200G HDR network cards.
Quality Certification
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE