NVIDIA MQM8700-HS2F Quantum HDR InfiniBand Switch, 40 x HDR QSFP56 Ports, Two Power Supplies (AC), Managed, x86 Dual Core, Standard Depth, P2C Airflow, Rail Kit, with 1-year Service

#102172
Model: MQM8700-HS2F | 790-SHQN0Z+P2CMI12 | SKU: MQM8700-HS2F
Sold: 0
In Stock: 16
$15,776.00

Item Spotlights

  • 40x HDR 200Gb/s Ports or 80x HDR100 100Gb/s Ports
  • Comes with an onboard subnet manager, enabling simple, out-of-the-box fabric bring-up for up to 2048 nodes
  • Delivers 7.2 billion packets per second (Bpps), or 390 million pps per port
  • 1+1 Hot-swappable Power Supplies, N+1 Hot-swappable Fans
  • Up to 16Tb/s of non-blocking bandwidth in a 1U form factor, with sub-130ns port-to-port latency
MQM8700-HS2F 40-port Non-blocking Managed HDR 200Gb/s InfiniBand Smart Switch
Description

NVIDIA MQM8700-HS2F Quantum HDR InfiniBand Switch, 40 x HDR QSFP56 ports, two power supplies (AC), managed, x86 dual core, standard depth, P2C airflow, rail kit

NVIDIA QM8700 switch systems provide the highest-performing fabric solution in a 1U form factor, delivering up to 16Tb/s of non-blocking bandwidth with sub-130ns port-to-port latency. These switches deliver 7.2 billion packets per second (Bpps), or 390 million pps per port. They are the industry's most cost-effective building blocks for embedded systems and storage platforms that need low port density. Whether measured by price-to-performance or energy-to-performance, these systems offer superior performance in less power and space, reducing capital and operating expenses and providing the best return on investment.
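The 16Tb/s headline figure follows directly from the port count and line rate; a quick arithmetic sketch, using only numbers from the specifications on this page:

```python
# Derive the switching capacity from the per-port specs on this page.
PORTS = 40                # HDR QSFP56 ports
PORT_SPEED_GBPS = 200     # HDR line rate per port

# Non-blocking switching capacity counts both directions (full duplex).
capacity_tbps = PORTS * PORT_SPEED_GBPS * 2 / 1000
print(capacity_tbps)  # 16.0 (Tb/s), matching the datasheet figure
```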

Specifications

Part Number: MQM8700-HS2F
Rack Mount: 1U rack mount
Ports: 40x QSFP56 200Gb/s
System Power Usage: 253W
Switching Capacity: 16Tb/s
Latency: 130ns
CPU: x86 ComEx Broadwell D-1508
System Memory: 8GB
Software: MLNX-OS
Power Supply: 1+1 redundant, hot-swappable
Dimensions (HxWxD): 1.7" (H) x 17" (W) x 23.2" (D) / 43.6mm (H) x 433.2mm (W) x 590.6mm (D)
Connectivity Solutions
Compute Fabric Topology for 140-node DGX SuperPOD

The 140-node DGX SuperPOD uses 40-port NVIDIA QM8790 switches for all three layers. Each scalable unit (SU) consists of 20 DGX A100 systems served by 8 leaf switches. Each DGX A100 InfiniBand HCA connects to a separate rail of the fat-tree topology; this rail-optimized design at both the leaf and spine levels significantly boosts deep learning training performance.
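The fabric sizing implied by the paragraph above can be checked with simple arithmetic (SU size and leaf count per SU are taken from the description; spine and core switch counts are not stated here, so they are not derived):

```python
# Fabric sizing for the 140-node DGX SuperPOD described above.
total_nodes = 140
nodes_per_su = 20   # DGX A100 systems per scalable unit (SU)
leaf_per_su = 8     # leaf switches per SU, per the description

num_su = total_nodes // nodes_per_su
num_leaf = num_su * leaf_per_su
print(num_su, num_leaf)  # 7 SUs, 56 leaf switches
```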

Applications
Product Highlights
SHARP Technology: Low Latency Data Reduction and Streaming Aggregation
Adaptive Routing

Intelligent selection of optimal network paths to reduce latency, alleviate congestion, and achieve dynamic load balancing.

SHIELD: Fast Network Link Self-Healing for Enhanced Availability
GPU Direct RDMA: Optimizing CPU Efficiency and Accelerating Data Transfer Performance

Reduces CPU load, lowers latency, and boosts data transfer speed and bandwidth utilization, enhancing HPC and deep learning performance.

NCCL: Library for Accelerating Multi-GPU Communication

Facilitates collective and point-to-point communication between multiple GPUs, improving data transfer efficiency.
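To make the collective-communication idea concrete, here is an illustrative pure-Python sketch of the ring all-reduce pattern that NCCL commonly uses for multi-GPU sums. This is not NCCL code; plain lists stand in for per-GPU buffers, and the ranks are stepped sequentially rather than in parallel.

```python
def ring_allreduce(buffers):
    """Sum-reduce equal-length buffers across 'ranks' using a ring.

    Each rank's buffer is split into one chunk per rank. Phase 1
    (reduce-scatter) circulates partial sums; phase 2 (all-gather)
    circulates the fully reduced chunks.
    """
    n = len(buffers)
    bufs = [list(b) for b in buffers]  # copy; one chunk per rank

    # Reduce-scatter: each step, rank r sends one chunk to rank (r+1)%n,
    # which accumulates it. Snapshot sends first to mimic parallel steps.
    for step in range(n - 1):
        sends = [(r, (r - step) % n, bufs[r][(r - step) % n]) for r in range(n)]
        for r, c, val in sends:
            bufs[(r + 1) % n][c] += val

    # Now rank r holds the fully reduced chunk (r+1)%n.
    # All-gather: circulate the reduced chunks around the same ring.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n, bufs[r][(r + 1 - step) % n]) for r in range(n)]
        for r, c, val in sends:
            bufs[(r + 1) % n][c] = val

    return bufs

# Every rank ends up with the element-wise sum [12, 15, 18].
print(ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
```

Each rank sends and receives only 2(n-1)/n of the buffer in total, which is why the ring pattern keeps link utilization high regardless of GPU count.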

Questions & Answers
Q:
Is there a difference in the number of nodes that can be managed by the switch's onboard management function, the host-side OpenSM subnet manager, and UFM? Which option is more suitable for customers during deployment?
A:
The managed switch's onboard subnet manager is suitable for fabrics of up to 2,000 nodes, while UFM and host-based OpenSM (part of OFED) have no fixed node limit; their capacity depends on the CPU and hardware resources of the management node.
Q:
Is the HDR switch backward compatible with slower link speeds?
A:
Yes, backward compatibility is achieved by reducing the link speed. A 200G HDR port on the switch can negotiate down to 100G to connect to an EDR network adapter such as the ConnectX-6 VPI 100G.
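The related "40x HDR or 80x HDR100" figure from the spotlights comes from lane math. A short sketch, assuming the standard QSFP56 lane layout (4 lanes of 50Gb/s PAM4 per HDR port, 2 lanes per HDR100 port via splitter cables):

```python
# Lane math behind "40x HDR 200Gb/s or 80x HDR100 100Gb/s".
hdr_ports = 40
lanes_per_hdr_port = 4        # QSFP56: 4 lanes of 50Gb/s PAM4
lanes_per_hdr100_port = 2     # HDR100 uses 2 of those lanes

total_lanes = hdr_ports * lanes_per_hdr_port
hdr100_ports = total_lanes // lanes_per_hdr100_port
print(hdr100_ports)  # 80 HDR100 ports
```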
Q:
Does the IB switch support Ethernet?
A:
Currently, no switch supports InfiniBand and Ethernet simultaneously. Per the InfiniBand standard, Ethernet traffic (IPv4, IPv6) can be carried over the IB network in tunneled form using IPoIB.
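For reference, a minimal IPoIB bring-up sketch on a Linux host attached to the fabric. The device name `ib0` and the addresses are placeholders; adjust for your environment, and note this requires an InfiniBand HCA, the OFED/inbox IB stack, and root privileges.

```shell
modprobe ib_ipoib                     # load the IPoIB kernel module
ip addr add 192.168.100.1/24 dev ib0  # assign an IP to the IPoIB interface
ip link set ib0 up                    # bring the interface up
ping -c 3 192.168.100.2               # reach a peer host over IPoIB
```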
Quality Certification
ISO 14001:2015
ISO 9001:2015
ISO 45001:2018
FDA
FCC
CE
RoHS
TUV-Mark
UL
WEEE