
ConnectX® Ethernet Adapter Cards for OCP Spec 3.0
High Performance 10/25/40/50/100/200 GbE Ethernet Adapter Cards in the Open Compute Project Spec 3.0 Form Factor

† For illustration only. Actual products may vary.


Mellanox® Ethernet adapter cards in the OCP 3.0 form factor support speeds from 10 to 200GbE. Combining leading features with best-in-class efficiency, Mellanox OCP cards enable the highest data center performance.

World-Class Performance and Scale
Mellanox 10, 25, 40, 50, 100 and 200GbE adapter cards deliver industry-leading connectivity for performance-driven server and storage applications. Offering high bandwidth coupled with ultra-low latency, ConnectX adapter cards enable faster access and real-time responses.

Complementing its OCP 2.0 offering, Mellanox offers a variety of OCP 3.0-compliant adapter cards, providing best-in-class performance and efficient computing through advanced acceleration and offload capabilities. These capabilities free up valuable CPU cycles for other tasks while increasing data center performance, scalability and efficiency. They include:

• RDMA over Converged Ethernet (RoCE)
• NVMe-over-Fabrics (NVMe-oF)
• Virtual switch offloads (e.g., OVS offload) leveraging ASAP2 - Accelerated Switch and Packet Processing®
• GPUDirect® communication acceleration
• Mellanox Multi-Host® for connecting multiple compute or storage hosts to a single interconnect adapter
• Mellanox Socket Direct® technology for improving the performance of multi-socket servers

Complete End-to-End Networking
ConnectX OCP 3.0 adapter cards are part of Mellanox's 10, 25, 40, 50, 100 and 200 GbE end-to-end portfolio for data centers, which also includes switches, application acceleration packages, and cabling, delivering a unique price-performance value proposition for network and storage solutions. With Mellanox, IT managers can be assured of the highest performance and reliability, and the most efficient network fabric, at the lowest cost for the best return on investment.

In addition, Mellanox NEO®-Host management software greatly simplifies host network provisioning, monitoring and diagnostics with ConnectX OCP 3.0 cards, providing the agility and efficiency for scalability and future growth. Featuring an intuitive and graphical user interface (GUI), NEO-Host provides in-depth visibility and host networking control. NEO-Host also integrates with Mellanox NEO, Mellanox's end-to-end data-center orchestration and management platform.

Open Compute Project Spec 3.0
The OCP NIC 3.0 specification extends the capabilities of the OCP NIC 2.0 design specification. OCP 3.0 defines a different form factor and connector style than OCP 2.0. The OCP 3.0 specification defines two basic card sizes: Small Form Factor (SFF) and Large Form Factor (LFF). Mellanox OCP NICs are currently offered in SFF.*

* Future designs may utilize LFF to allow for additional PCIe lanes and/or Ethernet ports.


ConnectX OCP 3.0 Ethernet Adapter Benefits
• Open Data Center Committee (ODCC) compatible
• Supports the latest OCP 3.0 NIC specifications
• All platforms: x86, Power, Arm, compute and storage
• Industry-leading performance
• TCP/IP and RDMA for I/O consolidation
• SR-IOV technology: VM protection and QoS
• Cutting-edge performance in virtualized overlay networks
• Increased Virtual Machine (VM) count per server ratio

TARGET APPLICATIONS
• Data center virtualization
• Compute and storage platforms for public & private clouds
• HPC, Machine Learning, AI, Big Data, and more
• Clustered databases and high-throughput data warehousing
• Latency-sensitive financial analysis and high frequency trading
• Media & Entertainment
• Telco platforms

OCP 3.0 also provides additional board real estate, thermal capacity, electrical interfaces, network interfaces, and host configuration and management options. It also introduces a new mating technique that simplifies FRU installation and removal and reduces overall downtime.

The table below shows key comparisons between the OCP 2.0 and OCP 3.0 specs.

Feature | OCP Spec 2.0 | OCP Spec 3.0
Card Dimensions | Non-rectangular (8000 mm²) | SFF: 76 x 115 mm (8740 mm²)
Baseboard Connector Type | Mezzanine (B2B) | Edge (0.6 mm pitch)
Network Interfaces | Up to 2 SFP side-by-side or 2 QSFP belly-to-belly | Up to 2 QSFP in SFF, side-by-side
Expansion Direction | N/A | Side
Installation in Chassis | Parallel to front or rear panel | Perpendicular to front/rear panel
Hot Swap | No | Yes (pending server support)
PCIe Lanes | Up to x16 | SFF: Up to x16
Maximum Power Capability | Up to 67.2W for a PCIe x8 card; up to 86.4W for a PCIe x16 card | SFF: Up to 80W
Multi-Host | Up to 4 hosts | Up to 4 hosts in SFF or 8 hosts in LFF
Host Management Interfaces | RBT, SMBus | RBT, SMBus, PCIe
Host Management Protocols | Not standardized | DSP0267, DSP0248

For more details, please refer to the Open Compute Project (OCP) specifications.


Specs, Form Factors & Part Numbers

Ordering Part Number (OPN) | MCX4621A-ACAB | MCX562A-ACAI | MCX566A-CCAI or MCX566A-CDAI | MCX565M-CDAI | MCX613436A-VDAI

General Specs
Ports | Dual | Dual | Dual | Single | Dual
Port Speed (GbE) | 10, 25 | 10, 25 | 10, 25, 40, 50, 100 | 10, 25, 40, 50, 100 | 10, 25, 40, 50, 100, 200
PCIe | Gen3 x8 | Gen3 x16 | Gen3 or Gen4 x16 | Gen4 x16 | Gen4 x16
Connectors | SFP28 | SFP28 | QSFP28 | QSFP28 | QSFP56
Typical Power @ max. speed | 9.6W | 12.7W | 16.2W or 19.3W | 15.9W | 24W
Host Management | Yes | Yes | Yes | Yes | Yes
Multi-Host Support | No* | No* | No* | Yes | No

Form Factor
OCP Spec 3.0 | OCP 3.0 SFF | OCP 3.0 SFF | OCP 3.0 SFF | OCP 3.0 SFF | OCP 3.0 SFF
Bracket Type** | Thumbscrew | Internal Lock | Internal Lock | Internal Lock | Internal Lock

* May also be available with Multi-Host support; contact Mellanox for more details.
** Contact Mellanox for additional bracket options.

For detailed information on features, compliance, and compatibility, please refer to the product-specific documentation and software/firmware release notes on www.mellanox.com.


I/O Virtualization and Virtual Switching
Mellanox ConnectX Ethernet adapters provide comprehensive support for virtualized data centers with Single-Root I/O Virtualization (SR-IOV), allowing dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. I/O virtualization gives data center managers better server utilization and LAN and SAN unification while reducing cost, power and cable complexity.
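As an illustrative sketch only (not taken from this brochure), SR-IOV Virtual Functions are typically created on a Linux host through the standard sriov_numvfs sysfs attribute. The interface name and VF count below are assumptions for the example; run as root.

# Minimal sketch, assuming a hypothetical interface name; enables SR-IOV VFs via sysfs.
from pathlib import Path

IFACE = "enp3s0f0"   # assumed interface name
NUM_VFS = 4          # assumed number of Virtual Functions to create

dev = Path(f"/sys/class/net/{IFACE}/device")
max_vfs = int((dev / "sriov_totalvfs").read_text())
if NUM_VFS > max_vfs:
    raise SystemExit(f"{IFACE} supports at most {max_vfs} VFs")

# The kernel requires writing 0 before changing to a new non-zero VF count.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"Enabled {NUM_VFS} VFs on {IFACE}")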

Moreover, virtual machines running in a server traditionally use multilayer virtual switch capabilities, such as Open vSwitch (OVS). Mellanox ASAP2 - Accelerated Switch and Packet Processing® technology offloads any implementation of a virtual switch or virtual router by handling the data plane in the NIC hardware while leaving the control plane unmodified. This results in significantly higher vSwitch/vRouter performance without the associated CPU load.
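As a rough configuration sketch under stated assumptions (the interface and service names are placeholders, not Mellanox documentation), OVS hardware offload is commonly enabled through the other_config:hw-offload setting together with per-port hw-tc-offload:

# Minimal sketch: enable the OVS hardware-offload (TC flower) path; run as root.
import subprocess

IFACE = "enp3s0f0"  # assumed physical port backing the vSwitch

# Allow TC flower rules on this port to be offloaded to the NIC.
subprocess.run(["ethtool", "-K", IFACE, "hw-tc-offload", "on"], check=True)

# Ask Open vSwitch to push datapath flows down to hardware.
subprocess.run(
    ["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"],
    check=True,
)

# OVS reads this setting at startup; the service name varies by distribution.
subprocess.run(["systemctl", "restart", "openvswitch-switch"], check=True)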

RDMA over Converged Ethernet (RoCE)
Mellanox RoCE requires no special network configuration, allowing for seamless deployment and efficient data transfers with very low latencies over Ethernet networks, a key factor in maximizing a cluster's ability to process data instantaneously. With the increasing use of fast and distributed storage, data centers have reached the point of yet another disruptive change, making RoCE a must-have in today's data centers.
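To give a concrete feel for exercising RoCE, the sketch below drives the ib_write_bw utility from the open-source perftest package between two hosts. The RDMA device name and server address are assumptions, and the tool is generic rather than specific to this brochure.

# Minimal sketch: RDMA write bandwidth test over RoCE using perftest.
import subprocess
import sys

DEVICE = "mlx5_0"      # assumed RDMA device name (list devices with `ibv_devices`)
SERVER = "192.0.2.10"  # assumed address of the host running the server side

def run_server():
    # Server side: wait for the client and measure RDMA write bandwidth.
    subprocess.run(["ib_write_bw", "-d", DEVICE, "--report_gbits"], check=True)

def run_client():
    # Client side: connect to the server and report results in Gb/s.
    subprocess.run(["ib_write_bw", "-d", DEVICE, "--report_gbits", SERVER], check=True)

if __name__ == "__main__":
    run_server() if "--server" in sys.argv else run_client()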

Flexible Multi-Host® Technology
Mellanox's innovative Multi-Host technology provides high flexibility and major savings in building next-generation, scalable, high-performance data centers. Multi-Host connects multiple compute or storage hosts to a single interconnect adapter by separating the adapter's PCIe interface into multiple independent PCIe interfaces, without any performance degradation. Mellanox OCP 3.0 Small Form Factor (SFF) cards may support up to 4 hosts (four x4 PCIe interfaces in SFF), and up to 8 hosts on a Large Form Factor (LFF) card. The technology enables designing and building new scale-out heterogeneous compute and storage racks with direct connectivity among compute elements, storage elements and the network. This enables better power and performance management, while achieving maximum data processing and data transfer at minimum capital and operational expenses.

Mellanox Socket Direct®

Mellanox Socket Direct technology brings improved performance to multi-socket servers by enabling direct access from each CPU in a multi-socket server to the network through its dedicated PCIe interface. With this type of configuration, each CPU connects directly to the network; this enables the interconnect to bypass the inter-CPU bus (QPI/UPI) and the other CPU, optimizing performance and improving latency. CPU utilization improves as each CPU handles only its own traffic and not the traffic from the other CPU. Mellanox OCP 3.0 cards include native support for Socket Direct technology in multi-socket servers and can support up to 8 CPUs.
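The locality problem Socket Direct addresses in hardware can be illustrated with standard Linux NUMA information. The sketch below is a generic example with an assumed interface name, not Mellanox tooling: it reads the NUMA node local to the NIC's PCIe function and suggests a numactl binding for workloads using that port.

# Minimal sketch: find the NUMA node local to a NIC and suggest a CPU/memory binding.
from pathlib import Path

IFACE = "enp3s0f0"  # assumed interface name

node = Path(f"/sys/class/net/{IFACE}/device/numa_node").read_text().strip()
if node == "-1":
    print(f"{IFACE}: platform reports no NUMA affinity")
else:
    # Keeping CPU and memory on the NIC-local node avoids cross-socket (QPI/UPI) hops.
    print(f"{IFACE} is local to NUMA node {node}")
    print(f"Example: numactl --cpunodebind={node} --membind={node} <workload>")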


Accelerated Storage
Mellanox adapters support a rich variety of storage protocols and enable partners to build hyperconverged platforms where the compute and storage nodes are co-located and share the same infrastructure. Leveraging RDMA, Mellanox adapters enhance numerous storage protocols, such as iSCSI over RDMA (iSER), NFS over RDMA, and SMB Direct, to name a few. ConnectX adapters also offer NVMe-oF protocols and offloads, enhancing the utilization of NVMe-based storage appliances.
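As a hedged illustration of the NVMe-oF data path rather than a Mellanox-specific procedure, a host typically attaches to an RDMA-capable NVMe-oF target with the standard nvme-cli utility. The target address, port, and subsystem NQN below are placeholders.

# Minimal sketch: discover and connect to an NVMe-oF target over RDMA (RoCE); run as root.
import subprocess

TARGET_ADDR = "192.0.2.20"                         # assumed target IP address
TARGET_PORT = "4420"                               # conventional NVMe-oF port
SUBSYS_NQN = "nqn.2019-06.com.example:subsystem1"  # assumed subsystem NQN

# List the subsystems exported by the target.
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Attach the subsystem; its namespaces then appear as local /dev/nvmeXnY devices.
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT,
     "-n", SUBSYS_NQN],
    check=True,
)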

Other storage-related hardware offloads include the Signature Handover mechanism, based on the advanced T10/DIF implementation, and the Erasure Coding offload capability, which enables building a distributed RAID (Redundant Array of Inexpensive Disks).

Host Management
Mellanox host management sideband implementations enable remote monitoring and control through a Baseboard Management Controller (BMC) using RBT, MCTP over SMBus, and MCTP over PCIe, supporting both NC-SI and PLDM management protocols over these interfaces. Mellanox OCP 3.0 adapters support these protocols to offer numerous host management features such as PLDM for Firmware Update, network boot in the UEFI driver, UEFI secure boot, and more.

Enhancing Machine Learning Application Performance
Mellanox adapters with built-in advanced acceleration and RDMA capabilities deliver best-in-class latency, bandwidth and message rates with lower CPU utilization. Mellanox PeerDirect® technology with NVIDIA GPUDirect™ RDMA enables direct peer-to-peer communication between the adapter and GPU memory, without any interruption to CPU operations. Mellanox adapters also deliver the highest scalability, efficiency, and performance for a wide variety of applications, including bioscience, media and entertainment, automotive design, computational fluid dynamics and manufacturing, weather research and forecasting, as well as oil and gas industry modeling. Thus, Mellanox adapters are the best NICs for machine learning applications.
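As a small sanity-check sketch (an assumption-based example, not part of the brochure), GPUDirect RDMA on Linux generally depends on a peer-memory kernel module whose name varies across driver generations; the example below simply checks whether one of the common names is loaded.

# Minimal sketch: check whether a GPUDirect RDMA peer-memory module is loaded.
from pathlib import Path

CANDIDATES = ("nvidia_peermem", "nv_peer_mem")  # common peer-memory module names

loaded = {line.split()[0] for line in Path("/proc/modules").read_text().splitlines()}
found = [m for m in CANDIDATES if m in loaded]

if found:
    print("GPUDirect RDMA peer-memory module loaded: " + ", ".join(found))
else:
    print("No peer-memory module found; GPU buffers cannot be registered for RDMA")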

Secure Network Adapters
Mellanox ConnectX OCP 3.0 adapters implement a secure firmware update check: the device verifies, using digital signatures, the firmware binaries prior to their installation on the adapter. This ensures that only officially authentic images produced by Mellanox can be installed, regardless of whether the installation happens from the host, the network, or a BMC. Starting with ConnectX-6, Mellanox also offers an optional Hardware Root of Trust, which introduces secure boot as well.


Broad Software Support
All Mellanox adapter cards are supported by a full suite of drivers for major Linux distributions, Microsoft® Windows®, VMware vSphere® and FreeBSD®. Inbox drivers are also available in major Linux distributions, Windows and VMware.
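As a quick, hedged example of verifying which driver and firmware a port is running under Linux, the standard ethtool -i query can be scripted; the interface name below is a placeholder.

# Minimal sketch: report the driver and firmware version bound to a network port.
import subprocess

IFACE = "enp3s0f0"  # assumed interface name

info = subprocess.run(
    ["ethtool", "-i", IFACE], capture_output=True, text=True, check=True
).stdout

for line in info.splitlines():
    if line.startswith(("driver:", "version:", "firmware-version:")):
        print(line)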

Multiple Form Factors
In addition to the OCP Spec 3.0 cards, Mellanox adapter cards are available in other form factors to meet data centers' specific needs, including:

• OCP Specification 2.0 Type 1 & Type 2 mezzanine adapter form factors, designed to mate into OCP servers.

• Standard PCI Express (PCIe) Gen3 and Gen4 adapter cards.

Standard PCI Express Adapter Card | OCP 2.0 Adapter Card | OCP 3.0 Adapter Card († For illustration only. Actual products may vary.)

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085
Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com

© Copyright 2019. Mellanox Technologies. All rights reserved.
Mellanox, Mellanox logo, ConnectX, GPUDirect, Mellanox PeerDirect, Mellanox Multi-Host, Mellanox Socket Direct and ASAP2 - Accelerated Switch and Packet Processing are registered trademarks of Mellanox Technologies, Ltd. Mellanox NEO-Host is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

NOTES: (1) This brochure describes hardware features and capabilities. Please refer to the driver release notes on mellanox.com for feature availability.

(2) Product images may not include heat sink assembly; actual product may differ.

060275BR Rev 1.1
