
Mellanox InfiniBand Interconnect: The Fabric of Choice for Clustering and Storage

September 2008

Gilad Shainer – Director of Technical Marketing


Company Overview

Silicon-based server and storage interconnect products
• R&D and operations in Israel; business in California
• Four generations of products since 1999
• 250+ employees; worldwide sales & support

InfiniBand and Ethernet leadership
• Foundation for the world's most powerful computer
• 3.7M 10/20/40Gb/s ports shipped as of June 2008
• Proven execution, high-volume manufacturing & quality

Solid financial position
• FY'07 revenue $84.1M, 73% growth over FY'06
• Record revenue in 2Q'08: $28.2M
• 1H'08 $53.4M; 3Q'08 estimated $28.5M-$29M

Tier-one, diversified customer base
• Includes Cisco, Dawning, Dell, Fujitsu, Fujitsu-Siemens, HP, IBM, NEC, NetApp, QLogic, SGI, Sun, Supermicro, Voltaire

$106M raised in IPO, February 2007; ticker: MLNX


InfiniBand End-to-End Products

• High throughput: 40Gb/s
• Low latency: 1us
• Low CPU overhead
• Kernel bypass
• Remote DMA (RDMA)
• Reliability
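For readers unfamiliar with the terms, the sketch below (not Mellanox code; the buffer size is illustrative) shows what kernel bypass and RDMA look like to an application through the OpenFabrics verbs API (libibverbs): memory is registered once, and the adapter can then move data with no system call in the data path.

```c
/* Minimal sketch of RDMA and kernel bypass via the OpenFabrics verbs API
 * (libibverbs): open the adapter, register a buffer, and obtain the
 * lkey/rkey a peer would use for RDMA; no system call sits in the
 * subsequent data path. Buffer size is illustrative.
 * Build: gcc rdma_sketch.c -libverbs (requires an RDMA-capable adapter) */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);  /* user-space HCA handle */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);               /* protection domain */

    /* Register memory so the adapter can DMA to/from it directly. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);

    /* A peer that learns this address/rkey pair can read or write the buffer
     * with RDMA, without involving this host's CPU or kernel. */
    printf("registered %zu bytes at %p, lkey=0x%x rkey=0x%x\n",
           len, buf, (unsigned)mr->lkey, (unsigned)mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```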

[Slide diagram: adapter ICs & cards, cables, switch ICs, and software connecting blade/rack servers, switches, and storage; end-to-end validation for maximum productivity]


Virtual Protocol Interconnect

Any protocol over any convergence fabric: 10/20/40Gb/s InfiniBand or 10GigE Data Center Ethernet

Applications (App1, App2, App3, App4 ... AppX) run over a consolidated application programming interface, with hardware acceleration engines for networking, virtualization, clustering, storage and RDMA.

Protocols
• Networking: TCP/IP/UDP, Sockets
• Storage: NFS, CIFS, iSCSI, NFS-RDMA, SRP, iSER, Fibre Channel, Clustered
• Clustering: MPI, DAPL, RDS, Sockets
• Management: SNMP, SMI-S (OpenView, Tivoli, BMC, Computer Associates)
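As a rough illustration of what a consolidated API means for applications, the sketch below uses the OpenFabrics verbs API to enumerate ports and report whether each is running InfiniBand or Ethernet. Note that the link_layer attribute and the IBV_LINK_LAYER_* constants appeared in libibverbs releases later than this 2008 deck, so this reflects later VPI-era software rather than anything shown in the presentation.

```c
/* Sketch: one verbs API across fabrics. Enumerate RDMA devices and report
 * whether each port runs InfiniBand or Ethernet. The link_layer field and
 * IBV_LINK_LAYER_* constants postdate 2008, so treat this as illustrative.
 * Build: gcc vpi_sketch.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs)
        return 1;

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_device_attr dev_attr;
        ibv_query_device(ctx, &dev_attr);

        for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
            struct ibv_port_attr pattr;
            ibv_query_port(ctx, port, &pattr);
            printf("%s port %d: %s link layer, state %d\n",
                   ibv_get_device_name(devs[i]), port,
                   pattr.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet"
                                                               : "InfiniBand",
                   pattr.state);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```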


The Fastest InfiniBand Technology

InfiniBand 40Gb/s QDR in full production
• Multiple sites are already utilizing InfiniBand QDR performance

ConnectX InfiniBand: 40Gb/s server and storage adapter
• 1usec application latency, zero scalable latency impact

InfiniScale IV: 36-port InfiniBand 40Gb/s switch device
• 3Tb/s switching capability in a single switch device


InfiniBand QDR Switches

1RU 36-port QSFP QDR switch
• Up to 2.88Tb/s switching capacity
• Powered connectors for active cables
• Available now

19U 18-slot chassis, 324-port QDR switch
• Up to 25.9Tb/s switching capacity
• 18 QSFP ports per switch blade
• Available: Q4 2009
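The quoted switching capacities appear to count both directions of each 40Gb/s port; under that assumption the figures follow directly from the port counts:

```latex
\[
36 \times 40\,\text{Gb/s} \times 2 = 2.88\,\text{Tb/s},
\qquad
324 \times 40\,\text{Gb/s} \times 2 = 25.92\,\text{Tb/s} \approx 25.9\,\text{Tb/s}
\]
```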


InfiniBand Technology Leadership

Industry standard
• Hardware, software, cabling, management
• Designed for clustering and storage interconnect

Price and performance
• 40Gb/s node-to-node
• 120Gb/s switch-to-switch
• 1us application latency
• Most aggressive roadmap in the industry

Reliable with congestion management

Efficient
• RDMA and transport offload
• Kernel bypass
• CPU focuses on application processing

Scalable for petascale computing & beyond

End-to-end quality of service

Virtualization acceleration

I/O consolidation, including storage

InfiniBand delivers the lowest latency

[Chart: The InfiniBand Performance Gap is Increasing. Bandwidth roadmap versus Ethernet and Fibre Channel, with InfiniBand at 20Gb/s, 40Gb/s, 60Gb/s, 80Gb/s (4X), 120Gb/s and 240Gb/s (12X)]


InfiniBand 40Gb/s QDR Capabilities

Performance-driven architecture
• MPI latency 1us, zero scalable latency
• MPI bandwidth 6.5GB/s bi-directional, 3.25GB/s uni-directional

Enhanced communication
• Adaptive/static routing, congestion control

Enhanced scalability
• Communication/computation overlap
• Minimizing system noise effects (DOE-funded project)
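A minimal MPI sketch of communication/computation overlap follows; do_local_work() is a hypothetical stand-in for application computation, and the ring exchange and message size are purely illustrative. With transport offload, the adapter progresses the transfer while the CPU computes between the non-blocking calls and MPI_Waitall.

```c
/* Sketch of communication/computation overlap with non-blocking MPI.
 * With transport offload and kernel bypass, the adapter can progress the
 * transfer while the CPU runs do_local_work() (a stand-in for application
 * computation). Build with an MPI wrapper, e.g.: mpicc overlap_sketch.c */
#include <mpi.h>
#include <stdlib.h>

static void do_local_work(double *x, int n)       /* placeholder computation */
{
    for (int i = 0; i < n; i++)
        x[i] = x[i] * 1.000001 + 1.0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1 << 20;                        /* illustrative message size */
    double *sendbuf = calloc(N, sizeof(double));
    double *recvbuf = calloc(N, sizeof(double));
    double *work    = calloc(N, sizeof(double));

    int right = (rank + 1) % size;                /* simple ring exchange */
    int left  = (rank - 1 + size) % size;
    MPI_Request reqs[2];

    MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    do_local_work(work, N);                       /* overlaps with the transfer */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);    /* complete the exchange */

    free(sendbuf); free(recvbuf); free(work);
    MPI_Finalize();
    return 0;
}
```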

[Charts: Mellanox ConnectX MPI Latency, Multi-core Scaling. Latency (usec) versus number of CPU cores (number of processes), for an 8-core system (1-8 processes) and a 16-core system (1-16 processes)]
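Latency figures like those in the charts above are commonly measured with a ping-pong microbenchmark between two MPI ranks; a minimal sketch follows (message size, warm-up and iteration counts are illustrative choices, not necessarily what Mellanox used).

```c
/* Minimal MPI ping-pong latency sketch between ranks 0 and 1; half the
 * averaged round-trip time approximates one-way latency. Message size,
 * warm-up and iteration counts are illustrative. Run with two ranks,
 * e.g.: mpicc pingpong_sketch.c && mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {                               /* needs at least two ranks */
        MPI_Finalize();
        return 0;
    }

    const int iters = 10000, warmup = 1000;
    char msg = 0;                                 /* 1-byte message */
    double t0 = 0.0;

    for (int i = 0; i < iters + warmup; i++) {
        if (i == warmup)
            t0 = MPI_Wtime();                     /* start timing after warm-up */
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("average one-way latency: %.2f usec\n",
               (MPI_Wtime() - t0) * 1e6 / iters / 2.0);

    MPI_Finalize();
    return 0;
}
```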


HPC Advisory Council

Distinguished HPC alliance (OEMs, IHVs, ISVs, end-users)

Member activities

• Qualify and optimize HPC solutions
• Early access to new technology, mutual development of future solutions
• Outreach

A community-effort support center for HPC end-users
• End-User Cluster Center
• End-user support center

For details – HPC@mellanox.com


Thank You

HPC@mellanox.com
