InfiniBand Strengthens Leadership as the High-Speed Interconnect


Page 1: InfiniBand Strengthens Leadership as the High-Speed Interconnect

InfiniBand Strengthens Leadership as the High-Speed Interconnect Of Choice

Provides the Best Return on Investment by Delivering

the Highest System Efficiency and Utilization

TOP500 Supercomputers

June 2011

Page 2: InfiniBand Strengthens Leadership as the High-Speed Interconnect


TOP500 Performance Trends

Explosive high-performance computing market growth

Clusters continue to dominate with 82% of the TOP500 list

Mellanox InfiniBand solutions provide the highest system utilization in the TOP500

[Performance trend chart, annotated with CAGR figures of 80% and 39%]
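For reference, the compound annual growth rate (CAGR) figures quoted in this deck follow the standard formula; the minimal sketch below uses made-up start and end values purely to show the calculation, not actual list data.

    # Minimal sketch of a CAGR calculation; the input values are placeholders,
    # not actual TOP500 data points.
    def cagr(start_value, end_value, years):
        """Compound annual growth rate, returned as a fraction."""
        return (end_value / start_value) ** (1.0 / years) - 1.0

    # Example: growing from 100 to 173 over 3 years is roughly a 20% CAGR.
    print(f"CAGR: {cagr(100.0, 173.0, 3):.1%}")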

Page 3: InfiniBand Strengthens Leadership as the High-Speed Interconnect


InfiniBand in the TOP500

InfiniBand is the dominant interconnect technology for high-performance computing

• 206 clusters, 41.2% of the list; InfiniBand demonstrates a CAGR of 20% from 2008 to 2011

• In the TOP100, 16 new InfiniBand systems were added in 2011, versus 0 new Ethernet systems

InfiniBand is the interconnect of choice for large scale systems

• InfiniBand connects the most power-efficient system in the TOP10 (Petaflop systems) – 2x better than the TOP10 average

• InfiniBand connects the most efficient system on the TOP500 – more than 96% system efficiency

• The number of InfiniBand-connected CPUs grew 34% year over year; overall InfiniBand-based system performance grew 44% year over year

InfiniBand powers the most powerful clusters - TOP10, TOP20

• 5 of the TOP10 (#4, #5, #7, #9, #10) and 8 of the TOP20 (adding #13, #16, #17)

• 50% (5 systems) of the world's sustained Petaflop systems

The most used interconnect in the TOP100, TOP200, TOP300, TOP400

• Connects 61% (61 systems) of the TOP100, while Ethernet connects only 1% (1 system)

• Connects 58% (115 systems) of the TOP200, while Ethernet connects only 17% (35 systems)

• Connects 49% (147 systems) of the TOP300, while Ethernet connects only 33% (100 systems)

• Connects more systems in the TOP400 than Ethernet or any other technology – 42% (177 systems)

InfiniBand is the interconnect of choice for GPU-based systems

• 93% of the GPU-based systems are connected with InfiniBand

• 14 GPU-based systems on the TOP500 list; 13 are InfiniBand-connected, 1 uses a proprietary interconnect

Diverse set of applications

• High-end HPC, commercial HPC and enterprise data center

Page 4: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Mellanox in the TOP500

Mellanox 40Gb/s InfiniBand is the only scalable 40Gb/s technology on the list
• Enables the highest system utilization in the TOP500 – 96.3% system efficiency

• Enables the top 3 highest-utilized systems on the TOP500 list

• Enables the most power-efficient clusters based on standard technology – the top 7 systems

• Accounts for more than twice the performance of Ethernet-connected systems

Mellanox connects the most power-efficient system in the TOP10
• Most power-efficient Petascale system – 2x better than the TOP10 average

Mellanox InfiniBand is the only Petascale-proven standard interconnect solution
• Connects 5 out of the 10 sustained-Petascale performance systems on the list

• Connects nearly 3x the number of Cray-based systems in the TOP100, and more than 7x in the TOP500

Mellanox's end-to-end scalable solutions accelerate GPU-based systems
• GPUDirect technology enables faster communication and higher performance (see the sketch at the end of this page)

Mellanox 10GigE connects the highest-ranked 10GigE system (#124)
• Mellanox 10GigE enables 22% higher efficiency compared to a 10GigE iWARP system
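To illustrate the kind of data path GPUDirect targets, here is a minimal, hedged sketch of sending a GPU-resident buffer over MPI. It assumes a CUDA-aware MPI build plus the mpi4py and cupy packages (assumptions, not part of the original slide), and it is not Mellanox's own API – just a generic example of device-to-device communication.

    # Hedged sketch: CUDA-aware MPI transfer of a GPU-resident buffer.
    # Assumes the underlying MPI library is CUDA-aware, so the transport can move
    # data without staging it through host memory (the class of optimization
    # GPUDirect provides). Run with, e.g.: mpirun -np 2 python gpu_send.py
    from mpi4py import MPI
    import cupy as cp

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = cp.zeros(1 << 20, dtype=cp.float32)  # 4 MB buffer allocated on the GPU
    if rank == 0:
        buf += 1.0
        comm.Send(buf, dest=1, tag=0)          # send directly from device memory
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)        # receive directly into device memory
        print("rank 1 received, first element =", float(buf[0]))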

Page 5: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Interconnect Trends – TOP500

InfiniBand is the de-facto high-speed connectivity for compute demanding applications

• 206 systems on the TOP500 June 2011 list (41.2% of the systems), more than 90% of them Mellanox-based

Mellanox InfiniBand and Ethernet technologies are proven to deliver the highest scalability and efficiency

Page 6: InfiniBand Strengthens Leadership as the High-Speed Interconnect


InfiniBand Trend – TOP500

InfiniBand delivers the highest utilization and efficiency, and is the only 40Gb/s interconnect on the TOP500 list

Migration to FDR InfiniBand 56Gb/s is expected in the coming TOP500 releases

[Chart: InfiniBand system count by list, marking the introduction of QDR InfiniBand and the introduction of FDR InfiniBand]

Page 7: InfiniBand Strengthens Leadership as the High-Speed Interconnect


TOP500 Interconnect Penetration

InfiniBand demonstrates the fastest penetration within the TOP500

[Chart: interconnect penetration over time, with trend line]

Page 8: InfiniBand Strengthens Leadership as the High-Speed Interconnect


TOP500 Performance Trend

The performance of InfiniBand-connected systems demonstrates the highest growth rate

Page 9: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Interconnect Trends – TOP100 Status

InfiniBand is the leading interconnect in the TOP100

The natural choice for world-leading supercomputers
• Performance, efficiency, scalability

Page 10: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Paving The Road to Exascale Computing

Mellanox InfiniBand is the interconnect of choice for PetaScale computing

• Accelerating 50% of the sustained PetaFlop systems (5 systems out of 10)

Page 11: InfiniBand Strengthens Leadership as the High-Speed Interconnect


InfiniBand versus Ethernet – TOP100, 200, 300, 400

InfiniBand connects the majority of the TOP100, 200, 300, 400 supercomputers

Due to superior performance, scalability, efficiency and return on investment

Page 12: InfiniBand Strengthens Leadership as the High-Speed Interconnect


TOP500 Interconnect Placement

InfiniBand is the high-performance interconnect of choice
• Connects the most powerful clusters and provides the highest system utilization

InfiniBand is the best price/performance connectivity for HPC systems
• For all system sizes and applications

Page 13: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Top 10 Most Performance-Efficient Systems on the TOP500

InfiniBand connects 8 of the 10 most efficient systems, including the top 3

More than 96% system utilization achieved with Mellanox connectivity solutions

Page 14: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Enabling the Best Return on Investment

Mellanox InfiniBand connects all of the top 3 most efficient systems

• Up to 96.3% efficiency/utilization, only 3.7% less than the theoretical limit

Mellanox enables the most power-efficient Petascale systems

• 2x better power efficiency than the Petaflop-system average (see the sketch after this list)

Mellanox's end-to-end InfiniBand solutions provide HPC users with the best return on investment

• Best performance and scalability, highest efficiency and utilization

• Lowest power-per-performance ratio
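To make the power-efficiency comparison concrete, the sketch below computes sustained performance per watt from Rmax and measured power draw; all numbers are placeholders chosen only to illustrate a roughly 2x gap, not the actual figures of any listed system.

    # Hedged sketch: power efficiency as sustained GFlops per watt.
    # The Rmax and power values below are illustrative placeholders.
    def gflops_per_watt(rmax_gflops, power_watts):
        return rmax_gflops / power_watts

    petaflop_system = gflops_per_watt(1_050_000, 2_580_000)  # ~1.05 PFlops at ~2.58 MW (hypothetical)
    top10_average = gflops_per_watt(1_000_000, 4_900_000)    # hypothetical TOP10 average
    print(f"{petaflop_system:.2f} GFlops/W vs {top10_average:.2f} GFlops/W "
          f"({petaflop_system / top10_average:.1f}x)")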

Industry's Only End-to-End InfiniBand and Ethernet Portfolio: ICs, switches/gateways, adapter cards, cables, and host/fabric software

Page 15: InfiniBand Strengthens Leadership as the High-Speed Interconnect


InfiniBand’s Unsurpassed System Efficiency

TOP500 systems listed according to their efficiency

InfiniBand is the key element responsible for the highest system efficiency

Mellanox delivers efficiencies of up to 96% with InfiniBand and 80% with 10GigE (the highest 10GigE efficiency)
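For reference, the efficiency/utilization numbers quoted throughout this deck are simply measured Linpack performance (Rmax) divided by theoretical peak (Rpeak); the sketch below uses made-up values purely to show the calculation.

    # Minimal sketch: TOP500 system efficiency = Rmax / Rpeak.
    # The Rmax/Rpeak values are placeholders, not data for a specific system.
    def system_efficiency(rmax_tflops, rpeak_tflops):
        return rmax_tflops / rpeak_tflops

    # Example: 963 TFlops sustained out of a 1,000 TFlops peak is 96.3% efficiency.
    print(f"{system_efficiency(963.0, 1000.0):.1%}")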

Page 16: InfiniBand Strengthens Leadership as the High-Speed Interconnect


TOP500 Interconnect Comparison

InfiniBand systems account for twice the performance of Ethernet systems

The only scalable HPC interconnect solution

Page 17: InfiniBand Strengthens Leadership as the High-Speed Interconnect


InfiniBand Performance Trends

InfiniBand-connected CPUs grew 34% from June '10 to June '11

InfiniBand-based system performance grew 44% from June '10 to June '11

Mellanox InfiniBand is the most efficient and scalable Interconnect

Driving factors: performance, efficiency, scalability, and ever-growing core counts

Page 18: InfiniBand Strengthens Leadership as the High-Speed Interconnect


National Supercomputing Centre in Shenzhen - #4

Dawning TC3600 Blade Supercomputer

5,200 nodes, 120,640 cores, NVIDIA GPUs

Mellanox end-to-end 40Gb/s InfiniBand solutions
• ConnectX-2 adapters and IS5000 switches

1.27 PetaFlops sustained performance

The first PetaFlop system in China

Page 19: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Tokyo Institute of Technology - #5

TSUBAME 2.0, the fastest supercomputer in Japan
• First PetaFlop system in Japan, 1.2 PF performance

1,400 HP ProLiant SL390s G7 servers

4,200 NVIDIA Tesla M2050 Fermi-based GPUs

Mellanox 40Gb/s InfiniBand

Page 20: InfiniBand Strengthens Leadership as the High-Speed Interconnect


NASA Ames Research Center (NAS) - #7

11,648 nodes

111,104 cores

1.08 PetaFlops sustained performance

Mellanox InfiniBand adapters and switch based systems

Supports a variety of scientific and engineering projects
• Coupled atmosphere-ocean models

• Future space vehicle design

• Large-scale dark matter halos and galaxy evolution

Page 21: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Commissariat a l'Energie Atomique (CEA) - #9

Tera 100, the fastest supercomputer in Europe
• First PetaFlop system in Europe - 1.05 PF performance

4,300 Bull S Series servers

140,000 Intel® Xeon® 7500 processing cores

300TB of central memory, 20PB of storage

Mellanox InfiniBand 40Gb/s

Page 22: InfiniBand Strengthens Leadership as the High-Speed Interconnect


LANL Roadrunner – #10, World's First PetaFlop System

World's first machine to break the PetaFlop barrier
• Los Alamos National Lab, #2 on the Nov 2009 TOP500 list
• Usage: national nuclear weapons program, astronomy, human genome science and climate change

CPU-GPU-InfiniBand combination for balanced computing
• More than 1,000 trillion operations per second
• 12,960 IBM PowerXCell CPUs, 3,456 tri-blade units
• Mellanox ConnectX 20Gb/s InfiniBand adapters
• Mellanox-based InfiniScale III 20Gb/s switches

Mellanox interconnect is the only scalable solution for PetaScale computing

Page 23: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Moscow State University - Research Computing Center

“Lomonosov” cluster, #15

674 TFlops, T-Platforms TB2-XN T-Blade chassis

4.4K nodes, 36K cores

Mellanox end-to-end 40Gb/s InfiniBand connectivity

Page 24: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Sandia “Red Sky” - #16

5.4K nodes, 43K cores, 87% efficiency

Intel Xeon X55xx 2.93GHz CPUs

Mellanox ConnectX and switch based systems

Network Topology – 3D Torus

Page 25: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Goethe University “LOEWE-CSC” - #22

470 TFlops, 20K CPU cores and 800 GPUs

ClusterVision-based, AMD Opteron Magny-Cours CPUs, ATI GPUs

Mellanox 40Gb/s InfiniBand solutions

System supports a wide range of research activities
• Theoretical physics

• Chemistry

• Life sciences

• Computer science

Page 26: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Jülich - #25, 40Gb/s Networking for Highest System Utilization

274.8 TFlops at 91.6% Efficiency

JuRoPA and HPC-FF supercomputers

• #13 on the TOP500

• 3,288 compute nodes

• 79 TB main memory

• 26,304 CPU cores

Mellanox end-to-end 40Gb/s connectivity
• Network Adaptation: ensures highest efficiency

• Self Recovery: ensures highest reliability

• Scalability: the solution for Peta/Exaflop systems

• On-demand resources: allocation per demand

• Green HPC: lowering system power consumption

Page 27: InfiniBand Strengthens Leadership as the High-Speed Interconnect


University of Cambridge - HPC Cloud

Provides on-demand access to support over 400 users spread across 70 research groups
• Coupled atmosphere-ocean models
• Traditional sciences such as chemistry, physics and biology
• HPC-based research such as bio-medicine, clinical medicine and social sciences

Dell based, Mellanox end-to-end 40Gb/s InfiniBand connected
• ConnectX-2 HCAs
• BridgeX gateway system (InfiniBand to Ethernet)
• Mellanox 40Gb/s InfiniBand switches

Page 28: InfiniBand Strengthens Leadership as the High-Speed Interconnect


KTH Royal Institute of Technology, Stockholm

“Ekman” cluster, Dell based, 1,300 nodes, 86 TFlops

Located at PDC, Stockholm

Mellanox 20Gb/s InfiniBand solutions

Fat tree topology with full bisection bandwidth (see the sketch below)
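To give a sense of what "full bisection bandwidth" means for a fat-tree network, the sketch below estimates the aggregate bandwidth across the bisection; the node count matches the slide, and the 20 Gb/s per-link figure assumes one DDR link per node, which is an illustrative assumption rather than the Ekman cluster's documented configuration.

    # Hedged sketch: in a non-blocking (full-bisection) fat tree, any half of the
    # nodes can talk to the other half at full link rate, so the bisection
    # bandwidth is (nodes / 2) * per-link bandwidth.
    def full_bisection_bw_gbps(num_nodes, link_gbps):
        return (num_nodes / 2) * link_gbps

    # Example: 1,300 nodes with an assumed 20 Gb/s link per node.
    print(f"{full_bisection_bw_gbps(1300, 20):,.0f} Gb/s across the bisection")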

Page 29: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Coming Next - LRZ SuperMUC Supercomputer

Leibniz Supercomputing Centre

Schedule: 648 nodes in 2011, 9,000 nodes in 2012, 5,000 nodes in 2013

System will be available for European researchers
• To expand the frontiers of medicine, astrophysics and other scientific disciplines

System will provide a peak performance of 3 PetaFlop/s

IBM iDataPlex based

Mellanox end-to-end FDR InfiniBand switches and adapters

Page 30: InfiniBand Strengthens Leadership as the High-Speed Interconnect


ORNL “Spider” System – Lustre File System

Oak Ridge National Lab central storage system (see the quick bandwidth check at the end of this page)
• 13,400 drives

• 192 Lustre OSS

• 240GB/s bandwidth

• Mellanox InfiniBand interconnect

• 10PB capacity

World-leading high-performance InfiniBand storage system
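As a quick check on the figures above, the sketch below derives the average bandwidth each Lustre OSS must sustain to reach the quoted aggregate; it simply divides the two numbers taken from this page.

    # Quick sanity check using the slide's figures: aggregate bandwidth spread
    # across the Lustre object storage servers (OSS).
    total_bandwidth_gb_s = 240   # GB/s aggregate, per the slide
    oss_count = 192              # Lustre OSS servers, per the slide
    print(f"~{total_bandwidth_gb_s / oss_count:.2f} GB/s per OSS on average")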

Page 31: InfiniBand Strengthens Leadership as the High-Speed Interconnect


Swiss Supercomputing Center (CSCS) GPFS Storage

108-node IBM GPFS cluster

Mellanox FabricIT™ management software allows the GPFS clusters to be part of several class C networks
• Using a redundant switch fabric

Mellanox BridgeX bridges the InfiniBand network to a 10G/1G Ethernet intra-network

Partitioning ensures no traffic crosses between the different clusters connected to the GPFS file system

Uses High Availability for IPoIB

Page 32: InfiniBand Strengthens Leadership as the High-Speed Interconnect


For more information

[email protected]