Bullx HPC eXtreme computing cluster references


Description

Designed without compromise for unlimited innovation, Bull's bullx HPC clusters are deployed on several continents with petascale computing power, for applications ranging from sports car design to simulating the entire observable universe.

Transcript of Bullx HPC eXtreme computing cluster references

Page 1: Bullx HPC eXtreme computing cluster references
Page 2: Bullx HPC eXtreme computing cluster references

Nov 18th 2013

Bull Extreme Factory Remote Visualizer

3D Streaming Technology

Page 3: Bullx HPC eXtreme computing cluster references

Nov 18th 2013

Readers’ Choice best server product or technology: Intel Xeon Processor

Editors’ Choice best server product or technology: Intel Xeon Phi Coprocessor

Readers’ Choice top product or technology to watch: Intel Xeon Phi Coprocessor

Page 4: Bullx HPC eXtreme computing cluster references

Nov 18th 2013

Readers’ Choice: GENCI CURIE for DEUS (Dark Energy Universe Simulation) project


Page 17: Bullx HPC eXtreme computing cluster references

Needs
Increase the capacity of the University of Reims' ROMEO HPC Center, an NVIDIA CUDA® Research Center
Develop teaching activities on accelerator technologies
A system to drive research in mathematics and computer science, physics and engineering sciences, and multiscale molecular modeling

Solution
A large GPU-accelerated cluster:
260 NVIDIA Tesla K20X GPU accelerators housed in 130 bullx R421 E3 servers
Expected performance: 230 Tflops (Linpack) (see the arithmetic sketch after this list)
Free-cooling system based on Bull Cool Cabinet Doors
A joint scientific and technical collaboration between Bull and NVIDIA
The new “Romeo” system will be installed this summer
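For context, here is a back-of-the-envelope check of how the quoted Linpack figure relates to raw GPU peak. This is a minimal sketch, assuming the published ~1.31 Tflops double-precision peak per Tesla K20X and ignoring the CPU contribution; the implied efficiency is an illustration, not a figure from the slide.

    # Back-of-the-envelope check of the ROMEO figures quoted above.
    # Assumptions (not from the slide): 1.31 Tflops double-precision peak per
    # Tesla K20X, CPU contribution ignored for simplicity.
    NUM_GPUS = 260
    K20X_PEAK_TFLOPS_DP = 1.31

    gpu_peak = NUM_GPUS * K20X_PEAK_TFLOPS_DP        # about 341 Tflops of raw GPU peak
    expected_linpack = 230.0                         # figure quoted on the slide

    implied_efficiency = expected_linpack / gpu_peak # about 0.68
    print(f"GPU peak: {gpu_peak:.0f} Tflops, implied Linpack efficiency: {implied_efficiency:.0%}")

An implied efficiency in the 60-70% range is typical of GPU-accelerated Linpack runs of that generation, so the quoted 230 Tflops is consistent with the hardware listed above.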

Page 18: Bullx HPC eXtreme computing cluster references

Needs
Create a world-class supercomputing center that will be made available to the Czech science and industry community
Find accommodation for the supercomputer while the IT4I supercomputing center is built

Solution
The Anselm supercomputer, an 82 Tflops bullx system housed in a leased mobull container:
180 bullx B510 compute nodes
23 bullx B515 accelerator blades with NVIDIA M2090 GPUs
4 bullx B515 accelerator blades with Intel® Xeon Phi™ coprocessors
Lustre shared file system
Water-cooled rear doors
bullx supercomputer suite

Page 19: Bullx HPC eXtreme computing cluster references

Needs
A system that matches the University's strong involvement in sustainable development
A minimum performance of 45 Tflops

Solution
A bullx configuration that optimizes power consumption, footprint and cooling, with Direct Liquid Cooling nodes and free cooling:
136 dual-socket bullx DLC B710 compute nodes
InfiniBand FDR
Lustre file system
bullx supercomputer suite
PUE of 1.13 (see the note after this list)
Free cooling installation
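For reference, PUE (Power Usage Effectiveness) is the ratio of total facility power to the power drawn by the IT equipment itself, so a PUE of 1.13 means that cooling and other overheads add only about 13% on top of the IT load:

    \mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}},
    \qquad \mathrm{PUE} = 1.13 \;\Rightarrow\; P_{\text{overhead}} = 0.13 \times P_{\text{IT equipment}}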

Page 20: Bullx HPC eXtreme computing cluster references

Needs
A solution focusing on green IT, with a very innovative collaboration and research program

Solution
A bullx supercomputer based on the bullx DLC series
Phase 1 (Q1 2013) - Throughput: 180 bullx B500 blades, InfiniBand QDR; HPC: 270 bullx DLC B710 nodes, 24 bullx B515 accelerator blades, InfiniBand FDR
Phase 2 (Q3 2014) - HPC: 630 bullx DLC B720 nodes, 4 SMP nodes with 2 TB of memory each
A total peak performance > 1.6 Petaflops at the end of phase 2

Page 21: Bullx HPC eXtreme computing cluster references

Needs
Replace the current 41.8 Tflops vector system with a scalar supercomputer
Two identical systems: one for research and one for production

Solution
A bullx configuration that optimizes power consumption, footprint and cooling, with Direct Liquid Cooling nodes:
Phase 1 (2013): 2 x 475 Tflops peak - 2 x 990 dual-socket bullx B710 compute nodes with Intel® Xeon® 'Ivy Bridge EP' processors (see the arithmetic sketch after this list)
Phase 2 (2015): 2 x 2.85 Pflops peak - 2 x 1800 dual-socket bullx B710 compute nodes with Intel® Xeon® 'Broadwell EP' processors
Fat-tree InfiniBand FDR
Lustre file system
bullx supercomputer suite
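The Phase 1 figure can be sanity-checked with the usual peak-performance formula: nodes x sockets x cores x clock x flops per cycle. The following is a minimal sketch; the 12-core, 2.5 GHz 'Ivy Bridge EP' parts and the 8 double-precision flops per cycle are illustrative assumptions, not specifications taken from the slide.

    # Rough peak-performance estimate for one of the two Phase 1 systems.
    # Assumed CPU parameters (not on the slide): 12 cores per socket at 2.5 GHz,
    # 8 double-precision flops per cycle (AVX: one 4-wide add + one 4-wide multiply).
    nodes = 990
    sockets_per_node = 2
    cores_per_socket = 12
    clock_ghz = 2.5
    flops_per_cycle = 8

    peak_gflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * flops_per_cycle
    print(f"Estimated peak: {peak_gflops / 1000:.0f} Tflops")   # about 475 Tflops, matching the slide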

Page 22: Bullx HPC eXtreme computing cluster references

Needs
Replace the 65 Tflops Dutch national supercomputer Huygens
Support a wide variety of scientific disciplines
A solution that can easily be extended
An HPC vendor who can also be a partner

Solution
A bullx supercomputer delivered in 3 phases:
Phase 1: 180 bullx Direct Liquid Cooling B710 nodes (Intel Sandy Bridge-based) + 32 bullx R428 E3 fat nodes
Phase 2: 360 bullx Direct Liquid Cooling B710 nodes (Intel Ivy Bridge-based)
Phase 3: 1080 bullx Direct Liquid Cooling B710 nodes (Intel Haswell-based)
A total peak performance in excess of 1.3 Petaflops in phase 3

Page 23: Bullx HPC eXtreme computing cluster references

Needs
Provide high-level computing resources for the R&D teams at AREVA, Astrium, EDF, INERIS, Safran and CEA
Meet the requirements of a large variety of research topics

Solution
The 200 Tflops supercomputer “Airain”, with a flexible architecture:
594 bullx B510 compute nodes
InfiniBand QDR interconnect
Lustre file system
bullx supercomputer suite
+ an extension used for genomics: 180 “memory-rich” bullx B510 compute nodes (128 GB of RAM each)

Page 24: Bullx HPC eXtreme computing cluster references

Needs
Replacement of the bullx cluster installed in 2007
Support a diverse community of users, from experienced practitioners to those just starting to consider HPC

Solution
A dedicated MPI compute node partition:
128 dual-socket bullx B510 compute nodes with Intel® Xeon® E5-2670 processors
16 “memory-rich” nodes for codes with large memory requirements
A dedicated HTC compute node partition:
72 refurbished bullx B500 blades
InfiniBand QDR
Lustre
bullx supercomputer suite

Page 25: Bullx HPC eXtreme computing cluster references

Needs
Upgrade the computing capacities dedicated to aerodynamics

Solution
A homogeneous cluster of 72 compute nodes
A few specialized nodes used either as “pure” compute nodes or as hybrid nodes offloading part of the calculations to accelerators
bullx R424-E/F3 2U servers, each housing 4 compute nodes
1 NVIDIA 1U system with 4 GPUs
InfiniBand QDR
Managed with the bullx supercomputer suite

“The bullx cluster provides the ease of use and robustness that our engineers are entitled to expect from an everyday tool for their work.”

Page 26: Bullx HPC eXtreme computing cluster references

Atomic Weapons Establishment

AWE confirms its trust in Bull with the upgrade of its 3 bullx supercomputers:
New blades in the existing infrastructure
Simple replacement of the initial blades with new bullx B510 blades featuring the latest Sandy Bridge EP CPUs
Willow (2 x 35 Tflops) upgraded to Whitebeam (2 x 156 Tflops)
Blackthorn (145 Tflops) upgraded to Sycamore (398 Tflops)
All existing bullx chassis re-used to house the new blades
Upgrade of the storage systems
Cluster software upgraded to bullx supercomputer suite 4

Page 27: Bullx HPC eXtreme computing cluster references

Bundesanstalt für Wasserbau

Needs
Replace one of their 2 compute clusters, used for:
2D and 3D modeling of rivers
3D modeling of flows
Reliability analyses (Monte Carlo simulations) (see the sketch after this list)

Solution
126 bullx B510 compute nodes (2x Intel® Xeon® E5-2670)
Bull Cool Cabinet Doors (water-cooled)
Fully non-blocking InfiniBand QDR interconnect network
Panasas storage system (110 TB)
Cluster software: Hpc.manage powered by scVENUS (a solution from science + computing, a Bull Group company)
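The Monte Carlo reliability analyses listed among the needs above are a textbook example of an embarrassingly parallel workload, which is why they map so well onto a cluster of this kind. The following is a minimal, purely illustrative sketch (not BAW's actual code) of the pattern: independent simulation batches run concurrently and their results are averaged at the end.

    # Illustrative only: independent Monte Carlo batches run in parallel, the
    # pattern that lets reliability analyses scale across the cores of a cluster.
    import random
    from concurrent.futures import ProcessPoolExecutor

    def failure_probability(n_samples: int, seed: int) -> float:
        """Estimate a failure probability from n_samples random trials.
        The 'model' is a placeholder threshold test, not a real hydraulic model."""
        rng = random.Random(seed)
        failures = sum(1 for _ in range(n_samples) if rng.gauss(1.0, 0.3) < 0.5)
        return failures / n_samples

    if __name__ == "__main__":
        batches = 16                     # in practice: one batch per core or MPI rank
        samples_per_batch = 100_000
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(failure_probability,
                                    [samples_per_batch] * batches, range(batches)))
        print(f"Estimated failure probability: {sum(results) / batches:.4f}")

On a real cluster the same pattern is usually expressed with MPI ranks rather than local processes, but the structure of the computation is identical.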

Page 28: Bullx HPC eXtreme computing cluster references

HPC Midlands Consortium

Needs
Make world-class HPC facilities accessible to both academic and industrial researchers, and especially to smaller companies, to facilitate innovation, growth and wealth creation
Encourage industrially relevant research to benefit the UK economy

Solution
A bullx supercomputer with a peak performance of 48 Tflops:
188 bullx B510 compute nodes (Intel® Xeon® E5-2600)
Lustre parallel file system (with LSI/NetApp hardware)
Water-cooled racks

Page 29: Bullx HPC eXtreme computing cluster references

This research center, active in the fields of energy and transport, wanted:
A 100 Tflops extension to their computing resources
To provide sustainable technologies to meet the challenges of climate change, energy diversification and water resource management

Solution
A bullx supercomputer delivering 130 Tflops peak:
392 bullx B510 compute nodes (Intel® Xeon® E5-2670)
New-generation InfiniBand FDR interconnect
GPFS on LSI storage

Page 30: Bullx HPC eXtreme computing cluster references

Needs
Create a world-class manufacturing research centre
Finite-element-based modeling of detailed 3D time-dependent manufacturing processes

Solution
72 bullx B510 compute nodes (Intel® Xeon® E5-2670)
1 bullx S6030 supernode

Page 31: Bullx HPC eXtreme computing cluster references

Needs
One of the newest public universities in Spain, it needed a high-density compute cluster:
For the Physical Chemistry Division
To design multifunctional nano-structured materials

Solution
A complete solution with:
36 bullx B500 compute blades (Intel® Xeon® 5640)
Installation, training and 5-year maintenance

Page 32: Bullx HPC eXtreme computing cluster references

This innovative engineering company, specializing in design for the motor racing industry, wanted to:
Support the use of advanced virtual engineering technologies, developed in-house, for complete simulated vehicle design, development and testing

Solution
198 bullx B500 compute blades
2 memory-rich bullx S6010 compute nodes for pre- and post-meshing

Page 33: Bullx HPC eXtreme computing cluster references

Needs
“Keep content looking great wherever it’s played”
An ultra-dense HPC platform optimized for large-scale video processing

Solution
TITAN, built on bullx B510 blades: a scalable video processing platform that enables massively parallel content transcoding into multiple formats, with a very high degree of fidelity to the original (see the illustrative sketch below)
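A purely illustrative sketch (not Envivio's actual software) of the fan-out pattern behind such massively parallel transcoding: each source asset becomes one independent job per target format, so the work spreads naturally across the blades of the cluster. The transcode_to helper and the format list are hypothetical placeholders.

    # Illustrative only: fan one source file out into independent per-format
    # transcode jobs; each job can run on any core or blade of the cluster.
    from concurrent.futures import ProcessPoolExecutor

    TARGET_FORMATS = ["1080p_h264", "720p_h264", "480p_h264", "audio_only"]

    def transcode_to(source: str, target_format: str) -> str:
        """Hypothetical placeholder: a real platform would invoke an encoder here."""
        return f"{source}.{target_format}"   # pretend output file name

    def transcode_all(source: str) -> list:
        # One independent job per target format.
        with ProcessPoolExecutor() as pool:
            return list(pool.map(transcode_to,
                                 [source] * len(TARGET_FORMATS), TARGET_FORMATS))

    if __name__ == "__main__":
        print(transcode_all("master_asset.mov"))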

Page 34: Bullx HPC eXtreme computing cluster references

This Belgian research center, working for the aeronautics industry, wanted to:
Double their HPC capacity
Find an easy way to extend their computer room capacity

Solution
A bullx system delivering 40 Teraflops (bullx B500 compute nodes)
Installed in a mobull mobile data centre

Page 35: Bullx HPC eXtreme computing cluster references

Banco Bilbao Vizcaya Argentaria needed to reduce the run time of its mathematical models to:
Manage financial risks better
Gain a competitive advantage and get the best price for complex financial products

Solution
A bullx cluster delivering 41 Teraflops, with:
80 bullx R424-E2 compute nodes
2 bullx R423-E2 service nodes

Page 36: Bullx HPC eXtreme computing cluster references

The Dutch meteorological institute (KNMI) was looking for:
More computing power, to be able to issue early warnings in case of extreme weather and to enhance capabilities for climate research

Solution
A system 40 times more powerful than KNMI’s previous system:
396 bullx B500 compute nodes, equipped with Intel® Xeon® 5600 series processors
9.5 TB of memory
Peak performance of 58.2 Tflops

“The hardware, combined with Bull's expert support, gives us confidence in our cooperation.”

Page 37: Bullx HPC eXtreme computing cluster references

300 Tflops peak:
A massively parallel (MPI) section including 1,350 bullx B500 processing nodes with a total of 16,200 Intel® Xeon® cores
An SMP (symmetric multiprocessing) section including 11,456 Intel® Xeon® cores, grouped into 181 bullx S6010/S6030 supernodes
Over 90 Terabytes of memory

Page 38: Bullx HPC eXtreme computing cluster references

Join the Bull User group for eXtreme computing

www.bux-org.com