System x WW Marketing, Hyper Scale Computing Solutions − October 2014
Step up, Scale out with NeXtScale System
Delivering Insight Faster….
2014 LENOVO INTERNAL. ALL RIGHTS RESERVED.
Agenda
NeXtScale Overview: NeXtScale Family; Client Benefits
Introducing IBM NeXtScale System M5: M5 Enhancements; Target Market Segments; Messaging – Scale, Flexible, Simple
NeXtScale with Water Cool Technology
Timeline
Solutions
“Hartree Centre needed a powerful, flexible server system that could drive research in energy efficiency as well as economic impact for its clients. By extending its IBM System x platform with IBM NeXtScale System, Hartree Centre can now move to exascale computing, support sustainable energy use and help its clients gain a competitive advantage.”
— Prof. Adrian Wander, Director of the Scientific Computing Department, Hartree Centre
“In my 20 years of working with supercomputers, I’ve never had so few failures out of the box. The NeXtScale nodes were solid from the first moment we turned them on.”
— Patricia Kovatch, Associate Dean, Scientific Computing, Icahn School of Medicine at Mount Sinai
“IBM delivered server hardware of exceptional performance and provided superior support, allowing us to rapidly integrate the system into our open standards based research infrastructure. Complementing the technical excellence of NeXtScale System, IBM has a long track record in creating high-performance computing solutions that gives us confidence in its capabilities.”
— Paul R. Brenner, Associate Director, HPC, The University of Notre Dame, Indiana
Introducing IBM NeXtScale System M5: Modular, high-performance system for scale-out computing
Compute: dense high-performance server
Chassis: low-cost chassis provides only power and cooling
Storage*: dense storage tray (8x 3.5" HDDs)
Acceleration: dense PCI tray (2x 300W GPU/Phi)
Standard rack: standard 19" racks; top-of-rack switching and choice of fabric; open-standards-based toolkit for deployment and management
Primary workloads: High Performance Computing
* M5 support to be available with 12Gb version at Refresh 1
Deliver Insight Faster
Create a system tailored to precisely meet your needs now
Provides the ability to adapt rapidly to new needs and new technology
Drive out complexity with a single architecture
Rapid provisioning, easy to manage, seamless growth
Smart delivery of scale yields better economics and greater impact per dollar
Significant CAPEX and OPEX savings while conserving energy
NeXtScale System provides the scale, flexibility and simplicity to help clients solve problems faster
Secure. Efficient. Reliable. Flexible. Simple. Scale.
Introducing IBM NeXtScale System M5
Compute node: IBM NeXtScale nx360 M5
Chassis: NeXtScale n1200 Enclosure – Air or Water Cool Technology
Storage NeX node*: nx360 M5 + Storage NeX – add RAID card + cable; dense 32TB in 1U; up to 8x 3.5" HDDs; simple direct connect; mix and match
PCI NeX node (GPU / Phi, new): nx360 M5 + PCI NeX – add PCI riser + GPUs; 2x 300W GPU in 1U; full x16 Gen3 connect; mix and match
New compute node fits into existing NeXtScale infrastructure – one architecture optimized for many use cases: dense compute, top performance, energy efficient, air or water cool technology, investment protection
* M5 support to be available with 12Gb version at Refresh 1
NeXtScale System M5 Enhancements: Incorporates Broad Customer Requirements
39% faster compute performance¹; 50% more cores²; 2X memory capacity³; 14% faster memory⁴; 2X hard drives, all-new hot-swap HDD⁵; full Gen3 x16 ML2⁶; 50% more PCI slots⁷; choice of air or water cool
What's New:
- 50% more cores and up to 39% faster compute performance* with Intel Xeon E5-2600 v3 processors (up to 18 cores)
- Double the memory capacity with 16 DIMM slots (2133MHz DDR4, up to 32GB each)
- Double the storage capacity with 4x 2.5" drives
- Hot-swap HDD option
- New RAID slot in rear provides greater PCI flexibility
- x16 Gen3 ML2 slot supports InfiniBand / Ethernet adapters for increased configuration flexibility at lower price (increase from x8)
- Choice of air or water cool
- Investment protection – chassis supports M4 and M5
Key Market Segments:
- HPC, Technical Computing, Grid, Cloud, Analytics, Managed Service Providers, Scale-out datacenters
- Direct and Business Partner enabled solutions
Target Segments – Key Requirements
High Performance Computing: high-bin EP processors for maximum performance; high-performing memory; InfiniBand; 4-HDD capacity; GPU support
Cloud Computing: mid- to high-bin EP processors; lots of memory (>256GB/node) for virtualization; 1Gb / 10Gb Ethernet; 1-2 SS drives for boot
Data Center Infrastructure: low-bin processors (low cost); smaller memory (low cost); 1Gb Ethernet; 2 hot-swap drives (reliability)
Virtual Desktop: lots of memory (>256GB per node) for virtualization; GPU support
Data Analytics: mid- to high-bin EP processors; lots of memory (>256GB per node); 1Gb / 10Gb Ethernet; 1-2 SS drives for boot
NeXtScale M5 addresses segment requirements
High Performance Computing – NeXtScale M5 provides: Intel EP (high bin); up to 36 cores per node; fast memory (2133MHz), 16 slots; FDR ML2 InfiniBand, future EDR; broad range of 3.5" HDDs; 4 internal 2.5" HDDs, 2 hot swap; up to 2 GPUs per 1U
Cloud Computing – NeXtScale M5 provides: Intel EP (mid to high bin); up to 36 cores per node; up to 512GB memory per node; Ethernet (1/10Gb), PCIe, ML2; broad range of 3.5", 2.5" HDDs; 2 front hot-swap drives
Data Center Infrastructure – NeXtScale M5 provides: Intel EP (low bin); up to 36 cores per node; low-cost 4/8GB memory; onboard Gb Ethernet standard; 2 front hot-swap drives (2.5"); integrated RAID slot
Data Analytics – NeXtScale M5 provides: Intel EP (mid to high bin); up to 36 cores per node; up to 512GB memory per node; Ethernet (1/10Gb), PCIe, ML2; broad range of 3.5", 2.5" HDDs
Virtual Desktop – NeXtScale M5 provides: choice of processors; up to 512GB memory; up to 2 GPUs per 1U
IBM NeXtScale nx360 M5 Server (system infrastructure, simple architecture)
IBM NeXtScale nx360 M5 – The Compute Node
½-wide 1U, 2-socket server
Intel E5-2600 v3 processors (up to 18C)
16x DIMM slots (DDR4, 2133MHz)
2 front hot-swap HDD option (or standard PCI slot)
4 internal HDD capacity
New embedded RAID PCI slot
ML2 mezzanine for x16 FDR and Ethernet
Native expansion (NeX) support – storage and GPU/Phi
Supported on same chassis as M4 version
Callouts: power, LEDs; dual-port ML2 x16 mezzanine card (IB/Ethernet); KVM connector; 1 GbE ports; optional hot-swap HDD or PCIe adapter; x24 PCIe 3.0 slot; E5-2600 v3 CPUs; 16x DIMMs; drive bay(s); x16 PCIe 3.0 slot; RAID slot
IBM NeXtScale nx360 M5 Server
2 hot-swap SFF HDDs or SSDs
Dual-port x16 ML2 mezzanine card (InfiniBand / Ethernet)
Labeling tag for system naming, asset tagging
1 GbE ports – dedicated or shared management
Standard full-height, half-length PCIe 3.0 slot
KVM connector; power, LEDs
Choice of hot-swap HDD option or PCI slot option
4x 2.5" drives supported per node
IBM NeXtScale n1200 Enclosure (system infrastructure, optimized shared infrastructure)
Front view: 12 half-wide bays (Bay 1 through Bay 12)
Rear view: 6x power supplies (3 per side), 10x 80mm fans (5 per side), Fan and Power Controller
6U chassis, 12 bays, ½-wide component support
Up to 6x 900W or 1300W power supplies; N+N or N+1 configurations
Up to 10 hot-swap fans
Fan and Power Controller
Mix and match compute, storage, or GPU nodes
No built-in networking
No chassis management required
Mix and match M4 and M5 air-cool nodes¹
Investment protection – chassis supports M4 or M5 nodes
NeXtScale – Choice of Air or Water Cooling
IBM NeXtScale System – your choice:
Water Cool Technology: innovative direct water cooling; no internal fans; extremely energy efficient; extremely quiet; lower power; dense, small footprint; lower operational cost and TCO; ideal for geographies with high electricity costs or space constraints
Air Cool: air cooled, internal fans; fits in any datacenter; maximum flexibility; broadest choice of configurable options supported; supports Native Expansion nodes (Storage NeX; PCI NeX – GPU, Phi)
IBM NeXtScale System with Water Cool Technology (WCT)
nx360 M5 WCT Compute Tray (2 nodes) callouts: power, LEDs; dual-port ML2 (IB/Ethernet); labeling tag; 1 GbE ports; PCI slot for Connect-IB; CPUs with liquid-cooled heatsinks; cooling tubes; 16x DIMMs
n1200 WCT Enclosure: 6 full-wide bays, 12 compute nodes
n1200 WCT Manifold
Water Cool Node & Chassis:
Full-wide, 2-node compute tray
6U chassis, 6 bays (12 nodes/chassis)
Manifolds deliver water directly to nodes
Water circulated through cooling tubes for component-level cooling
Intel E5-2600 v3 CPUs
16x DDR4 DIMM slots
InfiniBand FDR support (ML2 or PCIe)
6x 900W or 1300W PSUs
No fans except in PSUs
Drip sensor / error LEDs
NeXtScale – Key Messages
SCALE:
Even a small cluster can change the outcome
Start at any size and grow as you want
Efficient at any scale with choice of air or water cooling
Maximum impact/$
Optimized stacks for performance, acceleration, and cloud computing
FLEXIBLE:
Single architecture with Native Expansion
Built on open standards
Optimized for your data center today and tomorrow
Channel and box-ship capable
One part number unlocks IBM's service and support
Flexible storage and energy management
SIMPLE:
The back is now the front – simplify management and deployment
Get in production faster with Intelligent Cluster
Optimized shared infrastructure without compromising performance
"Essentials only" design
NeXtScale – Key Messages
SCALE:
Even a small cluster can change the outcome
Start at any size and grow as you want
Efficient at any scale with choice of air or water cooling
Maximum impact/$
Optimized stacks for performance, acceleration, and cloud computing
Game-Changing Results
Life insurance actuarial workbook:
1,700 records that took 14 hours on a single workstation now take 2.5 minutes on a small cluster
1 million records that took 7.5 days on 600 workstations now take 2 hours on a 3-rack cluster with only 150 nodes
Make better decisions by running larger, more sophisticated models
Spot trends faster and more effectively by reducing total time to results
Manage risk better by increasing accuracy and visibility of models and datasets
Even a small cluster can change the outcome
Scale: The Power of Scale Delivers Benefits at Any Size
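The speedups quoted in the actuarial example above can be sanity-checked with a few lines of arithmetic; the record counts and times come from the slide, and the script only converts units. A minimal sketch:

```python
# Sanity-check the actuarial speedup claims from the slide (figures as quoted).
single_ws_hours = 14.0      # 1,700 records on one workstation
cluster_minutes = 2.5       # same workload on a small cluster
speedup_small = single_ws_hours * 60 / cluster_minutes
print(f"Small-cluster speedup: {speedup_small:.0f}x")           # 336x

farm_hours = 7.5 * 24       # 1M records on 600 workstations
rack_hours = 2.0            # same workload on a 150-node, 3-rack cluster
speedup_large = farm_hours / rack_hours
print(f"3-rack cluster speedup: {speedup_large:.0f}x wall-clock, with 1/4 the machines")  # 90x
```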
Scale: Start at Any Size. Grow in Any Increment.
Growing node by node? Single nodes and chassis are available direct from IBM and optimized for availability through our partners – install the chassis today, grow into it tomorrow.
Want to speed how quickly you can grow? Configured racks or chassis ship fully assembled – client-driven, choice-optimized; "Starter Packs" are appliance-easy, CTO-flexible.
Growing by leaps and bounds? Complete clusters and containers can arrive ready to power on with 'personality' applied – racks at a time, or complete infrastructure-ready containers ('NeXtPods').
Single nodes and chassis | Configured racks or chassis | Complete clusters, departmental solutions, and containers
NeXtScale System with Water Cool Technology
Scale: Achieve extreme scale with ultimate efficiency
40% more energy-efficient data center¹
10% more power-efficient server²
85% heat recovery by water³
Water Cool Technology:
Requires no auxiliary cooling³
No chillers required due to warm-water cooling (up to 45°C)³
No fans required for compute elements; small power-supply fans only
Lower operational costs, and quieter
Re-use warm water to heat other facilities
Run processors at higher frequencies (Turbo mode)
1. Based on comparisons between air-cooled and water-cooled IBM iDataPlex M4 servers
2. LRZ, a client in Germany, data center numbers
3. Geography dependent
Customer Benefits
Scale: Maximum impact per $. Per ft². Per rack.
80% less racks per solution⁶
14% faster memory²
15% power savings with Platform LSF Energy Aware⁴
One nx360 with SSDs delivers the same IO performance as 355 hard disks³
50% more processing – more high-frequency cores¹
2X more FLOPs per cycle than a Xeon E5-2600 v2⁷
1.1 TFLOP performance achieved per server⁸
40% less weight per system⁵
2.7X increase in FLOPs/Watt¹⁰
50% less servers required⁹
Maximize the capability of your data center floor with dense and essential IT: more cores per floor tile; easy front-access serviceability; choice of rack infrastructure; light weight + high performance can reduce floor loading
Race-car design – performance and cost point ahead of features/functions: top-bin E5-2600 v3 processors; fast memory running at 2133MHz; choice of SATA, SAS, or SSD on board; open ecosystem of high-speed IO interconnects
Scale: Platform Computing – complete, powerful, fully supported
Workload and Resource Management:
Platform LSF Family – batch, MPI workloads with process management, monitoring, analytics, portal, license management
Platform HPC – simplified, integrated HPC management for batch, MPI workloads, integrated with systems
Platform Symphony Family – high-throughput, near-real-time parallel compute and Big Data / MapReduce workloads
Applications: Big Data / Hadoop; Simulation / Modeling; Analytics; Social & Mobile
Heterogeneous Resources: compute, storage, network – virtual, physical, desktop, server, cloud
Infrastructure Management: Platform Cluster Manager Family – provision and manage single clusters (Standard) to dynamic clouds (Advanced)
Data Management: Elastic Storage based on General Parallel File System (GPFS) – high-performance software-defined storage
Scale: Performance Optimized Stack – From Hardware Up
Applications: Risk Analysis; Simulation / Modeling; Analytics
Application Libraries: Intel® Cluster Studio, OpenMPI, MVAPICH2, Platform MPI
Workload and Resource Management: Platform LSF, Platform HPC, Adaptive Computing Moab, Maui/Torque
Global/Parallel Filesystem: GPFS, Lustre, NFS
Operating Systems: RHEL, SuSE, Windows, Ubuntu
Bare Metal Management / Provisioning / Monitoring: Extreme Cluster Administration Toolkit (xCAT)
Resources: compute, storage, network – virtual, physical, desktop, server, cloud
Scale: GPGPU Accelerator Optimized Stack – From Hardware Up
Applications: Oil and Gas; Life Sciences; Finance; Molecular Dynamics
Application Libraries: Intel® Cluster Studio, CUDA, OpenCL, OpenGL
Workload and Resource Management: Platform LSF, Platform HPC, Adaptive Computing Moab, Maui/Torque
Global/Parallel Filesystem: GPFS, Lustre, NFS
Operating Systems: RHEL, SuSE, Windows
Bare Metal Management and Provisioning: Extreme Cluster Administration Toolkit (xCAT)
Resources: compute, storage, network – virtual, physical, desktop, server, cloud
Scale: Cloud Compute Optimized Stack – From Hardware Up
Cloud Management Solutions:
SmartCloud Orchestrator – for customers who require optimized utilization, multi-tenancy, and enhanced security
IBM Cloud Manager with OpenStack – optimized with automation, security, resource sharing, and monitoring over OpenStack CE
OpenStack CE – for customers looking to deploy complete open-source solutions with little to no enterprise features
Common Cloud Management Platform: provides server, storage, and network integration; access to OpenStack APIs
Serves: Private Cloud, MSP/CSP, Public Cloud Providers, Applications
Hypervisors: KVM, VMware, Xen, Hyper-V
Bare Metal Management and Onboarding: Puppet, xCAT, Chef, SmartCloud Provisioning
Resources: compute, storage, network – virtual, physical, desktop, server, cloud
NeXtScale – Key Messages
FLEXIBLE:
Single architecture with Native Expansion
Built on open standards
Optimized for your data center today and tomorrow
Channel and box-ship capable
One part number unlocks IBM's service and support
Flexible storage and energy management
Flexible: Native eXpansion – Adding Value, not Complexity
The base node delivers robust and dense raw compute capabilities. NeXtScale's Native Expansion allows seamless upgrades of the base node to add common functionality – all on a single architecture, with no need for exotic connectors or unique components.
Storage: nx360 M5 + Storage NeX* + RAID card + SAS cable + HDDs = IBM NeXtScale nx360 M5 with Storage NeX
Graphics Acceleration / Co-processing: nx360 M5 + PCI NeX + GPU riser card + GPU/Phi = IBM NeXtScale nx360 M5 with Accelerator NeX
* M5 support to be available with 12Gb version at Refresh 1
Flexible: Designed on Open Standards = Seamless Adoption
xCAT: provides remote and unattended methods to assist with deploying, updating, configuring, and diagnosing
IBM ToolsCenter: consolidated, integrated suite of management tools; powerful Bootable Media Creator, FW updating
Platform Computing: workloads managed seamlessly with Platform LSF; deploy clusters easily with Platform HPC
SDN friendly: networking direct to system, no integrated proprietary switching; support for 1/10/40Gb, InfiniBand, FCoE, and VFAs
UEFI and IMM: standards-based hardware that combines diagnostics and remote control, no embedded SW; richer management experience and future-ready
OpenStack ready: deploy OpenStack with Chef or Puppet; Mirantis Fuel, SuSE Cloud, IBM SmartCloud
IPMI 2.0 compliant: use any IPMI-compliant management software – Puppet, Avocent, IBM Director, iAMT, xCAT, etc.; OpenIPMI, ipmitool, ipmiutils, FreeIPMI compatible
System monitoring: friendly with open-source tools like Ganglia, Nagios, Zenoss, etc.; use with any RHEL/SuSE (and clones) or Windows-based tools
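Because the nodes are IPMI 2.0 compliant, any standard IPMI tool can drive them without vendor software. A minimal sketch of querying node power state with ipmitool over the lanplus interface; the hostnames and credentials are placeholders, and `power status` is a standard ipmitool subcommand:

```python
# Sketch: query a node's power state via IPMI 2.0 using ipmitool (lanplus).
# Hostnames and credentials below are placeholders, not real systems.
import subprocess

def ipmi_cmd(host, user, password, *args):
    """Build an ipmitool command line for one BMC (e.g., an nx360 M5 IMM)."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *args]

def power_status(host, user, password):
    """Run `ipmitool ... power status` and return its trimmed stdout."""
    result = subprocess.run(ipmi_cmd(host, user, password, "power", "status"),
                            capture_output=True, text=True)
    return result.stdout.strip()

# Example call (needs a reachable BMC):
#   power_status("node01-imm.example.com", "USERID", "PASSW0RD")
```

The same pattern extends to any other standard IPMI verb (sensor readings, chassis identify, SEL dumps), which is what makes the open-standards claim useful in practice.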
Flexible: Optimized with your Data Center in Mind – today and tomorrow
The Challenge: package more into the data center without breaking the clients' standards; lower power costs all day long – at peak usage times and slow times; maximize energy efficiency in the data center.
The Solution: NeXtScale + IBM Innovation. The essentials-only design allows more servers to fit into the data center. Designed to consume less power and to lower energy costs at peak and at idle. Smart power management can drive down power needs when systems are at idle. Choice of air- and water-cooled servers in either IBM racks or existing racks. 40% better energy efficiency for water-cooled solutions.
Highlights: reduce power cost during slow times; lower energy usage during the peak; 2X more servers per floor tile; 40% more energy efficient with Water Cool; lowest operational costs with Water Cool; choice of air- or water-cooled racks; our rack or yours.
Flexible: How Do You Want Your IT to Arrive?
IBM Intelligent Cluster: one part number needed for the entire solution support – no matter the brand of component
75%¹ faster time from arrival to production readiness
SAVE¹: 105 lbs of cardboard; 54.6 ft³ of styrofoam; 288 linear feet of wood; 21,730 fewer paper inserts
NeXtScale can ship fully configured, ready to power on: fully racked and cabled; labeling with user-supplied naming; pre-programmed IMMs and addresses; burn-in testing before shipment at no added cost
Prefer to receive systems in boxes? No problem.
¹ Per rack
Flexible: Confidence it is high quality and functioning upon arrival
Is this one rack, or is it >9,000 parts? It's both.
Comprehensive list of interoperability-proven components for building out solutions: IBM servers; IBM and 3rd-party switching; IBM and 3rd-party storage; IBM and 3rd-party software; countless cables, cards, and add-ins. The best-recipe approach yields confident interoperability.
Each rack is built from over 9,000 individual parts. A manufacturing LINPACK test provides lengthy burn-in on all parts in the solution: confidence the parts are installed and functioning properly; any failing parts are replaced prior to shipment; reduces early-life part fallout for our clients.
Consistent performance and quality are confirmed before shipment.
Flexible: Global Services & Support
Prevent downtime with proactive, first-rate service. Resolve outages faster if they do occur. Optimize IT and end-user productivity – and revenue – to enhance business results. Protect your brand reputation and keep your customer base. Simplify support to save time, resources, and cost.
Speed + Quality: IBM is a recognized leader in services and support
57 call centers worldwide with regional and localized language support
23,000 IT support specialists worldwide who know technology
585 parts centers with 13 million IBM and non-IBM parts
94% first-call hardware success rate
A combined total of 6.8M hardware and software service requests
114 hardware and software development laboratories
Rated #1 in Technical Support
Parts are delivered within 4 hours for 99% of US customers
75% of software calls resolved by first point of contact
Lenovo's Service Commitment: "After the deal closes, IBM will continue to provide maintenance delivery on Lenovo's behalf for an extended period pursuant to the terms of a five-year maintenance service agreement with IBM. Customers who originated contracts with IBM should not see a change in their maintenance support for the duration of the customer's contract."
Source: http://shop.lenovo.com/us/en/news/ibm-server
Flexible: NeXtScale Mini-SAS Storage Expansion
Natively expand beyond the node with the onboard mini-SAS port – ideal for dense storage requirements.
Simply choose among available storage controllers: connect the nx360 node to a JBOD or storage controller of your choice.
nx360 M5 mini-SAS port + RAID controller + mini-SAS cable + storage enclosure, for example:
DCS3700 JBOD – dense object storage; Hadoop; analytics; HPC
V3700 JBOD – low-cost object storage; Hadoop; analytics; low-cost block storage; NFS
V7000 / V3700 – virtualized storage; secure encryption; compression; block storage
Flexible: Dense Storage Customer
NeXtScale chassis and nodes: 24x 2-socket E5-2600 v3 nodes per chassis pair; dual-port 10G Ethernet; 2x 1G management ports; SAS HBA with 6Gb external connector
Storage JBODs: 60 hot-swap drives in 4U; 6 JBODs/rack; 4TB NL SAS disks; pure JBOD, no zoning
Networking: 1x 64-port 10G Ethernet switch (optionally 2 switches for redundancy), uplinks required; 2x 48-port 1G Ethernet switches for management (1x dedicated + 1x shared port), connecting to nodes, JBODs, chassis FPCs, and PDUs
1.44 petabytes of raw storage per rack!
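The 1.44 PB raw figure follows directly from the build-out above; assuming 6 JBODs per rack, 60 drives per JBOD, and 4 TB per NL SAS drive, the arithmetic is:

```python
# Sanity check of the "1.44 PB raw per rack" figure from the slide.
jbods_per_rack = 6
drives_per_jbod = 60
tb_per_drive = 4            # 4 TB NL SAS, as assumed from the configuration
raw_tb = jbods_per_rack * drives_per_jbod * tb_per_drive
print(raw_tb, "TB =", raw_tb / 1000, "PB raw per rack")   # 1440 TB = 1.44 PB
```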
Flexible: Power efficiency designed into HW, SW and management
Efficient Hardware:
80 Plus Platinum power supplies at over 94% efficiency – saves 3-10%
Extreme-efficiency voltage regulation – saves 2-3%
Larger, more efficient heat sinks require less air – saves 1-2%
Smart sharing of fans and power supplies reduces power consumption – saves 2-3%
Underutilized power supplies can be placed into a low-power standby mode
Energy-efficient turbo mode; fewer parts = less power; Energy Star Version 2¹
Choice of air or water cooling; no fans or auxiliary cooling required for the water-cooled version, saving power cost
Pre-set operating modes – tune for efficiency, performance, or minimum power
Control beyond the server:
Chassis-level power metering and control
Power optimally designed for 1-phase or 3-phase power feeds
Optional intelligent and highly efficient PDUs for monitoring and control
Powerful sleep-state² control reduces power and latency
Powerful Energy Management:
xCAT APIs allow embedding HW control into management applications
LSF Energy Aware features allow energy tuning on the fly
Platform software can target low-bin CPU applications to lower power on CPUs in mixed environments
Platform Cluster Manager Advanced Edition can completely shut off nodes that are not in use
Friendly to open-source monitoring tools, allowing utilization reporting
Autonomous power management for various subsystems within each node
(1) Pending announcement of product  (2) On select IBM configurations
Flexible: Energy Aware Scheduling – Optimize Power Consumption with Platform LSF®
On Idle Nodes:
Policy-driven power saving – suspend the node to the S3 state (saves ~60W)** after it has been idle for a configurable period of time; policy windows (e.g., 10:00 PM – 7:00 AM); site-customizable to use other suspension methods
Power-saving-aware scheduling – schedule jobs to use idle nodes first (power-saved nodes as a last resort); aware of job requests and wakes up nodes precisely on demand; safe period before running jobs on resumed nodes
Manual management – suspend, resume, history
On Active Nodes:
Ability to set the node/core frequency for a specific job / application / user
Set thresholds based on environmental factors – such as node temperature
Energy-saving policies** – minimize energy to solution or minimize time to solution by intelligently controlling CPU frequencies
Collect the power usage for an application (AC and DC)**
Make intelligent predictions – performance, power consumption, and runtime of applications at different frequencies**
** Only available on IBM NeXtScale and iDataPlex
NeXtScale – Key Messages
SIMPLE:
The back is now the front – simplify management and deployment
Get in production faster with Intelligent Cluster
Optimized shared infrastructure without compromising performance
"Essentials only" design
Simple: Making management and deployment simple
NeXtScale vs. the competition:
Add/remove/power servers without touching the power cables
Know which cable you are pulling – reduce service errors when maintaining systems (with the competition: "Which cable do I pull?" Remove power in the rear, from the right system, before pulling a system out from the front)
Which aisle do you want to work in? Stay in the cold aisle (65-80°F) in front of the rack instead of the hot aisle (>100°F) – and see things better instead of working in the dark
Tool-less design speeds problem resolution
Quick access to servers
Simple: In Production Faster – ENI Client Example
Solution Overview:
1500 server nodes in 36 racks
3000 NVIDIA K20x GPGPU accelerators
FDR InfiniBand, Ethernet, GPFS
Enclosed in cold-aisle containment cage
#11 on June 2014 Top500 list
Delivered fully integrated to the client's center:
HW inside delivery and installation included at no additional cost
TOP500 Linpack run successfully 10 days after first rack arrived
All servers pre-configured with customer VPD in manufacturing
Entire solution delivered and supported as 1 part number:
Full interoperability test and support
One number to call for support regardless of component
Included: interoperability tested – yes; HPL (Linpack) stressed / benchmarked in manufacturing – yes; IBM HW break-fix support – yes, all components; inside delivery, HW install – yes; bare-metal customization available at no charge – yes. Result: production ready.
Intelligent Cluster significantly reduces setup time, getting clients into production at least 75%¹ faster than non-Intelligent Cluster offerings.
¹ Comparison of install time for a complete roll-your-own installation versus IBM Intelligent Cluster delivery
Simple: Save Time and Resources with Intelligent Cluster
IBM fully integrates and tests your cluster, saving time and reducing complexity.

Step                 | Intelligent Cluster | w/o Intelligent Cluster
Move Servers to DC   | 15 min              | 40 min
Install Rail Kits    | 0                   | 30 min
Install Servers      | 0                   | 2 hr
Cable Ethernet       | 0                   | 2 hr
Cable IB             | 0                   | 2 hr
Rack to Rack         | 1 hr                | 1 hr
Power-on Test        | 0                   | 10 min
Program IMMs         | 0                   | 15 min
Program TOR          | 0                   | 10 min
Collect MAC & VPD    | 0                   | 30 min
Provision            | 15 min              | 15 min
HW Verification      | 1 hr                | 0
TOTAL TIME           | 2.5 hr              | 9.5 hr

SAVE ~7 hrs per rack!
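The totals in the table above can be recomputed from the per-step times; a quick sketch in minutes:

```python
# Recompute the Intelligent Cluster table totals from per-step times (minutes).
steps = {                      # (Intelligent Cluster, without)
    "Move servers to DC": (15, 40),
    "Install rail kits":  (0, 30),
    "Install servers":    (0, 120),
    "Cable Ethernet":     (0, 120),
    "Cable IB":           (0, 120),
    "Rack to rack":       (60, 60),
    "Power-on test":      (0, 10),
    "Program IMMs":       (0, 15),
    "Program TOR":        (0, 10),
    "Collect MAC & VPD":  (0, 30),
    "Provision":          (15, 15),
    "HW verification":    (60, 0),
}
with_ic = sum(a for a, _ in steps.values())
without = sum(b for _, b in steps.values())
print(with_ic / 60, "hr vs", without / 60, "hr ->",
      (without - with_ic) / 60, "hr saved per rack")
# 2.5 hr vs 9.5 hr -> 7.0 hr saved per rack
```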
Simple: The Advantages of Shared Infrastructure without the Contention
Shared power supplies and fans: 90% reduction in fans¹; 75% reduction in PSUs¹
Each system acts as an independent 1U/2U server – individually managed, individually serviced, and swappable; use any Top-of-Rack (TOR) switch
No contention for resources within the chassis – direct access to network and storage resources; no management contention
Lightweight, low-cost chassis; simple midplane with no active components; no in-chassis IO switching; no left- or right-specific nodes; high-efficiency PSUs and fans; no unique chassis management required
Chassis layout: 12 node interposers connect through the midplane to the Fan/Power Control card (FPC), 2x 3 power supplies, and 2x 5 80mm fans; the FPC carries EEPROMs, an I2C mux, fan-fault LEDs, and an RJ45 Ethernet port; the front panel provides Power, Fault, Locate, and Information LEDs
¹ Versus a typical 1U server with 8 fans and 2 PSUs
Simple: "Essentials Only" Design
Only includes the essentials: two production 1Gb Intel NICs, with dedicated or shared 1Gb for management; standard PCI card support; flexible ML2/mezzanine for IO expansion; power, basic LightPath, and KVM crash-cart access; simple pull-out asset tag for naming or RFID; not painted black, just left as silver; clean, simple, and low cost; blade-like weight/size with rack-server-like individuality/control.
NeXtScale nx360 M5 delivers basic, performance-centric IT:
"I can't see my servers, don't care what color they are"
"I don't use 24 DIMMs – why pay for a system to hold them?"
"I only need RAID mirror for OS, don't want extra HDD bays"
"I only need a few basic PCI/IO options"
Introducing IBM NeXtScale System with Water Cool Technology
3 Ways to Cool your Datacenter
Air Cooled: standard airflow with internal fans; good for lower kW densities; less energy efficient; consumes more power – higher OPEX; typically used with raised floors, which add cost and limit airflow out of tiles; unpredictable cooling – hot spots in one area, freezing in another
Rear Door Heat Exchangers: air cool, supplemented with an RDHX door on the rack; uses chilled water; works with all IBM servers and options; rack becomes thermally transparent to the data center; enables extremely tight rack placement
Direct Water Cooled: 100% water cooled; no fans or moving parts in the system; most energy-efficient datacenter; most power-efficient servers; lowest operational cost; quieter due to no fans; run processors in turbo mode for max performance; warm-water cooling means no expensive chillers required; good for geographies with high electricity cost
NeXtScale System with Water Cool Technology: Achieve extreme scale with ultimate efficiency
40% more energy-efficient data center¹
10% more power-efficient server²
85% heat recovery by water³
Water Cool Technology:
Requires no auxiliary cooling³
No chillers required due to warm-water cooling (up to 45°C)³
No fans required for compute elements; small power-supply fans only
Lower operational costs, and quieter
Re-use warm water to heat other facilities
Run processors at higher frequencies (Turbo mode)
1. Based on comparisons between air-cooled and water-cooled IBM iDataPlex M4 servers
2. LRZ, a client in Germany, data center numbers
3. Geography dependent
45
NeXtScale nx360 M5 WCT Dual Node Compute TrayS
yste
m in
fras
truc
ture
Sim
ple
arc
hit
ectu
re
Water Cool Compute Node
• 2 compute nodes per full-wide 1U tray
• Water circulated through cooling tubes for component-level cooling
• Dual-socket Intel Xeon E5-2600 v3 processors (up to 18 cores)
• 16x DIMM slots (DDR4, 2133 MHz)
• InfiniBand FDR support via choice of ConnectX-3 ML2 adapter or Connect-IB PCIe adapter
• Onboard GbE NICs
nx360 M5 WCT Compute Tray (2 nodes)

[Tray callouts: power and LEDs, dual-port ML2 (IB/Ethernet), labeling tag, 1 GbE ports, PCI slot for InfiniBand, PCI slots for Connect-IB, x16 ML2 slots, CPUs with liquid-cooled heatsinks, 16x DIMM slots, cooling tubes, water inlet and water outlet]
46
nx360 M5 WCT Compute Tray (2 nodes) – Front Panel

• 2 compute nodes per tray; 6 trays per 6U chassis (12 servers)
• Dual x16 ML2 slot supports InfiniBand FDR (optional)
• PCIe adapter support for Connect-IB or Intel QDR (optional)
• GbE dedicated and GbE shared NIC

[Front panel* callouts for Node #1 and Node #2: power and LEDs (power, location, log, error), dual-port ML2 (IB/Ethernet), 1 GbE/shared NIC, PCI slot for InfiniBand, KVM connector. Rear view: water inlet and water outlet.]
* Configuration dependent. Configuration shown includes ML2 and PCI adapters.
47
NeXtScale n1200 WCT Enclosure – Water Cool Chassis

[Front view: shown with 12 compute nodes installed (6 trays). Rear view: Fan and Power Controller, two banks of 3x power supplies, rear fillers/EMC shields.]
Water Cool Chassis
• 6U chassis, 6 bays; each bay houses a full-wide, 2-node tray (12 nodes per 6U chassis)
• Up to 6x 900 W or 1300 W power supplies; N+N or N+1 configurations
• No fans except in the PSUs
• Fan and Power Controller
• Drip sensor and error LEDs for detecting water leaks
• No built-in networking
• No chassis management required
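The chassis power options above reduce to simple redundancy arithmetic. The PSU counts and wattages are from the slide; the helper function is an illustrative sketch, not a Lenovo sizing tool.

```python
# Usable chassis power while tolerating PSU failures.
# PSU wattages and counts come from the slide (up to 6x 900 W or 1300 W,
# N+N or N+1); the helper itself is only an illustrative sketch.
def usable_power_watts(psu_watts: int, total_psus: int, redundant_psus: int) -> int:
    """Power the remaining PSUs can deliver after the redundant ones fail."""
    return psu_watts * (total_psus - redundant_psus)

# N+1 with 6x 1300 W supplies: any one PSU may fail.
print(usable_power_watts(1300, 6, 1))   # 6500 W usable
# N+N (3+3) with 6x 1300 W supplies: an entire power feed may fail.
print(usable_power_watts(1300, 6, 3))   # 3900 W usable
```

The trade-off is the usual one: N+N survives the loss of a whole feed but halves the guaranteed capacity, while N+1 keeps five of six supplies available to the load.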
48
nx360 M5 WCT Manifold Assembly

• Manifolds deliver water directly to and from each of the compute nodes within the chassis via water inlet and outlet quick connects.
• Modular design enables multiple configurations via a sub-assembly building block per chassis drop. 6 models: 1, 2, 3, 4, 5, or 6 chassis drops.

[Diagram: 6-drop manifold alongside the n1200 WCT chassis; a single manifold drop serves one chassis.]
49
NeXtScale M5 Product Timeline

Currently Shipping
• n1200 Chassis
• Storage NeX
• PCIe NeX
• nx360 M4

Announce: Sept. 8, 2014 | Shipments Begin: Sept. 30, 2014 | GA: Nov. 19, 2014
• nx360 M5 compute node (air cool and water cool)
• Supports existing 6U chassis (air)
• New 6U chassis (water)
• 14 processor SKUs
• PCI NeX support (GPU/Phi)

Refresh 1 (GA: Jan 2015)
• 8 additional processors
• NVIDIA K80 support
• Storage NeX 12Gb support
• Mini-SAS port
• -48V DC power supply

A Lot More Coming
• More storage, more accelerators, more IO options
• Next-gen processors
• 4 GPU / 4 HS drive tray
• EDR support
• Broader SW ecosystem and OS support
NeXtScale Solutions
51
Application Ready Solutions simplify HPC and speed delivery
Developed in partnership with leading ISVs, based on reference architectures

• IBM Platform™ LSF® – workload management platform with intelligent, policy-driven scheduling features
• IBM Platform HPC – out-of-the-box features reduce the complexity of the HPC environment
• IBM Platform Symphony – run compute- and data-intensive distributed applications on a scalable, shared grid
• IBM Platform Cluster Manager – quickly and simply provision, run, manage, and monitor HPC clusters (cluster, grid, cloud)

Built on IBM Intelligent Cluster™ and IBM NeXtScale System™: applications, compute nodes, storage, accelerators, networking
52
IBM Application Ready Solution for CLC bio
Accelerate time to results for your genomics research

• Easy-to-use, performance-optimized solution architected for CLC bio Genomic Server and Workbench software
• Supports clients' increased demand for genomics sequencing
• Drives down cost and speeds up the assembly, mapping, and analysis involved in the sequencing process with an integrated solution
• Modular solution approach enables easy scalability as workloads increase
• Learn more: solution brief, reference architecture

"It has been a pleasure to work with IBM, optimizing our enterprise software running on the IBM Application Ready Solution for CLC bio platform. We are proud to offer this pre-configured, scalable high-performance infrastructure with integrated GPFS to all our clients with demanding computational workloads."
– Mikael Flensborg, Director of Global Partner Relations, CLC bio, a QIAGEN® company

Use Case: Next Generation Sequencing
| Workload size (per week)          | 15 genomes (37x) or 120 exomes (150x) | 30 genomes (37x) or 240 exomes (150x) | 60 genomes (37x) or 480 exomes (150x) |
| Head Node – x3550 M4              | Single | Dual | Dual |
| Compute – # x240 or nx360 nodes   | 3      | 6    | 12   |
| Compute – disk per node           | 2 TB   | 2 TB | 2 TB |
| Compute – memory per node (GB)    | 128    | 128  | 128  |
| Storwize V7000 Unified (TB)       | 20     | 55   | 90   |
| 10 Gigabit switch / adapters      | yes    | yes  | yes  |
Management software: IBM Platform HPC, Elastic Storage (based on GPFS)
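The three configurations above scale linearly: 3, 6, and 12 compute nodes deliver 15, 30, and 60 genomes per week respectively. A hypothetical helper (names and the linear-scaling assumption are mine; the baseline numbers are from the table) makes the proportion explicit:

```python
# Linear throughput projection from the sizing table above:
# 3 nodes -> 15 human genomes (37x coverage) per week, scaling with nodes.
# The helper and its linearity assumption are illustrative, not vendor guidance.
BASE_NODES = 3
BASE_GENOMES_PER_WEEK = 15   # human genome at 37x coverage

def genomes_per_week(nodes: int) -> int:
    """Projected 37x human genomes per week, assuming linear scaling."""
    return BASE_GENOMES_PER_WEEK * nodes // BASE_NODES

print(genomes_per_week(6))    # matches the medium configuration
print(genomes_per_week(12))   # matches the large configuration
```

Real throughput also depends on the head node, storage tier, and network, which is why the table upgrades those alongside the compute nodes.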
53
IBM Application Ready Solution for Algorithmics
Optimized high-performance solution for risk analytics

• Easy-to-use, performance-optimized solution architected for the IBM Algorithmics Algo One solution
• Supported software: Algo Scenario Engine, Algo RiskWatch, Algo Credit Manager, Algo Risk Application, Algo Aggregation Services (Fanfare)
• Easy-to-deploy, integrated, high-performance cluster environment
• Based on a "best practices" reference architecture, which lowers risk
• User-friendly portal provides easy access to, and control of, resources

"Many firms will benefit from the Application Ready Solution for Algorithmics to accelerate risk analytics and improve insight. This solution helps lower costs and mitigate IT risk by delivering an integrated infrastructure optimized for active risk management."
– Dr. Srini Chari, Managing Partner, Cabot Partners (read the analyst paper)

Use Case: Algo One Risk Analysis
| Workload                          | Small        | Medium            | Large             |
| Management node for Algo / PCM    | 1            | 1 (can be shared) | 1 (can be shared) |
| Management nodes for Symphony     | 1            | 2 (shared)        | 2 (shared)        |
| Compute servers – x240 or nx360   | 6            | 14                | 36 (or more)      |
| Compute cores (total)             | 96           | 224               | 574               |
| Compute – total memory (GB)       | 768          | 1792              | 4608              |
| Elastic Storage (GPFS) servers    | None         | 2                 | 2                 |
| Storage – V3700 SAS; shared GPFS  | 31 TB        | 62 TB             | 124 TB            |
| Network – FDR IB switch / adapters | 10 GbE      | 10 GbE            | 10 GbE            |
Software: IBM Platform Cluster Manager (PCM), IBM Platform Symphony, Elastic Storage, DB2 Enterprise (optional)
54
IBM Application Ready Solution for ANSYS
Simplified, high-performance simulation environment

|                        | Computational Fluid Dynamics (Fluent, CFX)         |                                                     |                                                   | Structural Mechanical (ANSYS)  |                                |                                 |
| Workload size          | 1 job, 15+M cells, using all 120 cores; 6 jobs, 2.5+M cells each | 1 job, 25+M cells, using all 200 cores; 10 jobs, 2.5+M cells each | 1 job, 200+M cells, using all 840 cores; 20 jobs, 10+M cells each | 4 large jobs, 2–5 MDOF each | 10 large jobs, 2–5 MDOF each | 15 large jobs, 10–20 MDOF each |
| Head node – nx360 M4   | Single          | Single          | Single          | Single          | Single          | Single          |
| Compute – nx360 M4 qty | 6               | 10              | 42              | 6               | 10              | 42              |
| Processor              | E5-2680 v2 10C  | E5-2680 v2 10C  | E5-2680 v2 10C  | E5-2670 v2 8C   | E5-2670 v2 8C   | E5-2670 v2 8C   |
| Memory (GB)            | 128             | 128             | 128             | 256             | 256             | 256             |
| Disk                   | Diskless        | Diskless        | Diskless        | 2x 800 GB SSD   | 2x 800 GB SSD   | 2x 800 GB SSD   |
| GPU node* – PCI NeX    | –               | –               | –               | 2x NVIDIA K40, 256 GB, 2x 800 GB SSD | 2x K40, 256 GB, 2x 800 GB SSD | 2x K40, 256 GB, 2x 800 GB SSD |
| Visualization* – PCI NeX | –             | NVIDIA GRID K2, 256 GB | K2, 256 GB | K2, 256 GB    | K2, 256 GB      | 2x K2, 256 GB   |
| File system* – DS3524  | yes             | yes             | yes             | yes             | yes             | yes             |
| Network – Gigabit      | yes             | yes             | yes             | yes             | yes             | yes             |
| Network – FDR IB       | no              | yes             | yes             | yes             | yes             | yes             |
Management software: Platform HPC or Platform LSF; Elastic Storage (GPFS file system, optional)
* Optional

Configuration shown is based on IBM NeXtScale System™. IBM Flex System™ x240 with E5-2600 v2 compute nodes is also available. Both systems are available to order as IBM Intelligent Cluster™. To learn more, read the solution brief and reference architecture.
55
Call to Action

1. Lead with NeXtScale on all x86 Technical Computing (HPC) opportunities
2. Look for NeXtScale opportunities in Cloud Computing, Datacenter Infrastructure, Data Analytics, and Virtual Desktop
3. Evaluate the customer's energy-efficiency requirements to assess whether Water Cooling is appropriate for them
4. Utilize customer seed systems
5. Learn more about NeXtScale from the links on the Resources page
56
IBM NeXtScale M5 – Resources / Tools

Product Resources:
• Announcement Page (Link)
• Announcement Webcast replay (Link)
• Product Page (Link)
• Data Sheet (Link)
• Product Guide (Link)
• Virtual Tour: Air Cool (Link), Water Cool (Link)
• Product Animation (Link)
• Infographic
• IBM Blog (Link)
• Benchmarks:
  – SPEC CPU2006 – NeXtScale nx360 M5 with E5-2667 v3 (Link)
  – SPEC CPU2006 – NeXtScale nx360 M5 with E5-2699 v3 (Link)

Sales Tools:
• Sales Kit
• IBM PW Seller Presentation
• IBM PW Client Presentation
• IBM PW Sales Education
• IBM PW Technical Education
• IBM PW Seller Training Webcast: NeXtScale M5, GSS (Link)
• VITO Letters
• IBM PW Quick Proposal Resource Kit (Link)

Client Videos:
• Caris Life Sciences (Link)
• Hartree Centre (Link)
• Univ of Notre Dame (Link)

Analyst Papers:
• Cabot Partners: 3D Virtual Desktops by Perform (Link)
• Intersect360: Hyperscale Computing – No-Frills Clusters at Scale (Link)