Hortonworks Data Platform on OpenPOWER Systems · 2019. 12. 12.
Hortonworks Data Platform on OpenPOWER Systems
© 2017 Mellanox Technologies 2
Agenda
Overview and Roadmap
Hortonworks Client value proposition
HDP on Power with Mellanox
Performance
Reference Architecture
Hortonworks Value Proposition
The Data Tipping Point
Gain Actionable Insights
[Use-case word cloud] Analytics use cases across the data lifecycle (Innovate / Renovate; Explore → Optimize → Transform): payment tracking, due diligence, social mapping, product design, M&A call analysis, machine data, defect detection, factory yields, customer support, basket analysis, segments, customer retention, sentiment analysis, inventory optimization, supply chain, cross-sell, vendor scorecards, ad placement, cyber security, disaster mitigation, investment planning, risk modeling, proactive repair, inventory predictions, next-product recommendations, OPEX reduction, historical records, mainframe offloads, device data ingest, rapid reporting, digital protection, data as a service, fraud prevention, public data capture. Solution patterns: Active Archive, ETL Onboard, Data Enrichment, Data Discovery, Single View, Predictive Analytics; M&A storage, blending, ingest, integration.
© Hortonworks Inc. 2011 – 2016. All Rights Reserved
HDP is a 100% Open Source Connected Data Platform
• Eliminates risk of vendor lock-in by delivering 100% Apache open source technology
• Maximizes community innovation with hundreds of developers across hundreds of companies
• Integrates seamlessly through committed co-engineering partnerships with other leading technologies
Hortonworks Influences the Apache Community
Apache Hadoop Committers
• We employ the committers: one third of all committers to the Apache® Hadoop™ project, and a majority in other important projects
• Our committers innovate and expand Open Enterprise Hadoop
• We influence the Hadoop roadmap by communicating important requirements to the community through our leaders
Open Source Optimizes Variety and Cost Efficiencies
• Hortonworks employs the committers: one third of all committers to the Apache® Hadoop™ project, and a majority in other important projects
• Eliminates risk and ensures integration: prevents vendor lock-in and speeds ecosystem adoption of the ODPi-compliant core
• Unmatched economics: supports low-cost data-center and cloud architectures for Enterprise Apache Hadoop
[Chart] Data variety vs. cost efficiency, positioning Hortonworks open source ahead of EDW, RDBMS, and proprietary Hadoop
Hortonworks Nourishes the Community and Ecosystem
Hortonworks Community Connection
• Community Q&A resources
• Articles & code repos
• Community of (big data) developers
Hortonworks Partnerworks
• World-class partner program
• Network of partners providing best-in-class solutions
Hadoop & Big Data Ecosystem
• Open ecosystem of big data for vendors & end-users
• Advance Apache™ Hadoop®
• Enable more big data apps
Hortonworks Delivers Proactive Support
Hortonworks SmartSense™, with machine learning and predictive analytics on your cluster
Integrated customer portal, with knowledge base and on-demand training
HDP on Power with Mellanox
Hortonworks and IBM
Collaborate to Offer Open Source Distribution on Power Systems
Latest Hortonworks Data Platform (HDP) to provide IBM customers with more choice in open
source Hadoop distribution for big data processing
(Las Vegas, NV (IBM Edge) - 19 Sep 2016)
• Modern Data Applications on a Modern Data Platform
• Open on Open: 100% Open Hadoop and Spark on OpenPOWER
• Fueling rapid community innovation
• Combined Market Leadership and Reach
• Hortonworks' strong client success, rapid growth and leadership in the Hadoop community
• Power's success, large global enterprise install base, and IBM's client focus
Scott Gnau, CTO, Hortonworks at Edge.
Youtube: http://bit.ly/2dSOliW
Youtube: https://youtu.be/z9X---2z2qY
© 2016 IBM Corporation
IBM, Hortonworks and Mellanox Combined Value
Unequaled in the Market…
OpenPOWER performance leadership
Flexible, software defined storage
#1 Data Science Platform (Source: Gartner)
#1 SQL Engine for complex, analytical workloads.
Leader in On-premise and Hybrid Cloud solutions
#1 Pure Open Source Hadoop Distribution
1100+ customers and 2100+ ecosystem partners
Employs the original architects, developers and
operators of Hadoop from Yahoo!
#1 provider of High-performance Ethernet adapters
(Source: Crehan Research)
Only end-to-end InfiniBand and Ethernet provider
Fastest growing Ethernet Switch vendor
Fair and predictable performance
Zero packet loss
Lower Latency
Dynamic Buffer
IBM adopted Hortonworks Data Platform (HDP) as its core Hadoop distribution and resells HDP and HDF
Hortonworks will adopt and resell IBM Data Science Experience (DSX) and IBM Big SQL
IBM and Hortonworks adopt Mellanox Ethernet and InfiniBand as their interconnect solution
Leading Supplier of End-to-End Interconnect Solutions
Storage (front/back-end), server/compute, and switch/gateway connectivity:
• 56/100/200G InfiniBand
• 10/25/40/50/100/200GbE
• Virtual Protocol Interconnect
Delivering Highest Data Center
Return on Investment
10 of Top 10 Automotive Manufacturers
3 of Top 5 Pharmaceutical Companies
9 of the Top 10 Oil and Gas Companies
5 of Top 6 Global Banks
9 of Top 10 Hyperscale Companies
Mellanox Leads Across Industries
Scale Up or Out to Meet Evolving Workloads
• Scale up each node by exploiting the memory bandwidth and multi-threading
• 4X threads per core vs x86 allows you to optimize and drive more workload per node
• Offering 4X memory bandwidth vs x86, POWER8 gives you more options as your workloads expand and
evolve
Unmatched Range of Linux Servers
• From 1U, 16-core servers up to 16 socket, 192 core powerhouses with industry leading reliability all running
standard Linux
• Virtualization options to host low cost dev environments or rich, multi-tenant private clouds
• Wide range of OpenPOWER servers offered by OpenPower members for on-prem and the cloud
Accelerated Analytics
• Add accelerators (flash, GPU, FPGA) with direct access to processor memory with OpenCAPI
Flexibility with HDP on Power Systems
Power Systems S822LC for Big Data
Not Just Another Intel Server…
Innovation Pervasive in the Design
NVIDIA:
Tesla K80 GPU Accelerator
Linux by Red Hat:
Red Hat Enterprise Linux 7.2 OS
Mellanox:
InfiniBand/Ethernet Connectivity in
and out of server
HGST:
Optional NVMe Adapters
Alpha Data with Xilinx FPGA:
Optional CAPI Accelerator
Samsung:
SSDs & NVMe
Hynix, Samsung, Micron:
DDR4
IBM:
POWER8 CPU
Major North American Food Retailer Implements HDP on IBM POWER
Business need:
• Gain a competitive advantage by retaining and analyzing their store-level loyalty program data
• Bring outsourced analytics back in-house
Solution:
• Consolidation of client transaction data into a Hortonworks Data Platform on Linux on IBM Power Systems
• SAP Customer Activity Repository (CAR) application, powered by SAP HANA, connected to the data lake to enable real-time insights
Business benefits:
• More efficient and flexible in-store experiences for their clients to increase client loyalty and purchases
Time to value:
• HDP 2.6 running on a cluster of 9 IBM Power System servers
• Full solution deployed by IBM Lab Services and an IBM Business Partner in < 2 weeks
• Trial to production in 2 months
Customer Success Story
Business Problem
• Transformational journey resulting in rapid expansion of business models
• Technology innovation required to keep up with the business expansion while improving client satisfaction, reducing costs and supporting the company's green IT initiatives
- Existing x86 server sprawl not sustainable
Solution with Hortonworks, IBM OpenPOWER servers and Sage Solutions Consulting
• Embraces the open software and hardware model adopted by Florida Blue
• Hortonworks supporting new fraud analytics initiative to reduce costs and client premiums
• OpenPOWER to enable smaller datacenter footprint with stronger reliability
• High performance interconnect solutions from Mellanox provide ample bandwidth and are tested in end-to-end HDP solutions
Differentiators:
• Flexibility – Richest family of Linux servers to match your workload’s scale and reliability needs
• Performance and Price/Performance – Leading performance for SQL and Spark workloads
• Designed for Cognitive/AI – Obtain your ML/DL results faster with AI on Power servers
• TCO – 3X compute and storage infrastructure reduction with Power and Elastic Storage
• Open on Open – Leading innovation and choice with open Hadoop on OpenPOWER
• Support – Hortonworks and IBM industry experts with commitment to client success
Performance
IBM Power S822LC, Hortonworks and Mellanox
Delivering Leadership in Hadoop Big Data Environments…
• Performance results are based on preliminary IBM internal testing of 10 queries (simple, medium, and complex) with varying runtimes running against a 10TB database. The tests were run on 10x IBM Power System S822LC for Big Data (20 cores / 40 threads, 2x POWER8 2.92GHz, 256 GB memory, RHEL 7.2, HDP 2.5.3) compared to the published x86/Hortonworks results running on 10x AWS d2.8xlarge EC2 nodes running HDP 2.5; details can be found at https://hortonworks.com/blog/apache-hive-going-memory-computing/. Conducted under laboratory conditions; individual results can vary based on workload size, use of storage subsystems, and other conditions. Data as of February 28, 2017.
• POWER8 and Hortonworks deliver 1.70X the throughput compared to Hortonworks running on x86
– 70% more QpH based on the average response time – complete the same amount of work with less system resources
– 41% reduction on average in query response time – reduced response time enables making business decisions faster
• Results are based on IBM internal testing of Power S822LC for Big Data
– Compared to x86 published results found at https://hortonworks.com/blog/apache-hive-going-memory-computing/
– Based on 10 representative queries from a standard query workload
70% More Throughput
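The two headline numbers above are consistent with each other: if queries-per-hour scales inversely with average response time, a 1.70X throughput gain implies roughly a 41% response-time reduction. A quick sanity check:

```python
# Sanity-check the relationship between the quoted throughput gain and the
# quoted response-time reduction (QpH is proportional to 1 / avg response time).
throughput_gain = 1.70  # POWER8 QpH relative to the x86 baseline

# If throughput is 1.70x, average response time is 1/1.70 of the baseline.
relative_response_time = 1 / throughput_gain
reduction = 1 - relative_response_time

print(f"relative response time: {relative_response_time:.3f}")
print(f"response-time reduction: {reduction:.0%}")  # ~41%
```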
Reference Architecture
HDP on POWER – Minimum Production Configuration
[Diagram] Minimum production configuration: 1 system management node, 3 master nodes, 1 edge node, and 8 worker nodes (each with local disk); each server's FSP (service processor) attaches to the management network. Two networks: a private data network (untagged VLAN traffic from servers) and a shared campus network (VLAN x tagged traffic from servers). Client uplinks connect the solution environment to the existing client environment via the campus network and, optionally, the data network.
Partial Homed (Thin DMZ) Network Topology Shown; Other Topologies Possible and Supported
HDP on POWER – Initial Reference Configurations
Switches
1 GbE (1x or 2x):
• IBM 7120-48E (Lenovo G8052) Switch (48x 1GbE + 4x 10GbE ports)
10 GbE (2x typical, 1x allowed):
• IBM 7120-64C (Lenovo G8264) Switch (48x 10GbE + 4x 40GbE), or
• IBM 8831-S48 (Mellanox SX1410) Switch (48x 10GbE + 12x 40GbE)
Additional Config Options:
Network topologies: Flat, Dual Homed, Partial Homed, Full DMZ
Size: POC, min-production (12 node), full rack, multi rack
| | Sys Mgmt Node | Master Node | Edge Node | Worker – Balanced | Worker – Performance | Worker – Storage Dense |
| Server Type | 1U S821LC (Stratton) | 1U S821LC (Stratton) | 1U S821LC (Stratton) | 2U S822LC (Briggs) | 2U S822LC (Briggs) | 2U S822LC (Briggs) |
| Count (Min / Max) | 1 / 1 | 3 / Any | 1 / Any | 8 / Any | 8 / Any | 8 / Any |
| Cores | 8 | 20 | 20 | 22 | 22 | 11 |
| Memory | 32GB | 256GB | 256GB | 256GB | 512GB | 128GB |
| Storage – HDD | 2x 4TB HDD | 4x 4TB HDD | 4x 4TB HDD | 12x 4TB HDD | 8x 6TB HDD | 12x 8TB HDD |
| Storage – SSD | – | – | – | – | + 4x 3.8TB SSD | – |
| Storage Controller | Marvell (internal) | LSI MegaRAID 9361-8i (2GB cache) | LSI MegaRAID 9361-8i (2GB cache) | LSI MegaRAID 9361-8i (2GB cache) | LSI MegaRAID 9361-8i (2GB cache) | LSI MegaRAID 9361-8i (2GB cache) |
| Network – 1GbE | 4 ports (internal) | 4 ports (internal) | 4 ports (internal) | 4 ports (internal) | 4 ports (internal) | 4 ports (internal) |
| Network – 10GbE | 2 ports | 2 ports | 2 ports | 2 ports | 2 ports | 2 ports |
HDP on POWER – Reference Architecture
[Rack diagrams] Single-rack example (minimum production configuration): one 42U rack containing 1 system management node (8001-12C), 3 master nodes (8001-12C), 1 edge node (8001-12C), and 8 worker nodes (8001-22C), plus two Mellanox SX1410 10GbE switches, one Lenovo 7120-48E 1GbE switch, and PDUs, with cable ingress/egress at top and bottom. Multi-rack example (extensible): up to 18 worker nodes per rack possible; each rack carries its own Mellanox SX1410 10GbE switches, Lenovo 7120-48E 1GbE switch, and rack-to-rack switches, with master, edge, and system management nodes distributed across racks.
| Speed | Switch | Cabling | Adapter | Optics* |
| 40 GbE | SX1710 – 8831-NF2 | See lists on right | EKAL 2@40 | EB27 + EB2J or EB2K |
| 10/40 GbE | SX1410 – 8831-S48 | See lists on right | EKAU 10/25, EKAL 2@40 | EB28 + ECBD or ECBE |
| 1/10 GbE | 4610-54T – 8831-S52 | See lists on right | LOM | – |
Mellanox Infrastructure for HortonWorks
Choice of Cabling: 40GbE / FDR Cabling
Length Description FC
0.5m 40GbE / FDR Copper Cable QSFP EB40
1m 40GbE / FDR Copper Cable QSFP EB41
2m 40GbE / FDR Copper Cable QSFP EB42
3m 40GbE / FDR Optical Cable QSFP EB4A
5m 40GbE / FDR Optical Cable QSFP EB4B
10m 40GbE / FDR Optical Cable QSFP EB4C
15m 40GbE / FDR Optical Cable QSFP EB4D
20m 40GbE / FDR Optical Cable QSFP EB4E
30m 40GbE / FDR Optical Cable QSFP EB4F
50m 40GbE / FDR Optical Cable QSFP EB4G
* Optics are IBM Parts only
[Diagrams] Network topology options, each showing the cluster nodes (M, E, S, S) behind a firewall to the Internet:
• Flat – single public network (EDW attached)
• Dual Home – public + private networks
• Partial Home ("Thin DMZ") – public + private networks
• DMZ – public + private networks behind an additional firewall
Mellanox Infrastructure for HortonWorks
As you increase the speed of the network, the topology of the PCI slot becomes important. IBM offers two slot/card topologies in these servers:
1. PCI Gen 3.0 x8
2. PCI Gen 3.0 x16
The x8/x16 figure is the width of the PCI bus; it determines how much network bandwidth can be passed through the slot from the network adapter to the CPU.

| Speed | PCI Gen 3.0 x8 – # Ports | FC# | PCI Gen 3.0 x16 – # Ports | FC# |
| 10 GbE | 2 | EKAU | 2 | – |
| 25 GbE | 2 | – | 2 | – |
| 40 GbE | 1 | EC3A | 2 | EC3L / EKAL |
| 50 GbE | 1 | EKAM* (x16 card) | 2 | EC3L / EKAL |
| 56 GbE | 1 | EC3A | 2 | EC3L / EKAL |
| 100 GbE | 0 | – | 1 | EC3L / EKAM |
| FDR | 1 | – | 2 | EL3D / EKAL |
| EDR | 0 | – | 1 | EC3E / EKAL |

NOTE: To provide an Active/Active redundant network, the PCI slot must have enough bandwidth to pass the data from the CPU to the network. IBM FC# EC3A is only a PCI Gen 3.0 x8 card, so it is limited to a maximum bandwidth of 56Gb. To achieve a dual 40GbE Active/Active redundant network, FC# EC3L or EKAL should be used with both ports connected @40GbE on a card with PCI Gen 3.0 x16.
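The x8-vs-x16 distinction follows from simple arithmetic: PCIe Gen 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an x8 slot carries roughly 63 Gb/s, which cannot sustain a dual 40GbE (80 Gb/s) active/active bond. A minimal sketch of that calculation (protocol overhead ignored, so real throughput is somewhat lower still):

```python
# Rough usable PCIe Gen 3.0 line rate per slot width. Gen3 signals at
# 8 GT/s per lane with 128b/130b encoding; higher-layer protocol overhead
# is ignored here for simplicity.
GEN3_GTPS = 8.0
ENCODING = 128 / 130

def slot_gbps(lanes: int) -> float:
    """Usable line rate of a PCIe Gen3 slot in Gb/s (pre-protocol-overhead)."""
    return GEN3_GTPS * ENCODING * lanes

# A dual 40GbE active/active bond needs 80 Gb/s to the CPU.
for lanes in (8, 16):
    cap = slot_gbps(lanes)
    need = 2 * 40
    print(f"x{lanes}: {cap:.0f} Gb/s available, {need} Gb/s needed -> "
          f"{'OK' if cap >= need else 'bottleneck'}")
```

This is why the note above calls for an x16 card (EC3L / EKAL) when both ports run at 40GbE.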
NOTE – Bonding: The most common mode is Mode 4 (LACP/802.3ad), but it carries overhead and was originally designed to bond low-speed, unreliable links. With modern Ethernet networks and enhancements to Linux, Mode 5 (TLB) and Mode 6 (ALB) are good choices: they have less overhead than Mode 4 and require no configuration on the switches to provide Active/Active redundancy.
NOTE – 56GbE: When Mellanox is configured end to end (adapter, cable, and switch), there is a free upgrade to Mellanox-supported 56GbE, providing 40% more bandwidth than 40GbE. Activation is a single command, "speed 56000", on the required switch interface.
NOTE – InfiniBand redundancy: To achieve a redundant network for IB FDR, use FC# EC3E / EKAL @ 2x FDR; for IB EDR, use 2x FC# EC3E / EKAL @ EDR. Redundancy is provided by Mode 1 (Active/Standby); the bond is created the same as a normal Linux bond.
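The Mode 6 (ALB) bonding described above needs only host-side configuration. A minimal sketch using iproute2; the interface names enp1s0f0/enp1s0f1 and the address are placeholders for the two adapter ports in a real deployment:

```shell
# Create a two-port bond in balance-alb (Mode 6); no switch config required.
ip link add bond0 type bond mode balance-alb miimon 100
ip link set enp1s0f0 down && ip link set enp1s0f0 master bond0
ip link set enp1s0f1 down && ip link set enp1s0f1 master bond0
ip link set bond0 up
ip addr add 10.0.0.10/24 dev bond0

# For the InfiniBand case in the note above, use Mode 1 instead:
#   ip link add bond0 type bond mode active-backup miimon 100
```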
| Speed | Switch | Cabling | Adapter | Optics |
| 10/40 GbE | SX1410 – 8831-S48 | See list on right | EKAU 2@10/25 | – |
| 1 GbE | 4610-54T – 8831-S52 | See list on right | LOM | – |
Mellanox Infrastructure for 10 GbE Cluster – HortonWorks
48x 10 GbE endpoints per leaf
Sample 96-port L2 cluster: 48 HA 10GbE hosts + ESS storage
[Diagram legend] ToR switches; T = 10 GbE client
Choice of cabling: 40GbE / FDR (see cabling table above)
[Diagram] 8831-S48 switches (48x 10GbE + 12x 40GbE); hosts and ESS storage bonded with Mode 6 (ALB)
| Speed | Switch | Cabling | Adapter | Optics |
| 40 GbE | SX1710 – 8831-NF2 | See list on right | – | EB27 + EB2J or EB2K |
| 10/40 GbE | SX1410 – 8831-S48 | See list on right | EKAU 2@10/25 | EB28 + ECBD or ECBE |
| 1 GbE | 4610-54T – 8831-S52 | See list on right | LOM | – |
Mellanox Infrastructure for 10 GbE Cluster – HortonWorks
Sample 192-port L2 (VMS) cluster: 96 HA 10GbE hosts
Spine: 8831-NF2 (36x 40GbE); 6x 40 GbE links per spine; 48x 10 GbE endpoints per leaf; IPL 4x 56GbE
[Diagram legend] T = 10 GbE client
Choice of cabling: 40GbE / FDR (see cabling table above)
[Diagram] 6x 40 GbE links per spine; 8831-S48 leaves (48x 10GbE + 12x 40GbE)
| Speed | Switch | Cabling | Adapter | Optics* |
| 40 GbE | SX1710 – 8831-NF2 | See list on right | EKAL 2@40 | EB27 + EB2J or EB2K |
Mellanox Infrastructure for 40GbE Cluster – HortonWorks
Sample 72-port L2 (VMS) cluster: 36 HA 10/40GbE ports
Spine: 8831-NF2 (36x 40GbE); 7x 56 GbE links per spine; 18x 10/40 GbE endpoints per leaf; IPL 4x 56GbE
[Diagram legend] 40 GbE data network; 40 GbE client; X = 10 GbE endpoint attached via QSFP-to-SFP+ adapter (QSA) with SFP+ DAC or transceiver*
Choice of cabling: 40GbE / FDR (see cabling table above)
* 10GbE & optics are IBM parts only
[Diagram] Layer 3 OSPF/ECMP network (Mellanox VMS): leaves of 36-port 40GbE switches with 18 ports per leaf and 6 ports per spine; 72x top-port and 72x bottom-port dual-port cards. Compute nodes: 1x EKAL @ 2x 40Gb per node, bonded with Mode 6 (ALB). ESS storage: 4x ESS with 4x EC3L cards @ 2x 40Gb (2x EC3L per NSD); 32x 40Gb ports = ~112GB. Per-NSD options: 2x 100Gb cards x 2 ports @ 40 = 160Gb per NSD, or 3x 40Gb cards x 1 port @ 40 = 120Gb per NSD.
| Speed | Switch | Cabling | Adapter | Optics |
| 40 GbE | SX1710 – 8831-NF2 | See list on right | EKAL 2@40GbE | EB27 + EB2J or EB2K |
| 1 GbE | 4610-54T – 8831-S52 | See list on right | LOM | – |
Mellanox Infrastructure for 40 GbE Cluster – HortonWorks
Sample 108-port L3 (VMS) cluster: 90 HA 40GbE + dedicated storage switches
Choice of cabling: 40GbE / FDR (see cabling table above)
Mellanox Infrastructure for ESS/Spectrum Scale
Single NSD port bandwidth options (GB/s):

| Ports | 10GbE | 25GbE | 40GbE | 100GbE@2x40GbE | 56GbE | 100GbE@2x56GbE | FDR | EDR@2xFDR | 100GbE | EDR |
| One Port | 0.8 | 1.8 | 3.2 | 3.6 | 4.48 | 4.48 | 5.0 | 5.5 | 8.0 | 8.5 |
| Two Ports | 1.6 | 3.6 | 6.4 | 7.2 | 8.96 | 8.96 | 10.0 | 11.0 | 16.0 | 17.0 |
| Three Ports | 2.4 | 5.4 | 9.6 | 10.8 | 13.44 | 13.44 | 15.0 | 16.5 | 24.0 | 25.5 |
| Four Ports | 3.2 | 7.2 | – | 14.4 | – | 17.92 | 20.0 | 22.0 | – | – |
| Five Ports | 4.0 | 9.0 | – | 18.0 | – | 22.4 | 25.0 | 27.5 | – | – |
| Six Ports | 4.8 | 10.8 | – | 21.6 | – | 26.88 | 30.0 | 33.0 | – | – |
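The table scales linearly with port count, so the whole thing can be generated from the "One Port" row. A small sketch (speed names and per-port GB/s figures taken from the table; the helper name is illustrative):

```python
# Per-port bandwidth in GB/s, taken from the "One Port" row of the table above.
PER_PORT_GBPS = {
    "10GbE": 0.8, "25GbE": 1.8, "40GbE": 3.2, "56GbE": 4.48,
    "FDR": 5.0, "100GbE": 8.0, "EDR": 8.5,
}

def nsd_bandwidth(speed: str, ports: int, nsds: int = 1) -> float:
    """Aggregate bandwidth for `ports` ports of `speed` across `nsds` NSD nodes."""
    return PER_PORT_GBPS[speed] * ports * nsds

print(f"{nsd_bandwidth('40GbE', 3):.1f}")        # 9.6 GB/s ("Three Ports")
print(f"{nsd_bandwidth('FDR', 2, nsds=2):.1f}")  # 20.0 GB/s (dual NSD, "Two Ports")
```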
[Chart] GB bandwidth per port per speed for a single NSD/IO node (series: One Port through Six Ports).
[Chart] Sequential throughput vs. capacity for selected ESS models. Y axis: max sequential throughput (GBytes/s) – read, IOR, InfiniBand+RDMA network, 16MB filesystem blocksize (ESS). X axis: TB usable capacity, approximate max capacity using 8+2P (ESS), combined MD+Data pool; note logarithmic scale.
Mellanox Infrastructure for ESS/Spectrum Scale
ESS model throughputs: GL6S = 34 GB/s; GL6 = 25 GB/s; GL4S = 23 GB/s; GL4 = 17 GB/s; GL2S = 11 GB/s; GL2 = 8 GB/s

Dual NSD port bandwidth options (GB/s):

| Ports per NSD | 10GbE | 25GbE | 40GbE | 100GbE@2x40GbE | 56GbE | 100GbE@2x56GbE | FDR | EDR@2xFDR | 100GbE | EDR |
| One Port | 1.6 | 3.6 | 6.4 | 7.2 | 8.96 | 8.96 | 10.0 | 11.0 | 16.0 | 17.0 |
| Two Ports | 3.2 | 7.2 | 12.8 | 14.4 | 17.92 | 17.92 | 20.0 | 22.0 | 32.0 | 34.0 |
| Three Ports | 4.8 | 10.8 | 19.2 | 21.6 | 26.88 | 26.88 | 30.0 | 33.0 | 48.0 | 51.0 |
| Four Ports | 6.4 | 14.4 | – | 28.8 | – | 35.84 | 40.0 | 44.0 | – | – |
| Five Ports | 8.0 | 18.0 | – | 36.0 | – | 44.8 | 50.0 | 55.0 | – | – |
| Six Ports | 9.6 | 21.6 | – | 43.2 | – | 53.76 | 60.0 | 66.0 | – | – |
IBM Support Contacts
Duane Dial – Director of Sales, IBM WW
512-574-4360
Jim Lonergan – Business Development IBM WW
Sametime [email protected]
512-897-8245
Lyn Stockwell-White – North America Channels IBM
Sametime [email protected]
602-999-5255
Matthew Sheard - Solutions Architect – IBM WW
Sametime [email protected]
919-360-1654
John Biebelhausen – Sr. OEM Marketing
512-770-4991
FOR INTERNAL USE ONLY – cannot be posted online or reproduced without Mellanox consent
www.mellanox.com · [email protected] · +1 (512) 897-8245 · 28-Sep-17 v1
www.mellanox.com/oem/ibm
OEM Microsite
https://community.mellanox.com/community/solutions
Mellanox Community
http://academy.mellanox.com/en/
Mellanox Academy
Thank You