NetApp E-Series Storage with IBM Spectrum Scale · 2019-04-15
NETAPP E-SERIES STORAGE SYSTEMS WITH SPECTRUM SCALE
E5700 | E2800 IMPLEMENTATION GUIDE

Contents: E-Series Storage · Spectrum Scale · Scatter Test Results · Remote Demo Services
NETAPP E-SERIES STORAGE WITH SPECTRUM SCALE: Implementation Guide
E-SERIES STORAGE SOLUTION WITH IBM SPECTRUM SCALE FOR HPC AND ENTERPRISE APPLICATIONS
This implementation guide presents benchmark results for NetApp E5700 and E2800 storage running IBM Spectrum Scale. Testing was performed using Spectrum Scale's scatter block allocation method. IOR was first used for sniff testing to find the optimal configuration, settings, and building block for benchmarking; that building block was then used in our testing, and the performance results obtained with the IOR driver on it are given.
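The scatter allocation method used in the testing distributes file system blocks pseudo-randomly across all disks instead of clustering consecutive blocks on a few disks, which evens out load as the file system fills. A minimal Python sketch of the idea (the seeded shuffle is illustrative only, not Spectrum Scale's actual placement algorithm):

```python
import random

def scatter_allocate(num_blocks, disks, seed=42):
    """Assign file system blocks to disks pseudo-randomly (scatter),
    so consecutive blocks land on different spindles and load stays
    even regardless of access pattern."""
    rng = random.Random(seed)
    return [rng.choice(disks) for _ in range(num_blocks)]

def cluster_allocate(num_blocks, disks):
    """Contrast: cluster allocation keeps consecutive blocks on as few
    disks as possible, favoring short sequential runs but risking
    hot-spotting individual disks."""
    per_disk = -(-num_blocks // len(disks))  # ceiling division
    return [disks[i // per_disk] for i in range(num_blocks)]

disks = ["dm0", "dm1", "dm2", "dm3"]
print(scatter_allocate(8, disks))   # blocks spread pseudo-randomly
print(cluster_allocate(8, disks))   # blocks grouped per disk
```

With many clients doing concurrent I/O, the scattered layout is why the benchmark results in this guide are largely insensitive to file layout.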
NETAPP E-SERIES STORAGE SYSTEMS
NetApp E-Series Storage: high-performance controller systems designed to deliver enhanced performance, scalability, and simplicity.
The E-Series Storage segment provides details on:
• Features and Benefits
• Full Stripe Write Acceleration
• Platform Offerings
• NetApp E5700 and E2800
• SANtricity OS
• Hardware Options
• RAS Features
• E-Series Positioning
E-SERIES FEATURES AND BENEFITS
Gain fast, efficient, and scalable storage for high-performance computing environments

FEATURE | BENEFITS

Host interface options: 12Gb and 6Gb SAS; 32Gb and 16Gb FC; 1Gb, 10Gb, and 25Gb iSCSI; 100Gb IB (SRP & iSER); NVMe/RoCE with 100GbE; NVMe/IB with 100Gb IB
• Helps improve bandwidth utilization, management, and network robustness
• Multiprotocol support is available for some interface combinations
• Direct-attachment support and the capability to be shared by multiple host servers offer ease of use and simplicity at an affordable price

Next-generation, high-performance controller
• Delivers outstanding performance for resource-intensive applications

Data protection
• Dynamic Disk Pooling (DDP) offers simplified installation and faster rebuilds than traditional RAID options
• RAID levels 0, 1, 5, 6, 10, and DDP provide flexibility to choose the level of protection required

SANtricity OS
• Provides a powerful yet easy-to-use and intuitive graphical user interface for administrative activities on single or multiple systems
• Includes configuration, reconfiguration, expansion, and routine maintenance, as well as performance tuning and management of advanced functions

Performance
• I/O ordering, dynamic caching, hybrid performance optimizations, and high-throughput algorithms, such as full stripe write acceleration, are built into the code

Energy efficiency
• Energy-efficient power supplies designed to meet multiple efficiency standards, together with variable-speed fans, help reduce power consumption and provide a lower overall total cost of ownership
E-Series Storage Systems: FULL STRIPE WRITE ACCELERATION
• Sending full-stripe host I/O can greatly enhance performance with the latest SANtricity software (e.g., 1 MiB I/O on an 8+2 volume group with a 128 KiB segment size).
• The FSWA algorithm constantly examines the incoming write data stream to determine the effectiveness of enabling FSWA mode, which greatly enhances sequential write throughput.
• For optimal performance, proper stripe alignment is required to avoid partial segment updates, which incur read-modify-write (RMW) penalties. FSWA is available with RAID 5 or RAID 6 volume groups with stripe sizes up to 2 MiB (2048 KiB).
• For full-stripe I/O, cache mirroring is not used and write-through (to media) is used. In write-through mode, "Command Complete, Good" is returned when the data has been written to media. Mixed full-stripe I/O will use cache and write-through when in FSWA mode.
• Data is protected in the event of:
  • Primary RAID controller failure, since the array controller does not return Command Complete until the data is on media.
  • Power loss to the array for non-write-through writes, since internal batteries on both controllers support an offload of any unwritten data to internal flash drives for write completion when power is restored.
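As a back-of-the-envelope check of the 8+2 example above: the full-stripe size is data drives × segment size, and a host write only avoids the RMW penalty when it starts on a stripe boundary and covers whole stripes. A small sketch of the arithmetic (illustrative only, not SANtricity's internal logic):

```python
def full_stripe_bytes(data_drives, segment_kib):
    """Full-stripe width in bytes: parity drives hold no host data,
    so only the data drives count toward the stripe."""
    return data_drives * segment_kib * 1024

def is_full_stripe_write(offset, length, stripe_bytes):
    """A write qualifies only if it is stripe-aligned and stripe-sized;
    anything else forces a read-modify-write on the partial stripe."""
    return offset % stripe_bytes == 0 and length % stripe_bytes == 0 and length > 0

# 8+2 RAID 6 with a 128 KiB segment: stripe = 8 * 128 KiB = 1 MiB,
# matching the 1M I/O size quoted above.
stripe = full_stripe_bytes(8, 128)
print(stripe)                                            # 1048576
print(is_full_stripe_write(0, 1 << 20, stripe))          # True
print(is_full_stripe_write(64 * 1024, 1 << 20, stripe))  # False: unaligned, incurs RMW
```

This is why the Spectrum Scale file system block size is typically chosen as a multiple of the full-stripe size on E-Series volumes.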
E-Series Storage Systems: PLATFORM OFFERINGS

ENTRY HYBRID (E2000 SERIES): E2800
• 180 drives; FC, SAS, iSCSI
• 45K/300K IOPS (writes/reads)(1); 10 GB/s reads(2); 3.7 GB/s CME writes(2)
• Shelves: 2U12*, 2U24*, 4U60*; SANtricity 11.40.2

MIDRANGE HYBRID (E5000 SERIES): E5700
• 480 drives; FC, SAS, iSCSI, IB (SRP & iSER), NVMe/RoCE, NVMe/IB
• 185K/1,000K IOPS (writes/reads)(1); 21 GB/s reads(2); 14 GB/s FSWA(3); 9.0 GB/s CME seq. write(2)
• Shelves: 2U24*, 4U60*; SANtricity 11.40.2

ALL-FLASH ARRAYS (EF SERIES): EF280
• 96 drives; FC, SAS, iSCSI
• 45K/300K IOPS (writes/reads)(1); 10 GB/s reads(2); 3.7 GB/s CME writes(2)
• Shelves: 2U24*; SANtricity 11.40.2

ALL-FLASH ARRAYS (EF SERIES): EF570
• 120 drives; FC, SAS, iSCSI, IB (SRP & iSER), NVMe/RoCE, NVMe/IB
• 185K/1,000K IOPS (writes/reads)(1); 21 GB/s reads(2); 14 GB/s FSWA(3); 9.0 GB/s CME seq. write(2)
• Shelves: 2U24*; SANtricity 11.40.2

(1) IOPS performance based on RAID 6, 4KB random 100% reads
(2) Throughput performance based on RAID 6, 512KB sequential
(3) FSWA requires full-stripe write workloads
* 12G SAS shelves
NETAPP E5700 SERIES STORAGE
Achieve field-proven and reliable performance efficiency for modern enterprise applications

Key Benefits
• Extreme Performance. Accelerate performance, boost IOPS, and increase density with a hybrid system that is perfectly suited for modern enterprise applications, such as big data analytics, technical computing, video surveillance, and backup and recovery.
• Unmatched Value. Customize configurations to optimize performance and capacity requirements with three distinct disk system shelves, multiple drive types, and a complete selection of SAN interfaces. Address ever-changing business requirements with the industry's most flexible, enterprise-grade storage system.
• Proven Simplicity. Simplify deployment and access to your data with secure, reliable storage backed by nearly 1 million installations.
• Cloud Connectivity. Enable flexible and cost-effective backup and recovery to the cloud from a NetApp® E5700 Series system with NetApp SANtricity® Cloud Connector.
Your enterprise must have storage that can meet your performance and capacity demands without sacrificing simplicity and efficiency. That is why the NetApp E5700 system was designed with NetApp SANtricity OS adaptive caching algorithms, which address a large range of application workloads. Those workloads range from high-IOPS or bandwidth-intensive streaming applications to a mixture of workloads that deliver high-performance storage consolidation.
Requiring just 2U of rack space, the E5700 hybrid array combines extreme IOPS, sub-100 microsecond response times, and up to 21GBps of read bandwidth and 14GBps of write bandwidth*. With fully redundant I/O paths, advanced data protection features,
and extensive diagnostic capabilities, the E5700 storage systems enable you to achieve greater than 99.9999% availability and provide data integrity and security.
With more than 1 million systems shipped, NetApp E-Series technology is found in enterprise SAN application environments such as big data analytics, technical computing, video surveillance, and backup and recovery. E-Series powers the world’s largest enterprises:
• The world’s second-largest stock exchange
• The world’s largest online media cash register
• The world’s largest wealth management firms
• The world’s largest data warehouse
*FSWA required to reach this performance number
E5700 Controller: PLATFORM OVERVIEW

MAINTAINS PRICE/PERFORMANCE LEADERSHIP
• x8 NTB PCIe connection between partner controllers through the mid-plane
• 8-core Broadwell-DE, up to 64GB of memory per controller

FLEXIBLE PLATFORM TARGETED FOR DIFFERENT APPLICATIONS
• Core enterprise applications (e.g., database, backup, VMware, video)
• Specialized enterprise applications (e.g., Splunk, Cassandra, MongoDB, Ceph, Swift)
• HPC, Media & Entertainment, Oil & Gas, AI/ML/DL, Genomics, and Big Data analytics applications

FLEXIBLE ENCLOSURE OPTIONS
• 2U24 and 4U60 12Gb SAS enclosures

VARIETY OF HOST INTERFACES
• 12Gb SAS: Low-cost, direct attach to servers
• 32Gb FC: High-speed, for performance-oriented workloads
• 10Gb/25Gb iSCSI: Simple administration, low cost, IP-based
• 100Gb IB (SRP & iSER), 100Gb NVMe/IB, and 100Gb NVMe/RoCE: Low latency, for performance-oriented workloads

ENHANCED SANTRICITY UI
• Modern, on-box, browser-based
E5700 Controller: FEATURE MATRIX

FEATURE | E5700
PROCESSOR | 8-core Intel Broadwell-DE
CONTROLLER MEMORY | DDR4, 16GB or 64GB
HOST INTERFACE (BASE) | Dual 10Gb iSCSI, or dual 16Gb FC
HOST INTERFACE (ONE ADD-ON CARD) | Quad 32Gb FC; dual 100Gb IB; 100Gb NVMe/IB; 100Gb NVMe/RoCE; quad 25Gb iSCSI (optical); quad 12Gb SAS
EXPANSION PORTS | Dual 12G SAS
DRIVES | 480 HDDs or 120 SSDs
ENCLOSURE SUPPORT | 8 (7 for expansion)
E5700 Controller: PERFORMANCE MATRIX (RAID 6)

IOPS (IO/s)
• Max cached IOPS (512B): 1.1M
• Max random read IOPS (4KB): 150K (HDDs), 1.0M (SSDs)
• Max random write IOPS (4KB): 30K (HDDs), 185K (SSDs)

BANDWIDTH (MB/s)
• Max cached reads (512KB): 24,000
• Max sequential reads (512KB): 21,000
• Max sequential writes (512KB): 9,000 SSDs (CME), 8,500 HDDs (CME)
• Max sequential writes (1M): 14,000 SSDs (FSWA), 13,000 HDDs (FSWA)
E5700 CONTROLLER AT A GLANCE
Designed to be equally adept at delivering throughput for bandwidth-intensive applications and I/O operations for transactional applications.

[Controller rear-view diagram showing:]
• 12Gb SAS disk expansion ports
• Dual Ethernet management ports
• RJ45 serial port
• Mini-USB serial communication port (non-production use)
• Host base ports: 16Gb FC or 10Gb iSCSI
• Host interface card options: 4-port 12Gb SAS wide port, 4-port 32Gb FC, 4-port 25Gb iSCSI (SFP+), 2-port 100Gb IB
NETAPP E2800 SERIES STORAGE
Gain affordable performance and simplicity with our cost-effective all-flash and hybrid arrays

Key Benefits
• Optimized Performance. Leverage all flash for a wide range of mixed workloads.
• Application Integration. Facilitate ongoing management and maintenance. Enable seamless integration into your environment through application-aware plug-ins for VMware, Oracle, and Microsoft and through plug-ins and drivers for emerging applications, such as Splunk, Nagios, and OpenStack.
• Ease of Use and Configuration. Easily install and administer NetApp® E-Series storage systems by using the new on-box, web-based, and powerful NetApp SANtricity® software.

The Challenge
Today, many small and medium-sized businesses and remote and branch offices seek new ways to manage growing data requirements with minimal cost and maintenance. Consistent performance delivery is an imperative. Yet managing data is increasingly more complex, especially with limited resources, space, and power.

The Solution
All-flash and hybrid storage with low acquisition costs. The NetApp E2800 storage system offers all-flash and hybrid configuration options, so you can streamline your IT infrastructure and drive down costs. Pay-as-you-grow flexibility makes the E2800 an excellent solution for companies of all sizes that are facing rapid, unpredictable growth.

Unlike other storage systems that add file or virtualization layers in the I/O data path, E2800 systems are purpose-built to optimize performance for mixed workloads. A next-generation controller that is built on Intel processor technology, along with a 12Gb SAS infrastructure, improves IOPS and throughput to help you extract value from your data and take action faster.

The E2800 offers an improved user experience with an on-box, web-browser-based interface that is modern, simple, and clean. The intuitive interface of the E2800 simplifies configuration and maintenance while providing enterprise-level storage capabilities to deliver consistent performance, data integrity, and security.

The E2800 is based on a field-proven architecture that delivers high reliability and greater than 99.9999% availability. It is easy to install and to use, optimized for performance efficiency, and fits into most application environments. The E2800 system offers excellent price-to-performance for small and medium-sized businesses, remote and branch offices, and workgroups within an enterprise.

Dynamic Disk Pools
Dynamic Disk Pools (DDP) simplify the management of traditional RAID groups by distributing data parity information and spare capacity across a pool of drives. DDP enhances data protection by enabling faster rebuilds after a drive failure, protecting against potential data loss if additional drive failures occur. DDP dynamic rebuild technology uses every drive in the pool to rebuild a failed drive, enabling exceptional performance under failure.

DDP eliminates complex RAID management. With DDP, there are no idle spares to manage, and you do not need to reconfigure RAID when you expand your system. Compared with traditional RAID, DDP also significantly reduces the impact on performance after one or more drives fail.
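The rebuild advantage of DDP comes from parallelism: traditional RAID funnels every reconstructed block onto a single hot spare, while DDP scatters reconstruction across all surviving pool drives, each contributing only a slice of its bandwidth. A toy model of rebuild time (the drive speed, pool size, and rebuild fraction below are illustrative assumptions, not NetApp specifications):

```python
def raid_rebuild_hours(drive_tb, spare_write_mbps):
    """Traditional RAID: the lone spare's write speed is the bottleneck."""
    return drive_tb * 1e6 / spare_write_mbps / 3600

def ddp_rebuild_hours(drive_tb, per_drive_mbps, pool_drives, rebuild_fraction=0.1):
    """DDP: reconstructed data is spread across all surviving drives,
    each giving up only a modest fraction of its bandwidth so host I/O
    keeps most of the pool's throughput during the rebuild."""
    aggregate = (pool_drives - 1) * per_drive_mbps * rebuild_fraction
    return drive_tb * 1e6 / aggregate / 3600

# Assumed: 12 TB NL-SAS drive, ~200 MB/s streaming writes, 60-drive pool.
print(round(raid_rebuild_hours(12, 200), 1))   # hours bottlenecked on one spare
print(round(ddp_rebuild_hours(12, 200, 60), 1))  # hours spread across the pool
```

Even with each surviving drive contributing only 10% of its bandwidth, the aggregate rebuild rate of a 60-drive pool far exceeds a single spare's, which is the intuition behind DDP's faster rebuilds.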
E2800 Controller: PLATFORM OVERVIEW

MAINTAINS PRICE/PERFORMANCE LEADERSHIP
• All-flash and hybrid arrays offer industry-leading IOPS and bandwidth
• Up to 10GBps sustained bandwidth | 300,000 IOPS sustained

FLEXIBLE PLATFORM TARGETED FOR DIFFERENT APPLICATIONS
• Core enterprise applications (e.g., database, backup, VMware, video)
• Specialized enterprise applications (e.g., Splunk, Cassandra, MongoDB, Ceph, Swift)
• HPC, Media & Entertainment, Oil & Gas, AI/ML/DL, Genomics, and Big Data analytics applications

FLEXIBLE ENCLOSURE OPTIONS
• 2U12, 2U24, and 4U60 12Gb SAS enclosures

VARIETY OF HOST INTERFACES
• 12Gb SAS: Low-cost, direct attach to servers
• 16Gb & 32Gb FC: High-speed, for performance-oriented workloads
• 10Gb & 25Gb iSCSI: Simple administration, low cost, IP-based

ENHANCED SANTRICITY UI
• Modern, on-box, browser-based
E2800 Controller: FEATURE MATRIX

FEATURE | E2800
PROCESSOR | Intel Broadwell-DE
CONTROLLER MEMORY | 8GB or 16GB
HOST INTERFACE (BASE) | Dual 10Gb iSCSI (optical), dual 10Gb iSCSI (Base-T), or dual 16Gb FC
HOST INTERFACE (ONE ADD-ON CARD) | Dual/quad 16Gb or 32Gb FC; dual/quad 12Gb SAS; dual/quad 10Gb or 25Gb iSCSI (optical); dual 10Gb iSCSI (Base-T)
EXPANSION PORTS | Dual 12G SAS
DRIVES | 180 HDDs or 96 SSDs
ENCLOSURE SUPPORT | 4 (3 for expansion)
E2800 Controller: PERFORMANCE MATRIX (RAID 6)

IOPS (IO/s)
• Max cached IOPS (512B): 800K
• Max random read IOPS (4KB): 55K (HDDs), 300K (SSDs)
• Max random write IOPS (4KB): 10K (HDDs), 45K (SSDs)

BANDWIDTH (MB/s)
• Max cached reads (512KB): 10,000
• Max sequential reads (512KB): 10,000
• Max sequential writes (512KB): 3,700 (CME)
• Max sequential writes (1M): 6,000 (FSWA)
E2800 CONTROLLER AT A GLANCE
Designed to be equally adept at delivering throughput for bandwidth-intensive applications and I/O operations for transactional applications.

[Controller rear-view diagram showing:]
• 12Gb SAS disk expansion ports
• Dual Ethernet management ports
• RJ45 serial port
• Mini-USB serial communication port (non-production use)
• Host base ports: 16Gb FC or 10Gb iSCSI
• Host interface card options: 4-port 12Gb SAS wide port, 4-port 16Gb or 32Gb FC, 4-port 10Gb or 25Gb iSCSI (SFP+), 2-port 12Gb SAS
E-Series Storage Systems: SANTRICITY OS
PERFORMANCE-OPTIMIZED SANTRICITY® OS
• Efficient architecture with high throughput and IOPS
• Minimizes CPU and memory requirements
MANAGEMENT APIS AND PROVIDERS
• SANtricity Web Services proxy (RESTful API)
• SMI-S provider
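The SANtricity Web Services proxy exposes managed storage systems under a versioned REST path and uses HTTP basic authentication. A minimal sketch of composing requests against it (the `/devmgr/v2` base path, the localhost address, and the credentials are assumptions for illustration; consult the proxy's API documentation for your release):

```python
import base64
from urllib.parse import urljoin

def swp_url(base, *segments):
    """Compose a Web Services proxy endpoint URL.
    '/devmgr/v2' is the commonly documented base path (assumed here)."""
    path = "devmgr/v2/" + "/".join(segments)
    return urljoin(base, path)

def basic_auth_header(user, password):
    """Build the HTTP Basic Authorization header the proxy expects."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

base = "https://localhost:8443/"  # assumed proxy address
print(swp_url(base, "storage-systems"))
print(swp_url(base, "storage-systems", "1", "volumes"))
print(basic_auth_header("admin", "admin"))
```

An HTTP client of your choice can then GET these URLs with the header attached to enumerate arrays and volumes, which is how the benchmark environment in this guide could be inventoried programmatically.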
HARDWARE OPTIONS

4U/60-drive SAS-3 DE460C ENCLOSURE
Supports integrated controllers (RBOD)

SHELF EXPANSION
• Supports expansion with 12Gb SAS IOM12 (EBOD)
• 4 x 4 or 2 x 8 SAS-3 ports for data ingest
• Estimated BW of 17.6 GBps

FLEXIBLE DRIVE OPTIONS
• High-density disk shelf supporting 60 drives, 2.5" HH or 3.5" FH
• 5 horizontal drawers with 12 drives per drawer
• Supports 60 SSDs, 60 10K HDDs, or 60 NL-SAS HDDs
• Up to 228TB raw capacity with 3.8TB SSDs and 720TB with 12TB NL-SAS drives

SUPERIOR RAS
• Drives remain online when a drawer is extended for service
• Individual drawer extension and front access enable safer drive replacement
• Drawers can be redundant and replaceable online

HIGH-EFFICIENCY TCO
• ENERGY STAR Platinum-rated high-efficiency 2325-watt power supplies
• 200-240 VAC auto-ranging
• Supports up to 16.3 watts per drive slot; slots may be power cycled individually
2U/24-drive SAS-3 DE224C ENCLOSURE
Supports integrated controllers (RBOD)

SHELF EXPANSION
• Supports expansion with 12Gb SAS IOM12 (EBOD)
• 4 x 4 or 2 x 8 SAS-3 ports for data ingest
• Estimated BW of 17.6 GBps

FLEXIBLE DRIVE OPTIONS
• High-density 2U shelf supporting 24 drives
• 24 dual-ported SAS drive slots with power-cycle feature
• 2.5" HDDs and/or 2.5" SSDs
• Fully redundant hot-swappable components
• Up to 367TB of raw capacity with 15.3TB SSDs

DRIVE CARRIERS
• Leverages common NetApp drive carriers

HIGH-EFFICIENCY TCO
• ENERGY STAR Platinum-rated high-efficiency 900-watt power supplies
• 88-264 VAC auto-ranging
• Supports up to 15.3 watts per drive slot
2U/12-drive SAS-3 DE212C ENCLOSURE
Supports integrated controllers (RBOD)

SHELF EXPANSION
• Supports expansion with 12Gb SAS IOM (Otter) (EBOD)
• 4 x 4 or 2 x 8 SAS-3 ports for data ingest
• Estimated BW of 17.6 GBps

FLEXIBLE DRIVE OPTIONS
• High-density 2U shelf supporting 12 drives
• 12 dual-ported SAS drive slots with power-cycle feature
• 3.5" HDDs and/or 2.5" SSDs
• Fully redundant hot-swappable components
• Up to 144TB of raw capacity with 12TB 7.2K NL-SAS drives

DRIVE CARRIERS
• Leverages common NetApp drive carriers

HIGH-EFFICIENCY TCO
• ENERGY STAR Platinum-rated high-efficiency 900-watt power supplies
• 88-264 VAC auto-ranging
• Supports up to 16.3 watts per drive slot
KEY RELIABILITY, AVAILABILITY, AND SERVICEABILITY (RAS) FEATURES

PROVEN RELIABILITY, AVAILABILITY, AND SERVICEABILITY
• More than 30 years of industry knowledge
• Over 70,000 systems installed, delivering 99.9999% availability

PROVIDES DATA PROTECTION & SECURITY
• Self-encrypting disks
• T10 PI data assurance
• Media scan with automatic parity check and optional correction
• Extensive diagnostic data capture and statistics collection
• Proactive disk and I/O monitoring with automatic drive evacuator functionality for suspect drives
• Optional RAID parity verification
• Embedded system health check
E-Series Storage Systems: POSITIONING FOR THE HPC INDUSTRY

[Positioning chart: scalability/performance (average to high) versus access protocol (block to parallel file). E-Series storage alone provides block-protocol access; NetApp E-Series storage with IBM Spectrum Scale occupies the high-scalability/performance, parallel-file-protocol quadrant alongside IBM Spectrum Storage.]
IBM Spectrum Scale
IBM Spectrum Scale — Scalable File and Object Storage for analytics and content repositories.
The Spectrum Scale segment provides details on:
• Overview
• Features and Benefits
• Product Specifications
IBM SPECTRUM SCALE
Cognitive storage manages unstructured data for cloud, big data, analytics, objects, and more
Highlights
• Consolidate storage across traditional file and new-era workloads for object, Hadoop, and analytics use cases
• Achieve new operational efficiency and cost effectiveness—deliver up to 10 times higher performance on the same hardware
• Help lower the cost of data retention up to 90 percent through cognitive and policy-driven automation
• Improve application performance with scale-out and flash-based acceleration
• Enable collaboration and efficient sharing of resources among global, distributed teams
• Transparently tier to and from cloud object storage on-premises or to the public cloud
Scalable File and Object Storage for analytics and content repositories

IBM Spectrum Scale is flexible software-defined storage that can be deployed as high-performance file storage or a cost-optimized large-scale content repository. IBM Spectrum Scale, previously known as IBM General Parallel File System (GPFS), is built from the ground up to scale performance and capacity with no bottlenecks. Today's storage requirements around massive capacities and the need for faster time to insights cannot be met by traditional scale-up storage systems that can't scale beyond a few filers. This is why IBM Spectrum Scale is deployed at the most demanding enterprises in the world for both high performance and high scale. IBM Spectrum Scale provides interfaces for both traditional file-based applications and modern object-based applications.

Enterprises around the globe have deployed IBM Spectrum Scale for:
• Compute clusters (technical computing)
• Big data and analytics with support for HDFS
• High-performance backup and restores
• Private cloud
• Content repositories

The IBM Spectrum Scale GUI, simplified data management, and integrated information lifecycle tools can manage petabytes of data and billions of files, enabling you to control the cost of skyrocketing data growth. IBM Spectrum Scale:
• Provides extreme scalability for data, metadata, and flash.
• Reduces storage costs up to 90 percent with automatic policy-based storage tiering from flash through disk to tape.
• Improves security and management efficiency in cloud, big data, and analytics environments.
Spectrum Scale: FEATURES AND BENEFITS

FEATURE | BENEFITS

Performance and scalability
• IBM Spectrum Scale is designed to meet the needs of data-intensive applications including content repositories, technical computing and big-data analysis. The solution scales up to more than a billion petabytes of data and hundreds of GB/s throughput.
• IBM Spectrum Scale local cache can use inexpensive solid-state drives (SSDs) or flash placed directly in IBM Spectrum Scale Client nodes that accelerate input/output (I/O) performance up to six times by reducing the time CPUs spend waiting for data and reducing the overall load on network and storage resources
Transparent cloud tiering • IBM Spectrum Scale can move data to and from object clouds on-premises or as a service from public cloud providers to lower data total cost of ownership and enable “pay as you grow” using commodity-driven cloud pricing.
Simplified administration • New graphical user interface for common administration tasks can speed provisioning, configuration and monitoring of an IBM Spectrum Scale cluster. It enhances administrator productivity and simplifies new deployments.
• IBM Spectrum Scale is integrated with IBM Spectrum Control to monitor multiple IBM Spectrum Scale installations (as well as other storage). The familiar interfaces can help save time and money by improving staff productivity.
Global file sharing with active file management (AFM)
• IBM Spectrum Scale provides low-latency access to data from anywhere in the world with AFM distributed disk-caching technology. AFM expands the IBM Spectrum Scale global namespace across geographical distances, providing fast read and write performance with automated namespace management regardless of location.
Cost-effective information lifecycle management
• IBM Spectrum Scale enhances information lifecycle management, helping lower data management costs significantly using multiple tiers of storage, including tape. With powerful policy-driven automation and tiered storage management, organizations can create optimized tiered storage pools by grouping devices (flash, SSD, disks or tape) based on performance, locality or cost.
End-to-end data reliability, availability and integrity
• IBM Spectrum Scale provides system scalability, very high availability and reliability with no single point of failure in large-scale storage infrastructures. Administrators can configure the file system so it remains available automatically if a disk or server fails. IBM Spectrum Scale can be configured to automatically recover from node, storage and other infrastructure failures.
Native encryption and secure erase
• IBM Spectrum Scale offers the protection of data at rest and secure deletion. IBM Spectrum Scale encryption is designed to provide protection of data from security breaches; unauthorized access; and loss, theft, or improper disposal. Cryptographic erase provides fast, simple and secure file deletion.
Object storage and OpenStack
• IBM Spectrum Scale supports the leading object storage protocols—OpenStack Swift and Amazon S3—giving architects building public, private and hybrid clouds access to the features and capabilities of industry-leading enterprise scale-out software-defined storage. IBM Spectrum Scale unifies virtual machine images, block devices, objects and files within a single namespace no matter where data resides.
Synchronous and asynchronous disaster recovery
• Ensures the survivability of data in the event of failures
Support for a broad set of access protocols
• Multi-protocol support with native access: NFS v3.0 and v4.0, SMB v2.0 and v3.0; OpenStack Swift and Amazon S3
• POSIX compliance
Policy-driven compression • Save storage space and optimize resources by compressing only the data that benefits from it
Quality of service • Throttle background tasks to prioritize I/O operations per second (IOPS) for users
Spectrum Scale SPECIFICATIONS AT A GLANCE

FEATURE APPLICATION
Operating systems supported • IBM AIX®; Linux: Red Hat, SUSE Linux Enterprise Server; Microsoft Windows Server 2012, Microsoft Windows 7; IBM z Systems™
Hardware supported
• x86 architecture: Intel EM64T processors or AMD Opteron, minimum 1 GB system memory
• IBM POWER® architecture: AIX v6.1 or v7.1, Linux on POWER3 (minimum), minimum 1 GB system memory; z Systems (Linux only)
Maximum number of files/file system • 2^64 (9 quintillion) files per file system
Maximum file system size • 2^99 bytes
Minimum/maximum number of nodes • 1 – 16,384
Protocols
• POSIX, GPFS, NFS v4.0, SMB v3.0
• Big data and analytics: Hadoop MapReduce
• Cloud: OpenStack Cinder (block), OpenStack Swift (object), S3 (object)
Cloud object storage • IBM Cloud Storage System (Cleversafe), Amazon S3, IBM SoftLayer® Native Object, OpenStack Swift and Amazon S3 compatible providers
SPECTRUM SCALE WITH SCATTER BLOCK ALLOCATION METHOD RESULTS
Performance Summary and Sizing With Scatter Block Allocation Method
[Figure: Building block — GPFS clients connect over an FDR IB network to NSD servers, which attach via SAS links to NetApp E5700 or E2800 storage modules]
IOR Testing Performance Details

IOR-2.10.3: MPI Coordinated Test of Parallel I/O
Run began: Thu Mar 16 02:35:39 2017
Command line used: /usr/local/bin/IOR -i 5 -r -e -E -F -k -t 1M -b 100G -o /gpfs1/ior
Machine: Linux lenovo-1.atai.lan
Summary:
    api = POSIX
    test filename = /gpfs1/ior
    access = file-per-process
    ordering in a file = sequential offsets
    ordering inter file = no tasks offsets
    clients = 100 (25 per node)
    repetitions = 5
    xfersize = 1 MiB
    blocksize = 100 GiB
    aggregate filesize = 10000 GiB

Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)
read       18633.88   18557.27   18610.16    27.61    18633.88   18557.27   18610.16    27.61    550.23817
Max Read: 18633.88 MiB/sec (19539.04 MB/sec)
Run finished: Thu Mar 16 03:21:30 2017
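IOR reports the peak rate in both binary MiB/sec and decimal MB/sec; the second figure is derived from the first. A minimal sketch of the conversion:

```python
# Convert IOR's binary MiB/s figure to the decimal MB/s figure it also prints.
# 1 MiB = 1048576 bytes; 1 MB = 1000000 bytes.
def mib_to_mb(mib_per_sec: float) -> float:
    return mib_per_sec * 1048576 / 1000000

# The run above: 18633.88 MiB/sec corresponds to ~19539.04 MB/sec.
print(round(mib_to_mb(18633.88), 2))
```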
Scatter Testing Settings and Configurations

PARAMETER VALUE DEFAULT COMMENTS
nsdThreadsPerQueue 12 3 Determines the number of threads assigned to process each NSD server IO queue.
nsdThreadsPerDisk 12 3 An estimate of how many threads can keep a single NSD LUN busy without introducing a detrimental amount of backlog.
nsdSmallThreadRatio 1 7 Set to 1, which is ideal for large-transfer workloads.
nsdMaxWorkerThreads 1024 512 Based on best practices for this type of configuration.
pagepool 8G (GPFSperf & IOR runs), 24G (video streaming run) 1G Pagepool defines the amount of physical memory pinned by GPFS at startup. It is used in various places in the code, but from a performance perspective it is required to cache data and metadata objects. For GPFSperf and IOR sequential runs, a pagepool of 8G saturates the storage; the focus of this test was the storage, not the server capability.
maxFilesToCache 4M (video streaming run) 4K (GPFSperf & IOR runs) Should be set fairly large to assist with local workload. It can be set very large in small client clusters, but should remain small on clients in large clusters to avoid excessive memory use on the token servers.
maxMBpS 28672 2048 Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node. As a general rule, try setting maxMBpS to twice the IO throughput available to the node: ~2 × 2 (FDR IB links/node) × 7000 MBpS.
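The maxMBpS rule of thumb above (twice the per-node IO throughput) reduces to a small calculation; the ~7000 MBpS per FDR IB link is the figure the table uses:

```python
# Rule of thumb from the table: maxMBpS ~= 2 x (links per node) x (MBpS per link).
def suggested_max_mbps(links_per_node: int, mbps_per_link: int) -> int:
    return 2 * links_per_node * mbps_per_link

# 2 FDR IB links per node at ~7000 MBpS each gives 28000, rounded up to
# 28672 in this configuration.
print(suggested_max_mbps(2, 7000))
```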
PERFORMANCE SUMMARY AND SIZING WITH SCATTER BLOCK ALLOCATION METHOD
Scatter Performance Summary and Sizing — benchmarking of the optimal building block and performance testing results with GPFSperf.
The Scatter Performance Summary and Sizing segment provides details on:
• Building Block Configuration Guide
• Building Block Maximum Performance
E5700 | E2800 with Spectrum Scale CONFIGURATION GUIDE
[Figure: Building block — Spectrum Scale clients connect over an FDR IB network to NSD servers, which attach via SAS links to NetApp E5700 or E2800 storage modules (E5700 only)]

Spectrum Scale CLIENTS
Sockets: 2
Cores: 12/socket
CPU Model Name: Intel Xeon CPU E5-2650 v4 @ 2.20GHz

NSD SERVERS (R720)
Sockets: 2
Cores: 6/socket
CPU Model Name: Intel® Xeon® CPU E5-2620 @ 2GHz

E5700
Expansions: 1 (1+1 configuration)
Drive Count: 120

E2800
Expansions: 1 (1+1 configuration)
Drive Count: 120
Spectrum Scale Single – E5700 | E2800 BUILDING BLOCK MAX. PERFORMANCE
Performance scales linearly by drive count
Compare the E5700 and E2800 Building Block tables below for scaling across both platforms
E5700
Drive Count                          60     80     100    120
RAID 6, 16MiB Filesystem Block Size
  Read (GB/s)                        5.69   7.50   9.4    10.5
  Write (GB/s)                       4.86   6.21   7.4    8.1
RAID 6, 4MiB Filesystem Block Size
  Read (GB/s)                        3.69   4.7    5.7    6.88
  Write (GB/s)                       2.84   3.59   4.42   5.2

E2800
Drive Count                          60     80     100    120
RAID 6, 16MiB Filesystem Block Size
  Read (GB/s)                        5.59   7.33   8.95   10.16
  Write (GB/s)                       4.02   4.11   4.07   4.06
RAID 6, 4MiB Filesystem Block Size
  Read (GB/s)                        3.54   4.53   5.56   6.54
  Write (GB/s)                       2.58   3.35   3.90   3.90
IOR TESTING PERFORMANCE DETAILS
IOR Testing Performance Details — IOR Driver Benchmark Scatter test results for sequential read and write using 16MiB and 4MiB block sizes.
The IOR Testing Performance Details section provides details on:
• IOR Sequential Test Scenario
• Sequential Read with 16MiB Block Size
• Sequential Write with 16MiB Block Size
• Test Conclusions
IOR Driver Benchmark SEQUENTIAL IO PATTERN TEST RESULTS
*All the files were created before the tests.
[Figure: Building block — Spectrum Scale clients connect over an FDR IB network to NSD servers, which attach via SAS links to NetApp E5700 or E2800 storage modules (E5700 only)]
IOR TEST PARAMETERS
IO Driver: IOR
File Size: 160GiB
Number of Files: 32
Transfer Size: 1MiB
Pattern: Sequential
Access: File-per-Process
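With file-per-process access, the aggregate data moved follows directly from the parameters above (number of files × file size). A minimal sanity check:

```python
# File-per-process: each client process writes one file, so the aggregate
# file size is number_of_files x file_size. Values match this section's runs.
def aggregate_gib(number_of_files: int, file_size_gib: int) -> int:
    return number_of_files * file_size_gib

# 32 files x 160 GiB = 5120 GiB, as reported in the IOR summaries below.
print(aggregate_gib(32, 160))
```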
IOR Driver Command E2800 SEQUENTIAL WRITE 16MiB

IOR-2.10.3: MPI Coordinated Test of Parallel I/O
Run began: Wed Feb 6 09:13:43 2019
Command line used: /gpfs1/IOR -w -e -E -F -k -t 1M -b 160G -o /gpfs1/testFile
Machine: Linux lenovo1.hpc.lan
Summary:
    api = POSIX
    test filename = /gpfs1/testFile
    access = file-per-process
    ordering in a file = sequential offsets
    ordering inter file = no tasks offsets
    clients = 32 (8 per node)
    repetitions = 1
    xfersize = 1 MiB
    blocksize = 160 GiB
    aggregate filesize = 5120 GiB

Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)
write      5931.81    5931.81    5931.81     0.00     5931.81    5931.81    5931.81     0.00     883.85775
Max Write: 5931.81 MiB/sec (6219.96 MB/sec)
Run finished: Wed Feb 6 09:28:27 2019
IOR Driver Command E2800 SEQUENTIAL READ 16MiB

IOR-2.10.3: MPI Coordinated Test of Parallel I/O
Run began: Wed Feb 6 09:44:29 2019
Command line used: /gpfs1/IOR -r -e -E -F -k -t 1M -b 160G -o /gpfs1/testFile
Machine: Linux lenovo1.hpc.lan
Summary:
    api = POSIX
    test filename = /gpfs1/testFile
    access = file-per-process
    ordering in a file = sequential offsets
    ordering inter file = no tasks offsets
    clients = 32 (8 per node)
    repetitions = 1
    xfersize = 1 MiB
    blocksize = 160 GiB
    aggregate filesize = 5120 GiB

Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)
read       7155.85    7155.85    7155.85     0.00     7155.85    7155.85    7155.85     0.00     732.67012
Max Read: 7155.85 MiB/sec (7503.46 MB/sec)
Run finished: Wed Feb 6 09:56:42 2019
Spectrum Scale with E5700 CONCLUSIONS
• Excellent performance
• Performance scales linearly from 60 to 120 drives
• Performance and capacity scale linearly with the number of building blocks
SCATTER TESTING SETTINGS AND CONFIGURATIONS
Scatter Testing Settings & Configurations — Hardware settings used for benchmarking along with software settings to achieve optimum test results.
The Scatter Testing Settings & Configurations section provides details on:
• NSD Clients and Servers
• E5700 Configuration
• Linux Disk IO Settings
• Spectrum Scale | Scatter Configuration
• Spectrum Scale | Scatter Non-default Settings
• 16MiB Block Size | Scatter – File System Attributes
• 4MiB Block Size | Scatter – File System Attributes
NSD CLIENTS AND SERVERS

                         SPECTRUM SCALE NSD CLIENT NODES                      SPECTRUM SCALE NSD SERVER NODES
Operating System         CentOS v7.2                                          CentOS v7.2
Processing Elements      4 x dual-socket Intel Xeon CPU E5-2650 v4 @ 2.20GHz  4 x single-socket Intel Xeon CPU E5-2620 @ 2GHz
RAM Size                 128 GiB                                              192 GiB
Interconnection Network  FDR InfiniBand, 2 FDR IB links per node              FDR InfiniBand, 2 FDR IB links per node
E5700 | E2800 CONFIGURATION
Controller Firmware Version: 08.42.20.01
RAID Level: RAID6 (8+2)
Segment Size: 512 KB
Read Cache: Enabled
Write Cache: Enabled
Write cache without batteries: Disabled
Write cache with mirroring: Enabled
Flush write cache after (in seconds): 10.00
Dynamic cache read prefetch: Disabled (Spectrum Scale does pre-fetching at File level, so block-level was disabled)
Linux DISK IO SETTINGS

PARAMETER VALUE DEFAULT COMMENTS
scheduler noop deadline The noop scheduler is appropriate when the storage subsystem does its own caching.
read_ahead_kb 128 128 It is possible for the OS to detect sequential behavior at the block level even though there is no sequential access at the file level; in that case a value of 0 would prevent false positives.
nr_requests 128 128 Maximum number of outstanding requests against a given device; specifies the I/O block layer request descriptors per request queue. This value keeps NSDs busy and gives good throughput. The default was optimal in this case.
max_sectors_kb 2048 512 The maximum amount of data that can be transferred to a disk in a single I/O.
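These block-layer values are set per device through sysfs. A hedged sketch (the device names are placeholders; substitute your actual NSD LUN devices and run as root, with dry_run=True printing what would be written instead of writing it):

```python
# Sketch: apply the benchmark's Linux disk IO settings via sysfs.
# Device names passed in are placeholders -- use your real NSD LUN devices.
SETTINGS = {
    "scheduler": "noop",        # storage subsystem does its own caching
    "read_ahead_kb": "128",
    "nr_requests": "128",
    "max_sectors_kb": "2048",
}

def apply_disk_settings(devices, dry_run=True):
    """Return the equivalent shell commands; write to sysfs when dry_run=False."""
    lines = []
    for dev in devices:
        for attr, value in SETTINGS.items():
            path = f"/sys/block/{dev}/queue/{attr}"
            lines.append(f"echo {value} > {path}")
            if not dry_run:
                with open(path, "w") as f:
                    f.write(value)
    return lines

for cmd in apply_disk_settings(["sdb", "sdc"]):
    print(cmd)
```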
Spectrum Scale | Scatter CONFIGURATION

Version (mmdiag --version):
=== mmdiag: version ===
current GPFS build: "4.2.0.3"
Built on May 4 2016 at 09:29:46
Running 2 hours 30 minutes 16 secs

GPFS Network Shared Disks (rpm -qi gpfs.base):
Name        : gpfs.base
Version     : 4.2.0
Release     : 3
Architecture: x86_64
Install Date: Sun 29 May 2016 02:09:35 PM UTC
Group       : System Environment/Base
Size        : 48840684
License     : (C) COPYRIGHT International Business Machines Corp. 2001
Signature   : (none)
Source RPM  : gpfs.base-4.2.0-3.src.rpm
Build Date  : Wed 04 May 2016 02:22:46 PM UTC
Build Host  : bldlnx83.pok.stglabs.ibm.com
Relocations : (not relocatable)
Packager    : IBM Corp. <[email protected]>
Vendor      : IBM Corp.
URL         : http://www-03.ibm.com/systems/storage/spectrum/scale/index.html
Summary     : GPFS File Manager
Description : General Parallel File System File Manager
Spectrum Scale | Scatter NON-DEFAULT SETTINGS

pagepool 8G (GPFS clients)
pagepool 96G (NSD servers)
verbsRdma enable
verbsRdmaSend enable
verbsRdmasPerConnection 256
verbsRdmasPerNodeOptimize yes
verbsSendBufferMemoryMB 1024
workerThreads 512
ignorePrefetchLUNCount yes
maxMBpS 28672
scatterBuffers yes
scatterBufferSize 262144
nsdThreadsPerQueue 10
nsdThreadsPerDisk 12
nsdSmallThreadRatio 1
nsdMaxWorkerThreads 1024

mmlsconfig:
clusterName GPFS_Trafford.hpc.lan
clusterId 5639113227347647704
autoload yes
dmapiFileHandleSize 32
minReleaseLevel 4.2.0.1
ccrEnabled yes
cipherList AUTHONLY
maxblocksize 16384K
verbsRdma enable
nsdMaxWorkerThreads 1024
verbsRdmasPerNode 1024
scatterBuffers yes
scatterBufferSize 262144
nsdMultiQueueType 1
nsdThreadsPerDisk 12
nsdSmallThreadRatio 1
nsdbufspace 70
verbsRdmaSend yes
verbsRdmasPerConnection 256
verbsRdmasPerNodeOptimize yes
ignorePrefetchLUNCount yes
maxMBpS 28672
verbsSendBufferMemoryMB 1024
[client]
verbsPorts mlx4_0/1 mlx4_1/1
[server]
verbsPorts mlx5_0/1 mlx5_0/2
[common]
nsdThreadsPerQueue 10
pagepool 2G
[client]
pagepool 8G
[server]
pagepool 96G
[common]
workerThreads 512
adminMode central
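Per-node-class settings like the [client]/[server] pagepool stanzas above are applied with mmchconfig, which takes comma-separated attribute=value pairs and a -N node list or node class. A rough sketch that generates the commands (the node-class names "clientNodes" and "serverNodes" are hypothetical placeholders, not names from this cluster):

```python
# Sketch: generate mmchconfig commands for common and per-node-class settings.
# Node-class names below are placeholders -- substitute your own node classes.
def mmchconfig_cmds(common, per_class):
    cmds = ["mmchconfig " + ",".join(f"{k}={v}" for k, v in common.items())]
    for node_class, settings in per_class.items():
        opts = ",".join(f"{k}={v}" for k, v in settings.items())
        cmds.append(f"mmchconfig {opts} -N {node_class}")
    return cmds

for c in mmchconfig_cmds(
    {"workerThreads": 512, "maxMBpS": 28672},
    {"clientNodes": {"pagepool": "8G"}, "serverNodes": {"pagepool": "96G"}},
):
    print(c)
```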
Spectrum Scale | Scatter NON-DEFAULT SETTINGS continued

PARAMETER VALUE DEFAULT COMMENTS
verbsRdma yes no Enables InfiniBand RDMA transfers between Spectrum Scale client nodes and server nodes.
verbsRdmaSend yes no Enables the use of InfiniBand RDMA for most Spectrum Scale daemon-to-daemon communication.
verbsRdmasPerConnection 256 8 Maximum number of outstanding transfers at a time per connection. Value based on best practices.
scatterBuffers yes no Affects how Spectrum Scale organizes file data in the pagepool. Based on best practices for this type of configuration.
scatterBufferSize 256K 32K When RDMA is used on an FDR10, FDR, or EDR network, it may be beneficial to raise scatterBufferSize from the default of 32 KiB to 64 KiB, 128 KiB, or 256 KiB to reduce load on the Spectrum Scale RDMA completion threads. Based on best practices for this type of configuration.
workerThreads 512 48 Controls the maximum number of concurrent file operations at any one instant, as well as the degree of concurrency for flushing dirty data and metadata in the background and for prefetching data and metadata. Based on best practices for this type of configuration.
ignorePrefetchLUNCount yes no Tells the NSD client not to limit the number of requests based on the number of visible LUNs (as they can have a large number of physical disks behind them), and instead to limit by the maximum number of buffers and prefetch threads. Based on best practices for this type of configuration.
Spectrum Scale | Scatter NON-DEFAULT SETTINGS continued

PARAMETER VALUE DEFAULT COMMENTS
nsdThreadsPerQueue 10 3 Determines the number of threads assigned to process each NSD server IO queue.
nsdThreadsPerDisk 12 3 An estimate of how many threads can keep a single NSD LUN busy without introducing a detrimental amount of backlog.
nsdSmallThreadRatio 1 7 Set to 1, which is ideal for large-transfer workloads.
nsdMaxWorkerThreads 1024 512 Based on best practices for this type of configuration.
pagepool 8G client, 96G server 1G Pagepool defines the amount of physical memory pinned by Spectrum Scale at startup. It is used in various places in the code, but from a performance perspective it is required to cache data and metadata objects. For GPFSperf and IOR sequential runs, a pagepool of 8G saturates the storage; the focus of this test was the storage, not the server capability.
maxFilesToCache 4M (video streaming run) 4K (GPFSperf & IOR runs) Should be set fairly large to assist with local workload. It can be set very large in small client clusters, but should remain small on clients in large clusters to avoid excessive memory use on the token servers.
maxMBpS 28672 2048 Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node. As a general rule, try setting maxMBpS to twice the IO throughput available to the node: ~2 × 2 (FDR IB links/node) × 7000 MBpS.
Spectrum Scale CONFIGURATION (mmlscluster)

GPFS cluster information
========================
GPFS cluster name: GPFS_Trafford.hpc.lan
GPFS cluster id: 5639113227347647704
GPFS UID domain: GPFS_Trafford.hpc.lan
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Repository type: CCR

Node  Daemon node name  IP address      Admin node name  Designation
---------------------------------------------------------------------
1     nsd1-cluster      192.168.212.20  nsd1-cluster     quorum-manager
2     nsd2-cluster      192.168.212.21  nsd2-cluster     quorum-manager
3     lenovo1-cluster   192.168.212.24  lenovo1-cluster
4     lenovo2-cluster   192.168.212.25  lenovo2-cluster
5     lenovo3-cluster   192.168.212.26  lenovo3-cluster
6     lenovo4-cluster   192.168.212.27  lenovo4-cluster
File System Attributes 16MiB BLOCK SIZE | SCATTER

File system  Disk name  NSD servers
------------------------------------
gpfs1        S1V00      nsd1-cluster, nsd2-cluster
gpfs1        S1V01      nsd2-cluster, nsd1-cluster
gpfs1        S1V02      nsd2-cluster, nsd1-cluster
gpfs1        S1V03      nsd1-cluster, nsd2-cluster
gpfs1        S1V04      nsd1-cluster, nsd2-cluster
gpfs1        S1V05      nsd2-cluster, nsd1-cluster
gpfs1        S1V06      nsd2-cluster, nsd1-cluster
gpfs1        S1V07      nsd1-cluster, nsd2-cluster
gpfs1        S1V08      nsd1-cluster, nsd2-cluster
gpfs1        S1V09      nsd2-cluster, nsd1-cluster
gpfs1        S1V10      nsd2-cluster, nsd1-cluster
gpfs1        S1V11      nsd1-cluster, nsd2-cluster

File system attributes for /dev/gpfs1:
======================================
flag                     value                     description
------------------------ ------------------------- -----------------------------------
-f                       524288                    Minimum fragment size in bytes
-i                       4096                      Inode size in bytes
-I                       32768                     Indirect block size in bytes
-m                       1                         Default number of metadata replicas
-M                       2                         Maximum number of metadata replicas
-r                       1                         Default number of data replicas
-R                       2                         Maximum number of data replicas
-j                       scatter                   Block allocation type
-D                       nfs4                      File locking semantics in effect
-k                       all                       ACL semantics in effect
-n                       8                         Estimated number of nodes that will mount file system
-B                       16777216                  Block size
-Q                       none                      Quotas accounting enabled
                         none                      Quotas enforced
                         none                      Default quotas enabled
--perfileset-quota       No                        Per-fileset quota enforcement
--filesetdf              No                        Fileset df enabled?
-V                       15.01 (4.2.0.0)           File system version
--create-time            Tue Jan 22 20:13:12 2019  File system creation time
-z                       No                        Is DMAPI enabled?
-L                       16777216                  Logfile size
-E                       Yes                       Exact mtime mount option
-S                       No                        Suppress atime mount option
-K                       whenpossible              Strict replica allocation option
--fastea                 Yes                       Fast external attributes enabled?
--encryption             No                        Encryption enabled?
--inode-limit            134217728                 Maximum number of inodes
--log-replicas           0                         Number of log replicas
--is4KAligned            Yes                       is4KAligned?
--rapid-repair           Yes                       rapidRepair enabled?
--write-cache-threshold  0                         HAWC Threshold (max 65536)
-P                       system                    Disk storage pools in file system
-d                       S1V00;S1V01;S1V02;S1V03;S1V04;S1V05;S1V06;S1V07;S1V08;S1V09;S1V10;S1V11  Disks in file system
-A                       yes                       Automatic mount option
-o                       none                      Additional mount options
-T                       /gpfs1                    Default mount point
--mount-priority         0                         Mount priority
File System Attributes 4MiB BLOCK SIZE | SCATTER

File system  Disk name  NSD servers
------------------------------------
gpfs1        S1V00      nsd1-cluster, nsd2-cluster
gpfs1        S1V01      nsd2-cluster, nsd1-cluster
gpfs1        S1V02      nsd2-cluster, nsd1-cluster
gpfs1        S1V03      nsd1-cluster, nsd2-cluster
gpfs1        S1V04      nsd1-cluster, nsd2-cluster
gpfs1        S1V05      nsd2-cluster, nsd1-cluster
gpfs1        S1V06      nsd2-cluster, nsd1-cluster
gpfs1        S1V07      nsd1-cluster, nsd2-cluster
gpfs1        S1V08      nsd1-cluster, nsd2-cluster
gpfs1        S1V09      nsd2-cluster, nsd1-cluster
gpfs1        S1V10      nsd2-cluster, nsd1-cluster
gpfs1        S1V11      nsd1-cluster, nsd2-cluster

File system attributes for /dev/gpfs1:
======================================
flag                     value                     description
------------------------ ------------------------- -----------------------------------
-f                       524288                    Minimum fragment size in bytes
-i                       4096                      Inode size in bytes
-I                       32768                     Indirect block size in bytes
-m                       1                         Default number of metadata replicas
-M                       2                         Maximum number of metadata replicas
-r                       1                         Default number of data replicas
-R                       2                         Maximum number of data replicas
-j                       scatter                   Block allocation type
-D                       nfs4                      File locking semantics in effect
-k                       all                       ACL semantics in effect
-n                       8                         Estimated number of nodes that will mount file system
-B                       4194304                   Block size
-Q                       none                      Quotas accounting enabled
                         none                      Quotas enforced
                         none                      Default quotas enabled
--perfileset-quota       No                        Per-fileset quota enforcement
--filesetdf              No                        Fileset df enabled?
-V                       15.01 (4.2.0.0)           File system version
--create-time            Tue Jan 22 20:13:12 2019  File system creation time
-z                       No                        Is DMAPI enabled?
-L                       16777216                  Logfile size
-E                       Yes                       Exact mtime mount option
-S                       No                        Suppress atime mount option
-K                       whenpossible              Strict replica allocation option
--fastea                 Yes                       Fast external attributes enabled?
--encryption             No                        Encryption enabled?
--inode-limit            134217728                 Maximum number of inodes
--log-replicas           0                         Number of log replicas
--is4KAligned            Yes                       is4KAligned?
--rapid-repair           Yes                       rapidRepair enabled?
--write-cache-threshold  0                         HAWC Threshold (max 65536)
-P                       system                    Disk storage pools in file system
-d                       S1V00;S1V01;S1V02;S1V03;S1V04;S1V05;S1V06;S1V07;S1V08;S1V09;S1V10;S1V11  Disks in file system
-A                       yes                       Automatic mount option
-o                       none                      Additional mount options
-T                       /gpfs1                    Default mount point
--mount-priority         0                         Mount priority
ENNOVAR SPECTRUM SCALE SOLUTIONS CENTER
Ennovar Assets, Benchmarks, Best Practices, and Assistance
Institute of Emerging Technologies and Marketing Solutions at Wichita State University
• Hands-on student applied learning with Spectrum Scale
• Spectrum Scale and Spectrum Archive subject matter experts (SMEs)
• Consulting, including benchmarks, best practices, and technical marketing services
• Remote customer demo online access
• Customer-defined use case and application environments
• Online Spectrum Scale Solutions Lab Portal provides VPN customer access on Ennovar's high-performance FSS configurations
REMOTE CUSTOMER DEMO ONLINE ACCESS