Software-defined Storage in Action
Copyright 2015 FUJITSU
Human Centric Innovation in Action
Fujitsu Forum 2015, 18th – 19th November
Software-defined Storage in Action
Frank Reichart, Senior Director Product Marketing Storage
The Petabyte Age Needs New Storage
Traditional RAID-based storage systems meet their limits when reaching petabyte scale:
Long rebuild and recovery times after disk and system failures
Scalability limits in capacity and performance
High cost of delivering high availability
Unacceptable data migration times
Disk density and data center space constraints
High TCO
[Chart: capacity axis from 0.5 PB to 100 PB; traditional RAID storage systems cover the low end, next-generation storage the multi-petabyte range beyond]
Introducing ETERNUS CD10000
Software-defined, hyper-scale storage system
Scale-out architecture up to hundreds of storage nodes / 50+ PB
Based on Red Hat Ceph Storage open-source technology
Appliance approach combining HW & SW & services
Unified management of HW & SW
End-to-end maintenance and support
[Diagram: object access, block access and file access (planned) on top of ETERNUS CD10000 unified management, Red Hat Ceph Storage software and ETERNUS CD10000 storage nodes]
A new way to do storage
[Diagram: Node1, Node2, … Node(n)]
Data is automatically distributed over disks and nodes in a self-optimizing way
2, 3, 4 or more data copies (replicas) protect against disk and node failures (instead of RAID)
In case of a disk or node failure, lost data copies are automatically rebuilt
Nodes can be exchanged online
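The placement idea behind these bullets can be sketched in a few lines. This is a simplified, hypothetical stand-in for Ceph's actual CRUSH placement algorithm: every participant ranks nodes by a per-object hash, so all of them agree on where the replicas live without a central lookup table, and a failed node's copies are recreated on the survivors.

```python
import hashlib

def place_replicas(obj_name, nodes, copies=3):
    """Rank nodes by a per-object hash and keep the first `copies`,
    so every node computes the same placement independently.
    (Illustrative only -- not Ceph's real CRUSH logic.)"""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{obj_name}:{n}".encode()).hexdigest(),
    )
    return ranked[:copies]

def rebuild_after_failure(obj_name, nodes, failed, copies=3):
    """On a node failure, recompute placement over the surviving nodes;
    copies lost on the failed node are rebuilt elsewhere."""
    survivors = [n for n in nodes if n != failed]
    return place_replicas(obj_name, survivors, copies)

nodes = [f"node{i}" for i in range(1, 7)]
placement = place_replicas("object-42", nodes)   # 3 distinct nodes
recovered = rebuild_after_failure("object-42", nodes, failed=placement[0])
```

Because placement is a pure function of the object name and the node set, no rebalancing coordinator is needed: every node reaches the same conclusion about where the rebuilt copy belongs.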
What ETERNUS CD10000 delivers
[Diagram: ETERNUS CD10000, built on Ceph, delivers unlimited scalability, optimized TCO, automation, an "immortal" system and zero downtime]
Based on Open Standards
Ceph
ETERNUS CD10000 is based on Red Hat Ceph Storage
Ceph is an open-source software-defined storage platform
Designed to present object, block, and file storage from a distributed x86 compute cluster, scalable to the exabyte level
The leading distribution: Red Hat Ceph Storage
OpenStack
Ceph is the core storage element within the OpenStack project
OpenStack enables IT organizations to build private and public cloud environments with open-source software
Red Hat Product Process
We enable software and hardware partners, customers, and academia to participate at every stage of development.
Open Source to the Enterprise
Red Hat & Your Business: Subscription Model
Technical support: multi-lingual, 24 hours / 7 days a week, unlimited incidents, multi-channel, multi-vendor case ownership, customer portal & forums, knowledgebase
Certification: hardware certification, software certification, cloud provider certification, training curricula (optional)
Commitments: stability with a product life cycle of up to 10 years, software assurance, security response team, access to labs and expertise
Ongoing delivery: updates, patches, upgrades
Embedded in ETERNUS CD10000 – delivered through Fujitsu
ETERNUS CD10000 vs. building your own storage
It is a storage system, not a stack of components:
Optimized component evaluation, integration and sizing
End-to-end maintenance with consistent updates
Unified management of HW & SW – one system image
Highly automated storage node provisioning and exchange
Adding functionality where Ceph has gaps
Enhancements of ETERNUS CD10000 S2
01 Reduce costs of exponential data growth
02 Provide business continuity for mass data
03 Shorten time to production with validated solutions
Reducing costs of exponential data growth
Allow customers to balance costs, capacity and performance depending on the usage scenario
Reduce the capacity cost of the redundant data copies that are necessary to prevent data loss
Finer granularity in scaling to reduce upfront investment in capacity that is not yet needed
Increase storage density to store more data in a given data center space
New storage nodes for different needs

Flex Node
Flexible scaling of capacity & performance by adding nodes and node extensions
Optional use of PCIe SSDs
Focus: balancing performance and capacity
44 TB per node; 92 TB, 140 TB or 188 TB with 1×, 2× or 3× capacity extensions
~ 0.88 $ per GB¹

Density Node
Maximize density with 60 TB in a 1U node
2+ PB in a rack, saving data center space
Focus: minimum storage costs and density
4-node minimum configuration (240 TB); 42-node configuration (2.5 PB)
~ 0.47 $ per GB¹

¹ Based on a four-storage-node configuration, redundant internal network, erasure code; maintenance not included. List prices vary significantly depending on configuration and system parameters.
Erasure Code reduces costs of data redundancy
Optimized erasure code developed by Fujitsu and contributed to the Ceph open source community
Replication: full copies of stored objects create high redundancy and fast recovery, but increase capacity needs – 200% overhead for 3 copies
Erasure code: stores one object as data chunks plus parity information (e.g. 4 data chunks + 2 coding chunks), creating redundancy with only 50% overhead (factor 1.5)
[Diagram: a Ceph storage cluster with a replicated pool holding an object and its copies, vs. an erasure-coded pool holding data chunks 1–4 and coding chunks x, y]
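The overhead figures above are simple arithmetic, and the recovery idea can be shown with a toy parity scheme. Note the simplification: real erasure-coded pools in Ceph use Reed-Solomon-style codes (the 4 data + 2 coding chunks example tolerates two losses), whereas the plain XOR parity below only survives one lost chunk.

```python
def replication_overhead(copies):
    """Extra capacity beyond the original data, in percent."""
    return (copies - 1) * 100

def erasure_overhead(data_chunks, coding_chunks):
    """Extra capacity spent on coding chunks, in percent of the data."""
    return coding_chunks / data_chunks * 100

print(replication_overhead(3))   # 200 -> the slide's "200% overhead for 3 copies"
print(erasure_overhead(4, 2))    # 50.0 -> the slide's "50% overhead (factor 1.5)"

def xor_chunks(chunks):
    """XOR equal-length byte chunks together (toy single-parity code)."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # 4 data chunks
parity = xor_chunks(data)                      # 1 coding chunk
# Lose chunk 2; XOR of the survivors and the parity restores it.
restored = xor_chunks([data[0], data[1], data[3], parity])
assert restored == data[2]
```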
Efficiency gains with ETERNUS CD10000 S2
Compared to ETERNUS CD10000 S1:
Up to factor 6 lower cost per GB with Flex Nodes
Up to factor 10 lower cost per GB with Density Nodes
Up to 75% reduction in capacity investment for data redundancy with erasure coding
Up to factor 10 higher storage density with Density Nodes
Up to 50% less power consumption with Density Nodes
Disaster resilience with one stretched cluster
The Ceph cluster of an ETERNUS CD10000 is stretched over two or more sites (one logical entity)
The simplest way to build disaster resilience for storage
But: writes may see added latency while waiting for the remote copy write to complete
[Diagram: one Ceph cluster of CD10000 storage nodes spanning Site A and Site B over LAN/WAN, holding an object and its copies 1 and 2]
New: disaster resilience between two clusters
Via gateways, data from site A is synchronized to site B (one direction)
Because the clusters are discrete, site A performance is not impacted by site B
With Read Affinity, users or applications can be directed to the nearest data copies, accelerating read performance
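The Read Affinity idea can be sketched as a simple preference rule. This is an illustrative model only: the site/zone records and the selection function are hypothetical, standing in for whatever metadata the gateway uses to locate copies.

```python
def nearest_copy(copies, client_site):
    """Prefer a copy in the client's own site; fall back to any copy.
    `copies` is a list of {"site", "zone", "node"} records (illustrative)."""
    local = [c for c in copies if c["site"] == client_site]
    return local[0] if local else copies[0]

copies = [
    {"site": "A", "zone": 1, "node": "a-node3"},
    {"site": "B", "zone": 2, "node": "b-node1"},
]
served_b = nearest_copy(copies, "B")   # client at site B reads locally
served_c = nearest_copy(copies, "C")   # unknown site falls back to any copy
```

The effect is exactly what the slide claims: reads are served from the nearest synchronized copy, so site B users do not pay the WAN round trip to site A.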
[Diagram: Ceph cluster A at Site A (zones 1 and 2) and Ceph cluster B at Site B (zones 1 and 2), each on CD10000 storage nodes, connected over LAN/WAN through CD10000 gateway nodes running a sync agent]
New: backup-in-the-box
Pool-based backup: full, incremental and synthetic full
Pool- or object-based recovery, to the original pool or to another pool
[Diagram: Ceph cluster on CD10000 storage nodes with Pool A, Pool B and a backup pool]
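The "synthetic full" variant mentioned above can be illustrated with a minimal model, assuming backups are represented as object-name to version maps (a sketch of the concept, not the CD10000's actual backup format): the last full backup is merged with later incrementals to produce a new full image without re-reading the source pool.

```python
def synthetic_full(last_full, incrementals):
    """Merge the last full backup with newer incrementals, producing a
    new full image without touching the source pool again."""
    merged = dict(last_full)
    for inc in incrementals:
        merged.update(inc)   # later increments win over older versions
    return merged

full = {"obj1": "v1", "obj2": "v1"}
incs = [{"obj2": "v2"}, {"obj3": "v1"}]
new_full = synthetic_full(full, incs)
# new_full now covers obj1 (v1), obj2 (v2) and obj3 (v1)
```

The appeal for mass data is that the expensive part (reading every object from the production pool) happens only once; subsequent fulls are synthesized entirely inside the backup pool.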
Validated solutions for fast project success
Validated solutions based on existing customer scenarios enable a fast implementation of software-defined storage for:
OpenStack
Enterprise sync & share
Online archives / content depots
Cloud backup
[Diagram: ETERNUS CD10000 stack (object, block and planned file access; unified management; Red Hat Ceph Storage software; storage nodes) serving OpenStack, sync & share, online archive and cloud backup solutions]
Survey – OpenStack Summit Vancouver (May 2015)
Ceph is the No. 1 storage platform for OpenStack.
Block storage drivers in production (source: OpenStack Organization, 2015):

  Driver             Nov 2014   May 2015
  Ceph RBD              37%        44%
  LVM (default)         23%        16%
  NetApp                10%        11%
  GlusterFS             10%         9%
  NFS                    7%         5%
  EMC                    4%         4%
  SolidFire              3%         5%
  VMware VMDK            2%         1%
  Dell EqualLogic        1%         2%
  HP 3PAR                1%         2%
PRIMEFLEX for Red Hat OpenStack (RHEL OSP)
Complete reference architecture for fast implementation of OpenStack environments
API clients and dashboard (Horizon), CloudForms portal and Fujitsu Catalog Manager on top of the OpenStack cloud APIs
OpenStack services: authentication (Keystone), images (Glance), object (Swift), volume (Cinder), network (Neutron), orchestration (Heat), telemetry (Ceilometer)
Compute (Nova) with KVM, ESXi and Hyper-V hypervisors (vCenter, W2k12)
Required services: admin server (Puppet, Foreman, DHCP, PXE, …), MariaDB, RabbitMQ
Storage: ETERNUS CD with block (RBD), S3 (RGW) and object (RADOS) access
Operating system: Red Hat Enterprise Linux
Physical infrastructure: PRIMERGY, ETERNUS, network
ETERNUS CD10000 for online archives
Online archives based on iRODS
iRODS is open-source data management software in use at research organizations and government agencies worldwide
Organizes and manages large depots of distributed digital data
Automates data workflows and enables secure collaboration
Example: University, Germany – uses an iRODS application for library services integrated with ETERNUS CD10000
[Diagram: ETERNUS CD10000 stack (object, block and planned file access; unified management; Red Hat Ceph Storage software; storage nodes)]
ETERNUS CD10000 for sync & share
Sync & share based on ownCloud and Seafile
Customer example: logistics company
Running ownCloud sync & share services for tens of thousands of users on ETERNUS CD10000
Running ICT processes along the logistics chain of partners & suppliers
Customer example: university
Research institution with >100,000 students
Supporting collaboration between students by running the Seafile sync & share solution on ETERNUS CD10000
[Diagram: ETERNUS CD10000 stack (object, block and planned file access; unified management; Red Hat Ceph Storage software; storage nodes)]
For backups and archives
Efficient backup-as-a-service platform for internal/external service providers
Using ETERNUS CD10000 as backup storage
Two options to implement:
Access via Ceph Block Device – the backup server's media agent writes through the Ceph block driver (10 Gb/s) to a block-based backup pool
Access via Amazon S3 API – the backup server's media agent writes through the S3 API to the RADOS Gateway
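For the S3 option, any S3-compatible client can address the RADOS Gateway. The sketch below is hypothetical: the bucket layout, helper names, endpoint and credentials are illustrative, not part of the CD10000 product; only the `put_object` call is the standard S3 API operation.

```python
from datetime import datetime

def backup_key(pool, obj, when):
    """Deterministic object key: <pool>/<date>/<object> (illustrative layout)."""
    return f"{pool}/{when:%Y-%m-%d}/{obj}"

def upload_backup(bucket, pool, obj, data, when, endpoint_url):
    """Push one backup object through the S3 API to the gateway.
    The endpoint URL and credentials come from the deployment."""
    import boto3  # any S3-compatible client works against the RADOS Gateway
    s3 = boto3.client("s3", endpoint_url=endpoint_url)
    key = backup_key(pool, obj, when)
    s3.put_object(Bucket=bucket, Key=key, Body=data)
    return key

key = backup_key("poolA", "db-dump.tar", datetime(2015, 11, 18))
# key == "poolA/2015-11-18/db-dump.tar"
```

Using a date-prefixed key scheme like this keeps each day's backup set listable with a single prefix query, which maps naturally onto pool-based recovery.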
Enhancements of ETERNUS CD10000 S2
Reduce costs: the new Flex Node provides a balance of performance, capacity and costs; the new Density Node offers high capacity at minimum space and costs; the new erasure code enables data redundancy with reduced storage capacity and costs
Business continuity: stretching one cluster between two sites for flexible DR with one system; asynchronous replication between two discrete clusters; backup-in-the-box
Validated solutions: OpenStack, enterprise file sync & share, online archiving, cloud backup
New: ETERNUS CD10000 Quick Start Edition
Virtual ETERNUS CD10000 Appliance for demo, integration and validation purposes
Making software-defined storage rock-solid
ETERNUS CD10000 S2