
DUSTIN L. BLACK, RHCA

OPEN STORAGE IN THE ENTERPRISE
with GlusterFS and Ceph

Dustin L. Black, RHCA
Principal Technical Account Manager
Red Hat Strategic Customer Engagement

2014-10-13

Dustin L. Black, RHCA
Principal Technical Account Manager
Red Hat, Inc.

dustin@redhat.com | @dustinlblack

Wouldn't you like to have... a single named support contact who knows your business, your technology, and your needs?

A trusted advisor and technical expert to analyze your configuration, advise on your architecture, and collaborate on your strategy?

An advocate and liaison connecting you with engineers and maintainers, within Red Hat and upstream, ensuring your priorities are also theirs?

A partner who lives and breathes open source and transparency?

RED HAT Technical Account Management

Premium named-resource proactive support from your leading experts in open solutions.
Contact your sales team or visit redhat.com

Supporting success. Exceeding expectations.

Let's Talk Distributed Storage

- Decentralize and Limit Failure Points
- Scale with Commodity Hardware and Familiar Operating Environments
- Reduce Dependence on Specialized Technologies and Skills

GlusterFS

- Clustered Scale-out General Purpose Storage Platform
- Fundamentally File-Based & POSIX End-to-End
- Familiar Filesystems Underneath (EXT4, XFS, BTRFS)
- Familiar Client Access (NFS, Samba, FUSE)
- No Metadata Server
- Standards-Based – Clients,
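As a sketch of how the no-metadata-server model looks in practice, the following commands build and mount a two-way replicated volume; server names, brick paths, and the volume name are illustrative, and a working trusted pool is assumed:

```shell
# Form the trusted pool and create a 2x replicated volume
# (run on server1; names and paths are examples).
gluster peer probe server2
gluster volume create myvol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start myvol

# Clients mount via FUSE (NFS and Samba access are also available);
# any server in the pool can be named -- there is no metadata server.
mount -t glusterfs server1:/myvol /mnt/myvol
```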

Red Hat Storage Server

- Enterprise Implementation of GlusterFS
- Integrated Software Appliance: RHEL + XFS + GlusterFS
- Certified Hardware Compatibility
- Subscription Model
- 24x7 Premium Support

Ceph

- Massively scalable, software-defined storage system
- Commodity hardware with no single point of failure
- Self-healing and Self-managing
- Rack and data center aware
- Automatic distribution of replicas
- Block, Object, File
- Data stored on common backend filesystems (EXT4, XFS, etc.)
- Fundamentally distributed as objects
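To illustrate the unified object-and-block model, a minimal command sketch; pool and image names are hypothetical, and a running cluster with admin credentials is assumed:

```shell
# Create a pool, then exercise the object and block interfaces.
ceph osd pool create mypool 128             # 128 placement groups
rados -p mypool put obj1 ./payload.bin      # store a file as an object
rbd create mypool/img1 --size 10240         # 10 GiB block image
rbd map mypool/img1                         # expose as a /dev/rbd* device
```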

Inktank Ceph Enterprise

- Enterprise Implementation of Ceph
- Combined with management and deployment tools
- Enterprise-level support with bug escalation and hot patches
- Bare metal and OpenStack deployments

Use Case: Media Storage via Object Interface

Goals

- Media file storage for customer-facing app
- Drop-in replacement for legacy object backend
- 1PB plus 1TB/day growth rate
- Minimal resistance to increasing scale
- Multi-protocol capable for future services
- Fast transactions for

Implementation

- 12 Dell R710 nodes + MD1000/1200 DAS
- Growth of 6 -> 10 -> 12 nodes
- ~1PB in total after RAID 6
- GlusterFS Swift interface from OpenStack
- Built-in file+object simultaneous access
- Multi-GBit network with segregated backend
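Because the Swift interface speaks the standard OpenStack Object Storage REST API, a drop-in replacement can be exercised with plain HTTP. A hedged sketch; the endpoint, account, credentials, and object names are invented for illustration, and the tempauth-style auth path is an assumption:

```shell
# Authenticate for a token and storage URL, then PUT/GET media objects.
curl -i -H 'X-Storage-User: media:app' -H 'X-Storage-Pass: secret' \
    https://storage.example.com/auth/v1.0
# The response carries X-Auth-Token and X-Storage-Url; use them directly:
curl -X PUT -T video01.mp4 -H "X-Auth-Token: $TOKEN" \
    "$STORAGE_URL/media/video01.mp4"
curl -o out.mp4 -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/media/video01.mp4"
```

With GlusterFS underneath, the same object is simultaneously visible as a regular file on the volume, which is the file+object simultaneous access noted above.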

Use Case: Self-Service Provisioning with Accounting and Chargeback

Goals

- Add file storage provisioning to existing self-service virtualization environment
- Automate the administrative tasks
- Multi-tenancy
  - Subdivide and limit usage by corporate divisions and departments
- Allow for over-provisioning
- Create a charge-back model
- Simple and transparent scaling

Implementation

- Dell R510 nodes with local disk
- ~30TB per node as one XFS filesystem
- Bricks are subdirectories of the parent filesystem
  - Volumes are therefore naturally over-provisioned
- Quotas* placed on volumes to limit usage and provide for accounting and charge-back
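The quota mechanism maps naturally onto per-department directories. A sketch of the relevant GlusterFS commands; the volume name, directory, and limit are illustrative, and the exact syntax varies slightly across GlusterFS releases:

```shell
# Cap a department's directory on an over-provisioned volume.
gluster volume quota tenants enable
gluster volume quota tenants limit-usage /finance 5TB
# Per-directory limits and current usage feed the charge-back report.
gluster volume quota tenants list
```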

Use Case: NoSQL Backend with SLA-Bound Geo-Replication

Goals

- Replace legacy database key/blob architecture
- Divide and conquer
  - NoSQL layer for key/pointer
  - Scalable storage layer for blob payload
- Active/Active sites with 30-minute replication SLA
- Performance tuned for small-file WORM patterns

Implementation

- HP DL170e nodes with local disk
- ~4TB per node
- Cassandra replicated NoSQL layer for key/pointer
- GlusterFS parallel geo-replication* for the data payload site copy, exceeding SLA standards
- Worked with Red Hat Engineering to modify application data patterns for better small-file performance
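The geo-replicated payload copy can be set up with the standard GlusterFS geo-replication commands; a sketch assuming a master volume named payload and a remote slave host, both names illustrative (GlusterFS 3.5-era distributed geo-replication syntax):

```shell
# One-time key distribution, then create and start the session.
gluster system:: execute gsec_create
gluster volume geo-replication payload slavehost::payload create push-pem
gluster volume geo-replication payload slavehost::payload start
# Monitor session health and lag against the 30-minute SLA.
gluster volume geo-replication payload slavehost::payload status
```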

Use Case: Storage & Compute Consolidation for Scientific Research

Goals

- Scale with storage needs
  - Eliminate need to move data between backends
  - Keep pace with exponential demand
- Reduce administrative overhead; spend more time on the science
- Control and predict costs
  - Scale on demand
  - Simple chargeback model
  - Efficient resource consumption

Implementation

- Dell PowerEdge R720 Servers
- OpenStack + Ceph
- HPC and Storage on the same commodity hardware
- Simple scaling, portability, and tracking for chargeback and expansion
- 400TB virtual storage pool
- Ample unified storage on a flexible platform reduces administrative overhead
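Wiring OpenStack block storage to Ceph follows the standard RBD integration; a minimal sketch, where the pool name, placement-group count, and cephx identity are illustrative choices:

```shell
# Dedicated RBD pool for Cinder volumes, plus a scoped cephx identity.
ceph osd pool create volumes 128
ceph auth get-or-create client.cinder mon 'allow r' \
    osd 'allow rwx pool=volumes'
# Then point Cinder at the pool in cinder.conf:
#   volume_driver = cinder.volume.drivers.rbd.RBDDriver
#   rbd_pool = volumes
```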

Use Case: Multi-Petabyte RESTful Object Store

Goals

- Object-based storage for thousands of cloud service customers
- Seamlessly serve large media & backup files as well as smaller payloads
- Quick time-to-market and pain-free scalability
- Highly cost-efficient with minimal proprietary reliance

Implementation

- Modular server-rack-row "pod" system
  - 6x Dell PowerEdge R515 servers per rack
  - 10x 3TB disks per server; Total 216TB raw per rack
  - 10x racks per row; Total 2.1PB raw per row
- 700TB triple-replicated customer objects
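The row sizing follows directly from the replication factor; the arithmetic can be checked in a couple of lines of shell:

```shell
# 700TB of usable customer objects, triple-replicated.
usable_tb=700
replicas=3
raw_tb=$((usable_tb * replicas))
echo "${raw_tb} TB raw"   # 2100 TB, i.e. ~2.1PB -- one full rack row
```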

Questions?
people.redhat.com/dblack

Do it!

Build a test environment in VMs in just minutes!

Get the bits:
- Fedora 20 has GlusterFS and Ceph packages natively
- RHSS 2.1 ISO available on the Red Hat Portal
- Go upstream: gluster.org / ceph.com

RED HAT Technical Account Management

Premium named-resource proactive support from your leading experts in open solutions.
Contact your sales team or visit redhat.com

Supporting success. Exceeding expectations.