Transcript of meets : All-Flash Ceph!

meets : All-Flash Ceph!

Forum K-21

Brent Compton, Director Storage Solution Architectures, Red Hat

Gunna Marripudi, Principal Architect, Samsung Semiconductor Inc.

Ceph Architecture Overview


[Architecture diagram — WORKLOADS on top of ACCESS (Ceph block & object clients), PLATFORM (Ceph storage cluster on standard servers and media: HDD, SSD, PCIe), and NETWORK (standard NICs and switches)]
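
The ACCESS layer in the diagram is what applications actually touch: block (RBD) and object interfaces over the same RADOS cluster. As a minimal sketch of that layer — assuming a reachable cluster, the stock /etc/ceph/ceph.conf, and an existing pool named "rbd", none of which come from the deck itself — the librados Python binding reads and writes objects directly:

    import rados

    # Connect using the standard config file and default keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')          # I/O context on a pool
        try:
            ioctx.write_full('hello-object', b'stored via RADOS')
            print(ioctx.read('hello-object'))      # b'stored via RADOS'
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()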

Ceph Cluster Building Blocks


[Diagram: cluster sizing tiers — OpenStack Starter (100TB), S (500TB), M (1PB), L (2PB) — against IOPS-optimized, throughput-optimized, and cost/capacity-optimized configurations]
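
A quick way to relate these raw-capacity tiers to what clients can actually store is to divide by the replication factor; the reference design later in the deck uses 2x replication. A small sketch of that arithmetic, with tier sizes taken from the matrix above:

    # Usable capacity per tier under N-way replication (sizes in TB).
    tiers = {'OpenStack Starter': 100, 'S': 500, 'M': 1000, 'L': 2000}
    replication = 2    # matches the test configuration later in the deck
    for name, raw_tb in tiers.items():
        print(f'{name}: {raw_tb} TB raw -> {raw_tb / replication:.0f} TB usable')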

Ceph Cluster Possibilities


[Diagram: the same sizing/optimization matrix as above]

Flash in Ceph Clusters


[Diagram: SSDs within the cluster configurations above]

NVMe Reference Design [1]

• EIA 310-D 19” 2U form factor
• 24x 2.5” NVMe SSDs (U.2)
• 2x PCIe Gen3 x16 slots

[Diagram: dual PCIe switches connecting the 24 SSDs to the two host slots]

[1] http://www.samsung.com/semiconductor/support/tools-utilities/All-Flash-Array-Reference-Design/
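
The two Gen3 x16 host slots bound how much SSD bandwidth can reach the CPUs, which is why the design needs PCIe switches: 24 U.2 drives present up to 96 lanes that must funnel into 32. A back-of-the-envelope sketch (nominal lane rate; the x4 width per PM953 is an assumption, not stated on the slide):

    # PCIe Gen3 moves ~0.985 GB/s per lane after 128b/130b encoding.
    lane_gbs = 0.985
    host_bw = 2 * 16 * lane_gbs     # two Gen3 x16 slots  -> ~31.5 GB/s
    drive_bw = 24 * 4 * lane_gbs    # 24 drives, x4 each  -> ~94.6 GB/s
    print(f'host uplinks: {host_bw:.1f} GB/s, drives: {drive_bw:.1f} GB/s')
    # Each 2U chassis therefore tops out near ~31.5 GB/s to the host,
    # regardless of how fast the 24 SSDs are in aggregate.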

High-Performance Ceph over NVMe

Software stack, client to media:
• RBD / Ceph clients
• RADOS
• 40GbE network
• Messenger / OSD
• FileStore / XFS
• blk_mq / NVMe driver
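
From the client side, the top of this stack is librbd, which stripes block I/O across RADOS objects; everything below the 40GbE hop runs on the OSD nodes. A minimal sketch of the client end, assuming the python-rbd binding and an existing pool named "rbd" (both assumptions, not from the deck):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        rbd.RBD().create(ioctx, 'test-img', 1 << 30)   # 1 GiB image
        image = rbd.Image(ioctx, 'test-img')
        try:
            image.write(b'\x00' * 4096, 0)   # 4KB write, like the benchmarks
            data = image.read(0, 4096)       # 4KB read back
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()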


Ceph Reference Design over NVMe

Test configuration
• Ceph Jewel w/ jemalloc
• 2x replication
• Default debug values set
• OSD nodes with:
  – 2x Xeon® E5-2699 v3 CPUs
  – 24x Samsung PM953 2.5” 960GB SSDs
• CBT test framework
• RHEL 7.2
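
Since 2x replication halves usable capacity and doubles backend writes, it is worth confirming before interpreting any numbers. A small sketch that asks the monitors for the pool's replica count, assuming the python-rados binding and an "rbd" pool (assumptions, not from the deck):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'osd pool get', 'pool': 'rbd',
                          'var': 'size', 'format': 'json'})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        print(json.loads(outbuf))   # e.g. {"pool": "rbd", "size": 2}
    finally:
        cluster.shutdown()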


IOPS Optimized Configuration

Random Read
• >700K IOPS with 12 PM953 SSDs in the cluster for 4KB IOs
• >500K IOPS with 12 PM953 SSDs in the cluster for 8KB IOs


IOPS Optimized Configuration

Random Write
• >100K IOPS with 12 PM953 SSDs in the cluster for 4KB IOs
• >70K IOPS with 12 PM953 SSDs in the cluster for 8KB IOs
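
Dividing by the 12 SSDs gives a per-drive view, and the read/write gap has a structural explanation: with 2x replication every client write becomes two OSD writes, and FileStore journals each of those, so one client write can cost roughly four media writes. A sketch of the arithmetic (the 4x amplification factor is an inference from FileStore's journal double-write, not a figure from the slides):

    results = {                       # cluster IOPS from the two slides above
        'rand read 4KB': 700_000, 'rand read 8KB': 500_000,
        'rand write 4KB': 100_000, 'rand write 8KB': 70_000,
    }
    ssds = 12
    for name, iops in results.items():
        print(f'{name}: {iops / ssds:,.0f} IOPS per SSD')
    # 2x replication * 2x journal write -> ~4 media writes per client write:
    print(f'4KB media writes: ~{100_000 * 4 / ssds:,.0f} per SSD')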


IOPS Optimized Configuration

Random IO Latency (1 job, QD=1)
• 95th percentile <500usec for random reads of 4KB and 8KB
• 95th percentile <100usec for random writes of 4KB and 8KB
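
For reference, a 95th-percentile figure means 95% of individual I/Os completed at or under the stated latency. A minimal nearest-rank percentile over per-I/O samples (the sample values below are synthetic, for illustration only):

    import math

    def percentile(samples, pct):
        """Nearest-rank percentile: value at ceil(pct/100 * n), sorted."""
        ordered = sorted(samples)
        return ordered[math.ceil(pct / 100 * len(ordered)) - 1]

    latencies_usec = [420, 380, 450, 470, 390, 900, 410, 430, 400, 440,
                      415, 425, 435, 405, 445, 395, 455, 385, 460, 465]
    print(percentile(latencies_usec, 95))   # 470: the 900usec outlier
                                            # sits above the 95th percentile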


Throughput Optimized Configuration

Sequential Read
• >28GB/s throughput with 72 PM953 SSDs in the cluster
• >800MB/s throughput per TB of cluster capacity


Throughput Optimized Configuration

Sequential Write
• >6GB/s throughput with 72 PM953 SSDs in the cluster
• >179MB/s throughput per TB of cluster capacity
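
The per-TB figures only line up if they are normalized to usable capacity after 2x replication rather than raw capacity; the slides do not state this, so treat it as an inference. A sketch of the check:

    raw_tb = 72 * 0.960        # 72 PM953 SSDs at 960 GB -> ~69.1 TB raw
    usable_tb = raw_tb / 2     # 2x replication          -> ~34.6 TB usable
    print(f'read : {28 / usable_tb * 1000:.0f} MB/s per TB')  # ~810 (>800 ok)
    print(f'write: {6 / usable_tb * 1000:.0f} MB/s per TB')   # ~174; the >179
    # slide figure implies the measured rate was somewhat above 6 GB/s.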


Ceph Scalability: 3 Nodes to 5 Nodes

Performance scales 1.3x-1.6x
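
Going from 3 to 5 nodes adds 1.67x the hardware, so 1.3x-1.6x corresponds to roughly 78-96% scaling efficiency. Making that explicit:

    hardware_ratio = 5 / 3                    # 1.67x nodes (and SSDs)
    for speedup in (1.3, 1.6):
        print(f'{speedup}x speedup -> {speedup / hardware_ratio:.0%} efficiency')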


Ceph Reference Design for NVMe

Extensible platform for IOPS- and throughput-optimized configurations

>1.2M IOPS and >45GB/s throughput in a 5-node Ceph cluster

Ready for end-user deployments!



Test Configuration

System tuning, Ceph configuration, and CBT test methodology are detailed in:
• http://www.samsung.com/semiconductor/support/tools-utilities/All-Flash-Array-Reference-Design/downloads/High-Performance_Red_Hat_Ceph_Storage_Using_Samsung_NVMe_SSDs-WP-20160622.pdf
