
Transcript
Page 1

Ceph: de facto storage backend for OpenStack

OpenStack Summit 2013, Hong Kong

Page 2

Whoami
💥 Sébastien Han
💥 French Cloud Engineer working for eNovance
💥 Daily job focused on Ceph and OpenStack
💥 Blogger

Personal blog: http://www.sebastien-han.fr/blog/
Company blog: http://techs.enovance.com/

Worldwide offices coverage
We design, build and run clouds – anytime, anywhere

Page 3

Ceph
What is it?

Page 4

The project

➜ Unified distributed storage system

➜ Started in 2006 as a PhD project by Sage Weil

➜ Open source under the LGPL license

➜ Written in C++

➜ Builds the future of storage on commodity hardware

Page 5

Key features

➜ Self-managing/healing

➜ Self-balancing

➜ Painless scaling

➜ Data placement with CRUSH

Page 6

CRUSH: Controlled Replication Under Scalable Hashing

➜ Pseudo-random placement algorithm

➜ Statistically uniform distribution

➜ Rule-based configuration (example rule below)
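
To illustrate what rule-based means, this is what a simple replicated rule looks like in a decompiled CRUSH map. It is only a sketch using the stock rule and root names; an actual map will differ:

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        # start from the root of the hierarchy...
        step take default
        # ...then pick N distinct hosts, one replica per host
        step chooseleaf firstn 0 type host
        step emit
}

Changing "type host" to "type rack", for example, tells CRUSH to place each replica in a different rack.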

Page 7

Overview

Page 8

Building a Ceph cluster
General considerations

Page 9

How to start?

➜ Use case
• IO profile: bandwidth? IOPS? Mixed?
• Guaranteed IO: how many IOPS or how much bandwidth per client do I want to deliver?
• Usage: do I run Ceph standalone or combined with another software solution?

➜ Amount of data (usable, not raw; see the sizing sketch after this list)
• Replica count
• Failure ratio: how much data am I willing to rebalance if a node fails?
• Do I have a data growth plan?

➜ Budget :-)
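
To make the usable-versus-raw distinction concrete, a back-of-the-envelope sketch in Python (all numbers are hypothetical):

raw_tb = 300.0         # total raw disk capacity across all OSDs
replica_count = 3      # each object is stored replica_count times
fill_ratio = 0.70      # headroom so a failed node can be rebalanced safely

usable_tb = raw_tb / replica_count * fill_ratio
print("Usable capacity: %.1f TB" % usable_tb)  # -> 70.0 TB

In other words, 300 TB of raw disk with three replicas and a 70% fill target yields roughly 70 TB of usable space.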

Page 10

Things that you must not do

➜ Don't put a RAID underneath your OSDs
• Ceph already manages replication
• A degraded RAID kills performance
• It reduces the usable space of the cluster

➜ Don't build high-density nodes with a tiny cluster
• Consider failures and the amount of data to rebalance
• Risk of a full cluster

➜ Don't run Ceph on your hypervisors (unless you're broke)

Page 11

State of the integration
Including Havana's best additions

Page 12

Why is Ceph so good?

It unifies OpenStack components
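
All of these integrations ultimately talk to the same backend through librados and librbd. A minimal sketch with the Python bindings (python-rados / python-rbd); the pool and image names are made up for illustration, and a reachable cluster with a default ceph.conf is assumed:

import rados
import rbd

# Connect to the cluster using the local configuration file
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on the (hypothetical) 'rbd' pool
ioctx = cluster.open_ioctx('rbd')
try:
    # Create a 10 GiB RBD image, the same kind of call the
    # OpenStack drivers issue under the hood
    rbd.RBD().create(ioctx, 'demo-volume', 10 * 1024 ** 3)
finally:
    ioctx.close()
    cluster.shutdown()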

Page 13

Havana's additions (a configuration sketch follows this list)

➜ Complete refactor of the Cinder driver:
• librados and librbd usage
• Flatten volumes created from snapshots
• Clone depth

➜ Cinder backup with a Ceph backend:
• Backing up within the same Ceph pool (not recommended)
• Backing up between different Ceph pools
• Backing up between different Ceph clusters
• Support for RBD stripes
• Differential backups

➜ Nova libvirt_images_type = rbd
• Directly boot all the VMs in Ceph
• Volume QoS
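
For reference, the Havana-era options behind these features look roughly like the following. This is a sketch, not a complete configuration; the pool names, users and secret UUID are placeholders:

# cinder.conf -- RBD volume driver
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5

# cinder.conf -- Ceph backup driver
backup_driver = cinder.backup.drivers.ceph
backup_ceph_user = cinder-backup
backup_ceph_pool = backups

# nova.conf -- boot all VMs directly in Ceph
libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf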

Page 14

Today’s Havana integration

Page 15

Is Havana the perfect stack?

…

Page 16

Well, almost…

Page 17

What’s missing?

➜ Direct URL download for Nova
• Already in the pipeline, probably for 2013.2.1 (the Glance side is shown below)

➜ Nova snapshot integration
• Ceph snapshots
https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd
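
The direct URL piece depends on Glance exposing image locations to consumers; on the Glance side this is already a one-line option. A sketch for glance-api.conf, assuming images are stored in RBD:

# glance-api.conf
default_store = rbd
show_image_direct_url = True  # exposes the RBD location so images can be cloned copy-on-write instead of downloaded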

Page 18

Icehouse and beyond
Future

Page 19

Tomorrow’s integration

Page 20

Icehouse roadmap

➜ Implement “bricks” for RBD

➜ Re-implement the snapshotting function to use RBD snapshots

➜ RBD on Nova bare metal

➜ Volume migration support

➜ RBD stripes support

Potential roadmap for the “J” release
➜ Manila support

Page 21

Ceph, what’s coming up?
Roadmap

Page 22

Firefly

➜ Tiering: cache pool overlay

➜ Erasure coding

➜ Ceph OSD on ZFS

➜ Full support of OpenStack Icehouse

Page 23

Many thanks!

Questions?

Contact: [email protected]
Twitter: @sebastien_han
IRC: leseb