
Ceph: de facto storage backend for OpenStack

OpenStack Summit 2013, Hong Kong

Whoami
💥 Sébastien Han
💥 French Cloud Engineer working for eNovance
💥 Daily job focused on Ceph and OpenStack
💥 Blogger

Personal blog: http://www.sebastien-han.fr/blog/
Company blog: http://techs.enovance.com/

Worldwide offices coverage
We design, build and run clouds – anytime, anywhere

Ceph
What is it?

The project

➜ Unified distributed storage system

➜ Started in 2006 as a PhD project by Sage Weil

➜ Open source under LGPL license

➜ Written in C++

➜ Build the future of storage on commodity hardware

Key features

➜ Self-managing / self-healing

➜ Self-balancing

➜ Painless scaling

➜ Data placement with CRUSH

Controlled Replication Under Scalable Hashing

➜ Pseudo-random placement algorithm

➜ Statistically uniform distribution

➜ Rule-based configuration
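To make the rule-based configuration above concrete, here is a hedged sketch of a simple replicated rule in the decompiled crushmap syntax; the rule name and the "default" root are placeholders to adapt to the actual cluster topology:

    # Illustrative replicated rule: place each replica on a different host
    # under the "default" root of the CRUSH hierarchy.
    rule replicated_example {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

Because clients compute this mapping themselves from the rule and the current cluster map, data is located by calculation rather than by asking a central lookup service.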

Overview

Building a Ceph cluster
General considerations

How to start?

➜ Use case

• IO profile: bandwidth? IOPS? Mixed?
• Guaranteed IO: how many IOPS or how much bandwidth do I want to deliver per client?
• Usage: do I run Ceph standalone or combined with another software solution?

➜ Amount of data (usable, not raw; see the sizing sketch after this list)
• Replica count
• Failure ratio: how much data am I willing to rebalance if a node fails?
• Do I have a data growth plan?

➜ Budget :-)
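A minimal sizing sketch for the usable-versus-raw and failure-ratio questions above; the figures (100 TB usable, 3 replicas, 10 identical nodes) are assumptions picked only for illustration:

    # Rough sizing: raw capacity needed for a given usable capacity,
    # and how much data moves when one node fails.
    usable_tb = 100        # capacity the applications actually need
    replica_count = 3      # each object is stored replica_count times
    node_count = 10        # identical storage nodes

    raw_tb = usable_tb * replica_count        # raw disk to provision
    raw_per_node_tb = raw_tb / node_count     # raw capacity held by each node
    rebalance_tb = raw_per_node_tb            # data re-replicated if one node fails

    print(f"raw capacity needed: {raw_tb} TB")
    print(f"data to rebalance after losing one node: {rebalance_tb} TB")

The same arithmetic also shows why a RAID layer under the OSDs or very dense nodes in a small cluster hurt: both reduce the usable space or inflate the amount of data to move after a failure.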

Things that you must not do

➜ Don't put RAID underneath your OSDs
• Ceph already manages replication
• A degraded RAID array hurts performance
• It reduces the usable space of the cluster

➜ Don't build high-density nodes in a tiny cluster
• Failure considerations and the amount of data to re-balance
• Risk of a full cluster

➜ Don't run Ceph on your hypervisors (unless you're broke)

State of the integration
Including Havana’s best additions

Why is Ceph so good?

It unifies OpenStack components

Havana’s additions

➜ Complete refactor of the Cinder driver (sample configuration after this slide):

• librados and librbd usage
• Flatten volumes created from snapshots
• Clone depth

➜ Cinder backup with a Ceph backend:
• Backing up within the same Ceph pool (not recommended)
• Backing up between different Ceph pools
• Backing up between different Ceph clusters
• RBD stripe support
• Differential backups

➜ Nova: libvirt_images_type = rbd
• Boot all VMs directly in Ceph
• Volume QoS
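As a rough sketch of how these Havana features map to configuration, the snippet below shows illustrative cinder.conf and nova.conf options; the pool names (volumes, backups, vms), the cinder and cinder-backup users and the secret UUID are placeholders, and the exact option names should be checked against the release documentation:

    # cinder.conf: RBD volume driver (librbd-based, Havana-era option names)
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret UUID>
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5

    # cinder.conf: backup to a Ceph pool (the conf file may point to another cluster)
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_user = cinder-backup
    backup_ceph_pool = backups
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0

    # nova.conf: boot instances directly on RBD
    libvirt_images_type = rbd
    libvirt_images_rbd_pool = vms
    libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf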

Today’s Havana integration

Is Havana the perfect stack?

Well, almost…

What’s missing?

➜ Direct URL download for Nova

• Already in the pipeline, probably for 2013.2.1

➜ Nova’s snapshots integration

• Ceph snapshot

https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd

Icehouse and beyond
Future

Tomorrow’s integration

Icehouse roadmap

➜ Implement “bricks” for RBD

➜ Re-implement the snapshot function to use RBD snapshots

➜ RBD on Nova bare metal

➜ Volume migration support

➜ RBD stripes support

« J » potential roadmap

➜ Manila support

Ceph, what’s coming up?
Roadmap

Firefly

➜ Tiering - cache pool overlay

➜ Erasure code

➜ ZFS support for the Ceph OSD

➜ Full support of OpenStack Icehouse

Many thanks!

Questions?

Contact: sebastien@enovance.com
Twitter: @sebastien_han
IRC: leseb