Ceph Day London 2014 - Ceph at IBM
© 2014 IBM Corporation
Ceph at IBM and IBM Research
Zoltan Arnold Nagy
IBM Research – Zurich
[email protected]
23 October 2014
Legal disclaimer
All information contained in this document is subject to change
without notice. The products described in this document are
generally available via the normal IBM sales channels. The
information contained in this document does not affect or change
IBM product specifications or warranties. Nothing in this
document shall operate as an express or implied license or
indemnity under the intellectual property rights of IBM or third
parties. All information contained in this document was obtained
in specific environments, and is presented as an illustration. The
results obtained in other operating environments may vary.
Agenda
Ceph at IBM Research - Zurich
Ceph on SoftLayer
Plans for the future
The need for a local cloud at the Zurich Research Lab
Highly available and fast compute nodes
— Simulations need reliable infrastructure
Projects with high storage needs
— Not suitable for migration to another site
— Legal constraints on data handling
Network and cloud security research
— Needs a very responsive admin team
A few hundred users
— But not typical end users
— Differing OS requirements
First iteration
3 compute nodes with 396 GB of RAM and 36 CPUs
— System x3550 M4
4 storage nodes with 13.5 TB of total raw space on SSDs
— 2 storage nodes are hybrid compute & storage (not optimal)
— System x3550 M4, Supermicro SC216E16-R1200LPB
Everything connected with 2x10GbE
OpenStack Havana & Ceph Dumpling (0.67) on Ubuntu 12.04 LTS
— KVM hypervisors
Boundaries hit quickly
Storage demand grows by ~1 TB/month
— Ceph is CPU-hungry when doing small-block IO
— We had grown to 38 SSDs (17 TB raw) by the time of sunset (Oct 2014)
Price point too high for general volume storage
— Some Hadoop users would need ~30 TB
Network is doing well, not saturated
Second iteration
6x System x3650 M4 BD storage boxes
— Ceph 0.80.7 on Ubuntu 14.04
— 144 TB raw capacity on HDDs: 12× 2 TB OSDs per node
— 2× 120 GB SSDs for journals, 6 disk journals per SSD
— OS on dedicated SSDs
3 monitors running in VMs
Will repurpose the SSD nodes to provide a second Cinder pool for special use cases
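A Firefly-era (0.80.x) layout like this is usually expressed when preparing the OSDs rather than in ceph.conf: each HDD-backed OSD points its journal at a partition of a shared SSD. A minimal sketch of how one such OSD could be prepared (hostname and device names are hypothetical; with 6 journals per 120 GB SSD, each journal partition gets roughly 20 GB):

```
# Prepare one of the 12 OSDs on storage node "store01":
# data on a 2 TB HDD (/dev/sdc), journal on a pre-made
# partition of a shared 120 GB SSD (/dev/sda1).
ceph-deploy osd prepare store01:/dev/sdc:/dev/sda1
ceph-deploy osd activate store01:/dev/sdc1:/dev/sda1
```

Keeping the OS on its own SSDs, as the slide notes, avoids journal writes competing with system IO.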
Agenda
Ceph at IBM Research – Zurich
Ceph on SoftLayer
Plans for the future
IBM's cloud participation is growing
SoftLayer acquired in 2013
— Mostly bare-metal hosting, but VMs are also available
OpenStack Foundation platinum member
— Ranking among the top contributors release by release
Strong commitment to cloud technologies
Ceph on SoftLayer
Hosted managed private cloud
— Using SoftLayer as the physical foundation
— Using Ceph for volume storage! (ephemeral storage is still on local RAID10)
Consists of
— 3 controller nodes running in VMs on 3 physical “controllers”
— 1 GbE / 10 GbE connectivity across hypervisors/storage nodes, depending on physical provisioning location
— Fixed storage sizes (between 8 TB and 96 TB usable in the initial offering)
— 2-way replication instead of 3 to keep costs down
No erasure coding yet
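For illustration, the replication factor of a pool is a per-pool setting; a 2-way volume pool could be configured like this against a live cluster (the pool name "volumes" is an assumption):

```
# Replicate each object twice instead of the usual three times
ceph osd pool set volumes size 2
# With two copies, still serve IO when only one copy is up
ceph osd pool set volumes min_size 1
# Verify the setting
ceph osd pool get volumes size
```

The trade-off is the one the slide names: half the capacity overhead of 3-way replication, at the cost of less redundancy during failures.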
Agenda
Ceph at IBM Research – Zurich
Ceph on SoftLayer
Plans for the future
Plans for the future
Currently Nova uses a single central cephx key to access Ceph
— Would be nice to have per-tenant cephx keys
— Need to propagate image ACLs to Ceph auth rules
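To make the difference concrete, cephx capabilities are granted per client entity; per-tenant keys would mean one capability-restricted entity per tenant instead of one shared key. A sketch (entity and pool names are illustrative, not the deployment's actual names):

```
# Current model: one shared key used by Cinder/Nova for all tenants
ceph auth get-or-create client.cinder \
    mon 'allow r' \
    osd 'allow rwx pool=volumes, allow rx pool=images'

# Desired model (hypothetical): a key scoped to one tenant's pool
ceph auth get-or-create client.tenant-acme \
    mon 'allow r' \
    osd 'allow rwx pool=volumes-acme'
```

Propagating Glance image ACLs would then mean keeping these per-tenant capability strings in sync with OpenStack's own access rules.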
Working on proper RBD snapshotting
— Apparently this will need an approved blueprint first...
Simplified multi-DC support for stretched Ceph deployments
— By playing with CRUSH, physical data separation is doable, but not convenient
— Need async replication support so we can tolerate latency
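The "playing with CRUSH" mentioned above amounts to declaring a datacenter level in the CRUSH hierarchy and writing a rule that places replicas in distinct datacenters. A minimal rule sketch in CRUSH map syntax (bucket and rule names are hypothetical):

```
# Place each replica in a different datacenter bucket
rule stretched {
    ruleset 1
    type replicated
    min_size 2
    max_size 3
    step take default
    step chooseleaf firstn 0 type datacenter
    step emit
}
```

This works, but as the slide says it is not convenient: all replication is still synchronous, so every write pays the inter-DC round-trip latency, which is why async replication support is the real ask.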
Questions?
Thank You!