Transcript of Gluster.Next (February 2016)
Gluster.Next
Vijay Bellur (@vbellur)
February 2016
Credits
Some slides/content borrowed & stolen from:

- Atin Mukherjee
- Jeff Darcy
- Luis Pabon
- Prasanna Kalever
Agenda
- What is Gluster.Next?
- How Gluster.Next?
- Why Gluster.Next?
- When Gluster.Next?
Gluster.Next What?
Gluster.Today

- Scale-out storage system
- Aggregates storage exports over network interconnects to provide a unified namespace
- File, Object, API and Block interfaces
- Layered on disk file systems that support extended attributes
Gluster.Today - Features

● Scale-out NAS
  ○ Elasticity, quotas
● Data Protection and Recovery
  ○ Volume Snapshots
  ○ Synchronous & Asynchronous Replication
  ○ Erasure Coding
● Archival
  ○ Read-only, WORM
● Native CLI / API for management
Gluster.Today - Features (continued)

● Isolation for multi-tenancy
  ○ SSL for data/connection, encryption at rest
● Performance
  ○ Data, metadata and readdir caching; tiering (in beta)
● Monitoring
  ○ Built-in I/O statistics, /proc-like interface for introspection
● Provisioning
  ○ Puppet-gluster, gdeploy
● More...
What is Gluster.Next?
- Gluster.today
  - Client-driven Distribution, Replication and Erasure Coding (with FUSE)
  - Spread across 3 to 100s of nodes
  - Geared towards “Storage as a Platform”
What is Gluster.Next?
- Gluster.Next
  - Architectural evolution spanning multiple releases (3.8 & 4.0)
  - Scale-out to 1000s of nodes
  - Choice of Distribution, Replication and Erasure Coding on servers or clients
  - Geared towards “Storage as a Platform” and “Storage as a Service”
  - Native ReSTful management & eventing for monitoring
Gluster.Next How?
Terminology
● Trusted Storage Pool - Collection of Storage Servers
● Brick - An export directory on a server
● Volume - Logical collection of bricks
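The three terms nest naturally: servers form the trusted pool, each server exports bricks, and a volume aggregates bricks. A minimal sketch of that relationship in Python (illustrative only; this is not Gluster's actual internal data model):

```python
from dataclasses import dataclass, field

@dataclass
class Brick:
    server: str      # a storage server in the trusted pool
    export_dir: str  # export directory on that server

@dataclass
class Volume:
    name: str
    bricks: list = field(default_factory=list)  # logical collection of bricks

# Trusted storage pool: the set of servers that know each other
pool = {"server1", "server2", "server3"}

# A volume built from one brick on each server in the pool
vol = Volume("demo", [Brick(s, "/export/brick1") for s in sorted(pool)])
print(len(vol.bricks))  # 3
```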
Gluster.Next - Main Components
- GlusterD 2
- DHT 2
- NSR
- Events
- Network QoS
- Brick Mgmt
- Sharding
DHT 2
● Problem: directories on all subvolumes
  ○ directory ops can take O(n) messages
  ○ not just mkdir or traversals - also create, rename, and so on
● Solution: each directory on one subvolume
  ○ can still be replicated etc.
  ○ each brick can hold data, metadata, or both
  ○ by default, each is both, just like current Gluster
DHT 2 (continued)
● Improved layout handling
  ○ central (replicated) instead of per-brick
  ○ less space, instantaneous “fix-layout” step
  ○ layout generations help with lookup efficiency
● Flatter back-end structure
  ○ makes GFID-based lookups more efficient
  ○ good for NFS, SMB
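The "each directory on one subvolume" idea amounts to hashing a directory's identity to pick a single home subvolume, so directory ops send O(1) messages instead of one per subvolume. A rough Python sketch (the hash function, GFID format and placement scheme here are illustrative assumptions, not DHT 2's real algorithm):

```python
import hashlib

def home_subvolume(parent_gfid: str, name: str, subvolumes: list) -> str:
    """Place a directory on exactly one subvolume, chosen by hash.

    With each directory living on a single subvolume, mkdir/create/rename
    contact one server, instead of touching every subvolume as in classic DHT.
    """
    h = hashlib.sha1(f"{parent_gfid}/{name}".encode()).digest()
    idx = int.from_bytes(h[:4], "big") % len(subvolumes)
    return subvolumes[idx]

subvols = ["subvol-0", "subvol-1", "subvol-2", "subvol-3"]
# The same (parent, name) pair always hashes to the same home subvolume.
print(home_subvolume("00000000-0000-0000-0000-000000000001", "dir1", subvols))
```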
NSR (name change coming soon)
● Server-side with temporary leader
  ○ vs. client-side, client-driven
  ○ can exploit faster/separate server network
● Log/journal based
  ○ can exploit flash/NVM (“poor man’s tiering”)
● More flexible consistency options
  ○ fully sync, ordered async, hybrids
  ○ can replace geo-replication for some use cases
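A toy model of the leader-driven, journal-based flow: the client talks only to the leader, which appends each write to its journal and then replicates it to the followers. This is an illustrative Python sketch of the shape of the protocol, not NSR's actual implementation:

```python
class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []  # journal; in NSR this could live on SSD/NVM

    def append(self, entry):
        self.log.append(entry)

class Leader(Replica):
    def __init__(self, name, followers):
        super().__init__(name)
        self.followers = followers

    def write(self, entry):
        # The leader orders the write, journals it, then replicates it.
        self.append(entry)
        for f in self.followers:
            f.append(entry)  # fully sync; async/hybrid modes would ack earlier
        return len(self.log)  # log index acknowledged back to the client

b, c = Replica("B"), Replica("C")
a = Leader("A", [b, c])
a.write({"op": "write", "file": "f1", "data": b"hello"})
print(a.log == b.log == c.log)  # True: all replicas hold the same journal
```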
NSR Illustrated (data flow)
(Diagram: the client sends each write once, to Server A, the leader; Server A forwards it to Server B and Server C. There is no bandwidth splitting on the client link, and replication traffic stays on the server network.)
NSR Illustrated (log structure)
(Diagram: client requests arrive at the server and are appended to journals, Log 1 being the newest and Log 2 older, before being applied to the brick filesystem; the logs can be placed on SSD/NVM.)
Sharding
● Spreads data blocks across a gluster volume
● Primarily targeted for VM image storage
● File sizes not bound by brick or disk limits
● More efficient healing, rebalance and geo-replication
● Yet another translator in Gluster
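Spreading a file's blocks reduces to simple offset arithmetic: with a fixed shard size, an I/O at a given offset maps to shard number `offset // shard_size`, and only the touched shards ever need healing or rebalance. The sketch below assumes a 64 MB shard size for illustration (the real block size is a volume option):

```python
SHARD_SIZE = 64 * 1024 * 1024  # assumed shard size for this sketch

def shard_for_offset(offset: int) -> int:
    """Which shard holds this byte offset? Shard 0 is the base file."""
    return offset // SHARD_SIZE

def shards_for_range(offset: int, length: int) -> range:
    """All shards touched by an I/O of `length` bytes at `offset`."""
    first = shard_for_offset(offset)
    last = shard_for_offset(offset + length - 1)
    return range(first, last + 1)

# A 1 MB write at offset 128 MB lands entirely in shard 2.
print(list(shards_for_range(128 * 1024 * 1024, 1024 * 1024)))  # [2]
```

Because each shard is an ordinary file distributed across the volume, a file's size is no longer bounded by any single brick or disk.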
Sharding Illustrated
Network QoS

● Necessary to avoid hot spots at high scale
  ○ avoid starvation, cascading effects
● A single activity or type of traffic (e.g. self-heal or rebalance) can be:
  ○ directed toward a separate network
  ○ throttled on a shared network
● User gets to control front-end impact vs. recovery time
GlusterD2

● More efficient/stable membership
  ○ especially at high scale
● Stronger configuration consistency
● Modularity and plugins
● Exposes ReST interfaces for management
● Core implementation in Go
GlusterD2 - Architecture
Heketi

● Dynamic share provisioning with Gluster volumes
● Eases brick provisioning - LVs, VGs, filesystem etc.
● Automatically determines brick locations for fault tolerance
● Exposes high-level ReST interfaces for management
  ○ create share, expand share, delete share etc.
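The "create share" operation boils down to a small JSON request against the ReST interface. The sketch below only builds such a request; the endpoint path and field names are assumptions modeled on Heketi's volume-create API, not verified against any particular release:

```python
import json

def create_share_request(size_gb: int, replica: int = 3) -> tuple:
    """Build a (method, path, body) triple for a hypothetical share-create call."""
    body = json.dumps({
        "size": size_gb,            # share size in GB
        "durability": {             # Heketi picks brick locations for fault tolerance
            "type": "replicate",
            "replicate": {"replica": replica},
        },
    })
    return ("POST", "/volumes", body)

# "Give me a non-shared 5 GB share"
method, path, body = create_share_request(5)
print(method, path)  # POST /volumes
```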
Heketi Illustrated
Heketi - Architecture
Event Framework
● Export node and volume events in a more consumable way
● Support external monitoring and management
● Currently: via storaged (DBus)
  ○ yeah, it’s a Red Hat thing
Brick Management
● Multi-tenancy, snapshots, etc. mean more bricks to manage
  ○ possibly exhaust cores/memory
● Heketi can help
  ○ can also make things worse (e.g. splitting bricks)
● One daemon/process must handle multiple bricks to avoid contention/thrashing
  ○ core infrastructure change, many moving parts
Gluster.Next Why?
Why Gluster.Next?
- Paradigm changes in IT consumption
  - Storage as a Service & Storage as a Platform
  - Private, public & hybrid clouds
- New workloads
  - Containers, IoT, <buzz-word> demanding scale
- Economics of scale
Why Gluster.Next: StaaS

- User/tenant-driven provisioning of shares
- Talks to any number of Gluster clusters and nodes
- Tagging of nodes for differentiated classes of service
- QoS for preventing noisy neighbors
Why Gluster.Next: Containers

- Persistent storage for stateless containers
  - Non-shared/Block: Gluster-backed file through iSCSI
  - Shared/File: multi-tenant Gluster shares / volumes
- Shared storage for container registries
  - Geo-replication for DR
- Heketi to ease provisioning
  - “Give me a non-shared 5 GB share”
  - “Give me a shared 1 TB share”
- Shared storage use cases being integrated with Docker, Kubernetes & OpenShift
Why Gluster.Next: Hyperconvergence with VMs
- Gluster processes are lightweight
- Benign self-healing and rebalance with sharding
- oVirt & Gluster - already integrated management controller
- Geo-replication of shards possible!
Why Gluster.Next: Hyperconvergence with OpenShift
Gluster.Next When?
Gluster.Next Phase 1
Gluster 3.8 - May/June 2016

- Experimental features from Gluster.Next
  - dht2, NSR, glusterd2, Eventing
- Sub-directory export support for FUSE
- Improvements for NFS/SMB accesses
  - Leases, RichACLs, Mandatory Locks etc.
- UNIX-domain sockets for I/O
  - slight boost in hyperconverged setups
Gluster.Next Phase 2
Gluster 4.0 - December 2016

- Everything that we’ve discussed so far :-)
- And more...
Other Stuff
● IPv6 support
● “Official” FreeBSD support
● Compression
● Code generation
  ○ reduce duplication, technical debt
  ○ ease addition of new functionality
● New tests and test infrastructure
Thank You!
Resources: [email protected]@gluster.org
http://twitter.com/gluster
IRC: #gluster, #gluster-dev on Freenode