Gluster as Block Store in Containers
GLUSTER AS BLOCK STORAGE
GLUSTER DEVELOPER SUMMIT
IRC: pkalever
OCT-07-2016
INDEX
Block Storage Architecture
LIO/TCMU
TCMU-Runner
Targetcli
Demo of block store
Block Snapshots
Performance Numbers
Integration with Containers
Persistent Store in Containers
Demo with k8s
Read Write Once
What's Next?
Q&A
ARCHITECTURE
[Architecture diagram: an iSCSI initiator connects to the LIO core in the kernel. LIO fabrics include iSCSI, vhost, and loopback; its storage modules/backstores include fileIO, ramdisk, block, pscsi, and tcm-user. tcm-user hands SCSI commands to user space through uio devices (uio0, uio1), where tcmu-runner handlers (qcow2, glfs) service them. targetcli configures the target; the initiator sees the LUN as /dev/sda.]
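The user-space path in the diagram is what targetcli drives in practice. A minimal sketch of exporting a file on a Gluster volume through tcmu-runner's glfs handler, assuming tcmu-runner with that handler is installed; all names here (glfsLUN, blockvol, node1, the IQNs) are illustrative, and the exact glfs config-string format may vary by tcmu-runner version:

```shell
# Create a TCMU-backed backstore served by tcmu-runner's glfs handler
# (config string: <volume>@<server>/<file>; names are illustrative)
targetcli /backstores/user:glfs create glfsLUN 1G blockvol@node1/disk.img

# Create an iSCSI target and export the backstore as a LUN
targetcli /iscsi create iqn.2016-10.org.gluster:glfsLUN
targetcli /iscsi/iqn.2016-10.org.gluster:glfsLUN/tpg1/luns create /backstores/user:glfs/glfsLUN

# Allow an initiator and persist the configuration
targetcli /iscsi/iqn.2016-10.org.gluster:glfsLUN/tpg1/acls create iqn.1994-05.com.redhat:client1
targetcli saveconfig
```

These commands must run as root on a node that can reach the Gluster volume; they configure the kernel target rather than producing verifiable output.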
BLOCK SNAPSHOTS
Change in the tcmu-runner qcow2 handler
Reflink-based snapshots
QEMU translator from Gluster
BLOCK SNAPSHOTS [CONT...]
[Diagram: qcow2 on-disk layout — header, refcount table, refcount blocks, L1 table, L2 tables, and data clusters, with separate L1/L2 tables per snapshot (1, 2, 3, ... N). Stack: the QEMU block xlator runs on the Gluster server side; on the client side, LIO/TCMU with tcmu-runner (glfs and qcow2 handlers) exposes the image, raw or qcow2, as /dev/sda.]
READ PERFORMANCE
[Read throughput chart]
WRITE PERFORMANCE
[Write throughput chart]
CONTAINERIZATION GOAL
- Containers isolate the application from the environment
- They are stateless
- Persistent storage:
- iSCSI target device
- File in a Gluster volume
KUBERNETES ARCHITECTURE
- All the Kubernetes nodes initiate the iSCSI session, attach the iSCSI target as a block device, and serve it to the Kubernetes pod where the application is running and requires persistent storage.
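This attachment is expressed through the in-tree Kubernetes iSCSI volume plugin. A minimal pod sketch; the portal address and IQN below are placeholders for one of the Gluster target nodes, not values from the talk:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iscsi-app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: iscsivol
      mountPath: /mnt/store
  volumes:
  - name: iscsivol
    iscsi:
      # Placeholder portal/IQN for one of the Gluster target nodes
      targetPortal: 192.168.1.101:3260
      iqn: iqn.2016-10.org.gluster:glfsLUN
      lun: 0
      fsType: ext4
      readOnly: false
```

The kubelet on the scheduled node logs in to the target, formats/mounts the device if needed, and bind-mounts it into the pod.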
DEMO PLAN
We use 6 nodes (VMs): 3 server nodes for the Gluster and target side, and 3 client nodes for the k8s cluster and initiator side.
We create a Gluster replica-3 volume using the 3 server nodes {Node1, Node2 and Node3}.
Define an iSCSI target using the same nodes, and expose a LUN from each of them.
Use Node 4 and Node 5 as iSCSI initiators, by logging in to the iSCSI target session created above.
Set up the K8s cluster using {Node4, Node5 and Node6}; Node 6 is the master and the other 2 are slave nodes.
From Node 6, create the pod and verify that the iSCSI target device mounts and works as expected.
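The volume-creation step can be sketched with the standard Gluster CLI; brick paths and the volume name are illustrative, and the commands assume glusterd is running on all three server nodes:

```shell
# On Node1: create and start a replica-3 volume across the three server nodes
gluster volume create blockvol replica 3 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
gluster volume start blockvol

# The file that will back the iSCSI LUN lives on this volume
gluster volume info blockvol
```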
DEMO
Demo on setting up multiple targets with the same WWN
https://asciinema.org/a/88325
READ WRITE ONCE
PERSISTENT RESERVATIONS (PGR)
IO fencing:
SCSI-3 PGRs are most commonly employed as fencing mechanisms in high-availability clusters to safeguard against split-brain conditions.
Implemented in the LIO target engine
A logical mechanism for controlling access to a device server/target
READ WRITE ONCE [CONT...]
HOW DO THEY WORK?
Via the PERSISTENT RESERVE IN and PERSISTENT RESERVE OUT SCSI commands.
PROUT (Service Actions)
REGISTER, RESERVE, RELEASE, ...
PROUT (Reservation Types)
Write Exclusive, Exclusive Access, Write Exclusive Registrants Only, ...
PRIN (Service Actions)
READ_KEYS, READ_RESERVATION, REPORT_CAPABILITIES, READ_FULL_STATUS
Example:
# sg_persist --out --register --param-sark=0x12 /dev/sda
# sg_persist --read-reservation /dev/sda
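The example above can be extended into a full fencing sequence. A sketch, assuming a PR-capable device at /dev/sda; the key value and device path are illustrative, and the commands require root on a node with sg3_utils installed:

```shell
# Register this initiator's key (a new registration passes the
# service-action reservation key via --param-sark)
sg_persist --out --register --param-sark=0x12 /dev/sda

# Take a Write Exclusive Registrants Only reservation (--prout-type=5)
sg_persist --out --reserve --param-rk=0x12 --prout-type=5 /dev/sda

# Inspect reservation state from any registered node
sg_persist --in --read-keys /dev/sda
sg_persist --in --read-reservation /dev/sda

# Release the reservation, then drop the registration
sg_persist --out --release --param-rk=0x12 --prout-type=5 /dev/sda
sg_persist --out --register --param-rk=0x12 --param-sark=0x0 /dev/sda
```

With a Registrants Only reservation type, unregistered initiators are fenced off from writes, which is what gives Read Write Once semantics across the cluster.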
INTEGRATION WITH CONTAINERS
- OS-level virtualization
- Docker orchestration
- PaaS platform on top of K8s
- Kubernetes
Hint: Click on the logos to navigate to the respective blogs
WHAT'S NEXT?
1 More Functional testing with various workloads
2 Performance with container workloads
3 Gdeploy Integration
4 Dynamic Provisioning with Heketi
5 Hyper-Convergence ?
6 Thoughts are welcome!
Q & A