MANAGING STORAGE IN CLUSTER AND DOCKER CONTAINER BY USING
KUBERNETES
NUR SUWAIBAH BINTI MOHD ZAKI
BACHELOR OF COMPUTER SCIENCES
(COMPUTER NETWORK SECURITY) WITH HONOURS
FACULTY OF INFORMATICS AND COMPUTING
UNIVERSITI SULTAN ZAINAL ABIDIN
AUGUST 2018
DECLARATION
I hereby declare that this report, entitled Managing Storage in Cluster and Docker Container by Using Kubernetes, is based on the results of my own investigation of the information gathered. All sections of the text and all results obtained from other sources are fully referenced.
Signature: …………………………….
Name: Nur Suwaibah Binti Mohd Zaki
Date: ………………………………..
CONFIRMATION
This project report, entitled Managing Storage in Cluster and Docker Container by Using Kubernetes, was prepared and submitted by Nur Suwaibah Binti Mohd Zaki (Matric Number: BTBL15040362) and has been found satisfactory in terms of scope, quality, and presentation, in partial fulfillment of the requirements for the Bachelor of Computer Science (Computer Network Security) with Honours at Universiti Sultan Zainal Abidin.
Signature: …………………………….
Name: Ahmad Faisal Amri Bin Abidin@Bharun
Date: ………………………………..
ACKNOWLEDGMENT
In the Name of Allah, the Most Gracious and the Most Merciful.
The research presented in this dissertation could not have been conducted without the support, encouragement, and cooperation of many people. First of all, I would like to express my deepest gratitude to my supervisor, En. Ahmad Faisal Amri Bin Abidin@Bharun, who has always given valuable advice and encouragement at each stage of developing this project successfully. I would like to thank him for giving me the opportunity to learn and work under his guidance, which has been a most memorable experience.
I also want to take this opportunity to thank my parents, and to extend special thanks to all the lecturers of the Faculty of Informatics and Computing for their attention, guidance, and advice in the development of this project. My sincere thanks also go to my fellow friends, who were there to help finish this project. May Allah S.W.T. bless all the effort that has been taken to complete this project.
Thank you.
ABSTRACT
Users are storing increasingly massive amounts of data. Storage software complexity is growing, and the use of cheap, less reliable hardware is increasing. Keeping critical data safe and accessible from several locations has become a global preoccupation, whether this data is personal, organizational, or produced by applications. As a consequence, we have seen the emergence of online storage services. In addition, there is the new paradigm of Cloud Computing, which brings new ideas for building services that allow users to store their data and run their applications in the "Cloud". By managing these services' storage in a smart and efficient way, it is possible to improve the quality of service offered, as well as to optimize the usage of the infrastructure where the services run. This management is even more critical and complex when the infrastructure is composed of thousands of nodes running several virtual machines and sharing the same storage.
Kubernetes, an open-source cluster manager from Google, uses the current resource utilization of each application to determine how to replicate applications across the cluster, ensuring that each application operates at a previously specified resource utilization level. Kubernetes is a tool that facilitates the deployment of multiple OS-level virtualized applications using containers. It is capable of managing several containerized applications and users at the same time, allowing compute resources to be managed and distributed among the applications. In short, this thesis presents to users of Kubernetes, by managing storage with a cluster, an additional option for efficiently and reliably running their applications. In doing so, we expand the realm of issues addressable through computing with distributed systems.
ABSTRAK
Pengguna menyimpan jumlah data yang semakin besar. Kerumitan perisian penyimpanan
semakin berkembang. Penggunaan perkakasan murah dan kurang dipercayai semakin meningkat.
Menjaga data kritikal yang selamat dan boleh diakses dari beberapa lokasi telah menjadi
tumpuan global, sama ada data ini peribadi, organisasi atau dari aplikasi. Sebagai akibat daripada
isu ini, kami mengesahkan kemunculan perkhidmatan penyimpanan dalam talian. Di samping
itu, terdapat paradigma baru Cloud Computing, yang membawa idea baru untuk membina
perkhidmatan yang membolehkan pengguna menyimpan data mereka dan menjalankan aplikasi
mereka dalam "Cloud". Dengan melakukan pengurusan storan perkhidmatan yang bijak dan
cekap, adalah mungkin untuk meningkatkan kualiti perkhidmatan yang ditawarkan, serta
mengoptimumkan penggunaan infrastruktur di mana perkhidmatan dijalankan. Pengurusan ini
lebih kritis dan kompleks apabila infrastruktur dibuat oleh ribuan node yang menjalankan
beberapa mesin maya dan berkongsi storan yang sama.
Kubernetes, pengurus kluster sumber terbuka baru dari Google, yang menggunakan
penggunaan sumber aplikasi semasa untuk menentukan cara meniru aplikasi di seluruh cluster
untuk memastikan setiap aplikasi beroperasi pada tahap penggunaan sumber yang telah
ditetapkan sebelumnya. Kubernetes adalah alat untuk memudah cara penggunaan aplikasi
berbilang sistem OS berbilang menggunakan bekas. Ia mampu untuk pengurusan beberapa
aplikasi dan pengguna berkumpulan pada masa yang sama membolehkan sumber pengiraan
dapat diuruskan dan diedarkan kepada aplikasi. Singkatnya, tesis ini membentangkan pengguna
Kubernetes, dan menguruskan storan dengan menggunakan cluster, pilihan tambahan untuk
menjalankan aplikasi mereka dengan cekap dan tepat. Dengan berbuat demikian, kami
memperluaskan bidang isu-isu yang dapat ditangani melalui pengkomputeran dengan sistem
yang diedarkan.
TABLE OF CONTENTS

CHAPTER  TITLE                                              PAGE

         DECLARATION                                        i
         CONFIRMATION                                       ii
         ACKNOWLEDGEMENT                                    iii
         ABSTRACT                                           iv
         ABSTRAK                                            v
         TABLE OF CONTENTS                                  vi-viii
         LIST OF FIGURES                                    ix
         LIST OF TABLES                                     x

1        INTRODUCTION
         1.1  Introduction                                  1
         1.2  Background                                    2
         1.3  Problem Statement                             3
         1.4  Objective                                     3
         1.5  Scope                                         4
         1.6  Project Constraints and Limitations           4

2        LITERATURE REVIEW
         2.1  Introduction                                  5
         2.2  Google Cloud Platforms                        6
         2.2.1  Ubuntu                                      6
         2.3  Kubernetes                                    6-7
         2.4  Research and Article                          8-11
         2.5  Summary                                       11

3        METHODOLOGY
         3.1  Introduction                                  12
         3.2  Methodology                                   13
         3.2.1  Cluster                                     13-14
         3.2.2  Pod                                         14-15
         3.2.3  Service                                     15
         3.2.4  Volumes                                     15
         3.2.5  Namespace                                   16
         3.2.6  Ingresses                                   16-17
         3.3  Kubernetes Architecture                       18-19
         3.3.1  Kubernetes Master / Master Node             19
         3.3.2  Kubernetes Nodes / Worker Node              20
         3.3.3  etcd                                        21
         3.3.4  API Server                                  21
         3.3.5  Scheduler                                   21
         3.3.6  Control Manager                             21
         3.3.7  Docker                                      22
         3.3.8  Kubelet                                     22
         3.3.9  Kubernetes Proxy                            23-24
         3.4  Persistent Volumes                            24
         3.4.1  Kubernetes Volumes vs Persistent Volumes    25-26
         3.5  Framework                                     27-28
         3.5.1  Pod Flow on Kubernetes                      28
         3.6  Define Requirement                            29
         3.6.1  Software Requirement                        29
         3.6.2  Hardware Requirement
         3.7  Summary

4        IMPLEMENTATION AND RESULT
         4.1  Introduction                                  30
         4.2  Google Cloud Platform                         31-32
         4.3  Ubuntu on Google Cloud Platform               32-34
         4.3.1  Install the KVM2 Driver                     34-36
         4.3.2  Install Helm and Tiller                     36-37
         4.3.3  Install WordPress                           38-43
         4.3.4  Manage Storage on Kubernetes                44
         4.4  Summary

5        CONCLUSION
         5.1  Introduction                                  45
         5.2  Project Constraint                            46
         5.3  Future Work                                   46
         5.4  Conclusion                                    47

         APPENDIX                                           48
         REFERENCES                                         49-51
LIST OF FIGURES

FIGURE 1:  Kubernetes cluster depicted
FIGURE 2:  Prod namespace creation instructions
FIGURE 3:  Instructions to create an Ingress resource
FIGURE 4:  Kubernetes Architecture
FIGURE 5:  Framework
FIGURE 6:  Pod flow
FIGURE 7:  Custom image for virtualization
FIGURE 8:  Create a VM instance
FIGURE 9:  Create a VM instance to install the KVM2 driver
FIGURE 10: Install the driver
FIGURE 11: Start Minikube
FIGURE 12: Installing Helm
FIGURE 13: Installing Tiller
FIGURE 14: Install WordPress
FIGURE 15: Enable NFS provisioning with a file named default-storage.yaml
FIGURE 16: Create the default storage class
LIST OF TABLES

Table 1: Software Requirement
Table 2: Hardware Requirement
Table 3: Master and Node architecture function
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
This chapter presents an introduction to the project. It starts with an overview of the key concepts related to the problems the project addresses. The fundamental motivations behind the project are then stated, and the proposed solutions to its challenges are briefly presented. The chapter ends with a discussion of the contributions, and the structure of the project is outlined.
Kubernetes is an open-source orchestration system for Docker containers. It handles scheduling onto nodes in the compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. It is a very active open-source platform with many contributors.
In Kubernetes Engine, a container cluster consists of at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system. A container cluster is the foundation of Kubernetes Engine: the Kubernetes objects that represent your containerized applications all run on top of a cluster.
In this project, we discuss some of Kubernetes' basic concepts, examine how Kubernetes is configured and deployed, and then use it to manage storage.
1.2 BACKGROUND
Kubernetes is a container orchestration platform that implements cluster networking. A Kubernetes cluster is made up of a master node and a set of worker nodes. In a production environment these run in a distributed setup on multiple nodes. For testing purposes, all the components can run on the same node (physical or virtual) by using Minikube. Kubernetes has six main components that form a functioning cluster: the API server, scheduler, controller manager, kubelet, kube-proxy, and etcd.
An advantage of Kubernetes is that when we build our microservices, we can add as many replicas as we want. So if the project expands, making changes to it does not take much effort.
Imagine you have three servers. One of the servers is overloaded and the other two are relatively free. There is the master, and there are the worker nodes on the servers, and you want to reduce the load on a particular server. When you run the command for the master to create more replicas, the load balancer decides which nodes are less loaded at the moment and spreads the load onto them. The use of Kubernetes is rather convenient from this perspective as well.
For clustering, when accessing the Kubernetes API, you will use the Kubernetes CLI, kubectl. To access a cluster, you need to know the location of the cluster and have credentials to access it. Typically, this is set up automatically when you work through a getting-started guide, or someone else has set up the cluster and provided you with credentials and a location.
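As a sketch of what this access looks like in practice (assuming kubectl is installed and a kubeconfig file with credentials has already been provided):

```shell
kubectl config view      # show the configured cluster location and user entries
kubectl cluster-info     # print the address of the Kubernetes master
kubectl get nodes        # confirm the credentials work by listing cluster nodes
```

The exact output depends on the cluster, but a successful `kubectl get nodes` confirms that both the location and the credentials are valid.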
The Docker platform now includes support for Kubernetes. This means that developers and operators can build apps with Docker and seamlessly test and deploy them using both Docker Swarm and Kubernetes.
1.3 PROBLEM STATEMENT
Nowadays we need large storage to store gigantic amounts of data, and it is very costly to use physical drives. What we need is a cloud application that stores data in the cloud. The advantage of cloud storage is universal document access: you do not need to take your documents with you. Instead, they stay in the cloud, and you can access them whenever you have a computer and an Internet connection; the documents are instantly available wherever you are. So what we need is a storage system that can hold them, but having storage raises the question of how to manage it. Managing storage is a distinct problem from managing compute. Kubernetes provides an architecture for cloud storage that can store the data in any place you want.
1.4 OBJECTIVE
The goal of this project is to deploy Kubernetes and evaluate its ability to manage and monitor a cluster and Docker containers, making it easy for a user to manage storage for any information. The project will mainly focus on the following objectives:
a) To study the Kubernetes cloud application, including the cluster container and Docker container
b) To configure and deploy Kubernetes with a cluster and Docker container
c) To test the functionality of Kubernetes in managing the storage in a cluster by leveraging Docker containers
1.5 SCOPE
The scope of the project is:
a) Firstly, we need to install and configure Kubernetes itself on Linux.
b) After installing Kubernetes, we need to manage cluster containers.
c) The cluster is a small component in Kubernetes.
d) Then, we configure the Docker container; that is, we install Docker in Kubernetes.
1.6 PROJECT CONSTRAINTS AND LIMITATIONS
a) The system can store only a small volume of data.
b) We do not have a gigantic set of real data that can be fed into the cluster environment to test its functionality.
c) The cluster focuses only on managing storage.
CHAPTER 2
LITERATURE REVIEW
2.1 INTRODUCTION
This chapter discusses the literature review for the project. It starts with an explanation of the key concepts related to the project proposal. The fundamental motivations behind the project are then stated, and the proposed solution to the project's challenges is briefly presented. The chapter ends with a discussion of the studies reviewed in this project, and its structure is outlined.
2.2 Google Cloud Platforms
Google Cloud Platform (GCP) is a collection of Google’s computing resources, made
available via services to the general public as a public cloud offering. It is a set of services
that enables developers to build, test and deploy applications on Google’s reliable
infrastructure.
2.2.1 Ubuntu
Ubuntu is an open-source, Debian-based Linux distribution. It is a community-developed Linux operating system that can be used on desktops, laptops, and servers. The operating system includes a variety of applications, including word processing, e-mail, web server software, and programming tools. Ubuntu is free of charge, including enterprise releases and security updates, and it also comes with full commercial support from Canonical. Ubuntu is available in both a desktop and a server edition.
2.3 Kubernetes
"Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers." (Kubernetes 2018, cited 14.1.2018). Kubernetes is the leading container orchestration engine in the field of containerization. Developed by Google, it uses Docker images as a basis to deploy applications into containers. With Kubernetes, containers are easily scaled up, destroyed, and remade. Compared to normal virtual machines, they are deployed faster, more efficiently, and more reliably. Docker creates the image, which is then used by Kubernetes. In a world of growing virtualization and the Internet of Things, applications and services need to be deployed quickly and efficiently. This is where Kubernetes comes in.
By operating at the container level rather than the hardware level, Kubernetes provides some features of a Platform as a Service (PaaS). PaaS usually refers to a cloud service in which the user can develop, deploy, and run applications using environments provided by the service provider. Essentially, the provider takes all the responsibility for installing and configuring the environment, so customers are free to deploy their application code to the cloud. This is what Kubernetes basically does, except that the cluster is managed by the developer or cluster administrator and can be run locally. These features include, for example, deployment, scaling, load balancing, logging, and monitoring. Kubernetes comprises a set of independent control processes that drive the current state of the deployment toward the desired result. (Kubernetes 2018, cited 14.1.2018; Meegan 2016, cited 12.2.2018.)
The reason Kubernetes was chosen instead of Docker's native cluster solution, Docker Swarm, is its scalability, portability, and self-healing attributes. Kubernetes has been around longer than Docker Swarm and therefore has much more documentation. It also has more widespread third-party application support available. (Kubernetes 2018, cited 14.1.2018; Docker 2017, cited 13.1.2018.)
2.4 Research and article
Author / Year: Michael Mishalov Avi, SAIDIAN Itamar Cohen Pini and Shlomi Erez Yaary; 2006
Article: Container Monitoring Configuration Deployment
Description: In one example in accordance with the present disclosure, a method may include determining, by a monitoring server external to a container cluster environment, that a new container has been added to the container cluster environment. The method may include receiving, by the monitoring server, cluster data from the new container and comparing, by the monitoring server, the cluster data to a plurality of configuration templates on the monitoring server. The method may also include determining, by the monitoring server, a configuration template from the plurality appropriate for the new container based on the comparison and deploying the configuration template to monitor the new container. [23]

Author / Year: Huamin Chen; 2017
Article: Lazy Persistent Storage Volume Provisioning
Description: Methods and systems for provisioning persistent storage are disclosed. For example, a new isolated guest and associated persistent storage volume are requested to be created. The isolated guest is based on an image file in an image repository and is associated with metadata. An orchestrator obtains the image file. The orchestrator reserves the persistent storage volume by updating the system resource allocation database based on the metadata. The persistent storage volume is then created in the persistent storage based on the reservation of the persistent storage volume in the system resource allocation database. The orchestrator activates the constructed isolated guest and the isolated guest accesses the persistent storage volume. [24]

Author / Year: Arredondo, J. Jugo; 2018
Article: Containerized Control Structure for Accelerators
Description: Nowadays modern accelerators are starting to use virtualization to implement their control systems. Following this idea, one of the possibilities is to use containers. Containers are highly scalable, easy to produce/reproduce, easy to share, resilient, elastic and low cost in terms of computational resources. All of those are characteristics that fit the necessities of a well-defined and versatile control system. In this paper, a control structure based on this paradigm is discussed. Firstly the technologies available for this task are briefly compared, starting from containerizing tools and following with the container orchestration technologies. As a result Kubernetes and Docker are selected. Then, the basis of Kubernetes/Docker and how it fits into the control of an accelerator is stated. Following that, the control applications suitable to be containerized are analyzed. Finally, a particular structure for an accelerator based on EPICS as middleware is sketched. [25]

Author / Year: Stefano Sebastio, Rahul Ghosh and Tridib Mukherjee; 2018
Article: An Availability Analysis Approach for Deployment Configurations of Containers
Description: Operating system (OS) containers enabling the microservice-oriented architecture are becoming popular in the context of Cloud services. Containers provide the ability to create lightweight and portable runtime environments decoupling the application requirements from the characteristics of the underlying system. Services built on containers have a small resource footprint in terms of processing, storage, memory and network, allowing a denser deployment environment. While the performance of such containers is addressed in a few previous studies, understanding the failure-repair behavior of the containers remains unexplored. In this paper, from an availability point of view, we propose and compare different configuration models for deploying a containerized software system. Inspired by Google Kubernetes, a container management system, these configurations are characterized with a failure response and migration service. We develop novel non-state-space and state-space analytic models for container availability analysis. Analytical as well as simulative solutions are obtained for the developed models. Our analysis provides insights on k-out-of-N availability and sensitivity of system availability for key system parameters. Finally, we build an open-source software tool powered by these models. The tool helps Cloud administrators to assess the availability of containerized systems and to conduct a what-if analysis based on user-provided parameters and configurations. [26]

Author / Year: Victor Marmol, Rohit Jnagal and Tim Hockin; 2015
Article: Docker Containers across Multiple Clouds and Data Centers
Description: Containers are becoming a popular way of deploying applications quickly, cheaply, and reliably. As with the early days of virtual machines, a variety of container networking configurations have been proposed to deal with the issues of discovery, flexibility and performance. Using Docker we present available networking configurations along with many of the popular networking setups, their uses, and their problems today. The second aspect we explore is containers in clusters, as these systems rely even more heavily on the network. We use Kubernetes as an example of this type of system. We present the network setups of the core components of Kubernetes: pods and services. [27]
2.5 Summary
This chapter has delivered information from the study of past research on container monitoring, persistent storage, container configuration, and articles about Kubernetes. The study focuses on guiding the development of a successful project and on producing a working configuration that will be useful to all users.
CHAPTER 3
METHODOLOGY
3.1 INTRODUCTION
This chapter introduces the methodology proposed by other researchers, which this project improves upon by presenting a framework, system model, and flowchart. For this project, a computer cluster approach is used for managing storage in Kubernetes. The chapter includes the framework, one of the artifacts needed to develop the Kubernetes deployment, and discusses the development process. It also includes details of every phase involved in the development of this project.
3.2 METHODOLOGY
To work with Kubernetes, we need to know the basic concepts of its architecture; it is easier to start on a new subject when the basic concepts are known. The concepts used in the deployment process are described in the following subsections: Cluster, Node, Master, Pod, Service, Volume, Namespace, and Ingress.
3.2.1 Cluster
A cluster is a collection of compute, storage, and networking resources that Kubernetes uses to run the various workloads that comprise your system. The Minikube single-node cluster, which is used in this project, consists of two kinds of resources: the master, which coordinates the cluster, and the nodes, which run the kubelet, the kube-proxy, and the Docker engine (FIGURE 1). (Kubernetes Cluster 2018, cited 14.1.2018.)
FIGURE 1. Kubernetes cluster depicted. (Kubernetes Cluster 2018, cited 14.1.2018.)
The master is the head of the Kubernetes cluster. It consists of multiple parts, such as an API server, a scheduler, and a controller manager. The master is responsible for the global, cluster-level scheduling of pods and for handling events. All the instructions issued via kubectl go through the API server, which redirects them to the designated worker nodes.
The worker units inside the cluster are called nodes. A node is a single host inside the cluster; it can be a physical or a virtual machine. Its purpose is to manage pods. Each node runs several Kubernetes-managed components, such as the kubelet and the kube-proxy. All the nodes are managed by the Kubernetes master, and their job is to do all the work assigned by the master.
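For the single-node Minikube cluster used in this project, these roles can be observed with a few commands (a sketch; the exact output depends on the Minikube and Kubernetes versions):

```shell
minikube start                            # create the single-node cluster
kubectl get nodes                         # the lone node acts as master and worker
kubectl get pods --namespace=kube-system  # master components run here as pods
```

On a multi-node cluster the same commands would instead list the master alongside each worker node.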
3.2.2 Pod
A pod consists of one or more containers, with shared network and storage, and instructions on how to run the containers. Pods are managed by the nodes. They are the smallest deployable units that can be created in Kubernetes. "Containers within a pod share an IP address, a port space and can find each other via localhost". Pods can be used to create vertical application stacks, such as a LAMP stack, although their main purpose is to support co-located, managed helper programs such as proxies, file and data loaders, and log and checkpoint backups. (Kubernetes pod 2018, cited 14.1.2018; Kubernetes Cluster 2018, cited 14.1.2018.)
When pods are destroyed, they are not resurrected. Instead, ReplicationControllers are designed to create and delete pods dynamically, for example when commencing rolling updates or scaling the deployment up or down. (Kubernetes service 2018, cited 14.1.2018.) This gives the processes running inside the pods good uptime: in a disaster situation, the pods are remade by the ReplicationController.
Containers within the same pod can also communicate using standard inter-process communication (IPC). Containers in different pods have distinct IP addresses and cannot communicate via IPC without specific configuration; they usually communicate via pod IP addresses. (Kubernetes Cluster 2018, cited 14.1.2018)
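As an illustration of these ideas, a minimal pod manifest with a main container and a co-located helper might look like the following (all names and image tags are illustrative, not taken from this project):

```yaml
# Hypothetical pod with two containers sharing the pod's network
# namespace, so they can reach each other via localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  containers:
  - name: web
    image: nginx:1.15        # main application container
    ports:
    - containerPort: 80
  - name: log-helper         # co-located helper, e.g. a log collector
    image: busybox:1.29
    command: ["sh", "-c", "tail -f /dev/null"]
```

Applying this file with kubectl create -f creates the pod, and both containers are scheduled together on the same node.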
3.2.3 Service
The basic idea of services is that they define a policy by which there is a way to gain access to pods. They take care of the variables needed for communication: an IP address, ports, and, for a group of pods, load balancing. When a service and a deployment are linked, the service should be started first and then the deployment. They can be deployed using one YAML file, in which the instructions are separated by a line containing several hyphens. (Abbassi 2016, cited 24.1.2018.)
Services are used to expose functionality to users or other services. Kubernetes pods do not have a long lifecycle, and when they are taken down, they are not resurrected; new pods are created dynamically in their place. Because of this, to communicate with newly created pods there is a need for a concept that abstracts away the pods, and this is achieved with services. (Kubernetes service 2018, cited 14.1.2018; Hong 2017, cited 11.01.2018.)
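A sketch of the single YAML file described above, with the service first, the separator line of hyphens, and then the deployment (the names, labels, and image are assumptions for illustration):

```yaml
# Service first, so it exists before the deployment's pods appear.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web          # routes traffic to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web      # matches the service selector above
    spec:
      containers:
      - name: web
        image: nginx:1.15
```

Because the service selects pods by label rather than by IP address, it keeps working when the deployment replaces its pods.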
3.2.4 Volumes
A Kubernetes volume is essentially a directory accessible to all containers running in a pod. In contrast to the container-local filesystem, the data in volumes is preserved across container restarts. Kubernetes uses PersistentVolumes (PV) to persist data between reboots and system catastrophes. For a PV to be used, a PersistentVolumeClaim (PVC) must be created; its job is to bind the PV to the container for which the PVC is configured. Volumes can be implemented directly on the host, through the Network File System (NFS), and by various other methods. (Kubernetes Storage 2018, cited 23.1.2018.)
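A minimal sketch of the PV/PVC pairing described above, using a host path as the backing store (the names, size, and path are illustrative):

```yaml
# A PersistentVolume backed by a directory on the host...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/pv-demo
---
# ...and the claim that binds to it, which a pod then references.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A pod mounts the claim (not the volume directly), so the storage outlives any individual container.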
3.2.5 Namespace
In Kubernetes, you can assign virtual clusters called namespaces. They are used to keep different versions of the application separate, such as development and production versions (Kubernetes namespace 2018, cited 16.01.2018). This prevents potential mix-ups between versions. A namespace is created based on the instructions inside a YAML file with, for example, the following information (FIGURE 2).
The namespaces in Kubernetes are based on Linux namespaces and have the same kind of working principle. Their purpose is to allocate resources to a specific namespace and to isolate the applications deployed in different namespaces.
FIGURE 2. Prod namespace creation instructions
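FIGURE 2 survives here only as a caption; a namespace definition of the kind it depicts might look like the following (the name prod is an assumption based on the caption):

```yaml
# Hypothetical namespace manifest; "prod" is assumed from the caption.
apiVersion: v1
kind: Namespace
metadata:
  name: prod
```

Such a file would be applied with kubectl create -f, and objects are then placed in the namespace via namespace: prod in their metadata or the --namespace=prod flag on kubectl commands.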
3.2.6 Ingresses
For Minikube to properly redirect traffic to the exposed services, an Ingress is needed. The services and pods inside the cluster network are only accessible internally. All traffic that tries to access the services is dropped or redirected by an edge router. By default, the cluster is isolated from the Internet so that the cluster network is not directly accessible from outside (Oranagwa 2017, cited 16.01.2018; Kubernetes ingress 2018, cited 10.2.2018). Ingress allows inbound connections to reach the cluster services through the edge router by creating a "hole" in it. To access a service, an Ingress resource must be created. The resources are created via YAML files (FIGURE 3). For the production namespace, the following ingress was created.
FIGURE 3. Instructions to create an Ingress resource.
To make the resource work, the cluster must have an Ingress controller. The controller is responsible for taking the resource and using its instructions to redirect traffic to the wanted destination. Minikube versions v0.14.0 and above come with an Nginx ingress set up as an addon. It is enabled by typing minikube addons enable ingress in the Linux terminal. (Oranagwa 2017, cited 16.01.2018.)
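Since FIGURE 3 survives only as a caption, a hypothetical Ingress resource of the kind described might look like the following (the namespace, host, and service names are placeholders, and the extensions/v1beta1 API group reflects Kubernetes versions current at the time of writing):

```yaml
# Hypothetical Ingress routing external traffic to a service
# in the prod namespace; all names are illustrative.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prod-ingress
  namespace: prod
spec:
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80
```

With the Minikube ingress addon enabled, the Nginx controller reads this resource and forwards requests for the host to the named service.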
3.3 Kubernetes Architecture
FIGURE 4: Kubernetes Architecture
Based on FIGURE 4, the Kubernetes architecture consists of two types of control plane nodes and seven types of components, which are:
Kubernetes control plane:
Kubernetes Master / Master Node
Kubernetes Nodes / Worker Nodes
Types of component:
etcd
API server
Scheduler
Controller manager
Docker
Kubelet
Kube-proxy
3.3.1 Kubernetes Master/ Master Node
The Kubernetes master is responsible for maintaining the desired state for your cluster.
When you interact with Kubernetes, such as by using the kubectl command-line interface,
you’re communicating with your cluster’s Kubernetes master.
The “master” refers to a collection of processes managing the cluster state. Typically these
processes are all run on a single node in the cluster, and this node is also referred to as the
master. The master can also be replicated for availability and redundancy. The components of
the master can be run on any node in the cluster. Below is a breakdown of each of the key
components of the master:
API Server – This is the only component of the Kubernetes control plane with a user-accessible API and the sole master component that you will interact with. The API server exposes a RESTful Kubernetes API and consumes JSON manifest files.
Cluster Data Store – Kubernetes uses "etcd". This is a strongly consistent and highly available key-value store that Kubernetes uses for persistent storage of all API objects. Think of it as the "source of truth" for the cluster.
Controller Manager – Known as the kube-controller-manager, this runs all the controllers that handle routine tasks in the cluster, including the Node Controller, Replication Controller, Endpoints Controller, and Service Account and Token Controllers. Each of these controllers works separately to maintain the desired state.
Scheduler – The scheduler watches for newly-created pods (groups of one or more
containers) and assigns them to nodes.
Dashboard (optional) – Kubernetes’ web UI that simplifies the Kubernetes cluster
user’s interactions with the API server.
3.3.2 Kubernetes Nodes/ Worker Node
The nodes in a cluster are the machines (VMs, physical servers, etc.) that run your
applications and cloud workflows. The Kubernetes master controls each node; you’ll rarely
interact with nodes directly. Whereas the master handles and manages the cluster, worker
nodes run the containers and provide the Kubernetes runtime environment.
Each worker node runs a kubelet, the primary node agent. It watches the API server for
pods that have been assigned to its node, carries out their tasks, and maintains a reporting
backchannel of pod status to the master node.
Inside each pod there are containers; kubelet runs these via Docker (pulling images,
starting and stopping containers, etc.). It also periodically executes any requested container
liveness probes. In addition to Docker, rkt is also supported, and the community is actively
working to support OCI.
Another component of worker nodes is kube-proxy. This is the network brain of the node,
maintaining network rules on the host and performing connection forwarding. It’s also
responsible for load balancing across all pods in the service.
3.3.3 etcd
etcd is used as Kubernetes’ backing store. All cluster data is stored here. Always have a backup
plan for etcd’s data for your Kubernetes cluster. It provides a REST API for CRUD operations as
well as an interface to register watchers on specific nodes, which enables a reliable way to notify
the rest of the cluster about configuration changes.
3.3.4 API Server
The API server is the entry point for all the REST commands used to control the cluster. The
Kube API Server exposes the Kubernetes API and is the front end for the Kubernetes control
plane. It is designed to scale horizontally – that is, it scales by deploying more instances.
3.3.5 Scheduler
Kubernetes Scheduler watches newly created pods that are not assigned to any node and selects a
node for them to run on. The scheduler has the information about resources available on the
members of the cluster, and also the ones required for the configured service to run. The
scheduler is able to decide where to deploy a specific service based on the information it has.
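The resource information the scheduler acts on comes from the requests declared in each pod spec. A minimal sketch (the pod name, container name, and image are illustrative) shows what the scheduler matches against free node capacity:

```yaml
# Illustrative pod: the scheduler will only bind it to a node with at
# least 250 millicores of CPU and 128Mi of memory unreserved.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
```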
3.3.6 Control Manager
Controller manager runs controllers, which are the background threads that handle routine tasks
in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are
all compiled into a single binary and run in a single process.
3.3.7 Docker
Docker is used for running containers. Docker runs on each of the worker nodes, and runs the
configured pods. It takes care of downloading the images and starting the containers.
3.3.8 Kubelet
kubelet is the primary node agent. It watches for pods that have been assigned to its node (either
by API server or via local configuration file) and:
Mounts the pod’s required volumes.
Downloads the pod’s secrets.
Runs the pod’s containers via docker (or, experimentally, rkt).
Periodically executes any requested container liveness probes.
Reports the status of the pod back to the rest of the system, by creating a mirror pod if
necessary.
Reports the status of the node back to the rest of the system.
3.3.9 Kubernetes Proxy
Kubernetes Proxy acts as a network proxy and a load balancer for a service on a single worker
node. It takes care of the network routing for TCP and UDP packets. Kube-Proxy enables the
Kubernetes service abstraction by maintaining network rules on the host and performing
connection forwarding.
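The service abstraction that kube-proxy maintains rules for is declared as a Service resource. A minimal sketch (the name, label, and ports are illustrative) looks like this:

```yaml
# Illustrative Service: kube-proxy forwards connections arriving on the
# service's port 80 to port 8080 of any pod labelled app: web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```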
3.4 Persistent Volumes
Kubernetes provides an API to separate storage from computation, i.e., a pod can perform
computations while the files in use are stored on a separate resource. The API introduces two
types of resources:
PersistentVolumes are used to define a storage volume in the system, but their lifecycle is
independent of that of the pods that use them. PersistentVolumes are volume plugins,
and the API supports a large variety of implementations, including NFS, Glusterfs, and
CephFS, as well as cloud providers such as GCEPersistentDisk, AWSElasticBlockStore,
AzureFile, and AzureDisk, amongst others. To understand what a Persistent Volume (PV)
is, it is useful to know how Kubernetes manages storage resources: it has a matching
primitive for each of the traditional storage operational activities (provisioning,
configuring, attaching). Kubernetes persistent volumes are administrator-provisioned
volumes. A persistent volume is provisioned either dynamically or by an administrator:
Created with a particular filesystem
Has a particular size
Has identifying characteristics such as volume IDs and a name
In order for pods to start using these volumes, they need to be claimed (via a persistent
volume claim) and the claim referenced in the spec for a pod. A Persistent Volume Claim
describes the amount and characteristics of the storage required by the pod; Kubernetes
finds any matching persistent volumes and claims them. Storage Classes describe default
volume information (filesystem, size, block size, etc.).
PersistentVolumeClaims are requests emitted by pods to obtain a volume. Once obtained,
the volume is mounted on a specific path in the pod while providing an abstraction to the
underlying storage system. A claim may specify a storage class name attribute to obtain a
PersistentVolume that satisfies the specific needs of the pod.
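As a sketch of the claim-and-mount cycle described above (all names are illustrative, and the storage class is assumed to exist), a claim and a pod that consumes it could look like this:

```yaml
# Illustrative claim: requests 5Gi of single-node read/write storage
# from the "default-storage" class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  storageClassName: default-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Illustrative pod: references the claim by name and mounts the bound
# volume at /data inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim
```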
3.4.1 Kubernetes Volumes vs. Persistent Volumes
There are currently two types of storage abstractions available with Kubernetes:
Volumes
Persistent Volumes
A Kubernetes volume exists only while the containing pod exists. Once the pod is deleted, the
associated volume is also deleted. As a result, Kubernetes volumes are useful for storing
temporary data that does not need to exist outside of the pod’s lifecycle. Kubernetes persistent
volumes remain available outside of the pod lifecycle – this means that the volume will remain
even after the pod is deleted. It is available to claim by another pod if required, and the data is
retained. Kubernetes persistent volumes are used in situations where the data needs to be retained
regardless of the pod lifecycle. Kubernetes volumes are used for storing temporary data.
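The pod-scoped behaviour can be seen with an emptyDir volume, the simplest Kubernetes volume type: the directory is created when the pod starts and deleted together with the pod. The names in this sketch are illustrative:

```yaml
# Illustrative pod: /scratch holds temporary data that disappears
# as soon as the pod is deleted.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}
```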
3.5 FRAMEWORK
Based on FIGURE 5, which shows the framework for Kubernetes: a framework is often a layered
structure indicating what kinds of programs can or should be built and how they interrelate.
A framework may be a set of functions within a system and how they interrelate, the layers of
the system, the layers of a network, and many more.
FIGURE 5: Framework
TYPE FUNCTION
MASTER The Master coordinates the cluster.
WORKER Worker nodes are the workers that run the applications.
Table 3: Master and Node architecture function
Based on Table 3 the Master is responsible for managing the cluster. The master coordinates all
activities in your cluster, such as scheduling applications, maintaining applications' desired state,
scaling applications, and rolling out new updates. The API server exposes a highly-configurable
REST interface to all of the Kubernetes resources. The Scheduler's main responsibility is to place
the containers on the node in the cluster according to various policies, metrics, and resource
requirements. It is also configurable via command line flags. Finally, the Controller Manager is
responsible for reconciling the state of the cluster with the desired state, as specified via the API.
In effect, it is a control loop that performs actions based on the observed state of the cluster and
the desired state. The master node supports a multi-master highly-available setup. The schedulers
and controller managers can elect a leader, while the API servers can be fronted by a load-
balancer. Based on worker nodes, the kubelet interacts with the underlying Docker engine to
bring up containers as needed. The Kube-proxy is in charge of managing network connectivity to
the containers.
3.5.1 POD FLOW ON KUBERNETES
FIGURE 6: Pod flow
Referring to the pod flow above (FIGURE 6), it shows the flow of the process by which the
Kubernetes components connect when a pod is created. There are 11 states in the flow. Each
of the processes is related and very important.
1) kubectl writes to the API Server.
2) API Server validates the request and persists it to etcd.
3) etcd notifies back the API Server.
4) API Server invokes the Scheduler.
5) Scheduler decides where to run the pod on and return that to the API Server.
6) API Server persists it to etcd.
7) etcd notifies back the API Server.
8) API Server invokes the Kubelet in the corresponding node.
9) Kubelet talks to the Docker daemon using the API over the Docker socket to create the
container.
10) Kubelet updates the pod status to the API Server.
11) API Server persists the new state in etcd.
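State 1 starts from a manifest handed to kubectl. A minimal pod such as the following sketch (the name and image are illustrative) would pass through all eleven states when submitted with kubectl create -f:

```yaml
# Illustrative pod manifest: kubectl POSTs it to the API server (state 1),
# after which etcd, the Scheduler, and the Kubelet take over (states 2-11).
apiVersion: v1
kind: Pod
metadata:
  name: flow-demo
spec:
  containers:
  - name: app
    image: nginx
```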
3.6 Define Requirements
3.6.1 Software Requirements
Based on Table 1 and Table 2, the hardware and software requirements for
developing this project are needed because a number of small and large tools are used to
test and implement the required functions of the application. Table 1 shows the software
used in this project, while Table 2 shows the hardware that was used to set up the
software.
SOFTWARE DESCRIPTION
Microsoft Word As a platform for writing the documentation
Google Cloud Platform As the cloud platform providing the virtual
machine instances on which the Kubernetes
cluster is set up and run
Mozilla Firefox Web browser used to search for required data
Table 1: Software Requirement
3.6.2 Hardware Requirements
HARDWARE DESCRIPTION
Laptop As a machine to access the Google Cloud Platform
Table 2: Hardware Requirement
3.7 SUMMARY
This chapter gives a full description of the methodology together with the software and
hardware specifications used. Every phase of this project is based on the project methodology
mentioned above. On the other hand, the details of the software and hardware specifications are
also listed in this chapter. The framework of the system provides a clear system flow to the user.
This ensures that the functionality of the system fulfills the user and system requirements. The
system only works successfully according to user expectations if and only if the framework is
right and complete.
CHAPTER 4
IMPLEMENTATION AND RESULT
4.1 INTRODUCTION
This chapter discusses the implementation and the results of the project. It starts with the
implementation of the Google Cloud Platform, followed by the installation of Kubernetes with
its master and worker nodes. The second part covers managing storage by using Kubernetes.
4.2 Google Cloud Platform
Using the Google Cloud Platform (APPENDIX), we first need to enable nested virtualization
on a VM. Nested virtualization refers to virtualization that runs inside an already virtualized
environment; in other words, it is the ability to run a hypervisor inside of a virtual machine
(VM), which itself runs on a hypervisor. Nested virtualization is enabled using the API or the
gcloud component: we must create a custom image with a special license key that enables VMX
in the L1, or host, VM instance and then start a VM instance using that image in a zone that
supports Haswell or later processors.
1. Based on FIGURE 7, create a custom image with the special license key required for
virtualization.
gcloud compute images create kubernetesfyp \
--source-disk fyp --source-disk-zone asia-southeast1-b \
--licenses https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx
FIGURE 7: Custom Image for Virtualization
2. Based on FIGURE 8, create a VM instance with the custom image in a zone that supports
Haswell or higher.
gcloud compute instances create kubernetesfyp --zone asia-southeast1-b \
--image vm2
FIGURE 8: Create a VM instance
3. Confirm that nested virtualization is enabled in the VM.
a. Connect to the VM instance. For example:
gcloud compute ssh kubernetesfyp
b. Check that nested virtualization is enabled by running the following command. A
non-zero response confirms that nested virtualization is enabled.
grep -cw vmx /proc/cpuinfo
4.3 Ubuntu On Google Cloud Platform
4.3.1 Install KVM2 driver
Minikube uses Docker Machine to manage the Kubernetes VM so it benefits from the driver
plugin architecture that Docker Machine uses to provide a consistent way to manage various
VM providers. Minikube embeds VirtualBox and VMware Fusion drivers so there are no
additional steps to use them. However, other drivers require an extra binary to be present in
the host PATH. The KVM2 driver is intended to replace the KVM driver. It is maintained
by the minikube team, and is built, tested, and released with minikube.
Based on FIGURE 9, we need to install libvirt and qemu-kvm on the system:
$ sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm
# Add yourself to the libvirt group so you don't need to sudo
$ sudo usermod -a -G libvirt $(whoami)
# Update your current session for the group change to take effect
$ newgrp libvirt
FIGURE 9: Install libvirt and qemu-kvm for the KVM2 driver
Based on FIGURE 10, install the driver itself:
curl -Lo docker-machine-driver-kvm2
https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
&& chmod +x docker-machine-driver-kvm2 \
&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
&& rm docker-machine-driver-kvm2
FIGURE 10: Install the driver
Minikube is a lightweight Kubernetes implementation that creates a VM on your local
machine and deploys a simple cluster containing only one node. To use the driver, start
Minikube with it as shown in FIGURE 11.
minikube start --vm-driver kvm2
FIGURE 11: Start Minikube
4.3.2 Install Helm and Tiller
The easiest way to run and manage applications in a Kubernetes cluster is using
Helm. Helm allows you to perform key operations for managing applications such as
install, upgrade or delete. As previously mentioned, Helm is composed of two parts:
Helm (the client) and Tiller (the server).
Installing Helm
Helm is a Kubernetes-based package installer. It manages Kubernetes “charts”, which
are “preconfigured packages of Kubernetes resources.” Helm enables you to easily
install packages, make revisions, and even roll back complex changes. FIGURE 12
shows the installation of Helm. To install Helm, we need to use this command:
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
FIGURE 12: Installing Helm
Installing Tiller
Tiller is a server that runs in the Kubernetes cluster as another pod. We can install Tiller
in the cluster with “helm init”. Based on FIGURE 13, to complete the installation of
Tiller, run the command below:
$ helm init
Check if Tiller is correctly installed by checking the output of kubectl get pods as
shown
below:
$ kubectl --namespace kube-system get pods | grep tiller
tiller-deploy-f9b8476d-6cbjf 1/1 Running 0 4d
FIGURE 13: Installing tiller
4.3.3 Install WordPress
This will install and run WordPress in a container and MariaDB in a container for the
database. This is known as a “Pod” in Kubernetes. A Pod is basically an abstraction that
represents a group of one or more application containers and some shared resources for
those containers (e.g. storage volumes, networking, etc.).
We give the release a namespace and a name to keep things organized and make them
easy to find. We also set the serviceType to NodePort. This is important because, by
default, the service type will be set to LoadBalancer and, as we currently don’t have a
load balancer for our cluster, we wouldn’t be able to access our WordPress site from
outside the cluster.
In the last part of the output from this command, you will notice some helpful
instructions on how to access your WordPress site. To install WordPress (FIGURE 14),
use the command below:
$ helm install stable/wordpress -n fyp2
FIGURE 14: Install WordPress
4.3.4 Manage Storage on Kubernetes
To manage storage, use NFS persistent storage
Create NFS server
To enable NFS volume provisioning on a UCP cluster, we need to install an NFS server.
Google provides an image for this purpose. On any node in the cluster with a UCP client
bundle, copy the following yaml to a file named nfs-server.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  namespace: default
  labels:
    role: nfs-server
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  nodeSelector:
    node-role.kubernetes.io/master: ""
  containers:
  - name: nfs-server
    image: gcr.io/google_containers/volume-nfs:0.8
    securityContext:
      privileged: true
    ports:
    - name: nfs-0
      containerPort: 2049
      protocol: TCP
  restartPolicy: Always
Run the following command to create the NFS server pod.
kubectl create -f nfs-server.yaml
The default storage class needs the IP address of the NFS server pod. Run the following
command to get the pod’s IP address.
kubectl describe pod nfs-server | grep IP:
The result looks like this:
IP: 127.17.0.4
Create the default storage class
To enable NFS provisioning, create a storage class that has the
storageclass.kubernetes.io/is-default-class annotation set to true. Also, provide the IP address of
the NFS server pod as a parameter. Copy the following yaml to a file named
default-storage.yaml. Replace <nfs-server-pod-ip-address> with the IP address from the
previous step. (FIGURE 15)
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: default
  name: default-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/nfs
parameters:
  path: /
  server: <nfs-server-pod-ip-address>
FIGURE 15: Enable NFS provisioning with file named default-storage.yaml
Run the following command to create the default storage class. (FIGURE 16)
kubectl create -f default-storage.yaml
Confirm that the storage class was created and that it’s assigned as the default for the cluster.
kubectl get storageclass
FIGURE 16: Create the default storage class
spec.nfs -- The list of the NFS-specific options. Here, we say that the NFS volume should
be mounted from the /tmp path of a server with the IP 172.15.0.6.
Create persistent volumes
Create two persistent volumes based on the default-storage storage class. One volume is for the
MySQL database, and the other is for WordPress. To create an NFS volume,
specify storageClassName: default-storage in the persistent volume spec. Copy the following
yaml to a file named local-volumes.yaml.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
spec:
  storageClassName: default-storage
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-2
  labels:
    type: local
spec:
  storageClassName: default-storage
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-2
Run this command to create the persistent volumes. (FIGURE 17)
kubectl create -f local-volumes.yaml
FIGURE 17: Create the persistent volumes
spec.capacity -- Storage capacity of the PV. In this example, our PV has a capacity of
20Gi (gibibytes). The capacity property uses the same units as defined in
the Kubernetes resource model. It allows users to represent storage as unadorned
integers or as fixed-point integers with one of these SI suffixes (E, P, T, G, M, K,
m) or their binary equivalents (Ei, Pi, Ti, Gi, Mi, Ki). Currently, Kubernetes users
can only request storage size. However, future attributes may include throughput,
IOPS, etc.
spec.accessModes -- Defines how the volume can be accessed. (Note: valid values
vary across persistent storage providers.) In general, the supported field values are:
o ReadWriteOnce -- the volume can be mounted as a read/write volume only
by a single node.
o ReadOnlyMany -- many nodes can mount the volume as read-only.
o ReadWriteMany -- many nodes can mount the volume as read-write.
Note: a volume can be only mounted with one access mode at a time, even if it supports many.
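To make the capacity units above concrete, the following short Python sketch (illustrative only, not part of Kubernetes) converts quantity strings with binary or SI suffixes to bytes:

```python
# Illustrative only: how the binary (Ki, Mi, Gi, ...) and decimal (K, M, G, ...)
# suffixes used in Kubernetes resource quantities translate to bytes.
BINARY = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40, "Pi": 2**50, "Ei": 2**60}
DECIMAL = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12, "P": 10**15, "E": 10**18}

def quantity_to_bytes(q: str) -> int:
    """Convert a quantity string such as '20Gi' or '500M' to bytes."""
    # Check two-character binary suffixes before one-character decimal ones,
    # so "20Gi" matches "Gi" rather than "G".
    for suffix, factor in {**BINARY, **DECIMAL}.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # a plain integer already means bytes

print(quantity_to_bytes("20Gi"))  # 21474836480
```

So the 20Gi capacity requested for each persistent volume above corresponds to 20 × 2^30 bytes.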
4.4 SUMMARY
In this chapter, the managing storage in cluster and Docker container by using
Kubernetes project is tested to check whether it fulfills the functional requirements or not.
All the processes are done to make sure the system is working and to avoid any problems
in the future.
CHAPTER 5
CONCLUSION
5.1 INTRODUCTION
This chapter discusses the conclusion, constraints, and future work of this project.
The conclusion discusses the outcome of the system for the users that use it. Project
constraints state all the difficulties that were faced throughout the development of
the project. Future work discusses suggestions for future projects, and the
conclusion wraps up the project.
5.2 PROJECT CONSTRAINTS
There are several problems and limitations that occurred throughout the development
of the system. These problems and limitations in conducting this study are:
The system can only store a small volume of data
There is no gigantic set of real data that can be fed into the cluster environment to
test its functionality
The cluster only focuses on managing storage
5.3 FUTURE WORK
For future work, there are a few suggestions that can be made to upgrade the system to
be more efficient. One suggestion that needs to be considered is service discovery and
load balancing: there is no need to modify your application to use an unfamiliar service
discovery mechanism, because Kubernetes gives containers their own IP addresses and a
single DNS name for a set of containers and can load-balance across them.
Other than that, self-healing: Kubernetes restarts containers that fail, replaces and
reschedules containers when nodes die, kills containers that don't respond to your
user-defined health check, and doesn't advertise them to clients until they are ready to
serve.
5.4 CONCLUSION
This project, managing storage in a cluster and Docker container, uses Kubernetes, an
open-source system for automating deployment, scaling, and management of
containerized applications. Based on the previous study and discussions with the
supervisor and panels, the suitable approach implemented in this project is the computer
cluster approach, which is a set of computing nodes that work together and can be
loosely viewed as a single system. Last but not least, Kubernetes can provide cloud
storage for the user and help the user to manage their storage by using the Kubernetes
platform.
Persistent volumes are powerful abstractions that enable user access to the diverse
storage types supported by the Kubernetes platform. Using PVs you can attach and
mount almost any type of persistent storage, such as object-, file-, or network-level
storage, to your pods and deployments. In addition, Kubernetes exposes a variety of
storage options such as capacity, reclaim policy, volume modes, and access modes,
making it easy for you to adjust different storage types to your particular application's
requirements and needs. Kubernetes makes sure that your PVC is always bound to the
right volume type available in your cluster, enabling the efficient usage of resources,
high availability of applications, and integrity of your data across pod restarts and
node failures.
APPENDIX
REFERENCES
[1] Kubernetes. (2018). What is Kubernetes? Cited 14.1.2018,
https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
[2] Kubernetes cluster. (2018). Cluster Intro. Cited 14.1.2018,
https://kubernetes.io/docs/tutorials/kubernetes-basics/cluster-intro/
[3] Kubernetes ingress. (2018). Ingress. Cited 10.2.2018,
https://kubernetes.io/docs/concepts/services-networking/ingress/
[4] Kubernetes kompose. (2018). Translate a Docker Compose File to Kubernetes Resources.
Cited 14.1.2018,
https://kubernetes.io/docs/tools/kompose/user-guide/#kompose-convert
[5] Kubernetes namespace. (2018). Namespaces. Cited 16.01.2018,
https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
[6] Kubernetes pod. (2018). Pods. Cited 14.01.2018,
https://kubernetes.io/docs/concepts/workloads/pods/pod/
[7] Kubernetes quickstart. (2018). Getting started guide: Running Kubernetes Locally via
Minikube. Cited 11.01.2018, https://kubernetes.io/docs/getting-started-guides/minikube/
[8] Kubernetes service. (2018). Services. Cited 14.01.2018,
https://kubernetes.io/docs/concepts/services-networking/service/
[9] Kubernetes storage. (2018). Dynamic Provisioning and Storage classes in Kubernetes. Cited
23.01.2018, http://blog.kubernetes.io/2017/03/dynamic-provisioning-and-storage-
classeskubernetes.html
[10] Kubernetes volume. (2018). Persistent Volumes. Cited 22.01.2018,
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[11] Oranagwa, O. (2017). Setting up Ingress on Minikube. Cited 16.01.2018,
https://medium.com/@Oskarr3/setting-up-ingress-on-minikube-6ae825e98f82
[12] Abbassi, P. (2016) Understanding Basic Kubernetes Concepts III – Services give you
abstraction. Cited 24.01.2018, https://blog.giantswarm.io/basic-kubernetes-concepts-iii-services-
giveabstraction
[13] https://codingcompiler.com/tutorial/kubernetes-components/
[14] https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
[15]https://www.theseus.fi/bitstream/handle/10024/146845/Moilanen_Miika_Opinnaytetyo.pdf?
sequence=1&isAllowed=y
[16] Docker container. (2018). What container? Cited 13.1.2018,
https://www.docker.com/whatcontainer
[17] https://medium.com/jorgeacetozi/kubernetes-master-components-etcd-api-server-controller-
manager-and-scheduler-3a0179fc8186
[18] https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances
[19] https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver
[20] https://docs.bitnami.com/kubernetes/get-started-kubernetes/#step-4-install-helm-and-tiller
[21] https://hub.kubeapps.com/charts/stable/wordpress
[22] https://supergiant.io/blog/persistent-storage-with-persistent-volumes-in-kubernetes
[23] Mishalov, Michael, et al. "Container monitoring configuration deployment." U.S. Patent
Application No. 15/234,662.
https://patents.google.com/patent/US20180048545A1/en
[24] Chen, Huamin. "Lazy persistent storage volume provisioning." U.S. Patent Application No.
15/885,203.
https://patents.google.com/patent/US20180181436A1/en
[25] Jugo, Josu. "Containerized Control Structure for Accelerators." (2018): TUPHA170.
http://inspirehep.net/record/1656253/references
[26] Sebastio, Stefano, Rahul Ghosh, and Tridib Mukherjee. "An Availability Analysis Approach
for Deployment Configurations of Containers." IEEE Transactions on Services
Computing(2018).
https://ieeexplore.ieee.org/abstract/document/8242669/
[27] Abdelbaky, Moustafa, et al. "Docker containers across multiple clouds and data
centers." Utility and Cloud Computing (UCC), 2015 IEEE/ACM 8th International Conference
on. IEEE, 2015.
https://ieeexplore.ieee.org/abstract/document/7431433/