Post on 21-Jan-2017
Agenda
● Introducing Docker
● Docker Components
● What Docker Isn’t
● Docker Deployment Framework
● Working with Docker Images & Dockerfile
● Using Docker for CI/CD
What is Docker?
● Docker is an open-source engine that automates the deployment of applications
into containers
● For developers, it means that they can focus on writing code without worrying
about the system that it will ultimately be running on.
● At the core of the Docker solution is a registry service to manage images and the
Docker Engine to build, ship, and run application containers.
Docker Client and Server
Docker is a client-server application.
The Docker client talks to the Docker server or daemon, which, in turn, does all the
work.
You can run the Docker daemon and client on the same host or
connect your local Docker client to a remote daemon running on another host.
Docker Images
Containers are launched from images. Images are the "build" part of Docker's life cycle.
They are a layered format, using Union file systems, that are built step-by-step using a
series of instructions.
Images can be considered as "source code" for your containers.
They are highly portable and can be shared, stored, and updated.
Docker Registries
Docker stores the images you build in registries.
There are two types of registries: public and private.
Docker, Inc., operates the public registry for images, called the Docker Hub.
Docker Containers
Docker helps you build and deploy containers inside of which you can package your
applications and services.
Containers are launched from images and can contain one or more running processes.
Images are the building or packaging aspect of Docker; containers are the running
or execution aspect.
A Docker container is:
• An image format.
• A set of standard operations.
• An execution environment.
What Docker is not
● Enterprise Virtualization Platform
● Cloud Platform
● Configuration Management
● Deployment Framework
Not an Enterprise Virtualization Platform
A container is not a virtual machine in the traditional sense.
Virtual machines contain a complete operating system, running on top of the host operating
system.
The biggest advantage of this approach is that it is easy to run many virtual machines with
radically different operating systems on a single host.
With containers, both the host and the containers share the same kernel. This means that
containers utilize fewer system resources, but must be based on the same underlying operating
system (i.e., Linux).
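The shared-kernel point can be checked directly: a container reports the host's kernel release. A minimal sketch, guarded so it is safe to run where Docker is not installed:

```shell
# Containers share the host kernel: both commands print the same kernel release.
host_kernel=$(uname -r)
echo "host kernel: $host_kernel"
if command -v docker >/dev/null 2>&1; then
  docker run --rm ubuntu uname -r   # prints the same release as the host
fi
```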
Not a Cloud Platform
Like a cloud platform, Docker allows applications to be horizontally scaled in response to
changing demand. Docker, however, is not a cloud platform.
It only handles deploying, running, and managing containers on pre-existing Docker hosts.
It doesn’t allow you to create new host systems (instances), object stores, block storage, and the
many other resources that are typically associated with a cloud platform.
Not a Configuration Management Tool
Although Docker can significantly improve an organization's ability to manage applications and
their dependencies, it does not directly replace more traditional configuration management.
Dockerfiles define how a container should look at build time,
but they do not manage the container's ongoing runtime state, and they cannot be used to manage
the Docker host system itself; tools such as Puppet, Chef, or Ansible are still needed to
provision hosts and to handle configuration that changes after build time.
Not a Deployment Framework
Docker eases many aspects of deployment by creating self-contained container images that
encapsulate all the dependencies of an application
and can be deployed, in all environments, without changes.
However, Docker can’t be used to automate a complex deployment process by itself.
Docker Deployment Framework
Docker preaches an approach of "batteries included but removable."
By using an image repository, Docker allows the responsibility of building the application image
to be separated from the deployment and operation of the container.
Working with Docker Images
A Docker image is made up of filesystems layered over each other.
When a container is launched from an image, Docker mounts a read-write filesystem on top of these layers.
This is where whatever processes we want our Docker container to run will execute.
Docker Daemon
/usr/bin/docker daemon -H unix:///var/run/docker.sock
or: service docker start
docker ps -a -- to see all containers, including stopped ones
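A quick sanity check that the daemon is up, and the two forms of listing containers, can be sketched as follows (guarded so the snippet is harmless where Docker is not installed):

```shell
# Check whether the Docker daemon is reachable, then list containers.
status=not-installed
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  status=running
  docker ps      # running containers only
  docker ps -a   # all containers, including stopped ones
fi
echo "daemon status: $status"
```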
Working with Docker Images
1. Pulling Images
docker pull centos
2. Searching Images
docker search puppet
3. Listing Images
docker images
Working with Docker Images
a. Running a container: docker run nginx echo bye
b. Running an interactive container: docker run -ti ubuntu bash
c. Starting a stopped container: docker start <container-id> or <name>
d. Attaching to a container: docker attach <container-id> or <name>
e. Daemonized container: docker run -d --name ng_cont_2 ubuntu bash -c "while true; do echo hello; sleep 1; done"
f. Inspecting a container's output: docker logs <container-id> or <name>
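The operations above fit together as one lifecycle. A sketch of that session, with an illustrative container name and a guard in case Docker is unavailable:

```shell
# Run, inspect, stop, restart, and remove a container ("looper" is illustrative).
if command -v docker >/dev/null 2>&1; then
  docker run -d --name looper ubuntu \
    bash -c 'while true; do echo hello; sleep 1; done'   # e. daemonized container
  docker logs looper    # f. inspect the container's output
  docker stop looper
  docker start looper   # c. restart the stopped container
  docker rm -f looper   # clean up
fi
lifecycle=sketched
```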
What is a Dockerfile?
● A Dockerfile is a text document that contains all the commands a user could call
on the command line to assemble an image.
● The Docker daemon runs the instructions in the Dockerfile one-by-one,
committing the result of each instruction to a new image if necessary, before
finally outputting the ID of your new image.
● Note that each instruction is run independently, and causes a new image to be
created - so RUN cd /tmp will not have any effect on the next instructions.
● Whenever possible, Docker will re-use the intermediate images (cache), to
accelerate the docker build process significantly. This is indicated by the "Using
cache" message in the console output.
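The caching behaviour can be observed by building the same Dockerfile twice: the second build prints "Using cache" for unchanged steps. A self-contained sketch (the Dockerfile is written via a heredoc, and the image tag is illustrative):

```shell
# Build the same image twice to watch the build cache in action.
mkdir -p cachedemo
cat > cachedemo/Dockerfile <<'EOF'
FROM ubuntu:15.10
RUN apt-get update
RUN apt-get install -y nginx
EOF
if command -v docker >/dev/null 2>&1; then
  docker build -t cachedemo cachedemo             # first build: every step runs
  docker build -t cachedemo cachedemo             # second build: "Using cache"
  docker build --no-cache -t cachedemo cachedemo  # force a full rebuild
fi
```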
Building our own images
Create a Repo
❏ mkdir staticweb
❏ cd staticweb
❏ touch Dockerfile
Create Dockerfile
# Version: 0.0.1
FROM ubuntu:15.10
MAINTAINER Name "e@mail.id"
ENV REFRESHED_AT 2014-07-01
RUN apt-get update
RUN apt-get install -y nginx
RUN echo 'Hi, I am in your container' \
    > /usr/share/nginx/html/index.html
EXPOSE 80
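With the Dockerfile in place, the build-and-run round trip looks like this (names follow the slides; the docker commands are guarded so the sketch is safe where Docker or the staticweb directory is absent):

```shell
# Build the static image and run it, publishing port 80 to a random host port.
TAG=static
if command -v docker >/dev/null 2>&1 && [ -d staticweb ]; then
  cd staticweb
  docker build -t "$TAG" .
  docker run -d -p 80 --name static_main "$TAG" nginx -g 'daemon off;'
  docker port static_main 80   # shows which host port -p 80 was mapped to
fi
```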
Building our own images
● Building an Image
○ docker build . (builds the image from the current directory's context)
○ docker build -t static . (builds the image and tags it "static")
● Pushing an image
○ docker push <image name>
● Remove an image
○ docker rmi <image name>
● Running our image
○ docker run -d -p 80 --name static_main static nginx -g "daemon off;"
● RUN command
○ RUN <command>
○ The RUN instruction will execute any commands in a new layer on top of the current image and commit the results.
The resulting committed image will be used for the next step in the Dockerfile
● ADD Command
○ The ADD instruction copies new files, directories or remote file URLs from <src> and adds them to the filesystem of
the container at the path <dest>.
○ The <dest> is an absolute path, or a path relative to WORKDIR, into which the source will be copied inside the
destination container.
● WORKDIR Command
○ WORKDIR /path/to/workdir
○ The WORKDIR instruction sets the working directory for any RUN and ADD instructions that follow it in the
Dockerfile.
○ The WORKDIR instruction can resolve environment variables previously set using ENV. You can only use environment
variables explicitly set in the Dockerfile. For example:
○ ENV DIRPATH /path
WORKDIR $DIRPATH/$DIRNAME
RUN pwd
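The RUN, ADD, WORKDIR, and ENV behaviour described above can be captured in one small sketch Dockerfile (written via a heredoc so the example is self-contained; the paths and names are illustrative):

```shell
# A Dockerfile exercising RUN, ADD, WORKDIR and ENV together.
mkdir -p instrdemo
cat > instrdemo/Dockerfile <<'EOF'
FROM ubuntu:15.10
ENV DIRPATH /path
WORKDIR $DIRPATH
RUN pwd
ADD index.html /usr/share/nginx/html/
EOF
echo 'hello' > instrdemo/index.html   # the file ADD expects in the build context
if command -v docker >/dev/null 2>&1; then
  docker build -t instrdemo instrdemo  # the RUN pwd step executes in /path
fi
```

WORKDIR resolves $DIRPATH because ENV set it earlier in the same Dockerfile; ADD would fail if index.html were missing from the build context, which is why the sketch creates it first.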
Using Docker for CI/CD
Continuous Integration: Continuous Integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day.
Continuous Delivery / Deployment:
Continuous Delivery / Deployment is described as the logical evolution of continuous integration: Always be able to put a product into production!
Using Docker for CI/CD
● Traditional Release Cycle
○ Following the "old-school" release approach means shipping a release after a certain amount of time
(say, every 6 months). We have to package the release, test it, set up or update the necessary
infrastructure, and finally deploy it on the server.
What are the problems with this approach?
● The release process is done rarely. Consequently, we are barely practiced in
releasing, and mistakes can happen more easily.
● Manual steps: the release process consists of a lot of steps which have to be
performed manually (shutdown, set up/update infrastructure, deployment, restart,
and manual tests).
● The whole release process is laborious, cumbersome, and time-consuming.
Continuous Delivery using Docker
1. Developer pushes a commit to GitHub.
2. GitHub uses a webhook to notify Jenkins of the update.
3. Jenkins pulls the GitHub repository, including the Dockerfile describing the
image, as well as the application and test code.
4. Jenkins builds a Docker image on the Jenkins slave node.
5. Jenkins instantiates the Docker container on the slave node, and executes the
appropriate tests.
6. If the tests are successful, the image is then pushed up to the Docker Trusted Registry.
New Release Cycle: CI
What are the benefits of this approach?
● Increased reliability: fewer mistakes can happen during an automated process in
comparison to a manual one.
● Deploying our application into production is low-risk, because we just execute the
same automated process for the production as we did for the tests or the
pre-production system.
● Faster feedback.
● Accelerated release speed and time-to-market.
The GitHub repository for the target application needs to contain the following
components:
• The application code
• The test code for the application
• A Dockerfile that describes how to build the application container, and copies over
the necessary application and test code.
Configuring GitHub
The Dockerfile
1. The Dockerfile pulls the supported Node.js image from Docker Hub.
2. It installs any necessary dependencies and then copies over the application code and test files.
3. The tests for the application are included in the /script and /test directories.
● Notify the Jenkins server when a new commit happens. This is done via the
"Webhooks and Services" section of the "Settings" page.
● The GitHub Plugin needs to be installed on the Jenkins master. This plugin allows a
Jenkins job to be initiated when a change is pushed to a designated GitHub repository.
Jenkins Slave
● Prerequisites for a Jenkins slave:
○ Docker Engine
○ SSH enabled
○ Java runtime installed
Configuring the test job
Once the slave has been added, a Jenkins job can be created.
The key fields to note are as follows:
● “Restrict where this build can run” is checked, and the label “docker” is supplied. This ensures the job will only attempt to
execute on slaves that have been appropriately configured and tagged.
● Under “Source Code Management” the name of the target GitHub repository is supplied. This is the same repository that
houses the Dockerfile to build the test image, and that has been configured to use the GitHub webhook. Also ensure that
“Poll SCM” is checked (it is not necessary to set a schedule).
● Under “Build Triggers” “Build when a change is pushed to GitHub” instructs Jenkins to fire off a new job every time the
webhook sends a notification of a new push to the repository.
● Under the “Build” section of the Jenkins job we execute a series of shell commands
○ # build docker image
■ docker build --pull=true -t dtr.mikegcoleman.com/hello-jenkins:$GIT_COMMIT . [Builds a Docker image
based on the Dockerfile in the GitHub repository. The image is tagged with the git commit id ]
○ # test docker image
■ docker run -i --rm dtr.mikegcoleman.com/hello-jenkins:$GIT_COMMIT ./script/test [The command
instantiates a new container based on the image, and executes the test scripts that were copied over during
the image build]
○ # push docker image
■ docker push dtr.mikegcoleman.com/hello-jenkins:$GIT_COMMIT [Finally, the image is pushed up to our
Docker Trusted Registry instance. ]
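The three build steps above can be collected into a single "Execute shell" script for the Jenkins job. The registry host and image name come from the example itself; $GIT_COMMIT is injected by the Git plugin (a fallback is supplied here so the sketch runs standalone), and the docker calls are guarded and chained so the push only happens if the build and tests succeed:

```shell
# One Jenkins "Execute shell" build step: build, test, then push.
GIT_COMMIT=${GIT_COMMIT:-dev}
IMAGE="dtr.mikegcoleman.com/hello-jenkins:$GIT_COMMIT"
if command -v docker >/dev/null 2>&1; then
  docker build --pull=true -t "$IMAGE" . &&    # image tagged with the commit id
  docker run -i --rm "$IMAGE" ./script/test && # run the test suite in a container
  docker push "$IMAGE"                         # push only after tests pass
fi
echo "pipeline image: $IMAGE"
```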
Putting it all together: Running a Test
● Make a commit to the application's GitHub repository.
● This fires off a GitHub webhook, which notifies Jenkins to execute the
appropriate tests.
● Jenkins receives the webhook, and builds a Docker image based on the Dockerfile
contained in the GitHub repo on our Jenkins slave machine.
● After the image is built, a container is created and the specified tests are executed.
● If the tests are successful, the validated Docker image is pushed to the Docker
Trusted Registry instance.