Continuous Deployment of Polyglot Microservices
A Practical Approach
Introduction: Who Am I
> Juan Larriba
> DevOps Engineer at everis innolab
> @compilemymind
Introduction: The DevOps World
> The DevOps landscape is currently extremely volatile
> It is very hard to predict who the big players will be in the future
> Dozens of tools appear every week, each one apparently solving previously unsolvable problems
> All of them promise an easy, error-free experience
Introduction: DevOps in “traditional” companies
What do I pick? Who is the leader? Which tool won’t be abandoned 1 year from now? Which tool is “the best”?
Introduction: DevOps in “traditional” companies
> Let’s face the facts: distributed computing is hard
> Currently, there are no “easy” tools
> But, remarkably, almost all of this development is happening in the open source world
> We focused on leveraging the cloud to improve the development cycle inside an IT company
> We developed a mock fraud detector application to test what was needed to develop and deploy a “real world” application using microservices
How?
> Be Continuous
> Be Polyglot
> Be Immutable
> Be Reliable
> Be Operative
> Be Practical
Be Continuous
Be Continuous: Integration -> Delivery -> Deployment
> Most IT companies today already practice Continuous Integration on the majority of their recent projects
> In the Java world, this usually means the projects already use a CI server such as Jenkins
> We used Jenkins to manage the whole Continuous Deployment lifecycle with the Build Pipeline plugin
> As of Jenkins 2, it might be better to use the Pipeline plugin (formerly Workflow), as sketched below
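> A minimal scripted Jenkinsfile for the Pipeline plugin might look like the following sketch; the stage names, registry host and image path are illustrative assumptions, not the project's exact pipeline:

```groovy
// Jenkinsfile (scripted Pipeline) -- hypothetical sketch of the CD flow
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build image') {
        // each microservice describes its own build and ends with a Docker image
        sh './build.sh'
    }
    stage('Functional tests') {
        sh 'casperjs test tests/functional/'
    }
    stage('Push image') {
        // pushing to the private registry is what triggers the deployment
        sh 'docker push registry.example.com/fraud-detector/receiver:${BUILD_NUMBER}'
    }
}
```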
Be Continuous: Jenkins + Build Pipeline
Be Continuous: Rules
> All microservices must provide a way to generate a Docker image, normally a Dockerfile
> All microservices should provide a build.sh describing their own build process
> The build process always has to end with a generated Docker image
> Version numbers are stored in a centralized configuration file; they are applied as Docker tags and bumped automatically by a Groovy script (see the build.sh sketch below)
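> Under these rules, a per-microservice build.sh might look like the following sketch; the version file, image path and Gradle build step are assumptions for illustration:

```bash
#!/usr/bin/env bash
# build.sh -- hypothetical build script for one microservice
set -euo pipefail

# version numbers live in a centralized configuration file
VERSION=$(cat ../versions/receiver.version)
IMAGE="registry.example.com/fraud-detector/receiver:${VERSION}"

# language-specific build step (here: a JVM service)
./gradlew assemble

# the build always ends with a generated Docker image
docker build -t "${IMAGE}" .
```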
Be Polyglot
Be Polyglot: Using the right tool for the right job
> Every programming language/framework has its own set of advantages and disadvantages
> The main advantage of microservices is the possibility to use the right tool for the right job
> So, we did just that...
Be Polyglot: Using the right tool for the right job
Be Polyglot: Vert.x
> The shortest definition for Vert.x is “NodeJS on the JVM”
> Vert.x is a polyglot framework/application platform that lets you develop reactive applications in any language supported by the JVM
> It is blazing fast, and seems like the right choice for the service that receives potentially millions of credit card transactions (a minimal verticle is sketched below)
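> For illustration only, a minimal Vert.x 3 verticle written in Java could look like this; the class name, response code and port are assumptions:

```java
import io.vertx.core.AbstractVerticle;

// Hypothetical receiver: accepts credit card transactions over HTTP
public class TransactionReceiverVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.createHttpServer()
             .requestHandler(req -> req.response()
                                       .setStatusCode(202) // accepted for asynchronous processing
                                       .end())
             .listen(8080);
    }
}
```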
Be Polyglot: Python
> Python is the “coolest” programming language right now
> It is the de-facto standard for Machine Learning and scientific programming
> It is a perfect fit for the machine-learning analysis that flags a transaction as fraud
Be Polyglot: Spring Boot
> Spring is the best known framework in the Java world
> It is known not only for its dependency injection capabilities but also for its ability to connect to almost any system out there
> In particular, the integration between Spring Data and MongoDB is superb, so it seems perfect for the storage microservice, which stores credit card transactions in MongoDB
> Spring Boot lets us create the microservice quickly, with very little code (see the sketch below)
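> As an illustration of how little code is involved, a Spring Data MongoDB repository might look like this sketch; the entity fields and names are assumptions, not the project's actual model:

```java
import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.repository.MongoRepository;

// Hypothetical transaction document stored in MongoDB
@Document(collection = "transactions")
class Transaction {
    @Id String id;
    String cardNumber;
    double amount;
}

// Spring Data generates the implementation at runtime -- no code to write
interface TransactionRepository extends MongoRepository<Transaction, String> {
    List<Transaction> findByCardNumber(String cardNumber);
}
```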
Be Immutable
Be Immutable: Using containerization
> As each language generates a different kind of artifact, or none at all in the case of interpreted languages, containers provide a nice abstraction interface
> Each application must provide a Dockerfile for its build process, as the Docker container will be the artifact
> Docker containers are versioned using tags and survive the whole pipeline, preserving the binary integrity of the build
> A private Docker Registry acts as our artifact repository (see the commands below)
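> The image lifecycle might look like the following commands; the registry host, image name and tags are illustrative assumptions:

```bash
# build exactly once, at the start of the pipeline
docker build -t registry.example.com/fraud-detector/storage:1.4.2 .
docker push registry.example.com/fraud-detector/storage:1.4.2

# the same immutable image travels through every pipeline stage;
# on promotion it is only re-tagged, never rebuilt
docker tag registry.example.com/fraud-detector/storage:1.4.2 \
           registry.example.com/fraud-detector/storage:stable
docker push registry.example.com/fraud-detector/storage:stable
```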
Be Reliable
Be Reliable: Writing functional tests while writing the frontend
> Continuous Deployment relies heavily on testing. The more testing, the merrier
> There are many kinds of tests: Unit, Integration, Performance, Security and Functional
> The bare minimum needed for a trustworthy CD cycle is Functional Testing
> Traditionally, tooling for Functional Testing has been scarce and hard to use (Selenium…)
Be Reliable: Writing functional tests while writing the frontend
> Today we can rely on the JavaScript world to provide tools for almost everything, and Functional Testing is no exception
> We used CasperJS, a Functional Testing framework built on top of PhantomJS
> This lets frontend developers write Functional Tests while writing the frontend, using the same tools and the same programming language (see the sketch below)
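> A CasperJS test might look like this sketch, run with `casperjs test`; the URL, selectors and assertions are assumptions, not the actual application's:

```javascript
// login.test.js -- hypothetical functional test for the fraud dashboard
casper.test.begin('Fraud dashboard login', 2, function (test) {
    casper.start('http://dev-cluster.example.com/login', function () {
        // fill and submit the login form
        this.fill('form#login', { user: 'demo', password: 'demo' }, true);
    });

    casper.then(function () {
        test.assertUrlMatch(/dashboard/, 'login redirects to the dashboard');
        test.assertExists('.transaction-list', 'transaction list is rendered');
    });

    casper.run(function () {
        test.done();
    });
});
```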
Be Operative
Be Operative: Automating deployments
> Kubernetes has very good support for blue-green deployments, using either the kubectl rolling-update command on ReplicationControllers or kubectl set image on Deployments (see the commands below)
> OpenShift support is even better thanks to registry notifications and ImageStreams
> Pushing an image to the OpenShift private registry triggers a blue-green deployment of all the pods based on that image
> During our tests, the whole deployment to the remote dev cluster took around 5-10 seconds
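> For example (deployment and image names are assumptions), rolling out a new version of the storage microservice could look like:

```bash
# update the Deployment's image; Kubernetes replaces the pods gradually
kubectl set image deployment/storage \
    storage=registry.example.com/fraud-detector/storage:1.4.3

# watch the rollout, and revert it if something goes wrong
kubectl rollout status deployment/storage
kubectl rollout undo deployment/storage
```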
Be Practical
Be Practical: Architecture vs Platform
> Traditionally, we tend to grow the architecture to solve every problem we face
> Most distributed computing problems are not architecture problems but systems problems
> This makes them trivial to solve at the platform level, but very hard and inefficient to solve at the architecture level
Be Practical: Objectives
> Try to keep the architecture to the bare minimum
> The goal is for the architecture to exist only at the level of each microservice, as if they were “little monolithic” applications, completely unaware that they are part of a microservice cloud
> Unit testing can be done locally, Integration testing should be done on the platform
> Replace local Integration testing with automated deployments
Be Practical: Platform
> Service Discovery
> API Gateway
> Security (Authentication and Authorization)
> Monitoring
> Health Check
> Autoscaling
> Log Management
Be Practical: Architecture
> Circuit Breaking
> REST Communication (client and server)
Lessons Learned
Lessons Learned
> OpenShift is an expensive monster that encapsulates a lot of different tools in an abstract way
> Kubernetes alone fits for most situations/clients and gives you more control over your cluster
> OpenShift offers multi-tenancy, registry notification, ImageStreams and the Router component
> Eureka is redundant, as Kubernetes offers built-in service discovery and load balancing through Services (a minimal Service is sketched below)
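> A minimal Service like the following sketch (names and ports are assumptions) gives the pods a stable DNS name and a load-balanced virtual IP, which is exactly what Eureka was providing:

```yaml
# storage-service.yaml -- hypothetical Service for the storage microservice
apiVersion: v1
kind: Service
metadata:
  name: storage
spec:
  selector:
    app: storage         # targets the pods labeled app=storage
  ports:
    - port: 80           # stable cluster-internal port ...
      targetPort: 8080   # ... forwarded to the container port
# other pods can now simply call http://storage/
```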
Lessons Learned
> Kubernetes (since version 1.2) offers a much more powerful routing component, Ingress, whose central piece is the Ingress Controller (a minimal Ingress is sketched below)
> An external Service Discovery component is useful if you want to use an external router for the cluster: for example, traefik + Consul can replace the Ingress Controllers
> This pattern is very useful when you want to authenticate requests before they reach the cluster
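> For illustration, a path-based Ingress for the demo application might look like this sketch; the host, paths and service names are assumptions (the apiVersion matches the Kubernetes 1.2 era):

```yaml
# fraud-ingress.yaml -- hypothetical routing rules for the demo application
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fraud-detector
spec:
  rules:
    - host: fraud.example.com
      http:
        paths:
          - path: /transactions
            backend:
              serviceName: receiver   # Vert.x receiver microservice
              servicePort: 80
          - path: /reports
            backend:
              serviceName: storage    # Spring Boot storage microservice
              servicePort: 80
```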
Q&A
Questions and Answers
@compilemymind