Deploy Microservices like a Ninja with Istio Service Mesh
Presented by Anton Weiss, Otomato technical training. http://otomato.link
Slides: https://devopstrain.pro/istio
Slide-generation engine borrowed from container.training
1 / 143
Introduction
This presentation was created by Ant Weiss to support instructor-led workshops.
We included as much information as possible in these slides
Most of the information this workshop is based on is public knowledge and can also be accessed through the official Istio documentation and tutorials
2 / 143
Training environment
This is a hands-on training with exercises and examples
We assume that you have access to a Kubernetes cluster
The training labs for today's session were generously sponsored by Strigo
We will be using microk8s to get these clusters
Haven't tried microk8s yet?! You're in for a treat!
3 / 143
Getting Istio
Get the source code and the slides for this workshop:
Exercise
On your Strigo VM:
git clone https://github.com/otomato-gh/istio.workshop.git
cd istio.workshop
./prepare-vms/setup_microk8s.sh
# enter new shell for kubectl completion
sudo su - ${USER}
Choose 'No' when prompted for mutual TLS
This will install a microk8s single-node cluster with Istio
4 / 143
A few words about microk8s
sudo snap install microk8s --classic
sudo snap install kubectl --classic
microk8s.start
microk8s.enable istio
Single node Kubernetes done right
Zero-ops k8s on just about any Linux box
Many popular k8s add-ons can be enabled:
metrics-server
kube-dashboard
and of course: Istio
For more: microk8s.enable --help
5 / 143
Chapter 1
What is a Service Mesh?
6 / 143
Chapter 2
Istio Architecture
Exploring Istio on K8s
The Demo Installation
7 / 143
Chapter 3
Deploying the Application
Deploying a self-hosted registry
Istio Observability Features
Monitoring with Istio
Distributed tracing with Jaeger
8 / 143
Chapter 4
Deploying to K8s with Istio
Progressive Delivery Strategies
Istio Traffic Management Basics
Our App with Istio
Launching Darkly
Traffic Mirroring
Rolling out to Production with Canary
9 / 143
Chapter 5
Summing It All Up
10 / 143
11 / 143
What is a Service Mesh?
Previous section | Back to table of contents | Next section
12 / 143
What is a Service Mesh?
Twitter microservices having a little chat
13 / 143
What is a Service Mesh?
The less helpful definition
The term service mesh is used to describe the network of microservices that make up distributed business applications and the interactions between these services.
As such distributed applications grow in size and complexity, these interactions become ever harder to analyze, predict and maintain.
Our services need to conform to contracts and protocols but expect the unexpected to occur.
14 / 143
The Reality of Distributed Systems
RPC instead of local communication
Network is unreliable
Latency is unpredictable
Call stack depth is unknown
Dependency on other services (and teams)
Services are ephemeral (i.e. they come and go without prior notice)
Unpredictable load
15 / 143
Types of Failures in Distributed Systems
improper fallback settings when a service is unavailable
retry storms from improperly tuned timeouts
outages when a downstream dependency receives too much traffic
cascading failures when a SPOF crashes
16 / 143
Resilience Patternsconnection pools
failure detectors, to identify slow or crashed hosts
failover strategies:
circuit breaking
exponential back-offs
load-balancers
back-pressure techniques
rate limiting
choke packets
17 / 143
Additional Concerns
Service Discovery
Observability
Distributed tracing
Log aggregation
Security
Point-to-point mutual TLS
Continuous Deployments
Traffic splitting
Rolling updates
18 / 143
Progressive Delivery
Rolling Updates
Blue-Green
Canary
Dark Launch
Traffic Mirroring (shadowing)
19 / 143
What Is A Service Mesh?
A network of lightweight, centrally configurable proxies taking care of inter-service traffic.
The purpose of these proxies is to solve application networking challenges.
They make application networking:
reliable
observable
manageable
20 / 143
21 / 143
Istio Architecture
Previous section | Back to table of contents | Next section
22 / 143
Istio Architecture
23 / 143
Envoy
Envoy is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh.
Istio leverages Envoy's many built-in features, for example:
Dynamic service discovery
Load balancing
TLS termination
HTTP/2 and gRPC proxies
Circuit breakers
Health checks
Staged rollouts with %-based traffic split
Fault injection
Rich metrics
24 / 143
The Sidecar Pattern
The 'sidecar' is an assistant container in the pod
Think Batman's Robin
It takes on some responsibility that the main container can't be bothered with
Log shipping
Data preparation
Or in our case: networking!
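To make the pattern concrete, here is a minimal sketch of a pod with a sidecar (hypothetical images, not part of our app):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  containers:
  - name: app            # the main container does the real work
    image: nginx
  - name: log-shipper    # the sidecar: a stand-in for a real log-shipping container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
Both containers share the pod's network namespace and can share volumes - which is exactly what Istio's proxy sidecar builds on.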
25 / 143
Mixer
Mixer is a platform-independent component.
Enforces access control and usage policies
Collects telemetry data from the Envoy proxy and other Istio components.
The proxy extracts request level attributes, and sends them to Mixer for evaluation.
Mixer includes a flexible plugin model.
26 / 143
Pilot
Service discovery for the Envoy proxies
Traffic management capabilities for intelligent routing (e.g., A/B tests, canary rollouts, etc.)
Resiliency (timeouts, retries, circuit breakers, etc.)
27 / 143
Citadel
creates a SPIFFE certificate and key pair for each of the existing and new service accounts
stores the certificate and key pairs as Kubernetes secrets
when you create a pod, Kubernetes mounts the certificate and key pair to the pod according to its service account
Citadel watches the lifetime of each certificate, and automatically rotates the certificates by rewriting the Kubernetes secrets
28 / 143
Galley
validates configuration
will abstract Istio from the underlying platform (i.e. Kubernetes)
29 / 143
30 / 143
Exploring Istio on K8s
Previous section | Back to table of contents | Next section
31 / 143
Exploring Istio on K8s
Istio on Kubernetes stores all data in ... Kubernetes
Istio installs 20+ CRDs
Kubernetes API serves and handles the storage of these custom resources
That means we communicate with Istio control plane via the K8s API
32 / 143
Exploring Istio on K8s
Exercise
Let us see these CRDs
kubectl get crd | grep istio
Let us count how many we got
kubectl get crd | grep istio | wc -l
33 / 143
Exploring Istio on K8s
Exercise
Let us see these CRDs
kubectl get crd | grep istio
Let us count how many we got
kubectl get crd | grep istio | wc -l
23 resource definitions (Used to be 50+, but things are improving)
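Curious what one of these definitions contains? You can dump any of them, e.g. the VirtualService CRD:
kubectl get crd virtualservices.networking.istio.io -o yaml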
34 / 143
Exploring Istio on K8s
Ok, that's where config is stored. But where are the processes?
kubectl get pod
Nothing here... Are they in kube-system?
kubectl get pod -n kube-system
Not here either!
35 / 143
Exploring Istio on K8s
Let's look somewhere else
kubectl get ns
Hey, there's an istio-system namespace
kubectl get pod -n istio-system
Now we're talking!
But why so many?!
36 / 143
37 / 143
The Demo Installation
Previous section | Back to table of contents | Next section
38 / 143
The Demo Installation
microk8s installs the so-called evaluation or demo install of Istio
It includes additional components:
Prometheus - for monitoring
Grafana - for dashboards
Jaeger - for tracing (see the istio-tracing-.. pod)
Kiali - the Istio UI
39 / 143
The mixer pods
We can see pilot, galley, citadel... But where is the mixer?
Exercise
kubectl get pod -n istio-system -l=istio=mixer
Mixer has 2 functions: defining traffic policy and exposing traffic telemetry.
Therefore - 2 pods.
40 / 143
The sidecars
Now, where are the Envoys?
Let's look at the Pilot pod:
Exercise
kubectl describe pod -n istio-system -l istio=pilot
41 / 143
The sidecars
Now, where are the Envoys?
Let's look at the Pilot pod:
Exercise
kubectl describe pod -n istio-system -l istio=pilot
Containers:
  discovery:
    ...
    Image: docker.io/istio/pilot:1.0.5
  ...
  istio-proxy:
    ...
    Image: docker.io/istio/proxyv2:1.0.5
42 / 143
The sidecars
But how do the sidecars get into our own pods?
Let's deploy a service.
Exercise
kubectl create deployment httpbin --image=kennethreitz/httpbin
And look at the pod:
Exercise
kubectl describe pod -l=app=httpbin
There's only one container. The sidecar proxy isn't there...
43 / 143
The sidecar injection
How do we inject the proxy into our pod?
Do we need to edit our deployment ourselves?!
There should be some magic somewhere!
Remember when we looked at Istio pods there was that sidecar-injector pod?
So why didn't it work?
44 / 143
The sidecar injection
From the Istio docs: "When you deploy your application using kubectl apply, the Istio sidecar injector will automatically inject Envoy containers into your application pods if they are started in namespaces labeled with istio-injection=enabled."
Let's label our namespace and redeploy:
Exercise
kubectl label namespace default istio-injection=enabled
kubectl delete pod -l=app=httpbin
45 / 143
The sidecar injection
From the Istio docs: "When you deploy your application using kubectl apply, the Istio sidecar injector will automatically inject Envoy containers into your application pods if they are started in namespaces labeled with istio-injection=enabled."
Let's label our namespace and redeploy:
Exercise
kubectl label namespace default istio-injection=enabled
kubectl delete pod -l=app=httpbin
Recreating that pod took a whole lotta time, didn't it?!
46 / 143
The sidecar injection
Look at our new pod:
Exercise
kubectl describe pod -l=app=httpbin
Now we have two containers and there was an init-container!
The istio-init container is run before the other containers are started, and it's responsible for setting up the iptables rules so that all inbound/outbound traffic will go through Envoy
For a deep dive into what istio-init does - read this blog post
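You can also confirm the init container is there without scanning the whole describe output (a jsonpath one-liner):
kubectl get pod -l app=httpbin -o jsonpath='{.items[0].spec.initContainers[*].name}{"\n"}'
This should print istio-init.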
47 / 143
Let's clean up the default namespace
We just learned that automated istio-proxy injection is enabled per namespace.
We will be using a special namespace for our deployments today.
We don't want istio injection enabled on our default namespace, so let's clean it up:
Exercise
kubectl label ns default --overwrite istio-injection=disabled
48 / 143
49 / 143
Deploying the Application
Previous section | Back to table of contents | Next section
50 / 143
Deploying the Application
Our purpose today is to learn how Istio allows us to implement progressive delivery techniques
We'll do that by deploying a demo application - an alephbeth system :)
It's just a frontend service that speaks to 2 backends (aleph and beth)
All the services are bare-bones Python Flask apps
51 / 143
The Sample App
52 / 143
What's on the menu?
In this part, we will:
build images for our app,
ship these images with a registry,
run deployments using these images,
expose these deployments so they can communicate with each other,
expose the web UI so we can access it from outside.
53 / 143
The plan
Build our images using Docker
Tag images so that they are named $REGISTRY/servicename
Upload them to a registry
Create deployments using the images
Expose (with a ClusterIP) the services that need to communicate
Expose (with a NodePort) the WebUI
54 / 143
Which registry do we want to use?
We could use the Docker Hub
Or a service offered by our cloud provider (GCR, ECR...)
Or we could just self-host that registry
We'll self-host the registry because it's the most generic solution for this workshop.
55 / 143
Using the open source registry
We need to run a registry:2 container (make sure you specify tag :2 to run the new version!)
It will store images and layers to the local filesystem (but you can add a config file to use S3, Swift, etc.)
Docker requires TLS when communicating with the registry
except for registries on 127.0.0.0/8 (i.e. localhost)
or with the Engine flag --insecure-registry
Our strategy: publish the registry container on a NodePort, so that it's available through 127.0.0.1:32000 on our single node
We're choosing port 32000 because it's the default port for an insecure registry on microk8s
56 / 143
57 / 143
Deploying a self-hosted registry
Previous section | Back to table of contents | Next section
58 / 143
Deploying a self-hosted registry
We will deploy a registry container, and expose it on NodePort 32000
Exercise
Create the registry service:
kubectl create deployment registry --image=registry:2
Expose it on a NodePort:
kubectl create service nodeport registry --tcp=5000 --node-port=32000
59 / 143
Testing our registry
A convenient Docker registry API route to remember is /v2/_catalog
Exercise
View the repositories currently held in our registry:
REGISTRY=localhost:32000
curl $REGISTRY/v2/_catalog
60 / 143
Testing our registry
A convenient Docker registry API route to remember is /v2/_catalog
Exercise
View the repositories currently held in our registry:
REGISTRY=localhost:32000
curl $REGISTRY/v2/_catalog
We should see:
{"repositories":[]}
61 / 143
Testing our local registry
We can retag a small image, and push it to the registry
Exercise
Make sure we have the busybox image, and retag it:
docker pull busybox
docker tag busybox $REGISTRY/busybox
Push it:
docker push $REGISTRY/busybox
62 / 143
Checking again what's on our local registry
Let's use the same endpoint as before
Exercise
Ensure that our busybox image is now in the local registry:
curl $REGISTRY/v2/_catalog
The curl command should now output:
{"repositories":["busybox"]}
63 / 143
Building and pushing our images
We are going to use a convenient feature of Docker Compose
Exercise
Go to the alephbeth directory:
cd ~/istio.workshop/alephbeth
Build and push the images:
export REGISTRY
docker-compose build
docker-compose push
Let's have a look at the docker-compose.yaml file while this is building and pushing.
64 / 143
services:
  front:
    build: front
    image: ${REGISTRY}/front:${TAG-0.3}
  aleph:
    build: aleph
    image: ${REGISTRY}/aleph:${TAG-0.3}
  beth:
    build: beth
    image: ${REGISTRY}/beth:${TAG-0.3}
  mongo:
    image: mongo
65 / 143
Deploying all the things
We can now deploy our code
We will create a new namespace 'staging' and enable istio sidecar injection on it
Exercise
We have Kubernetes YAMLs ready for the first version of our app in the deployments dir:
cd deployments
kubectl create ns staging
kubectl label ns staging istio-injection=enabled
kubectl apply -f aleph.yaml -f front.yaml -f beth.yaml -n staging
66 / 143
Is this working?
After waiting for the deployment to complete, let's look at the logs!
(Hint: use kubectl get deploy -w to watch deployment events)
Exercise
Look at some logs:
kubectl logs -n staging deploy/front
Hmm, that didn't work. We need to specify the container name!
kubectl logs -n staging deploy/front front
kubectl logs -n staging deploy/front istio-proxy
67 / 143
Accessing the web UI
Our front service is exposed on a NodePort.
Let's look at it and see if it works:
Exercise
Get the port of the front service
kubectl get svc -n staging front -o=jsonpath='{ .spec.ports[0].nodePort }{"\n"}'
Open the web UI in your browser (http://node-ip-address:3xxxx/)
68 / 143
Accessing the web UI
Our front service is exposed on a NodePort.
Let's look at it and see if it works:
Exercise
Get the port of the front service
kubectl get svc -n staging front -o=jsonpath='{ .spec.ports[0].nodePort }{"\n"}'
Open the web UI in your browser (http://node-ip-address:3xxxx/)
You should see the frontend application showing the versions of both its backends
69 / 143
70 / 143
Istio Observability Features
Previous section | Back to table of contents | Next section
71 / 143
Istio Observability Features
Observability (or o11y) is an important concept in the microservices approach
Observability of our systems is composed of three main components:
Logs
Metrics
Traces
Istio makes inter-service networking observable by:
Collecting request metrics
Collecting distributed traces
72 / 143
Istio and Observability
Question:
What Istio component is responsible for collecting telemetry in Istio?
73 / 143
Istio and Observability
Question:
What Istio component is responsible for collecting telemetry in Istio?
Answer:
Mixer is responsible for collecting and shipping telemetry
74 / 143
Mixer and its Adapters
Mixer is pluggable. Mixer Adapters allow us to post to multiple backends:
75 / 143
Observability Add-Ons in Our Istio Installation
Let's see what observability services we have in our installation.
Exercise
kubectl get svc -n istio-system
We have:
Prometheus: for network telemetry
Grafana: to visualize Prometheus data
Kiali: to visualize the connections between our services
Jaeger (it's the service named tracing): to store and visualize distributed traces
Zipkin: another option to store and visualize distributed traces
76 / 143
Explore the Telemetry
Exercise
Let's expose Jaeger, Grafana and Kiali on NodePort:
for service in tracing grafana kiali; do
  kubectl patch svc -n istio-system $service --type='json' \
    -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
done
Get the port for the grafana service:
kubectl get svc grafana -n istio-system -o jsonpath='{ .spec.ports[0].nodePort }{"\n"}'
Do the same for the kiali and tracing services
Browse to http://your-node-ip:3XXXX (replace with the actual service port)
77 / 143
78 / 143
Monitoring with Istio
Previous section | Back to table of contents | Next section
79 / 143
Monitoring with Istio
All request metrics are sent by Mixer to Prometheus and visualized with Grafana dashboards
Exercise
Browse to istio-service-dashboard in Grafana
Create some load by reloading your browser a few times
Check Grafana for the following metrics:
Request success rate by source
Request duration
80 / 143
81 / 143
Distributed tracing with Jaeger
Previous section | Back to table of contents | Next section
82 / 143
Distributed tracing with Jaeger
Exercise
Generate some traffic by reloading front in your browser.
Look at the traces in Jaeger
Is one of the backends slower than the other one?
83 / 143
Distributed tracing with Jaeger
Exercise
Generate some traffic by reloading front in your browser.
Look at the traces in Jaeger
Is one of the backends slower than the other one?
Looks like beth is taking too long to respond...
84 / 143
What makes distributed tracing possible?
Although Istio proxies are able to automatically send spans, they need some hints to tie together the entire trace. Applications need to propagate the appropriate HTTP headers so that when the proxies send span information, the spans can be correlated correctly into a single trace.
In front/front.py line 17:
incoming_headers = [
    'x-request-id',
    'x-b3-traceid',
    'x-b3-spanid',
    'x-b3-parentspanid',
    'x-b3-sampled',
    'x-b3-flags',
    'x-ot-span-context'
]
Each service needs to pass these headers to its downstream connections.
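The same propagation, expressed as a shell sketch (hypothetical: the HTTP_* variables stand in for header values read from the incoming request):
# when calling a downstream service, forward the tracing headers we received
curl http://beth/version \
  -H "x-request-id: $HTTP_X_REQUEST_ID" \
  -H "x-b3-traceid: $HTTP_X_B3_TRACEID" \
  -H "x-b3-spanid: $HTTP_X_B3_SPANID" \
  -H "x-b3-parentspanid: $HTTP_X_B3_PARENTSPANID" \
  -H "x-b3-sampled: $HTTP_X_B3_SAMPLED"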
85 / 143
Let's fix the problem!
We've found slowness in beth responses
Let's see what's causing this:
Exercise
Look at beth/api.py, line 33
A-ha, looks like someone forgot to remove some testing code...
Let's fix the issue, build a new version and redeploy.
86 / 143
87 / 143
Deploying to K8s with Istio
Previous section | Back to table of contents | Next section
88 / 143
Deploying to K8s with Istio
The plan:
Fix slowness by removing the time.sleep(2) line in beth/api.py
Build a new image: localhost:32000/beth:0.2
Push the new version to our internal registry
Update beth deployment to serve the new version
89 / 143
Deploying by Kill-And-Replace
Exercise
cd alephbeth/beth
Fix the code of beth/api.py in your editor of choice
docker build . -t localhost:32000/beth:0.2
docker push localhost:32000/beth:0.2
kubectl -n staging set image --record deploy/beth beth=localhost:32000/beth:0.2
Verify deployment is updated
Check Jaeger to see if slowness is resolved
Do you know what that --record flag in the last command does?
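(If you don't: --record saves the command in the deployment's kubernetes.io/change-cause annotation, so it shows up in the rollout history:)
kubectl rollout history -n staging deploy/beth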
90 / 143
Is This the Right Way to Fix This?
We just replaced a backend service by killing it
91 / 143
Is This the Right Way to Fix This?
We just replaced a backend service by killing it
What if it was in the middle of serving a request?
92 / 143
Is This the Right Way to Fix This?
We just replaced a backend service by killing it
What if it was in the middle of serving a request?
Is the new version even functioning correctly?
93 / 143
Is This the Right Way to Fix This?
We just replaced a backend service by killing it
What if it was in the middle of serving a request?
Is the new version even functioning correctly?
Look at the version displayed for beth. It's the wrong number! We have a bug!
94 / 143
Is This the Right Way to Fix This?
We just replaced a backend service by killing it
What if it was in the middle of serving a request?
Is the new version even functioning correctly?
Look at the version displayed for beth. It's the wrong number! We have a bug!
How can we do better?
95 / 143
96 / 143
Progressive Delivery Strategies
Previous section | Back to table of contents | Next section
97 / 143
Progressive Delivery Strategies
Progressive Delivery is the collective name for a set of deployment techniques that allow for gradual, reliable and low-stress release of new software versions into production environments. Istio's advanced traffic shaping capabilities make some of these techniques significantly easier.
Techniques we will be looking at today are:
Dark launch
Canary deployments
Traffic mirroring
98 / 143
Dark Launch
Dark Launch refers to the process where the new version is released to production but is only available to internal or friendly users - via feature toggles or smart routing. This way we can battle-test new features and bug fixes in production long before paying customers are affected.
99 / 143
Canary Deployments
A Canary Deployment is the process in which a new version released to production gets only a tiny percentage of the actual production traffic, while the rest of the traffic continues to be served by the old version. If the new version misbehaves, the disruption is minimal and tolerable. If it functions fine - we gradually switch more traffic over to it from the old version, until all traffic is served by the new version and the old version can be retired.
100 / 143
Traffic Mirroring
Traffic Mirroring (or traffic shadowing) is more of a testing technique, whereby we release the new version to production and send it a copy of all the production traffic, in parallel to serving that traffic with the old version. Responses from the new version are discarded. This allows us to test the new version with full production traffic and data without impacting our users.
101 / 143
How Can Istio Help
Let's see how we can implement all of the above with Istio's help
But first let's learn the basics of Istio traffic management
102 / 143
103 / 143
Istio Traffic Management Basics
Previous section | Back to table of contents | Next section
104 / 143
Istio Traffic Management Basics
In order to implement Progressive Delivery with Istio we need to use 2 Istio resources:
Virtual Service
Destination Rule
105 / 143
Virtual Service
A VirtualService defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset/version of it) defined in the registry.
The source of traffic can also be matched in a routing rule. This allows routing to be customized for specific client contexts.
106 / 143
Virtual Service - path rewriting
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/wpcatalog"
    - uri:
        prefix: "/consumercatalog"
    rewrite:
      uri: "/newcatalog"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local
107 / 143
Virtual Service - header based routing
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: promotions
spec:
  hosts:
  - promotions.prod.svc.cluster.local
  http:
  - match:
    - headers:
        User-Agent:
          regex: ".*Mobile.*"
      uri:
        prefix: "/promotions/mobile"
    route:
    - destination:
        host: promotions-mobile.prod.svc.cluster.local
  - route:
    - destination:
        host: promotions.prod.svc.cluster.local
108 / 143
Virtual Service - versioned destinations
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v2
      weight: 25
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v1
      weight: 75
Wait, where do these subsets come from?
109 / 143
Destination Rule
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
Version specific policies can be specified by defining a named subset and overriding the settings specified at the service level.
On Kubernetes these subsets can be defined by referencing pod labels.
110 / 143
Destination Rule
The following rule uses a round robin load balancing policy for all traffic going to a subset named testversion that is composed of endpoints (e.g. pods) with labels (version: v3).
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
  - name: testversion
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
111 / 143
112 / 143
Our App with Istio
Previous section | Back to table of contents | Next section
113 / 143
Our App with Istio
Ok, time to start managing our app with Istio
The first thing to do is create VirtualService entities for each of our services
I've prepared a definition for the front service in alephbeth/istio/front-vs.yaml
Exercise
kubectl apply -f alephbeth/istio/front-vs.yaml -n staging
Note: you won't notice a change. We're only accessing the service from outside of the cluster.
Controlling traffic to the front service would require defining a Gateway object. But that is out of the scope of our training today.
114 / 143
VirtualServices for Everyone
Now create virtual services for aleph and beth
Exercise
Create yaml definitions for both services (if you're stuck, see the sketch below)
Apply them to your cluster
kubectl apply -f alephbeth/istio/aleph-vs.yaml -n staging
kubectl apply -f alephbeth/istio/beth-vs.yaml -n staging
Verify
kubectl get virtualservice -n staging
Is everything still working?
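If you're stuck: a virtual service for beth can be as minimal as this sketch (the repo's beth-vs.yaml is the reference; aleph is analogous):
kubectl apply -n staging -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: beth
spec:
  hosts:
  - beth
  http:
  - route:
    - destination:
        host: beth
EOF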
115 / 143
Let's build a New Version
Remember that beth wasn't displaying the version right?
Let's fix that and deploy a new version.
But this time we'll launch darkly!
Exercise
In beth/api.py change the version on line 12:
'version': '0.3',
Build a new docker image and push it to local registry
Don't update your existing beth deployment. We will launch darkly!
116 / 143
117 / 143
Launching Darkly
Previous section | Back to table of contents | Next section
118 / 143
Launching Darkly
Our existing deployments already have a version label:
labels:
  app: beth
  version: v01
Exercise
Create a new deployment for beth in file deployments/beth-v03.yaml labeled as:
version: v03
Don't forget to also update the deployment name and image name (a starting-point sketch follows below)
Deploy
kubectl apply -f deployments/beth-v03.yaml -n staging
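If you need a starting point, here is a sketch of deployments/beth-v03.yaml (assuming the v01 deployment file as a template; the replica count is an assumption):
cat > deployments/beth-v03.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: beth-v03
spec:
  replicas: 1            # assumption - match whatever beth v01 uses
  selector:
    matchLabels:
      app: beth
      version: v03
  template:
    metadata:
      labels:
        app: beth
        version: v03
    spec:
      containers:
      - name: beth
        image: localhost:32000/beth:0.3
EOF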
119 / 143
Did This Work As Planned?
Try reloading the front UI in your browser
Hmm, we get both versions intermittently. Not what we wanted!
Let's fix our virtual service.
Exercise
kubectl apply -f istio/dark-launch.yaml
Look at istio/dark-launch.yaml
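Before opening it, here is the general shape such a dark-launch rule takes - a sketch only, assuming the front forwards a header identifying the signed-in user (the header name below is hypothetical; istio/dark-launch.yaml is the reference):
http:
- match:
  - headers:
      end-user:
        exact: developer
  route:
  - destination:
      host: beth
      subset: v03
- route:
  - destination:
      host: beth
      subset: v01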
120 / 143
Privileged Access
Back in your browser - sign in as user developer (the Sign in button is at top right)
You should be consistently getting version 0.3
Sign out now.
Are you getting the older version again?
121 / 143
122 / 143
Traffic Mirroring
Previous section | Back to table of contents | Next section
123 / 143
Traffic Mirroring
Rolling out the app to internal users is great.
It allows us to test features in isolation.
But this still isn't the real traffic.
Let's replicate all the traffic to the new version and see how it behaves.
124 / 143
Let's mirror all traffic to v03:
Exercise
kubectl apply -n staging -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: beth
spec:
  hosts:
  - beth
  http:
  - route:
    - destination:
        host: beth
        subset: v01
    mirror:
      host: beth
      subset: v03
EOF
125 / 143
Traffic Mirroring
In the UI check the version of beth you're now getting
Check the logs of the beth-v03 pod to make sure it's getting all incoming requests
Exercise
kubectl logs deploy/beth-v03 -n staging -f beth
In a new shell
kubectl logs deploy/beth -n staging -f beth
You should see all your requests arriving to both versions
126 / 143
127 / 143
Rolling out to Production with Canary
Previous section | Back to table of contents | Next section
128 / 143
Rolling out to Production with Canary
The new version seems to behave fine
It's time to release it
But we still want to be on the safe side
We want stress-free releases
Forty-three percent of all adults suffer adverse health effects from stress.
Canary to the rescue
129 / 143
Giving some traffic to our new version
Exercise
Look at istio/canary.yaml:
http:
- route:
  - destination:
      host: beth
      port:
        number: 80
      subset: v01
    weight: 99
  - destination:
      host: beth
      port:
        number: 80
      subset: v03
    weight: 1
130 / 143
Giving some traffic to our new version
Exercise
kubectl apply -f istio/canary.yaml
We don't want to test this through the UI, do we?
Let's run curl in a loop and see what we get
131 / 143
Running a curl pod
Exercise
kubectl create -n staging -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - args:
    - sleep
    - "100000"
    image: otomato/alpine-curl
    name: curl
EOF
132 / 143
Curling Our Service
Exercise
kubectl exec -it -n staging curl sh
Inside the container:
while true; do curl http://beth/version; done
Are you getting any 0.3 versions?
Leave this running
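To eyeball the split instead of watching the stream scroll by, you can count the responses (run inside the curl pod; assumes /version returns a short version string):
for i in $(seq 1 200); do curl -s http://beth/version; echo; done | sort | uniq -c
With a 99/1 split you should see roughly two 0.3 responses out of 200.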
133 / 143
Releasing the Canary
Exercise
Edit VirtualService beth to gradually release more traffic to v03
kubectl edit virtualservice -n staging beth
Once we see only 0.3 getting returned by our curler (and our browser) - we're done. Version v01 can now be deleted.
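Instead of editing interactively you can also shift the weights in steps with a JSON patch (a sketch, assuming the route order from istio/canary.yaml):
kubectl patch virtualservice beth -n staging --type=json \
  -p '[{"op":"replace","path":"/spec/http/0/route/0/weight","value":50},{"op":"replace","path":"/spec/http/0/route/1/weight","value":50}]'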
134 / 143
135 / 143
Summing It All Up
Previous section | Back to table of contents | Next section
136 / 143
Summing It All Up
We've learned what a Service Mesh is
We've learned how Istio works
We've seen the following progressive delivery strategies:
Dark Launch
Traffic Mirroring
Canary Deployment
137 / 143
Wrap-up Exercise
Check out the final branch of istio.workshop
Exercise
git checkout final
Build and push a new version of aleph
Exercise
cd aleph
docker build . -t ${REGISTRY}/aleph:0.2
docker push ${REGISTRY}/aleph:0.2
138 / 143
Wrap-up Exercise
Create a DestinationRule for aleph in namespace staging (one possible solution sketch follows after this list):
With subset production pointing at pods with label version=v01
With subset canary pointing at pods with label version=v02
Create a VirtualService in namespace staging:
Default route: aleph with subset production
Mirror traffic to subset canary
Create a new deployment aleph-v02 with labels:
version: v02
app: aleph
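One possible solution sketch (try it yourself first; the names match the exercise, the rest is just one way to do it):
kubectl apply -n staging -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: aleph
spec:
  host: aleph
  subsets:
  - name: production
    labels:
      version: v01
  - name: canary
    labels:
      version: v02
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: aleph
spec:
  hosts:
  - aleph
  http:
  - route:
    - destination:
        host: aleph
        subset: production
    mirror:
      host: aleph
      subset: canary
EOF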
139 / 143
Check Status of the New Deployment
Generate load on the aleph service
(Hint: use the curler pod we've created)
Check Grafana for aleph service stats
Is the new version healthy?
140 / 143
Check Status of the New Deployment
Generate load on the aleph service
(Hint: use the curler pod we've created)
Check Grafana for aleph service stats
Is the new version healthy?
It's not!
Remove the deployment for aleph v02
141 / 143
Let's Fix This
Fix aleph. Hint - the bug is in the version method
Build version 0.3 of aleph
Deploy the new version
Expose it as a canary. Increment by 20 percent each time, verifying that all the requests are successful.
142 / 143
That's It for Today!
Thanks for attending!
Any future questions: Slack or contact@otomato.link
For more training : https://devopstrain.pro
143 / 143