
Getting started with Kubernetes on AWS

Abby Fuller, Sr Technical Evangelist, AWS | @abbyfuller

© 2017, Amazon Web Services, Inc. or its Affiliates. All rights reserved.

Kubernetes

• Container orchestration platform that manages containers across your infrastructure in logical groups
• Rich API to integrate 3rd parties
• Open source

What are orchestration tools and why should I care?

Containers are lots of work (and moving pieces)! Orchestration tools help you manage, scale, and deploy your containers.

What platform is right for me?

Bottom line: use the tool that’s right for you.

That means that you should choose whatever makes the most sense for you and your architecture, that you’re comfortable with, and that you can scale, maintain, and manage.

Bottom line: we want to be the best place to run your containers, however you want to do it.

Getting started with Kubernetes

Initial set up

I’m using a CloudFormation stack provided by AWS and Heptio for my initial cluster setup. To see the stack in full, you can look here:

https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template

This will download the full template.
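If you’d rather grab it from the command line, something like this works (curl with -O saves it under the remote file name):

$ curl -O https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template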

Choosing my parameters

The setup template takes a few parameters:

STACK=k8s-demo
TEMPLATEPATH=https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template
AZ=us-east-1b
INGRESS=0.0.0.0/0
KEYNAME=demo

Running the stack

abbyfull$ aws cloudformation create-stack --stack-name $STACK \
> --template-url $TEMPLATEPATH \
> --capabilities CAPABILITY_NAMED_IAM \
> --parameters ParameterKey=AvailabilityZone,ParameterValue=$AZ \
> ParameterKey=AdminIngressLocation,ParameterValue=$INGRESS \
> ParameterKey=KeyName,ParameterValue=$KEYNAME

(Since $TEMPLATEPATH is an S3 URL, it goes to --template-url; --template-body expects the template contents themselves.)

This should return the stack ARN:

{"StackId": "arn:aws:cloudformation:us-east-

1:<accountID>:stack/k8s-demo/a8ec95d0-c47e-11e7-b1fb-50a686e4bb1e"}

An ARN is an Amazon Resource Name: a unique identifier that can be used within AWS.

Checking the values for my cluster

To see more information about my cluster, I can look at the CloudFormation stack like this:

abbyfull$ aws cloudformation describe-stacks --stack-name $STACK

This will return the values the stack was created with, and some current information.
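If you just want the stack outputs, you can filter with --query (a convenience I’m adding, not part of the original demo):

$ aws cloudformation describe-stacks --stack-name $STACK --query 'Stacks[0].Outputs' --output table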

I have a cluster created. Now what?

You can pull the kubeconfig down from your master node over SSH like this:

Run:

$ aws cloudformation describe-stacks --query 'Stacks[*].Outputs[?OutputKey == `GetKubeConfigCommand`].OutputValue' --output text --stack-name $STACK

And use this output to copy the kubeconfig through the bastion:

SSH_KEY="demo.pem"; scp -i $SSH_KEY -o ProxyCommand="ssh -i \"${SSH_KEY}\" ubuntu@52.90.105.146 nc %h %p" ubuntu@10.0.31.91:~/kubeconfig ./kubeconfig
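Once the kubeconfig is local, you can point kubectl at it via the standard KUBECONFIG environment variable (my addition, not from the original deck):

$ export KUBECONFIG=$(pwd)/kubeconfig
$ kubectl get nodes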

There are some tools available to help manage your K8s infrastructure. In this demo, we’re using kubectl: https://kubernetes.io/docs/user-guide/kubectl/

There are some other good options out there, like:

kubicorn: https://github.com/kris-nova/kubicorn
kubeadm: https://kubernetes.io/docs/setup/independent/install-kubeadm/

Or you can find a list of tools here: https://kubernetes.io/docs/tools/

Download and test kubectl

I installed kubectl with homebrew:

$ brew install kubectl

Next, test it against your cluster:

$ kubectl get nodes

NAME                  STATUS  ROLES   AGE  VERSION
ip-blah.ec2.internal  Ready   <none>  3h   v1.8.2

I probably don’t want a cluster with just one node. This gets our cluster token:

$ aws cloudformation describe-stacks --stack-name $STACK | grep -A 2 -B 2 JoinNodes

This returns the join command, with the token baked in:

$ kubeadm join --token=<token>

Next, run join on the new node

CLUSTERTOKEN=xxxxxx.xxxxxxxxxxxxxxxx
PRIVATEIP=10.0.0.0

$ kubeadm join --token=$CLUSTERTOKEN $PRIVATEIP
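To double-check that the join worked (a quick verification step, not in the original deck), list the nodes again from your kubectl machine; the new node should appear, first as NotReady while it bootstraps, then Ready:

$ kubectl get nodes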

You don’t have to join new nodes by hand every time, though. You can add capacity through the Auto Scaling group in AWS.
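For example (a sketch: the Auto Scaling group name below is a placeholder; find the real one in your stack’s resources):

$ aws autoscaling set-desired-capacity --auto-scaling-group-name <node-asg-name> --desired-capacity 3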

How about some content?

We probably want to actually install things. You can run applications on Kubernetes clusters a couple of different ways: you can install from helm (helm.sh), which is a package manager for Kubernetes, like this:

$ brew install kubernetes-helm
==> Downloading https://homebrew.bintray.com/bottles/kubernetes-helm-2.7.0.el_capitan.bottle.tar.gz
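With Helm 2 (which is what this bottle installs), you’d then initialize it against your cluster and install a chart. A minimal sketch; stable/mysql is just an example chart, not part of the original demo:

$ helm init
$ helm install stable/mysql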

Or, use a YAML file. Here’s a YAML file for an Nginx deployment:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a
      # unique name is generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

I can run my deployment like this:

$ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml

deployment "nginx-deployment" created

I can get more information by running:

$ kubectl describe deployment nginx-deployment

Check for running pods from my deployment

A pod is a group of containers with shared network/storage (similar to an ECS task). I can check for pods related to a deployment like this:

$ kubectl get pods -l app=nginx

For my Nginx example, it returns this (names abbreviated):

NAME                  READY  STATUS   RESTARTS  AGE
nginx-deployment-568  1/1    Running  0         13m
nginx-deployment-569  1/1    Running  0         13m

Scaling up and down

Earlier, we covered how to scale our underlying infrastructure with nodes or autoscaling groups. We can also scale our deployments!

Remember our YAML file? I can update the value of replicas to scale my deployment up or down. Then, I just reapply the deployment.

replicas: 2

$ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
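You can also scale imperatively, without touching the file (standard kubectl, not shown in the original deck):

$ kubectl scale deployment nginx-deployment --replicas=3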

Updating my content

I can update my content the same way: by changing the YAML file, and re-running my apply command:

$ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml

You’ll also need a Load Balancer

We can run the kubectl command for this:

$ kubectl expose --namespace=nginx deployment echoheaders --type=LoadBalancer --port=80 --target-port=8080 --name=echoheaders-public

Just like with non-containerized apps, Load Balancers help distribute traffic. In Kubernetes, a Service of type LoadBalancer distributes traffic across the pods backing that service.
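Once the service is up, you can grab the load balancer’s DNS name from the EXTERNAL-IP column (my addition):

$ kubectl get service echoheaders-public --namespace=nginx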

High Availability in Kubernetes

• Generally in AWS, the best practice is to run highly available apps. This means that your application is designed to keep working in the event of an Availability Zone or Region failure: if one AZ went down, your application would still function.
• This is not quite the same in Kubernetes: rather than run one cluster that spans multiple AZs, you run one cluster per AZ.
• You can learn more about high availability in Kubernetes here.
• You can manage multiple clusters in Kubernetes with something called “federation”.

Kubernetes and the master node

• An important difference between Kubernetes and ECS is the master node: a Kubernetes cluster has a master node, which hosts the control plane. It is responsible for deployments and updates, and for rescheduling work if a node is lost.

In case of master node emergency

• So what happens if the master node goes down?
• For AWS, you can use EC2 Auto Recovery.
• In a lot of cases, it’s not necessary to have a highly available master: as long as Auto Recovery can replace the node fast enough, the only impact on your cluster will be that you can’t deploy new versions or update the cluster until the master is back online.
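A hedged sketch of what that wiring looks like: a CloudWatch alarm on the instance’s system status check, with the EC2 recover action (the instance ID and region are placeholders):

$ aws cloudwatch put-metric-alarm --alarm-name recover-k8s-master \
    --namespace AWS/EC2 --metric-name StatusCheckFailed_System \
    --dimensions Name=InstanceId,Value=<master-instance-id> \
    --statistic Minimum --period 60 --evaluation-periods 2 \
    --threshold 0 --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:automate:us-east-1:ec2:recover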

Cluster setup with kops


In real life, it’s probably best to stick with tools. A popular one is kops, which is maintained by the Kubernetes community and used in production by companies like Ticketmaster.

Kops will help you out with things like service discovery, high availability, and provisioning.

Download kops

First, let’s download kops:

$ wget https://github.com/kubernetes/kops/releases/download/1.7.0/kops-darwin-amd64
$ chmod +x kops-darwin-amd64
$ mv kops-darwin-amd64 /usr/local/bin/kops
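Verify the install (a quick check I’m adding):

$ kops version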

Some kops specific setup

Kops is built on DNS, so we need some specific setup in AWS before we get rolling:

First, you’ll need a hosted zone in Route 53. This is something like kops.abby.com.

You can do this with the CLI (assuming you own the domain!):

$ aws route53 create-hosted-zone --name kops.abby.com --caller-reference 1

Next, I’ll need an S3 bucket to store cluster info. Create the bucket like this:

$ aws s3 mb s3://config.kops.abby.com

And then:

$ export KOPS_STATE_STORE=s3://config.kops.abby.com
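The kops docs also recommend turning on versioning for the state bucket, so you can roll back cluster configs (my addition, not in the original deck):

$ aws s3api put-bucket-versioning --bucket config.kops.abby.com --versioning-configuration Status=Enabled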

Create a cluster configuration with kops

To create the config:

$ kops create cluster --zones=us-east-1c useast1.kops.abby.com

To create the cluster resources:

$ kops update cluster useast1.kops.abby.com --yes
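Once the resources are up, you can confirm the cluster is healthy (a verification step I’m adding; kops reads state from KOPS_STATE_STORE):

$ kops validate cluster --name useast1.kops.abby.com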

So let’s recap.

• VPC with nodes in private subnets (only the ELB in public)
• Limit ports, access, and security groups
• For production workloads, run multiple clusters in different AZs for fault tolerance and high availability
• Kubernetes clusters can involve a fair amount of setup and maintenance: highly recommend taking advantage of tools for both setup (CloudFormation or Terraform) and updates/deployments (like kubectl, kubicorn, or kops)
• Kubernetes has a rich community: take advantage of it!