How containers helped a SaaS startup be developed and go live


Transcript of How containers helped a SaaS startup be developed and go live

Page 1: How containers helped a SaaS startup be developed and go live

HOW CONTAINERS HELPED A SAAS STARTUP BE DEVELOPED

INTRANETUM

Page 2: How containers helped a SaaS startup be developed and go live

intranetum

WHO AM I?

Ramon Navarro Bosch

CTO iskra.cat

CTO intranetum.com

FWT plone.org

Machine Learning - Python - Angular 2

Agile Test Driven Development

Always a sysadmin at heart

In love with Docker / k8s

Page 3: How containers helped a SaaS startup be developed and go live

intranetum

WHAT IS INTRANETUM?

BRAIN TO CLASSIFY

▸ Knowledge management

▸ Files, Notes, Links

▸ Deep learning by scope/user

▸ Auto classify information

▸ Search information

▸ Less time to find what you are looking for + discover knowledge inside the company

Page 4: How containers helped a SaaS startup be developed and go live

intranetum

THE PROBLEM

FAST, AGILE, TEST, COMPLEX

▸ We needed to develop a SaaS solution in 3 months (proof of concept)

▸ 1 December - 23 February (4YFN)

▸ A team of 2 developers

▸ Needs to be modular for evolution

▸ Needs to scale fast

▸ Initial architecture design showed 10 different components (40 by the end of February)

Page 5: How containers helped a SaaS startup be developed and go live

intranetum

DEMO INTRANETUM…

Page 6: How containers helped a SaaS startup be developed and go live

intranetum

DEMO DEPLOY I …

Page 9: How containers helped a SaaS startup be developed and go live

intranetum

CHOSEN OPS COMPONENTS

Page 10: How containers helped a SaaS startup be developed and go live

intranetum

COMPONENTS

DOCKER

▸ Generally adopted container solution

▸ Native support for Mac OS X + Linux + Windows

▸ Standardized image build process

▸ A service is not an application, it is a container

▸ Management of ports, volumes and build layers

▸ A defined repository of images
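
The deck shows Dockerfiles later; as a minimal sketch of the port/volume management named above (image name and host path are hypothetical, not from the talk):

# Build an image and run it with explicit port and volume mappings
docker build -t eu.gcr.io/XXXXXXXX/elasticsearch .
docker run -d -p 9200:9200 -v /srv/esdata:/usr/share/elasticsearch/data eu.gcr.io/XXXXXXXX/elasticsearch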

Page 11: How containers helped a SaaS startup be developed and go live

intranetum

COMPONENTS

KUBERNETES (K8S)

▸ Same deployment infrastructure for production, testing, development and stage

▸ Same deployment infrastructure for cloud and in-house

▸ Load balancer integrated

▸ Management of persistent disks

▸ Secret configuration management (sketched after this list)

▸ Internal network discovery (DNS)

▸ Jobs
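
The deck does not include a secret manifest; as a hedged sketch of how K8s handles secrets (the name and the base64 value are made up):

apiVersion: v1
kind: Secret
metadata:
  name: intranetum-keys    # hypothetical name
type: Opaque
data:
  zeo-password: c2VjcmV0   # base64("secret")

A container can then consume it as an environment variable:

env:
- name: ZEO_PASSWORD
  valueFrom:
    secretKeyRef:
      name: intranetum-keys
      key: zeo-password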

Page 12: How containers helped a SaaS startup be developed and go live

intranetum

COMPONENTS

GOOGLE CLOUD PLATFORM

▸ Clusters for stage / production

▸ HTTPS load balancer

▸ 100% integrated with k8s

▸ Full system monitoring

▸ Scaling fast

▸ Private Docker Registry (workflow sketched below)
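
A rough sketch of the workflow these bullets imply (cluster name, zone and project ID are placeholders; `gcloud docker -- push` was the registry push command of that era):

gcloud container clusters create stage --num-nodes=3 --zone=europe-west1-b
gcloud container clusters get-credentials stage --zone=europe-west1-b

# Push an image to the project's private registry
docker build -t eu.gcr.io/XXXXXXXX/elasticsearch .
gcloud docker -- push eu.gcr.io/XXXXXXXX/elasticsearch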

Page 13: How containers helped a SaaS startup be developed and go live

intranetum

COMPONENTS

JENKINS + GIT

▸ Continuous Integration

▸ Continuous Deployment

▸ Orchestration of versioning and stability

▸ Workflow of testing & deployment

▸ quality, Quality, QUALITY!

▸ testing, Testing, TESTING!

Page 14: How containers helped a SaaS startup be developed and go live

intranetum

COMPONENTS

METRICS (ELK)

▸ measure, Measure, MEASURE

▸ Continuous Metrics

▸ Centralized Log Service (sketched after this list)

▸ Nice diagrams to show VCs
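
The HIPO base image later in the deck installs python-logstash; a minimal sketch of shipping structured application logs to the central ELK stack (host name and port are assumptions):

import logging
import logstash

logger = logging.getLogger('intranetum')
logger.setLevel(logging.INFO)
# 'logstash' would resolve through the cluster's internal DNS
logger.addHandler(logstash.LogstashHandler('logstash', 5959, version=1))

logger.info('document classified', extra={'scope': 'demo', 'took_ms': 42})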

Page 15: How containers helped a SaaS startup be developed and go live

intranetum

WE STARTED…

Page 16: How containers helped a SaaS startup be developed and go live

intranetum

GROUPING

CLUSTER | GROUPS | CONTAINERS

STAGE

[Diagram: the STAGE cluster is divided into groups (GROUP1-GROUP4); each group has a ReplicationController (RC) running its containers (C1, C2, C3)]

Page 17: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

MAIN OPEN SOURCE COMPONENTS

▸ PLONE (STORAGE/CONTENT TYPES/SECURITY)

▸ ELASTIC SEARCH (SEARCH / LOGGING)

▸ ZODB (DATABASE)

▸ REDIS (CACHE)

▸ LDAP (USER/PERMISSION DB)

▸ RABBITMQ (QUEUE)

Page 18: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

MAIN OPEN SOURCE FRAMEWORKS

▸ ANGULAR2

▸ TENSORFLOW

▸ NLTK

▸ PROTOCOL BUFFERS

▸ PYRAMID + ASYNCIO

Page 19: How containers helped a SaaS startup be developed and go live

GROUP1 - CMS

Page 20: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

GROUP1 CMS - DEV

[Diagram: the GROUP1 CMS stack, locally in dev and in the test cluster / cloud: ZOPE clients backed by ELASTIC, ZEO and NFS, with SEMANTIC and GROUP2-GROUP4 alongside]

Page 21: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - DOCKERFILE

FROM elasticsearch:1.7

MAINTAINER Iskra

# Expose

EXPOSE 9200

EXPOSE 9300

# Cors enabled for testing

CMD ["elasticsearch", "-Des.cluster.name=intranetum", "-Des.http.cors.enabled=true"]

Page 22: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - LOCAL DEV/TEST DOCKER COMPOSE

elasticsearch:
  image: elasticsearch:1.7
  ports:
    - "9200:9200"
    - "9300:9300"
  volumes:
    - ./esconfig:/usr/share/elasticsearch/config
    - ./esdata:/usr/share/elasticsearch/data
  command: ["elasticsearch", "-Des.network.publish_host=localhost", "-Dhttp.cors.enabled=true", "-Dhttp.cors.allow-origin='*'"]
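
With that file in place, bringing the service up locally is standard docker-compose usage:

docker-compose up -d elasticsearch
curl http://localhost:9200/   # should answer with the cluster banner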

Page 23: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - STAGE GCP

kind: ReplicationController
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: masterelastic
        resources:
          limits:
            cpu: 0.250
        image: eu.gcr.io/XXXXXXXX/elasticsearch
        imagePullPolicy: Always
        ports:
        - containerPort: 9200
          name: masterelastic
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: elastic-data
      volumes:
      - name: elastic-data
        gcePersistentDisk:
          pdName: YYYYYYYYYYYYYY
          fsType: ext4

apiVersion: v1
kind: Service
metadata:
  name: serviceelastic
  labels:
    name: serviceelastic
spec:
  type: NodePort
  ports:
  - port: 9200
  selector:
    name: masterelastic

Page 24: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - STAGE GCP - SNAPSHOT (INCREMENTAL)

STOP PRODUCTION ELASTIC (volume unmounted - optimal)

gcloud compute disks snapshot prod-elastic-plone-disk --snapshot-names elastic-plone-snapshot

START PRODUCTION ELASTIC

STOP STAGE ELASTIC

gcloud compute disks delete stage-elastic-plone-disk

gcloud compute disks create stage-elastic-plone-disk --source-snapshot=elastic-plone-snapshot

START STAGE ELASTIC

Page 25: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - PROD GCP

kind: ReplicationController
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: masterelastic
        resources:
          limits:
            cpu: 1
        image: eu.gcr.io/XXXXXXXX/elasticsearch
        imagePullPolicy: Always
        ports:
        - containerPort: 9200
          name: masterelastic
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: elastic-data
      volumes:
      - name: elastic-data
        gcePersistentDisk:
          pdName: prod-elastic-plone-disk
          fsType: ext4

apiVersion: v1
kind: Service
metadata:
  name: serviceelastic
  labels:
    name: serviceelastic
spec:
  type: NodePort
  ports:
  - port: 9200
  selector:
    name: masterelastic

Page 26: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - PROD GCP - CLUSTER

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  labels:
    component: elasticsearch
    role: master
spec:
  selector:
    component: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
    role: client
spec:
  type: LoadBalancer
  selector:
    component: elasticsearch
    role: client
  ports:
  - name: http
    port: 9200
    protocol: TCP

apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch

Page 27: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - PROD GCP - CLUSTER

apiVersion: v1
kind: ReplicationController
metadata:
  name: es-master
  labels:
    component: elasticsearch
    role: master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es-master
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        source:
          emptyDir: {}

Page 28: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - PROD GCP - CLUSTER

apiVersion: v1
kind: ReplicationController
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es-data
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: NODE_MASTER
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        source:
          emptyDir: {}

Page 29: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - PROD GCP - CLUSTER

apiVersion: v1
kind: ReplicationController
metadata:
  name: es-client
  labels:
    component: elasticsearch
    role: client
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: client
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es-client
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:1.7.1-4
        env:
        - name: KUBERNETES_CA_CERTIFICATE_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "true"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: storage
        source:
          emptyDir: {}

Page 30: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ELASTICSEARCH - PROD GCP - CLUSTER

kubectl create -f service-account.yaml
kubectl create -f es-discovery-svc.yaml
kubectl create -f es-svc.yaml
kubectl create -f es-master-rc.yaml

kubectl create -f es-client-rc.yaml

kubectl create -f es-data-rc.yaml

kubectl scale --replicas=3 rc es-master
kubectl scale --replicas=2 rc es-client
kubectl scale --replicas=2 rc es-data

Page 31: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - DOCKERFILE PLONEBASE

# plonebase/Dockerfile
FROM python:2.7-slim
MAINTAINER Intranetum

ADD . /plone
WORKDIR /plone

ADD clau/key /root/.ssh/id_rsa

RUN set -x \
    && chmod 600 /root/.ssh/id_rsa \
    && ploneDeps=' \
        openssh-client \
        git-core \
        libxml2 \
        libxslt1.1 \
        libjpeg62 \
        curl \
        gcc \
        vim \
        libbz2-dev \
        libc6-dev \
        libncurses-dev \
        libreadline-dev \
        libsqlite3-dev \
        libssl-dev \
        libxslt-dev \
        libxml2-dev \
        libjpeg-dev \
        make \
        xz-utils \
        zlib1g-dev \
        build-essential \
    ' \
    && apt-get update && apt-get install -y $ploneDeps --no-install-recommends

RUN ssh-keyscan -H XXXXXXXXXXX >> /root/.ssh/known_hosts \
    && python bootstrap.py && ./bin/buildout -v \
    && find /plone \
        \( -type d -a -name test -o -name tests \) \
        -o \( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
        -exec rm -rf '{}' + \
    && chown -R www-data:www-data /plone/* \
    && chown www-data:www-data /plone/.installed.cfg \
    && chown www-data:www-data /plone/.mr.developer.cfg

USER www-data

# Configure and Run
ENTRYPOINT ["/plone/docker-entrypoint.sh"]
CMD ["/plone/bin/instance", "fg"]

Page 32: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - DOCKERFILE ZEOCLIENT

FROM eu.gcr.io/XXXXXXXXXX/plonebase
MAINTAINER Intranetum

RUN rm -rf /plone/src/*

# Bundle app source
ADD . /plone

WORKDIR /plone

RUN set -x \
    && rm -rf /plone/libsrc/* \
    && chown www-data /plone/.mr.developer.cfg \
    && ./bin/buildout -v -c intranetum.cfg \
    && chown -R www-data /plone

# Configure and Run
ENTRYPOINT ["/plone/docker-entrypoint.sh"]
CMD ["/plone/bin/instance", "fg"]

Page 33: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - ENTRYPOINT.SH

#!/bin/bash
set -e

if [ -n "$ZEOSERVER_PORT_8100_TCP_ADDR" ]; then
    if [ -z "$ZEO_HOST" ]; then
        ZEO_HOST=$ZEOSERVER_PORT_8100_TCP_ADDR
    else
        echo >&2 'warning: both ZEO_HOST and ZEOSERVER_PORT_8100_TCP_ADDR found'
        echo >&2 "  Connecting to ZEO_HOST ($ZEO_HOST)"
        echo >&2 '  instead of the linked zeo container'
    fi
fi
if [ -z "$ZEO_HOST" ]; then
    echo >&2 'error: missing ZEO_HOST'
    echo >&2 '  Did you forget to --link some_zeo_container:zeoserver or set an external db'
    echo >&2 '  with -e ZEO_HOST=hostname'
    exit 1
fi
if [ -n "$ELASTICSEARCH_PORT_9200_TCP_ADDR" ]; then
    if [ -z "$ELASTIC_HOSTS" ]; then
        ELASTIC_HOSTS=$ELASTICSEARCH_PORT_9200_TCP_ADDR
    else
        echo >&2 'warning: both ELASTIC_HOSTS and ELASTICSEARCH_PORT_9200_TCP_ADDR found'
        echo >&2 "  Connecting to ELASTIC_HOSTS ($ELASTIC_HOSTS)"
        echo >&2 '  instead of the linked elastic container'
    fi
fi

# …

if [ -z "$SHAREDBLOB" ]; then
    echo >&2 'warning: no SHAREDBLOB set: default off'
fi

set_config() {
    echo "SET VAR"
    echo $1
    echo $2
    key="$1"
    value="$2"
    sed -i -re 's!('"${key}"'\s+)[^=]*$!\1'"${value}"'!' /plone/parts/instance/etc/zope.conf
}

set_config_section() {
    echo "SET VAR"
    echo $1
    echo $2
    echo $3
    section="$1"
    key="$2"
    value="$3"
    sed -i -re '/<'"${section}"'>/,/<\/'"${section}"'>/ s!('"${key}"'\s+)[^=]*$!\1'"${value}"'!' /plone/parts/instance/etc/zope.conf
}

echo "SET CONFIG"
set_config_section zeoclient server $ZEO_HOST:$ZEO_PORT
set_config_section zeoclient shared-blob-dir $SHAREDBLOB
set_config elasticsearch-hosts $ELASTIC_HOSTS

echo "START ZOPE"

exec "$@"

Page 34: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - INTRANETUM.CFG

[buildout]
extends = buildout.cfg
versions = versions
always-checkout = true
parts = instance

eggs += XXXX

auto-checkout +=
    plone.rest
    collective.elasticsearch
    …

[instance]
recipe = plone.recipe.zope2instance
effective-user = www-data
http-address = 8080
zeo-address = $ZEO_ADDR:$ZEO_PORT
blob-storage = ${buildout:directory}/var/blobstorage/beta
zeo-client = on
shared-blob = $SHAREDBLOB
eggs = ${buildout:eggs}
environment-vars =
    zope_i18n_compile_mo_files true
zope-conf-additional =
    <product-config intranetum>
        elasticsearch-hosts $ELASTIC_HOSTS
    </product-config>

Page 35: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - DOCKERFILE ZEO

FROM eu.gcr.io/XXXXXXXXXX/zeoserver
MAINTAINER Intranetum

ADD . /plone
WORKDIR /plone

RUN set -x \
    && chown www-data /plone/.mr.developer.cfg \
    && rm -rf /plone/libsrc/* \
    && ./bin/buildout -v -c intranetum.cfg \
    && chown -R www-data /plone

# Expose
EXPOSE 8100
VOLUME /plone/var/filestorage
VOLUME /plone/var/blobstorage

# Configure and Run
ENTRYPOINT ["/plone/docker-entrypoint.sh"]
CMD ["/plone/bin/zeo", "fg"]

[buildout]
parts = zeo
always-checkout = true
extends = buildout.cfg
auto-checkout +=
    plone.rest
    collective.elasticsearch
    …

[zeo]
recipe = plone.recipe.zeoserver
zeo-address = 8100
effective-user = www-data
eggs = ${buildout:eggs}
file-storage = filestorage/beta/Data.fs
blob-storage = blobstorage/beta

Page 36: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - STAGE YAML

apiVersion: v1
kind: ReplicationController
metadata:
  name: masterzeoclient
  labels:
    name: masterzeoclient
spec:
  replicas: 2
  selector:
    name: masterzeoclient
  template:
    metadata:
      labels:
        name: masterzeoclient
    spec:
      containers:
      - image: eu.gcr.io/XXXXXXXXXXXX/zeoclient
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 0.25
        name: masterzeoclient
        env:
        - name: ELASTIC_HOSTS
          value: "serviceelastic"
        - name: ZEO_HOST
          value: "servicezeo"
        # …
        - name: SHAREDBLOB
          value: "off"
        ports:
        - containerPort: 8080
          name: masterzeoclient

apiVersion: v1
kind: ReplicationController
# …
spec:
  replicas: 1
  selector:
    name: masterzeo
  template:
    metadata:
      labels:
        name: masterzeo
    spec:
      containers:
      - name: masterzeo
        resources:
          limits:
            cpu: 0.4
        image: eu.gcr.io/XXXXXXXXXXXX/zeoserver
        imagePullPolicy: Always
        ports:
        - containerPort: 8100
          name: masterzeo
        volumeMounts:
        - mountPath: /plone/var/filestorage
          name: zeo-filestorage
        - mountPath: /plone/var/blobstorage
          name: zeo-blobstorage
      volumes:
      - name: zeo-filestorage
        gcePersistentDisk:
          pdName: XXXXXXXXXXXX
          fsType: ext4
      - name: zeo-blobstorage
        gcePersistentDisk:
          pdName: zeo-blobstorage-disk
          fsType: ext4

Page 37: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - STAGE YAML

apiVersion: v1
kind: Service
metadata:
  labels:
    name: servicezeoclient
  name: servicezeoclient
spec:
  type: LoadBalancer
  ports:
  # The port that this service should serve on.
  - port: 80
    targetPort: 8080
    protocol: TCP
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    name: masterzeoclient

apiVersion: v1
kind: Service
metadata:
  name: servicezeo
  labels:
    name: servicezeo
spec:
  type: NodePort
  ports:
  - port: 8100
  selector:
    name: masterzeo

Page 38: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - PRODUCTION YAML

apiVersion: v1
kind: ReplicationController
# …
spec:
  replicas: 3
  selector:
    name: masterzeoclient
  template:
    metadata:
      labels:
        name: masterzeoclient
    spec:
      containers:
      - image: eu.gcr.io/XXXXXXXXXXX/zeoclient
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 0.8
        name: masterzeoclient
        env:
        - name: ELASTIC_HOSTS
          value: "serviceelastic"
        - name: ZEO_HOST
          value: "servicezeo"
        - name: SHAREDBLOB
          value: "on"
        ports:
        - containerPort: 8080
          name: masterzeoclient
        volumeMounts:
        - mountPath: /plone/var/blobstorage
          name: nfs-zeo
      volumes:
      - name: nfs-zeo
        persistentVolumeClaim:
          claimName: nfs-zeo

apiVersion: v1
kind: ReplicationController
# …
spec:
  replicas: 1
  selector:
    name: masterzeo
  template:
    metadata:
      labels:
        name: masterzeo
    spec:
      containers:
      - name: masterzeo
        resources:
          limits:
            cpu: 0.5
        image: eu.gcr.io/XXXXXXXXXXXXXXX/zeoserver
        imagePullPolicy: Always
        ports:
        - containerPort: 8100
          name: masterzeo
        volumeMounts:
        - mountPath: /plone/var/filestorage
          name: zeo-filestorage
        - mountPath: /plone/var/blobstorage
          name: nfs-zeo
      volumes:
      - name: zeo-filestorage
        gcePersistentDisk:
          pdName: zeo-filestorage-disk-prod
          fsType: ext4
      - name: nfs-zeo
        persistentVolumeClaim:
          claimName: nfs-zeo

Page 39: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

ZOPE/PLONE - PROD GCP - UPDATE

kubectl rolling-update -f zeo.yaml
kubectl rolling-update -f zeoclient.yaml

kubectl scale --replicas=3 rc zeoclient

▸ SNAPSHOTS

▸ NFS

▸ https://goo.gl/Zmub08

▸ HEALTH CHECKS (probe sketch below)
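
The deck only names health checks; a hedged sketch of probes that could slot into the zeoclient container spec above (paths and timings are assumptions, not from the talk):

        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60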

Page 40: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

TESTING - GROUP 1

▸ LOCAL TESTING

▸ docker-compose Elasticsearch

▸ GROUP CONFIG ENV vars to TESTING ENV

▸ COMMIT ON CODE

▸ JENKINS INTEGRATION testing on NATIVE buildout against TESTING ENV with SETTING JOBS

▸ JENKINS BUILD Docker Images and PUSH

▸ JENKINS DEPLOY TESTING - ACCEPTANCE TESTS

▸ JENKINS DEPLOY STAGE

▸ MANUAL TRIGGER to DEPLOY to PRODUCTION (pipeline sketched below)
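
As referenced above, a rough shell sketch of such a Jenkins job chain (image names, tags and the exact job wiring are assumptions):

# 1. Integration tests on the native buildout against the testing env
./bin/buildout -c intranetum.cfg && ./bin/test

# 2. Build the Docker images and push them to the registry
docker build -t eu.gcr.io/XXXXXXXX/zeoclient:$BUILD_NUMBER .
gcloud docker -- push eu.gcr.io/XXXXXXXX/zeoclient:$BUILD_NUMBER

# 3. Deploy to the testing cluster, run acceptance tests, then promote to stage
kubectl rolling-update masterzeoclient --image=eu.gcr.io/XXXXXXXX/zeoclient:$BUILD_NUMBER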

Page 41: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

JOBS

▸ UPGRADE STEPS ON STAGE / PRODUCTION

▸ MAINTENANCE JOBS

# upgrade-b16.py
def upgrade(site):
    site.doNothing()

if __name__ == "__main__":
    output = upgrade(app.Plone)
    logger.info(output)

# upgrade-b16.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: upgrade-b16
spec:
  template:
    metadata:
      name: upgrade-b16
    spec:
      containers:
      - name: plone
        image: eu.gcr.io/XXXXXXXX/upgrade-plone
        command: ["/plone/bin/instance", "/plone/upgrade-b16.py"]
      restartPolicy: Never

Page 42: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

JOBS

# upgrade-plone/Dockerfile
FROM eu.gcr.io/XXXXXXXXXXXX/zeoclient
MAINTAINER Intranetum
ADD . /plone

kubectl create -f ./upgrade-b16.yaml
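
Progress of the job can then be followed with standard kubectl (not shown in the deck):

kubectl get jobs
kubectl logs $(kubectl get pods --selector=job-name=upgrade-b16 -o name | head -n 1)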

Page 43: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

MAIN CLOSED COMPONENTS

▸ FRONT END (NODE + ANGULAR 2)

▸ FRONT END MIDDLEWARE

▸ HORUS (AUTHZ AUTHN)

▸ 3 components PY 3.5 ASYNCIO

▸ BEATS (MACHINE LEARNING ENGINE)

▸ >15 HIPOS (ASYNC components PY3.5/GO/ERLANG)

Page 44: How containers helped a SaaS startup be developed and go live

intranetum

DEVELOPMENT

DOCKER FROM 0 - HIPO

FROM python:3.5.1

RUN apt-get update -y \

&& apt-get install -y netcat \

&& apt-get autoremove -y \

&& rm -rf /var/lib/apt/lists/*

RUN pip3.5 install aioamqp python-logstash

RUN pip3.5 install https://pypi.python.org/packages/source/p/protobuf/protobuf-3.0.0b2.tar.gz

COPY docker-entrypoint.sh /

COPY run.py /

COPY api_pb2.py /

ENTRYPOINT ["/docker-entrypoint.sh"]

CMD ["python3", "/run.py"]
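
run.py itself is not in the deck; a minimal sketch of what an asyncio AMQP consumer built on aioamqp could look like (queue and host names are assumptions):

import asyncio
import aioamqp

async def handle(channel, body, envelope, properties):
    # Decode the protobuf payload (api_pb2) and do the actual work here
    await channel.basic_client_ack(delivery_tag=envelope.delivery_tag)

async def main():
    # 'rabbitmq' resolves through the k8s internal DNS
    transport, protocol = await aioamqp.connect(host='rabbitmq')
    channel = await protocol.channel()
    await channel.queue_declare(queue_name='hipo')
    await channel.basic_consume(handle, queue_name='hipo')

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.run_forever()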

Page 45: How containers helped a SaaS startup be developed and go live

intranetum

DEMO DEPLOY II …

Page 46: How containers helped a SaaS startup be developed and go live

intranetum

SO…

CONCLUSIONS

▸ Split arch in groups of components to test, deploy and develop

▸ K8s is much more powerful than docker-compose / swarm (rolling updates / secrets / scaling / jobs)

▸ Weigh a self-run service on K8S against a SaaS offering (not everything needs to be in containers)

▸ Docker from moment 0 (easier than adding it later)

▸ Tests in Docker, dev outside Docker (find a way to isolate components and connect to the testing/stage cluster)

▸ Proxy NPM / Debian packages / PyPI / … - CI/CD is hard

Page 47: How containers helped a SaaS startup be developed and go live

intranetum

I'M GETTING HUNGRY…

FUTURE

▸ Ansible in Docker? - We use buildout

▸ Jenkins jobs to rollback to version X

▸ K8S jobs to manage backups/testing data set

▸ Docker to build Jenkins to build Docker

▸ Elastic search cluster deployment

▸ Remove keys from docker

▸ Local k8s hardware (Rancher)

▸ Upgrade jobs testing Stage

▸ TESTS on k8s cluster (snapshot)

Page 49: How containers helped a SaaS startup be developed and go live

intranetum

…REALLY HUNGRY

OPEN QUESTIONS

▸ Container data manager VS SaaS data manager

▸ Volumes: ZFS/NFS snapshots

▸ Performance of computation in containers

▸ Persistent Disks vs local SSD vs Buckets vs RAM disk

▸ Monitoring ?

▸ More questions ?