
  • Digital Service Assistant - Hackoff V3.0 December 9, 2020

    Confidential. Not to be copied, distributed, or reproduced without prior approval.

  • The Problem Statement

    GE Healthcare has a class of products known as the Edison Healthlink, an on-premise cloud system that hosts various healthcare applications. To conduct service operations on the device, the field engineer often runs a series of complex commands, analyzes the output, and determines the course of diagnostic action manually. This is cumbersome, time-consuming, and prone to error.

    Challenge:

    You are required to develop a digital service assistant that simplifies the process of entering commands into the system and intelligently decides the course of action/workflow. To aid your development, you will be provided with sections of the service manual (PDF). The tool should be able to generate workflows based on the PDF. The critical pain points in the process include (but are not limited to):

    1. Execute the commands by parsing the human-readable instructions from the PDF.

    2. Analyze the output of the commands.

    3. Decide the course of action at every step by comparing the command outputs with those given in the PDF.

    Suggestions regarding document templatization for improving the efficiency of the solution are also welcome.


  • Review Points

    Your submission will be evaluated by entering one of the cases mentioned in the service manual provided to you. The manner in which your system responds and guides the user will be critical in determining the efficacy of the proposed solution. The review points will largely depend on how the solution tackles the individual pain points, auto-populates the system data, and handles conditional workflows. You can set up a Linux terminal and pre-orchestrate your output/workflow for the demonstration, but it should read from the PDF and decide the course of action in real time.


  • The following pages are the test PDF.

  • Service Instructions to patch Flowmanager Deployment

    Note: The following commands should be run from the SW Install VM or from any of the K8s Master Nodes with access to the "kubectl" command line tool.

    Pre-Step:

    Check if the flowmanager ingress objects exist by running the following command:

    kubectl get ingress -n edison-system | grep flowmanager

    Expected Result:

    Two ingress objects should be listed, namely:

    eis-flowmanager-server-ingress
    eis-flowmanager-ui-ingress

    Procedure:

    Patch the two ingress objects:

    kubectl patch -n edison-system -p '{"metadata":{"annotations":{"plugins.konghq.com": "jwt"}}}' ingress `kubectl get ingress -n edison-system | grep flowmanager-server | awk '{print $1}'`

    kubectl patch -n edison-system -p '{"metadata":{"annotations":{"plugins.konghq.com": "jwt"}}}' ingress `kubectl get ingress -n edison-system | grep flowmanager-ui | awk '{print $1}'`

    Expected Result:

    The above two commands should report that the ingress resources were successfully patched.

    Post-Step:

    Check that the ingress objects were updated properly:

    kubectl describe ingress -n edison-system `kubectl get ingress -n edison-system | grep flowmanager-server | awk '{print $1}'`

    kubectl describe ingress -n edison-system `kubectl get ingress -n edison-system | grep flowmanager-ui | awk '{print $1}'`

    Expected Result:

    For both ingress objects, the following entry should be present under "Annotations:":

    plugins.konghq.com: jwt

    Example: (screenshot in the original PDF)
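The Pre-Step, Procedure, and Post-Step above can be scripted end to end. A minimal sketch, assuming kubectl access to the cluster; with DRY_RUN=1 (the default) it only prints the commands, so it is safe to run anywhere:

```shell
#!/bin/sh
# Sketch: patch both flowmanager ingresses with the Kong jwt plugin
# annotation and verify the result. DRY_RUN=1 (default) prints the
# commands instead of executing them.
NS=edison-system
PATCH='{"metadata":{"annotations":{"plugins.konghq.com": "jwt"}}}'
for part in server ui; do
  ing="eis-flowmanager-$part-ingress"   # names from the Expected Result above
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "kubectl patch -n $NS -p '$PATCH' ingress $ing"
    echo "kubectl describe ingress -n $NS $ing | grep 'plugins.konghq.com: jwt'"
  else
    kubectl patch -n "$NS" -p "$PATCH" ingress "$ing"
    # Post-step check: fail loudly if the annotation is missing
    kubectl describe ingress -n "$NS" "$ing" | grep -q 'plugins.konghq.com: jwt' \
      || { echo "annotation missing on $ing" >&2; exit 1; }
  fi
done
```

Set DRY_RUN=0 on the SW Install VM to actually apply and verify the patches.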

  • Configuration changes to be done after install completion and prior to backup configuration

    Sl No 1: Set Restic Timeout for backups (through the Kubernetes UI or through the kubectl command)

    Note: Three different options are given to perform this; the user can choose any one.

    Option 1 (Kubernetes UI):

    Step 1: Log in to Kubernetes and select the namespace edison-system.

    Step 2: Search for velero. The UI will look like the SearchUI screenshot in the original PDF.

    Step 3: Click on Edit deployment and add the Restic timeout as shown in the screenshot.

    The Restic timeout setting can also be updated using kubectl edit or kubectl patch.

    Option 2 (kubectl patch):

    Step 1: Log in to the cp-swinstall VM.

    Step 2: Execute the below command to add restic-timeout to the args:

    kubectl patch deployment ehl-data-backup-velero --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/1", "value": "--restic-timeout=2m" }]' -n edison-system

    Step 3: View the patched deployment by executing:

    kubectl get deployment ehl-data-backup-velero --output yaml -n edison-system

    and make sure the change is reflected.
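Instead of scanning the full YAML by eye, a jsonpath query can confirm the arg directly. A sketch, assuming kubectl access on the cp-swinstall VM (the guard lets it no-op on machines without kubectl):

```shell
#!/bin/sh
# Sketch: confirm --restic-timeout landed in the container args via jsonpath.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get deployment ehl-data-backup-velero -n edison-system \
    -o jsonpath='{.spec.template.spec.containers[0].args}' \
    | grep -- '--restic-timeout' \
    || echo "restic-timeout not found in args (or no cluster access)" >&2
else
  echo "kubectl not available here; run this on the cp-swinstall VM"
fi
```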

    Option 3 (kubectl edit):

    Step 1: Log in to the cp-swinstall VM.

    Step 2: Enter the below command:

    kubectl edit deployment/ehl-data-backup-velero -n edison-system

    Step 3: Add the restic-timeout as highlighted in the screenshot in the original PDF, then save the changes.

    (For the UI option above, finish by clicking the Update button.)

    Sl No 2: Set the GOGC parameter

    Note: Three different options are given to perform this; the user can choose any one.

    Option 1 (Kubernetes UI):

    Step 1: Log in to Kubernetes and select the namespace edison-system.

    Step 2: Search for restic.

    The GOGC update can also be done through either kubectl patch or kubectl edit.

    Option 2 (kubectl patch):

    Step 1: Log in to the cp-swinstall VM.

    Step 2: Execute the below command:

    kubectl patch daemonset restic -p '{"spec":{"template":{"spec":{"containers":[{"name":"velero","env":[{"name":"GOGC","value":"20"}]}]}}}}' -n edison-system

    View the patched content by executing:

    kubectl get daemonset restic --output yaml -n edison-system

    and make sure the content is updated with the GOGC env variable.
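As with the Restic timeout, the GOGC value can be read back directly with a jsonpath filter rather than by inspecting the full YAML. A sketch, assuming kubectl access; the guard lets it no-op where kubectl is absent:

```shell
#!/bin/sh
# Sketch: print the GOGC env value from the restic daemonset; an empty
# result means the env variable was not added.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get daemonset restic -n edison-system \
    -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="GOGC")].value}' \
    || echo "could not read GOGC (no cluster access?)" >&2
else
  echo "kubectl not available here; run this on the cp-swinstall VM"
fi
```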

    Option 3 (kubectl edit):

    Step 1: Log in to the cp-swinstall VM.

    Step 2: Enter the below command:

    kubectl edit daemonset restic -n edison-system

    Step 3: Add the GOGC key/value under the env variables as shown in the figure in the original PDF, then save the content.

    (For the UI option above: Step 3: Click on View/Edit Yaml; Step 4: Add the GOGC key/value; Step 5: Click on the Update button.)

    Sl No 3: Delete Security schedules

    Note: This can be performed only through the kubectl command.

    Step 1: Log in to the cp-swinstall VM.

    Step 2: Execute the below 4 commands to delete the security-related schedules:

    Delete security postgres schedule

    1. kubectl delete schedule security-postgres-daily -n edison-system

    Delete wso2 schedule

    2. kubectl delete schedule wso2is-daily -n edison-system

    delete ejbca schedule

    3. kubectl delete schedule ejbca-daily -n edison-system

    delete certificatemgmnt schedule

    4. kubectl delete schedule cert-mgmt-daily -n edison-system
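The four deletions above follow the same pattern, so they can be issued in one loop. A sketch; with DRY_RUN=1 (the default) it only prints the commands, so it is safe to run without a cluster:

```shell
#!/bin/sh
# Sketch: delete the four security-related schedules in one pass.
# DRY_RUN=1 (default) prints the commands instead of executing them.
NS=edison-system
for s in security-postgres-daily wso2is-daily ejbca-daily cert-mgmt-daily; do
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "kubectl delete schedule $s -n $NS"
  else
    kubectl delete schedule "$s" -n "$NS"
  fi
done
```

Set DRY_RUN=0 on the cp-swinstall VM to actually delete the schedules.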

    Changes to be done on change of storage location

  • 1 Add/Edit storage location

    Note: This can be performed only through the kubectl command.

    During a storage location change, the below procedure needs to be followed for successful backups.

    After changing the storage location in the UI, log in to the cp-swinstall VM and run the below command to delete the existing backups:

    kubectl delete backups -n edison-system --all

    Note:

    Network latency between the Minio VM and the EHL cluster should be low; preferably both should be in the same LAN.

    After adding the storage location in the UI, it might take ~10 minutes for the repo to get created; until then, the kubectl get resticrepositories.velero.io -n edison-system command will return a "no resources found" error.
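Since the repo can take ~10 minutes to appear, the wait can be automated instead of re-running the command by hand. A sketch, assuming kubectl access; the tries/sleep variables are illustrative knobs, and the guards make the function return immediately on machines without cluster access:

```shell
#!/bin/sh
# Sketch: poll until the restic repository exists, instead of manually
# retrying "kubectl get resticrepositories.velero.io".
wait_for_repo() {
  command -v kubectl >/dev/null 2>&1 || { echo "kubectl not available here"; return 0; }
  kubectl get ns >/dev/null 2>&1 || { echo "no cluster access from this machine"; return 0; }
  tries=${REPO_WAIT_TRIES:-24}          # 24 x 30s ~= 12 minutes
  i=0
  while [ "$i" -lt "$tries" ]; do
    if kubectl get resticrepositories.velero.io -n edison-system 2>/dev/null | grep -q .; then
      echo "restic repository is ready"
      return 0
    fi
    i=$((i + 1))
    sleep "${REPO_WAIT_SLEEP:-30}"
  done
  echo "restic repository still not created after $tries attempts" >&2
  return 1
}
wait_for_repo
```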

    Modify Maintenance frequency for the repository

    Run the below command to find out the resticrepository name (this might take ~5 minutes after the storage location is added in the UI):

    kubectl get resticrepositories.velero.io -n edison-system

    The output will look like the example shown in the original PDF.

    Then edit the repository configuration by executing the below command (the name suffix, -l2c7d here, is system-generated and will differ):

    kubectl edit resticrepositories.velero.io edison-system-default-l2c7d -n edison-system

    Change maintenanceFrequency to 72h0m0s.

    Note: Change lastMaintenanceTime so that the next prune runs at 12:07:00 AM local time. The time mentioned here is in UTC; adjust accordingly. For example, if the system is in Central Time (CST, GMT-6), set the maintenance time to 06:07:00z to run it at 12:07:00 AM local time.
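The UTC conversion can be done with date rather than by hand. A sketch assuming GNU date with tzdata installed; America/Chicago is used here as the IANA zone name for US Central Time (CST, GMT-6, in December):

```shell
# Sketch: find the UTC wall-clock value corresponding to 12:07:00 AM local
# time, for use in lastMaintenanceTime. Assumes GNU date and tzdata.
date -u -d 'TZ="America/Chicago" 2020-12-10 00:07:00' '+%H:%M:%Sz'
# On a CST date this prints 06:07:00z, matching the example above.
```

Substitute your own zone name and date to get the value for other locations.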


  • Restart the velero pod by scaling the deployment down and then up using the below commands:

    kubectl scale deployment ehl-data-backup-velero --replicas=0 -n edison-system

    kubectl scale deployment ehl-data-backup-velero --replicas=1 -n edison-system

    Changes to be done prior to triggering a restore operation

    1 Set Restic Timeout for Restore

    Note: Three different options are given to perform this; the user can choose any one.

    Option 1 (Kubernetes UI):

    The below procedure needs to be followed before a restore operation.

    Step 1: Log in to Kubernetes and select the namespace edison-system.

    Step 2: Search for velero. The UI will look like the SearchUI screenshot in the original PDF.

    Step 3: Click on Edit deployment and change the Restic timeout to 5h as shown in the screenshot.

    The Restic timeout setting can also be updated using kubectl edit or kubectl patch.

    Option 2 (kubectl patch):

    Execute the below command if restic-timeout is already present in the deployment:

    kubectl patch deployment ehl-data-backup-velero --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args/1", "value": "--restic-timeout=5h" }]' -n edison-system

    If restic-timeout is not present in the deployment, use the below command to add it:

    kubectl patch deployment ehl-data-backup-velero --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/1", "value": "--restic-timeout=5h" }]' -n edison-system
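The choice between "replace" and "add" depends on whether --restic-timeout already appears in the args, which can be decided automatically. A sketch; pick_op is a hypothetical helper, and the kubectl usage shown in the comment assumes cluster access from the cp-swinstall VM:

```shell
#!/bin/sh
# pick_op (hypothetical helper): choose the JSON-patch op based on whether
# --restic-timeout already appears in the container args string.
pick_op() {
  case "$1" in
    *--restic-timeout*) echo replace ;;
    *)                  echo add ;;
  esac
}

# Intended usage against a live cluster (assumes kubectl access):
#   args=$(kubectl get deployment ehl-data-backup-velero -n edison-system \
#           -o jsonpath='{.spec.template.spec.containers[0].args}')
#   op=$(pick_op "$args")
#   kubectl patch deployment ehl-data-backup-velero -n edison-system --type='json' \
#     -p="[{\"op\": \"$op\", \"path\": \"/spec/template/spec/containers/0/args/1\", \"value\": \"--restic-timeout=5h\"}]"

pick_op '["server","--restic-timeout=2m"]'   # prints: replace
pick_op '["server"]'                         # prints: add
```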

    Option 3 (kubectl edit):

    Step 1: Log in to the cp-swinstall VM.

    Step 2: Enter the below command:

    kubectl edit deployment/ehl-data-backup-velero -n edison-system

    Step 3: Add the restic-timeout as highlighted in the screenshot in the original PDF, then save the changes.

    (For the UI option above, finish by clicking the Update button.)

    Changes to be done after a restore operation

    Note: The Restic timeout should be reset to 2 minutes after the restore operation. Since the timeout was set to 5 hours before the restore, a backup operation may keep retrying for 5 hours, which could leave the system unstable for the entire backup interval.

    1 Set Restic Timeout

    Step 1: Log in to the cp-swinstall VM.

    Step 2: Execute the below command to set the Restic timeout back to 2m:

    kubectl patch deployment ehl-data-backup-velero --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args/1", "value": "--restic-timeout=2m" }]' -n edison-system