
TIBCO Software Inc.

Global Headquarters

3307 Hillview Avenue

Palo Alto, CA 94304

Tel: +1 650-846-1000

Toll Free: 1 800-420-8450

Fax: +1 650-846-1005

www.tibco.com

TIBCO fuels digital business by enabling better decisions and faster, smarter actions through the TIBCO Connected Intelligence Cloud. From APIs and systems to devices and people, we interconnect everything, capture data in real time wherever it is, and augment the intelligence of your business through analytical insights. Thousands of customers around the globe rely on us to build compelling experiences, energize operations, and propel innovation. Learn how TIBCO makes digital smarter at www.tibco.com.

TIBCO® Messaging on AKS

This document describes how to run TIBCO® Messaging components in an Azure Kubernetes Service (AKS) environment.

Version 1.0 May 2019 Initial Document


Copyright Notice

COPYRIGHT © 2019 TIBCO Software Inc. All rights reserved.

Trademarks

TIBCO, the TIBCO logo, TIBCO Enterprise Message Service, TIBCO FTL, Rendezvous, and SmartSockets are either registered trademarks or trademarks of TIBCO Software Inc. in the United States and/or other countries. All other product and company names and marks mentioned in this document are the property of their respective owners and are mentioned for identification purposes only.

Content Warranty

The information in this document is subject to change without notice. THIS DOCUMENT IS PROVIDED "AS IS" AND TIBCO MAKES NO WARRANTY, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING BUT NOT LIMITED TO ALL WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. TIBCO Software Inc. shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance or use of this material.

For more information, please contact:

TIBCO Software Inc. 3303 Hillview Avenue Palo Alto, CA 94304 USA


Table of Contents

1 Overview
  1.1 Azure Architecture
  1.2 Supported Versions
  1.3 Excluded TIBCO EMS Features
  1.4 Prerequisites
  1.5 Prepare Local Environment
  1.6 Prepare Preliminary Azure Account and Kubernetes Configuration
    1.6.1 General (Required)
    1.6.2 Configure the Kubernetes Dashboard (Optional)
2 Fault Tolerance and Shared Folder Setup
  2.1 Shared Storage
  2.2 Create the ASA and File System
3 Azure AKS Setup
  3.1 Create a New Azure Kubernetes Service (AKS)
  3.2 Configuring Kubectl to Connect to Azure Kubernetes Service
    3.2.1 Configure Kubectl to Connect to AKS
    3.2.2 Create a New Namespace
  3.3 Create a Kubernetes Secret to Access the Azure Storage Account
  3.4 Configure the Azure Container Registry
    3.4.1 Create the Azure Container Registry
    3.4.2 Create a Kubernetes Secret to Access the Individual Azure Container Registry
4 Configuring FTL
  4.1 Creating the Docker Image
  4.2 Push the FTL Docker Image to ACR
  4.3 Setting up FTL in Kubernetes
    4.3.1 Configuring FTL for Kubernetes
    4.3.2 Applying FTL in Kubernetes
    4.3.3 Stopping, Deleting, or Accessing the FTL Servers
  4.4 Configuring a New FTL Application for EMS
5 The EMS Docker Image
  5.1 Creating the Base Docker Image
  5.2 Extending the Base EMS Docker Image
  5.3 Hosting the Docker Image
  5.4 EMS Server Template
    5.4.1 TIBEMSD Template
    5.4.2 Alternate TIBEMSD Template
    5.4.3 Creating a Deployment and Service
    5.4.4 Stopping or Deleting an EMS Server
    5.4.5 EMS Server Configuration
    5.4.6 Connecting to the EMS Server Container via Kubectl
  5.5 Central Administration Server Template
  5.6 Modifying the Connection Factories in the EMS Server
6 Configuring AKD Core (Zookeeper and Kafka)
  6.1 Creating the Docker Images
  6.2 Hosting the Docker Image
  6.3 Deploying the AKD Core on AKS
  6.4 Stopping, Deleting, or Accessing the AKD Servers
  6.5 Configuring AKD Core for the AKD Bridge
7 Configuring the AKD Bridge
  7.1 Creating the Docker Image
  7.2 Push the AKD Bridge Docker Image to ACR
  7.3 Deploying the AKD Bridge on AKS
  7.4 Creating a Deployment and Service for the AKD Bridge
  7.5 Stopping or Deleting the AKD Bridge Connector
  7.6 Connecting to the AKD Bridge Container
8 Accessing and Testing the Messaging Components
  8.1 Accessing the TIBCO Components
    8.1.1 Accessing the EMS Server on AKS
    8.1.2 Accessing FTL on AKS
    8.1.3 Accessing AKD on AKS
  8.2 Testing the TIBCO Components
    8.2.1 Setting Up the Kafka Send Tests
    8.2.2 Expected Results
    8.2.3 Setting up and Sending Messages from EMS
    8.2.4 Expected Results
Appendix A: Health Checks for the EMS Server
  A.1 Liveness and Readiness Probes
  A.2 Implementation
    A.2.1 Liveness Health Check
    A.2.2 Readiness Health Check
    A.2.3 Configuring Liveness and Readiness Probes
    A.2.4 Recommended Settings


Table of Figures

Figure 1 - Storage Account Creation
Figure 2 - Kubernetes Cluster Creation
Figure 3 - Configure Kubectl
Figure 4 - Verify Connecting to the Kubernetes Cluster
Figure 5 - Create tibmsg Namespace
Figure 6 - Get Storage Account Key
Figure 7 - Create the Secret Key for the ASA
Figure 8 - Create ACR Registry
Figure 9 - Log in to the ACR
Figure 10 - Get ACR Credentials
Figure 11 - Create the Secret Key for ACR
Figure 12 - Testing the FTL Container
Figure 13 - Log in to the ACR
Figure 14 - Tag FTL Docker Image
Figure 15 - ftlserver-template.yaml Example
Figure 16 - Applying the FTL Template
Figure 17 - Example of FTL Pods Running in AKS
Figure 18 - To Stop and Start the FTL StatefulSet
Figure 19 - FTL emsftl01 Application
Figure 20 - Running tibemscreateimage_ftl
Figure 21 - Running EMS Docker Image Standalone
Figure 22 - Running Docker EMSCA Image Standalone
Figure 23 - Dockerfile Security Example
Figure 24 - EMS Security Paths
Figure 25 - Log in to the ACR
Figure 26 - Tag EMS Docker Image
Figure 27 - Check Deployment Results
Figure 28 - To Stop, Start, and Delete the Deployment
Figure 29 - Accessing the Running Container
Figure 30 - Apply the EMSCA Template
Figure 31 - Log in to the ACR
Figure 32 - Tag and Push Zookeeper Docker Images to the ACR
Figure 33 - To Stop and Start the Zookeeper and Kafka StatefulSets
Figure 34 - Tag and Push AKD Bridge Docker Images to the ACR
Figure 35 - akdbridge.yaml Example
Figure 36 - Check Deployment Results
Figure 37 - To Stop, Start, and Delete the AKD Bridge Deployment
Figure 38 - Connecting to EMS Using tibemsadmin
Figure 39 - Successful Connection to Kafka
Figure 40 - FTL Results from Kafka Producer
Figure 41 - EMS Results from Kafka Producer
Figure 42 - FTL Results from EMS Producer
Figure 43 - Kafka Results with EMS Producer


1 Overview

This document outlines how to configure the TIBCO Messaging components in a Kubernetes cluster on Azure. These include TIBCO Enterprise Message Service™, TIBCO FTL®, TIBCO® Messaging - Apache Kafka Distribution - Core (AKD Core), and TIBCO® Messaging - Apache Kafka Distribution - TIBCO FTL Bridge (AKD Bridge). The Kubernetes cluster will be built using the Azure Kubernetes Service (AKS). Running the TIBCO Messaging components on Azure involves:

• Configuring the Azure Kubernetes Service (AKS) for the TIBCO Messaging components
• Configuring an Azure Storage Account and file system for the EMS shared storage
• Configuring the Azure Container Registry (ACR) for the Docker® image registry
• Creating multiple Docker images for EMS, FTL, AKD Core, and AKD Bridge, where the containers will be hosted in ACR
• Creating persisted volumes for FTL and the AKD Core
• Configuring and creating Kubernetes containers based on the Docker images for the individual components
• Configuring load balancer(s) in Azure to access EMS, FTL, and AKD Core

1.1 Azure Architecture

Using this document, the following architecture can be produced, as a whole or as individual components. The architecture created contains:

• One (1) VPC
• An AKS cluster of twelve (12) nodes, each with a minimum of 16 GB of RAM and 4 CPUs. Nodes with 32 GB of RAM and 8 cores are recommended for a production-level environment.
• Azure managed disks for all nodes in the cluster
• Azure managed disks for the FTL and AKD Core persisted disks
• An Azure Storage Account/Files share for EMS configuration and data
• ACR repositories for all containers
• An ELB (Classic - Kubernetes) for external access to FTL, EMS, EMSCA, and Kafka
• A three (3) node FTL cluster
• A six (6) node Kafka cluster
• Three (3) Zookeeper nodes
• One (1) AKD Bridge connector, which can be replicated to any node in the cluster
• One (1) EMS server, which can be replicated to any node in the cluster
• Optionally, one (1) EMS Central Administrator, which can be replicated to any node in the cluster


1.2 Supported Versions

The steps described in this document are supported for the following versions of the products and components involved:

• TIBCO EMS 8.4.1
• TIBCO FTL 6.1.0
• TIBCO Messaging - Apache Kafka Distribution - Core 2.1.0
• TIBCO Messaging - Apache Kafka Distribution - TIBCO FTL Bridge 2.0.0
• Docker Community/Enterprise Edition should be the most recent version (18.09.2) to address recent security vulnerabilities
• Kubernetes 1.12 or newer

1.3 Excluded TIBCO EMS Features

TIBCO EMS on AKS supports all EMS features, with the following exceptions:

• Excludes transports for TIBCO Rendezvous®
• Excludes transports for TIBCO SmartSockets®
• Excludes stores of type dbstore
• Excludes stores of type mstores

1.4 Prerequisites

The reader of this document must be familiar with:

• Docker concepts
• Azure console and the Azure CLI (az)
• Kubernetes installation and administration
• TIBCO EMS configuration
• SMB3
• Kubernetes CLI, kubectl
• TIBCO Messaging individual component configuration

Additionally, the following are required:

• All necessary downloads discussed in the next section
• The appropriate TIBCO Messaging license(s), if required

1.5 Prepare Local Environment

General:

The following infrastructure should already be in place:

• A Linux or macOS machine equipped for building Docker images
• The following software must already be downloaded to the Linux or macOS machine equipped for building Docker images.


Note: All software must be for Linux!

o JRE installation package (.tar.gz) (optional)
o TIBCO EMS 8.4.1 installer (.zip). TIBCO EMS must be the Enterprise Edition and must be downloaded from edelivery.tibco.com.
o Any TIBCO EMS hotfixes, if required
o TIB_ems_8.4.0_probes.zip (optional). Download from https://community.tibco.com/wiki/tibcor-messaging-article-links-quick-access
o TIBCO FTL 6.1.0 RPM installation files, which are part of the TIBCO FTL installer. Either the Community or Enterprise Edition may be used. Download the EE from edelivery.tibco.com; the CE can be downloaded from https://www.tibco.com/products/tibco-messaging
o TIBCO FTL 6.1.0 libraries. EMS and the AKD Bridge require the FTL libraries as part of the container. These can be obtained from the FTL installer after download.
o TIBCO AKD 2.1.0 TAR and RPM files, which are part of the TIBCO Messaging - Apache Kafka Distribution - Core 2.1.0 installer. Either the Community or Enterprise Edition may be used. Download the EE from edelivery.tibco.com; the CE can be downloaded from https://www.tibco.com/products/tibco-messaging
o TIBCO AKD Bridge 2.0.0 RPM files, which are part of the TIBCO Messaging - Apache Kafka Distribution - TIBCO FTL Bridge 2.0.0 installer. Either the Community or Enterprise Edition may be used. Download the EE from edelivery.tibco.com; the CE can be downloaded from https://www.tibco.com/products/tibco-messaging
o tibmsg_aks_files.zip. This zip file contains all of the necessary Docker and Kubernetes build files. Download from https://community.tibco.com/wiki/tibcor-messaging-article-links-quick-access

• Create a directory and place tibmsg_aks_files.zip in it.
• Unzip tibmsg_aks_files.zip.

For EMS:

• Place the TIBCO EMS installer zip, the JRE tar file, any EMS hotfixes, and the probes zip file in the newly created tibmsg_aks_files/ems/docker directory.
• Copy the FTL 6.1.0 libraries to the tibmsg_aks_files/ems/docker/ftl_libs directory.

For FTL:

• Copy all of the tibco-ftl-6.1.0-linux.*.rpm files to the tibmsg_aks_files/ftl/docker/bin directory.

For AKD:

• Copy the TIB_msg-akd-core_2.1.0_linux_x86_64.tar.gz to the tibmsg_aks_files/akd/docker/kafka/bin/tar and the tibmsg_aks_files/akd/docker/zookeeper/bin/tar directories.

For the AKD Bridge:

• Copy the TIB_msg-akd-core_2.1.0_linux_x86_64.rpm and the TIB_msg_akd-bridge_2.0.0_linux_x86_64.rpm files to the tibmsg_aks_files/akd/connectors/docker directory.


• Copy the FTL 6.1.0 libraries to the tibmsg_aks_files/akd/connectors/docker/ftl_libs directory.

1.6 Prepare Preliminary Azure Account and Kubernetes Configuration

Use the following to prepare the preliminary environment to install the TIBCO messaging components on AKS.

1.6.1 General (Required)

• An active Azure account is required. If necessary, create an account at http://portal.azure.com and follow the on-screen instructions.

• Install the Azure CLI on the workstation used.

• Install Docker on the workstation to build the TIBCO EMS images.

• Install the kubectl command-line tool to manage and deploy applications to Kubernetes in Azure from a workstation.

1.6.2 Configure the Kubernetes Dashboard (Optional)

The Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster. Use the following to set up the dashboard:

• Issue the following command to deploy the Kubernetes Dashboard: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

• Run kubectl proxy & to run the proxy in the background, or open a second terminal shell and run kubectl proxy.

• Open web browser and access: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.

• Use the Authentication link to setup a Service Account Token to access the Dashboard.


2 Fault Tolerance and Shared Folder Setup

2.1 Shared Storage

Note: If the TIBCO Enterprise Message Service (EMS) is not being installed, this section can be skipped.

A traditional EMS server configured for fault tolerance relies on its state being shared by a primary and a secondary instance, one in the active state while the other is in standby, ready to take over. The shared state relies on the server store and configuration files being located on a shared storage device.

The fault tolerance model used by EMS on Kubernetes/AKS is different in that it relies on Kubernetes restart mechanisms. Only one EMS server instance runs and, in case of a server failure, is restarted inside its container. In case of a failure of the container or of the corresponding cluster node, the cluster recreates the container, possibly on a different cluster node, and restarts the EMS server there.

Within the container, the health of the EMS server is optionally monitored by two health check probes: the liveness and readiness probes. At this point, their implementation is provided as a separate package to be included in the EMS Docker image. Health check probes are detailed in Appendix A.

In any case, the server still needs its state to be shared. In Azure, this can only be accomplished with an Azure Storage Account and file setup.

2.2 Create the ASA and File System

This section outlines creating the Azure Storage Account (ASA) and the accompanying file system. Though both can be created through the Azure CLI, the following steps use the Azure Console; a CLI sketch is given at the end of this section.

• Log on to the Azure console, and click on Storage accounts.
• Click on Add.
• For Basics, select the appropriate Subscription and Resource group. Create a new Resource group if desired (recommended). Note: This Resource group should be used throughout for all Azure components.

• Provide a new storage account name, location, and Performance mode. Premium storage can provide ~50% better EMS persisted message throughput than Standard, but it costs more. It is recommended that Premium only be used for production environments.

• Select Account kind. General purpose is recommended.
• Access Tier can be cool or hot. It is only used for blobs, which will not be used.
• Select Replication. Note: only Locally-redundant storage is offered with Premium performance. If Geo-redundant replication is required, select Standard performance.
• Review the settings so far, and click on Advanced when complete.


• Secure transfer is enabled by default. If secured transfers are not required, disable it. Do not change the other options.

• Click on Review and Create. If validation passes, click on Create. Otherwise resolve all validation issues before continuing.

Figure 1 – Storage Account Creation

• Wait for the creation of the Storage account to complete, and click on Go to Resource.
• Click on File service, and then on Add.
• Create the new file share named share, and allocate the desired amount of space in GB. Note: To obtain the best EMS persisted write performance, use 5120 GiB. Otherwise select ~20 GiB, which is recommended for all but production environments.
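For reference, a minimal Azure CLI sketch of the steps above. The resource group, account name, location, SKU, and quota shown are placeholders to adapt to your environment, and the share-create command may need the account key supplied via --account-key:

> az storage account create --resource-group tibmsg-rg --name tibmsgstorage \
    --location eastus --sku Standard_LRS --kind StorageV2
> az storage share create --account-name tibmsgstorage --name share --quota 20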


3 Azure AKS Setup

3.1 Create a New Azure Kubernetes Service (AKS)

A new Kubernetes cluster must be created in AKS. Use the following to build a new Kubernetes Service in Azure. The cluster can be created via the Azure portal or the Azure CLI; this document outlines building the cluster via the Azure portal.

• Sign in to the Azure portal at https://portal.azure.com/
• In the top left-hand corner of the Azure portal, select Create a resource > Kubernetes Service.
• Select a Subscription and Resource group. These should be the same subscription and Resource group used for the Storage Account created previously. Note: If a new Storage Account was not created, the new Resource group should be created now and used throughout.
• Provide a new Kubernetes cluster name, Region (use the same as for the SA and ACR), Kubernetes version (must be at least 1.12.7), and a DNS name prefix, such as tibmsg.
• For Scale, select the node size. A B4MS (4 CPU / 16 GB RAM) is recommended for development or testing. Select a larger size, such as a B8MS (8 core / 32 GB RAM), for production.
• Select a node count of 12.
• Leave virtual nodes and VM scale sets disabled.
• Click on Next: Authentication.
• Select to create a new service principal.
• Click on Yes to Enable RBAC.
• Click on Next: Networking.
• Choose either Yes or No for application routing, depending on requirements.
• Choose either Basic or Advanced for Network configuration. Basic is recommended.
• Use the defaults for monitoring.
• Wait for the validation to complete with "validation passed". Fix any issues before continuing!
• Click on Create. It will take several minutes to complete.
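Alternatively, the cluster can be created from the Azure CLI. A minimal sketch using the sizing above; the resource group, cluster name, and DNS prefix are placeholders, and RBAC is enabled by default in recent CLI versions:

> az aks create --resource-group tibmsg-rg --name tibmsg-aks \
    --kubernetes-version 1.12.7 --node-count 12 --node-vm-size Standard_B4ms \
    --dns-name-prefix tibmsg --generate-ssh-keys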


Figure 2 - Kubernetes Cluster creation

3.2 Configuring Kubectl to connect to Azure Kubernetes Service

With AKS, the Kubernetes command line tool, kubectl, is used to configure the Kubernetes cluster for EMS on AKS.

3.2.1 Configure Kubectl to connect to AKS After the Kubernetes cluster has been built, kubectl must be configured to connect to the cluster on AKS. Use the following example to set the credentials for kubectl.

Figure 3 - Configure Kubectl
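The command shown in Figure 3 is the standard az aks get-credentials call, which merges the cluster's credentials into the local kubeconfig; the resource group and cluster names below are placeholders:

> az aks get-credentials --resource-group tibmsg-rg --name tibmsg-aks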

Use kubectl get nodes as shown in the following example to verify connecting to the cluster.


Figure 4 - Verify connecting to the Kubernetes Cluster

3.2.2 Create a New Namespace

Create a new namespace in Kubernetes, tibmsg. The default namespace can be used, but this is not recommended: all of the Kubernetes yaml configuration files are designed to use the tibmsg namespace. Note: If the namespace tibmsg is not used, ensure that all provided yaml files are modified to use the correct namespace or default. The examples shown below use the tibmsg namespace.

Figure 5 - Create tibmsg namespace
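The namespace shown in Figure 5 can be created with a single command:

> kubectl create namespace tibmsg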

3.3 Create a Kubernetes Secret to Access the Azure Storage Account

Kubernetes needs credentials to access the Azure Storage account used for the EMS shared storage.

Note: If TIBCO EMS is not being configured, this step can be skipped. The credentials are stored in a Kubernetes secret. The secret key will be referenced in Section 5.4. This is created through the Azure CLI.

• Use the following to get the Azure Storage Account key.

Figure 6 - Get Storage Account Key
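For example, the key can be retrieved with az storage account keys list; the resource group and account name are placeholders:

> az storage account keys list --resource-group tibmsg-rg \
    --account-name tibmsgstorage --query "[0].value" -o tsv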

• Create the azure-secret with kubectl in the tibmsg namespace.


Figure 7 - Create the Secret Key for the ASA
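A sketch of the secret creation, assuming the account name and key from the previous step; the key names azurestorageaccountname and azurestorageaccountkey are the ones the Kubernetes azureFile volume expects:

> kubectl create secret generic azure-secret --namespace tibmsg \
    --from-literal=azurestorageaccountname=tibmsgstorage \
    --from-literal=azurestorageaccountkey=<storage account key>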

3.4 Configure the Azure Container Registry

New ACR registries must be created to host the TIBCO component Docker images. Use this section to create the necessary ACR registries. One is required, but five can be created to separate each component into its own registry. In the examples used in this document, a separate registry is used for each messaging component.

3.4.1 Create the Azure Container Registry

• Create a new ACR registry, such as tibmsg (if only using one registry for all components) or ftlserver. The registry can be created via the Azure CLI or via the console. Please note the loginServer of your ACR registry.

Figure 8 - Create ACR Registry
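A CLI sketch of the registry creation; the SKU is a placeholder, and the registry name must be globally unique and alphanumeric:

> az acr create --resource-group tibmsg-rg --name ftlserver --sku Basic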

• Repeat the create step for the four other registries, if required. Note: This only needs to be done for the TIBCO Messaging components being installed.
o tibems
o akdbridge
o zookeeper
o kafka

• Log in to the newly created Azure ACR from the Azure CLI.

Figure 9 - Log in to the ACR
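The login is a single command, using the registry name created above:

> az acr login --name ftlserver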

3.4.2 Create a Kubernetes Secret to Access the Individual Azure Container Registry

Kubernetes needs credentials to access the Azure Container Registries used for the Docker images. The credentials are stored in a Kubernetes secret. The secret key will be referenced with the appropriate TIBCO Messaging component. Use the Azure CLI to get the credentials.

• Use the following to get the ACR credentials. Substitute the appropriate registry name for <registry-name>.

Page 16: TIBCO Messaging on K8 with AKS · TIBCO Software Inc. Global Headquarters 3307 Hillview Avenue Palo Alto, CA 94304 Tel: +1 650-846-1000 Toll Free: 1 800-420-8450 Fax: +1 650-846-1005

©2019 TIBCO Software Inc. All Rights Reserved. 16

Figure 10 - Get ACR Credentials
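One way to retrieve the credentials is via the registry's admin account, which may first need to be enabled; both commands below are standard az acr calls:

> az acr update --name <registry-name> --admin-enabled true
> az acr credential show --name <registry-name>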

• Create the ftlserver-acr-secret with kubectl in the tibmsg namespace. This must be done for each of the five registries created, if required. Substitute the appropriate name of the registry for <registry-name>. Note: This only needs to be done for the TIBCO Messaging components being installed.

Figure 11 - Create the Secret Key for ACR
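A sketch of the docker-registry secret, using the loginServer and the credentials retrieved above:

> kubectl create secret docker-registry ftlserver-acr-secret --namespace tibmsg \
    --docker-server=<registry-name>.azurecr.io \
    --docker-username=<registry-name> --docker-password=<password>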


4 Configuring FTL

TIBCO FTL must be installed and created first. Both EMS and the AKD Bridge rely on FTL to communicate with each other.

Note: For all components, a Makefile has been provided to build the Docker image and to apply the configuration into the Kubernetes cluster. A simple internet search on how to install make will provide the steps. A simple run_build.sh script is also provided if make is not used to build the Docker image.

Note: If only TIBCO AKD Core is configured and used, this section can be skipped.

4.1 Creating the Docker Image

The content of the FTL container that will run on AKS is derived from a Docker image that first needs to be created and then hosted in the Azure Container Registry (ACR). To create the Docker image, use the following:

• Use docker images to ensure Docker is installed and running.
• Change directory to the tibmsg_aks_files/ftl/docker directory.
• Ensure that all of the FTL rpm files have been copied to the bin directory.
• The FTL Docker container is configured to use the following environment variables:
o 6.1.0 for the FTL version
o 12220 for the FTL_PORT
o ftl-cluster-0 for the POD_NAME
o tibmsg for the NAMESPACE
Change the values to fit your environment. The default values will be referenced in this document. No other changes are required to the Dockerfile.
• The Docker image will be tibco/ftlserver:6.1.0.
• Use make build from the tibmsg_aks_files/ftl/docker directory to create the FTL Docker image.

Once the Docker build has completed, test the build using docker run -p 8585:8585 -v `pwd`:/data tibco/ftlserver:6.1.0, as shown in the following example.


Figure 12 - Testing the FTL container

Note: The FTL container built for AKS is designed for a three-node cluster. The above test just shows that the FTL server will start. It will not be able to become active, as it waits for another FTL server to join the quorum. The process should be terminated once it logs "cluster waiting for leader". If a single-node or pure Docker environment is to be used, refer to the TIBCO FTL 6.1.0 documentation on using the TIBCO-supplied FTL Docker containers.

4.2 Push the FTL Docker Image to ACR

Once the Docker image is ready, it can be tagged and pushed to the ACR registry using the URL of the appropriate registry.

• Log in to the newly created Azure ACR for FTL from the Azure CLI.

Figure 13 - Log in to the ACR

• Tag the image and push it to the ACR repository using the loginServer name noted in section 3.4.1. The loginServer can also be retrieved from the Azure console under your Resource Group. Note: The name of the Docker image may differ depending on setup.

Figure 14 - Tag FTL Docker image

• Push the FTL Docker image to ACR. Replace the name of the loginServer.
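A sketch of the tag and push, assuming the tibco/ftlserver:6.1.0 image from section 4.1 and a loginServer of ftlserver.azurecr.io (a placeholder):

> docker tag tibco/ftlserver:6.1.0 ftlserver.azurecr.io/ftlserver:6.1.0
> docker push ftlserver.azurecr.io/ftlserver:6.1.0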


4.3 Setting up FTL in Kubernetes

Once the Docker FTL container is pushed to ACR, FTL can be configured to run as a three-node cluster in AKS.

4.3.1 Configuring FTL for Kubernetes

The following is the tibmsg_aks_files/ftl/kubernetes/ftlserver-template.yaml used to configure TIBCO FTL in AKS. Other than changing the name/location of the ACR registry and the loadBalancerSourceRanges, all of the defaults can be used.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ftl-cluster
spec:
  serviceName: "ftlserver"
  replicas: 3
  selector:
    matchLabels:
      app: ftlserver
  template:
    metadata:
      labels:
        app: ftlserver
    spec:
      containers:
      - name: ftlserver
        image: "<your ACR registry>"   # (1)
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/data"
          name: ftl-data
        ports:
        - name: muxport
          containerPort: 12220         # (2)
        env:
        - name: POD_NAME
          # use the generated pod name as the ftlserver name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: FTL_PORT
          value: "12220"               # (2)
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: ftlserver-acr-secret
      restartPolicy: Always
      schedulerName: default-scheduler
  volumeClaimTemplates:
  - metadata:
      name: ftl-data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: ftldata
      resources:
        requests:
          storage: 5Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ftldata
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
---
kind: Service
apiVersion: v1
metadata:
  name: ftlserver
  namespace: tibmsg                    # (4)
  labels:
    app: ftlserver
spec:
  # uncomment clusterIP and comment type and nodePort: to make internal only
  # clusterIP: None
  type: NodePort
  ports:
  - port: 12220                        # (2)
    name: muxport
    protocol: TCP
    nodePort: 32096
  selector:
    app: ftlserver
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: ftl-lb
  name: ftl-lb
  namespace: tibmsg                    # (4)
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32098                    # (3)
    port: 12220                        # (2)
    name: lbport
    protocol: TCP
    targetPort: 12220                  # (2)
  selector:
    app: ftlserver
  sessionAffinity: None
  type: LoadBalancer
  loadBalancerSourceRanges:
  - <Your trusted IP Address range to access FTL>   # (5)
status:
  loadBalancer: {}

Figure 15 - ftlserver-template.yaml example

(1): The name and location of the Azure Container Registry (ACR) where the TIBCO FTL Docker container is located. Ensure the proper permissions are set. The image tag may be something other than latest, depending on how it was tagged in Docker.
(2): The container port and the port used by FTL to communicate internally between FTL cluster nodes. It is recommended this remain 12220, unless building more than one FTL cluster. It will be used to configure the load balancer.
(3): The NodePort used to communicate with the load balancer.
(4): The name of the Kubernetes namespace used in the cluster.
(5): The trusted source IP address range for the load balancer. It must be in the form x.x.x.x/x. If the load balancer is to be "open to the world", use 0.0.0.0/0 (not recommended).

4.3.2 Applying FTL in Kubernetes

Once the file has been updated, the ftlserver-template.yaml can be applied to AKS using kubectl. Use make build to apply the template. If make is not available, use kubectl -n tibmsg apply -f ./ftlserver-template.yaml.

Figure 16 - Applying the FTL Template

Use kubectl -n tibmsg get pods to check that ftl-cluster-0, 1, and 2 are all running and all are ready. If all three pods are not running, resolve any issues before continuing. Also try kubectl -n tibmsg describe statefulset ftl-cluster to verify state.


Figure 17 - Example of FTL pods running in AKS

4.3.3 Stopping, Deleting, or Accessing the FTL Servers To stop the FTL servers without deleting the servers, use the kubectl scale operation to set its number of replicas to 0. All data will be retained in the persisted volumes.

For example:
> kubectl scale --replicas=0 statefulset ftl-cluster -n tibmsg

To start the FTL cluster again, set its number of replicas back to 3:
> kubectl scale --replicas=3 statefulset ftl-cluster -n tibmsg

Figure 18 - To Stop and Start the FTL Statefulset

To delete an FTL statefulset and services entirely, use the kubectl delete operation, or make clean from the tibmsg_aks_files/ftl/kubernetes directory. The corresponding pods will also be deleted. The PVCs and PVs (persisted volumes) will not be deleted, nor will the corresponding FTL configuration data; these must be deleted manually. Sometimes it is necessary to log in to the FTL container running in AKS. To access any of the pods, use the following:

> kubectl -n tibmsg exec -it ftl-cluster-<pod #> -- "/bin/bash"

4.4 Configuring a New FTL Application for EMS

FTL and EMS must be configured to communicate with each other. This section discusses configuring a new FTL application to communicate with EMS. If not using EMS, this section can be skipped. All configuration is done via the FTL realm service UI. A new FTL application, transports, endpoints, and durables will be created. Use the following to configure FTL to communicate with EMS:

• In a web browser, go to the FTL URL via the Kubernetes load balancer created as part of the FTL Kubernetes statefulset. To get the external IP address of the load balancer, use: kubectl -n tibmsg get svc ftl-lb.

• Click on the Edit Mode button and slide it to on.
• Create two new transports. Click on Transports, then click on the + next to Transports in the Transports screen.
o Leave Group blank and enter a new transport name. Provide a name that represents its use; these can be ftlqueuein and ftlqueueout.
o Use Dynamic TCP from the drop-down menu for the Transport Type for both of the new transports.


• Create a new tibems durable within the existing stores.
o Click on Stores, then on the … above the ftl.persistent.store, and select Add durable.
o Enter tibems as the new durable name, and then Dynamic/Standard for the durable type.
o Click on the … above the new tibems durable, and click on View Details.
o Enter a TTL for the messages and the durable itself. Use 300 (5 minutes). This can be adjusted up or down later if needed. Click on Save.
• Once the transports and the durable have been created, the application can be created. One new application will be created, with three (3) endpoints.
• Click on Applications, and then on the "+" next to Applications to create a new application.
o The name of the application must be the same as the name of the EMS server. In a Kubernetes cluster, this is the name of the EMS deployment in Kubernetes. If using the default of the EMS Kubernetes template discussed in the next section, it will be emsftl01. In the following example, emsftl01 is the application name and the name of the EMS server.
o Three endpoints need to be created. These need to match the endpoint names from the EMS transports. From the transports configuration that will be created in the Kubernetes EMS deployment, these are FTLtopic, FTLqueuein, and FTLqueueout.
o Click on the … above the new emsftl01 application, and click on Add Endpoint. Create the three endpoints.
o Enter the Store for the FTLqueuein and FTLqueueout endpoints. Use the store name ftl.persistent. The FTLtopic endpoint will use the ftl.nonpersistent.store.
o For the FTLqueuein and FTLqueueout endpoints, use tibems for the durable template. The FTLtopic endpoint uses the ftl.pubsub.template template.
o For the transports, use server for the FTLtopic endpoint, ftlqueuein for the FTLqueuein endpoint, and ftlqueueout for the FTLqueueout endpoint.
o Leave the other columns at their defaults. The emsftl01 application should look like the following example.


Figure 19 - FTL emsftl01 Application

• The changes to FTL can now be deployed.
o After verifying all applications, transports, endpoints, and durables are configured, the configuration can be deployed. Click on Deploy at the top of the FTL Realm Server main screen.
o Use test to validate the configuration. Fix any problems that are listed (if any), then deploy the configuration. Click Done.


5 The EMS Docker image

This section discusses configuring TIBCO EMS to communicate with TIBCO FTL in a Kubernetes cluster on AKS. If EMS is not going to be used, this section can be skipped. If EMS is to be used without communicating with FTL, refer to the TIBCO EMS on Kubernetes with AKS document on TIBCO Community for creating a standalone EMS deployment on Kubernetes with AKS.

5.1 Creating the Base Docker Image

The content of the container that will run on AKS derives from a Docker image that first needs to be created and then hosted in the Azure Container Registry (ACR). To create the Docker images, use the following:

• Use docker images to ensure Docker is installed and running.
• To create an EMS Docker image, use the tibmsg_aks_files/ems/docker/tibemscreateimage_ftl script on the machine equipped for building Docker images.
• The script needs to be pointed to the software packages to be installed: the EMS installation package, optional EMS hotfixes, the optional health check probes package, and an optional Java package. It lets you choose which EMS installation features to embed (server, client, emsca) and whether to save the image as an archive. It also creates a user and group set to the required uid and gid.

Note: It is up to you to first download the FTL 6.1 libraries to the ftl_libs directory as discussed in section 1.5, and TIB_ems_8.4.1_linux_x86_64.zip, TIB_ems_8.4.0_probes.zip, any hotfixes, and Java packages to the tibmsg_aks_files/ems/docker directory before running the script.


Figure 20 - Running tibemscreateimage_ftl

In the above figure, a Docker image is created with the server and emsca EMS installation features, based on the EMS 8.4.1 Linux installation package, a JVM, a uid and gid of 1000, and the wftl tag. Note: The optional probes package was not used. If EMS takes more than a few seconds to become active, which is possible when connecting to FTL or when there are large EMS data stores, the Kubernetes pod may restart because the probes cannot verify that EMS is active. To prevent the restarts, the probes should not be used.

To test and run this image stand-alone:

• Use docker run -p 7222:7222 -v `pwd`:/shared ems:wftl tibemsd

Figure 21 - Running EMS Docker image standalone

• The EMS server will not become active because no FTL realm server responds on http://localhost:12220. If an FTL server were running on localhost:12220, the EMS server would become active. Kill the Docker container using docker kill <ID>.

• This creates a sample EMS server folder hierarchy and configuration in the current directory and attempts to start the corresponding server.

• The tibemsca container will start successfully, since it has no dependency on FTL.


Figure 22 - Running Docker EMSCA image standalone

• This creates a sample Central Administration server folder hierarchy and configuration in the current directory and starts the corresponding server.

It is also possible to override the default settings. The following example starts an EMS server using the <path to shared location>/<your server config file> configuration.

> docker run -p 7222:7222 -v <path to shared location>:/shared \
    ems:latest tibemsd -config /shared/<your server config file>


5.2 Extending the Base EMS Docker Image

The base EMS Docker image can be extended to include custom JAAS authentication and JACI authorization modules. Note: If using EMSCA on Azure, this step should be followed to ensure EMSCA is secure! See the JAAS module section in the EMS User's Guide as well as the EMS Central Administration guide for details on setting up JAAS in EMS.

• Copy your custom JAAS or JACI plugin files, including the static configuration files they may rely on, to a temporary folder.

• From the temporary folder, use a Dockerfile based on the example given below to copy these files into the base Docker image:

Figure 23 - Dockerfile Security example
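As a rough sketch, such a Dockerfile could look like the following; the plugin file names (user_jaas_plugin.jar, jaas.config) and the ems:latest base image are assumptions for illustration:

# Hypothetical sketch: extend the base EMS image with custom JAAS/JACI modules.
FROM ems:latest
# Copy the plugin jar and any static configuration it relies on into the
# security folder referenced by the classpath settings described below.
COPY user_jaas_plugin.jar /opt/tibco/ems/docker/security/
COPY jaas.config /opt/tibco/ems/docker/security/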

• Customize the EMS configuration to include the relevant paths to those files in the Security Classpath, JAAS Classpath, and JACI Classpath properties. Modify the ENV PATH in the tibemscreateimage script to include the required paths.

• Note that the other required files are in their usual locations: /opt/tibco/ems/<version>/bin and /opt/tibco/ems/<version>/lib. For example: /opt/tibco/ems/docker/security/user_jaas_plugin.jar:/opt/tibco/ems/8.4/bin/tibemsd_jaas.jar:/opt/tibco/ems/8.4/lib/tibjmsadmin.jar, etc.

Figure 24 - EMS Security Paths

5.3 Hosting the Docker Image

A new ACR repository, tibems, was previously created to host the EMS Docker image. Refer to section 3.4 for details.

• Log in to the newly created Azure ACR from the Azure CLI.

Figure 25 - Login into the ACR

• Tag the EMS Docker image for the ACR repository using the loginServer name noted earlier. Note: the name of the Docker image may differ depending on your setup.

Figure 26 - Tag EMS Docker image

• Push the EMS Docker image to ACR, replacing the name of the loginServer. A sketch of the full sequence follows.
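A hedged sketch of the login, tag, and push sequence; the registry name and loginServer are placeholders for your setup:

> az acr login --name <your ACR name>
> docker tag ems:wftl <loginServer>/tibems:wftl
> docker push <loginServer>/tibems:wftl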


5.4 EMS Server Template

EMS server containers are created in a Kubernetes cluster through the provided tibemsd_ftl-template.yaml sample template. A deployment and a service will be created. This template includes sections that define a limited set of parameters, ports, and names for the deployment and the service, which can be changed to match the environment. Note: the template creates the EMS server with TCP access.

5.4.1 TIBEMSD Template

The tibemsd_ftl-template.yaml has parameters that may need modification. The deployment section includes the names of the Kubernetes deployment, the ports, the FTL connection parameters, and the location/name of the ACR registry. The service section contains the port numbers for the load balancer and the source IP ranges. The default values in the example below can all be used, except for the name and location of the ACR repository, the runAsUser, and the source IP range for the load balancer; these must be updated for the environment. Only modify the values marked. Changing other values may prevent the TIBEMS deployment/service from being created or from running. The example shown below does not include the probes.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: emsftl01 (1)
  name: emsftl01 (1)
spec:
  replicas: 1
  selector:
    matchLabels:
      name: emsftl01 (1)
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: emsftl01 (1)
      name: emsftl01 (1)
    spec:
      containers:
      - name: tibems
        image: <your Azure ACR url> (2)
        imagePullPolicy: Always (3)
        env:
        - name: EMS_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: FTL_URL
          value: http://ftl-cluster-0.ftlserver:12220|http://ftl-cluster-1.ftlserver:12220|http://ftl-cluster-2.ftlserver:12220 (7)
        - name: EMS_PUBLIC_PORT
          value: "30726" (4)
        - name: EMS_SERVICE_NAME
          value: emsftl01 (1)
        - name: POD_NAME   # use the generated pod name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - tibemsd
        ports:
        - containerPort: 7222 (5)
          name: tibemsd-tcp
          protocol: TCP
        resources: {}
        securityContext:
          runAsUser: 1000 (6)
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /shared
          name: tibemsd-volume
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: tibems-acr-secret (10)
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: tibemsd-volume
        azureFile:
          secretName: azure-secret (12)
          shareName: share (11)
          readOnly: false
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: emsftl-lb (8)
  name: emsftl-lb (8)
  namespace: tibmsg (9)
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30726 (4)
    port: 30726 (4)
    protocol: TCP
    targetPort: 7222 (5)
  selector:
    name: emsftl01
  sessionAffinity: None
  type: LoadBalancer
  loadBalancerSourceRanges:
  - <Your trusted IP Address range> (13)
status:
  loadBalancer: {}

(1): The name of the EMS deployment. If modifying, change ALL locations.
(2): The name and location of the Azure Container Registry (ACR) where the TIBCO EMS Docker image is located. Ensure the proper permissions are set. The image tag may be something other than latest, depending on how it was tagged in Docker.
(3): Determines whether the EMS Docker image is pulled from ACR prior to starting the container. Use Always to download the Docker image every time, or Never to use the existing image.
(4): Throughout the template, 30726 is used for the external EMS port. If 30726 is not used, change the value accordingly.
(5): Throughout the template, 7222 is used for the internal EMS port. If 7222 is not used, change the value accordingly.
(6): The uid the container will run as. The default is 1000. Change runAsUser to the uid the EMS server container must run as. Note: the uid provided here must match the uid used when creating the EMS Docker image. This constraint should be removed in a future version of Kubernetes.
(7): The FTL URL of the FTL cluster running in Kubernetes. This is the internal URL, and should not be changed unless a different port or FTL server name was used.
(8): The name of the EMS load balancer service. If modifying, change all locations.
(9): The name of the Kubernetes namespace used in the cluster. If no namespace is used, change to default. It is recommended to leave this at tibmsg.
(10): The name of the secret created previously to access the Azure Container Registry. This only needs to be changed if tibems-acr-secret was not used.
(11): The name of the file share created previously in the Azure storage account. This only needs to be changed if share was not used.
(12): The name of the secret created previously to access the Azure storage account. This only needs to be changed if azure-secret was not used.
(13): The trusted source IP address range for the load balancer. It must be in the form x.x.x.x/x. If the load balancer is to be “open to the world”, use 0.0.0.0/0 (not recommended).

5.4.2 Alternate TIBEMSD Template

As mentioned previously, the tibemsd template does not contain the readiness probes, because in certain circumstances they can cause multiple unnecessary EMS restarts. To deploy EMS with the readiness and liveness probes, the tibemsd-wprobe_ftl-template.yaml is provided. It is identical to tibemsd_ftl-template.yaml, except that it includes the probes section. Make changes to this file as outlined in section 5.4.1. Note: the tibems Docker container must also be rebuilt with the probes included. For more information on the probes, see appendix A.1.

5.4.3 Creating a Deployment and Service

Create a deployment and service with an EMS server using the corresponding template. For example:
> kubectl apply -f tibemsd_ftl-template.yaml -n tibmsg

If the make utility is installed, make build can also be used. The kubectl operation transforms the tibemsd_ftl-template.yaml template into a list of resources using the default and overridden parameter values. That list is then passed on to the create process. In this particular case, it results in the creation of a deployment, a ReplicaSet, a pod, and a service. Three of the four objects can be selected through the emsftl01 label. The service has the label emsftl-lb. The service exposes itself as emsftl-lb inside the cluster and maps internal port 7222 to port 30726, both inside and outside the cluster.

Check the results using the following:
> kubectl -n tibmsg get --selector name=emsftl01 all
> kubectl -n tibmsg describe deploy/emsftl01
> kubectl -n tibmsg describe svc/emsftl-lb

Figure 27 - Check Deployment Results

or in the Kubernetes Web Console (see section 1.6.2). To verify EMS has started and connected to FTL, first get the EMS pod name from the above check, then use the following with the appropriate pod name:
> kubectl -n tibmsg logs emsftl01xxxxxx

There should be several messages regarding the connection to FTL and the creation of the FTL transports. The last line should be Server is Active. If it is not, then EMS did not connect to FTL or did not completely start up. Resolve any issues before continuing.

5.4.4 Stopping or Deleting an EMS Server

To stop an EMS server without deleting it, use the kubectl scale operation to set its number of replicas to 0. For example:
> kubectl scale --replicas=0 deploy emsftl01 -n tibmsg

To start it again, set its number of replicas back to 1:
> kubectl scale --replicas=1 deploy emsftl01 -n tibmsg


To delete an EMS server deployment and service entirely, use the kubectl delete operation. For example:
> kubectl delete -f tibemsd_ftl-template.yaml -n tibmsg

Figure 28 - To Stop, Start, and Delete the Deployment

The corresponding pod and ReplicaSet will also be deleted. The PVC and PV will not be deleted, nor will the EMS data be deleted.

5.4.5 EMS Server Configuration

As mentioned in section 5.1, running a container off of the EMS Docker image creates a default EMS server folder hierarchy and configuration. In an AKS cluster, the configuration is created under /shared/ems/config/<EMS_SERVICE_NAME>.json. The Central Administration server works in a similar way. This is handled by the tibems.sh script embedded in tibemscreateimage_ftl, which is invoked through the Docker image ENTRYPOINT. Feel free to alter tibems.sh or to directly provision your own configuration files to suit your needs.

5.4.6 Connecting to the EMS Server Container via Kubectl

The EMS server logs and configuration can be accessed directly through the following command. Substitute the name of the running EMS pod for <pod name>. This can be useful for viewing the logs or the configuration file. Accessing the pod may be necessary to modify the EMS connection factories discussed in section 5.6.
> kubectl -n tibmsg exec -it <pod name> -- /bin/bash

Figure 29 - Accessing the running Container

5.5 Central Administration Server Template

A Central Administration server container is created in the Kubernetes cluster through the tibemsca-template.yaml sample template. The structure of this template is almost identical to that of the EMS server template, and most of the concepts described in the previous section also apply to the Central Administration server. Note: ensure you update the Docker image location from the ACR, the source IP range for the load balancer, and the external port number; the default is 30088. Note: the Central Administration server is not secure as shipped, and the following is for example only! JAAS must be implemented in both the EMS server and Central Administration before use. Example of deployment and service creation with a Central Administration server:
> kubectl apply -f tibemsca-template.yaml -n tibmsg

Figure 30 - Apply the EMSCA Template
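Once the service is created, the Central Administration web UI should be reachable through its load balancer. A minimal sketch, assuming the default external port 30088 and the tibmsg namespace:

> kubectl -n tibmsg get svc

Then browse to http://<External-IP>:30088, using the External-IP reported for the EMSCA load balancer service.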


5.6 Modifying the Connection Factories in the EMS Server

The EMS server running in AKS must have its connection factory URLs modified. The EMS connection factories must be updated from the node name to the load balancer External-IP address and port. If this is not done, EMS clients will not be able to reconnect to EMS after a fail-over of the EMS server deployed in AKS. The following approaches can be used to modify the tibemsd JSON configuration file used in the container:

• The tibemsadmin CLI from another machine (can be external):
  o Use the External-IP and port from the emsftl-lb Kubernetes service.
  o Use the setprop option and modify the URL of all connection factories.
  o Commit your changes.
  o Exit out of tibemsadmin.
  Note: a script can be created and used to make the modifications via tibemsadmin (see the sketch after this list).

• Logging into the Kubernetes pod and modifying the configuration file:
  o Use kubectl -n tibmsg exec -it <pod> -- /bin/bash.
  o Edit ems/config/emsftl01.json and modify the URL of all connection factories.
  o Exit out of the pod.
  o Stop and restart the deployment as described in section 5.4.4.

• The EMS Central Administrator, if so configured:
  o This is the easiest approach.
  o Ensure the Central Administrator is secured.
  o Modify the URL of the connection factories in EMSCA.
  o Save and deploy the new configuration.
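As noted in the first approach, the tibemsadmin changes can be scripted. A minimal sketch, assuming tibemsadmin's -script option and a factory named ConnectionFactory; both are illustrative, so check your actual factory names with show factories first:

# cf_update.txt (hypothetical script file)
setprop factory ConnectionFactory url=tcp://<emsftl-lb External-IP>:30726
commit

> ./tibemsadmin -server <emsftl-lb External-IP>:30726 -user admin -script cf_update.txt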


6 Configuring AKD Core (Zookeeper and Kafka)

This section discusses configuring the TIBCO Messaging - Apache Kafka Distribution (AKD) Core in AKS. The setup is similar to configuring AKD in a generic Kubernetes environment. Configuring FTL and EMS is not required to install and configure AKD in AKS. However, sections 1 and 3 must be completed beforehand, except for the sections on AFS, which is not used with AKD.

6.1 Creating the Docker Images

The contents of the Zookeeper and Kafka containers that will run on AKS are derived from Docker images that first need to be created and then hosted in the Azure Container Registry (ACR). To create the Zookeeper and Kafka Docker images, use the following:

• Ensure Docker is running on your workstation. Use docker images to verify Docker is available.

• Open a terminal shell and navigate to tibmsg_aks_files/akd/docker.
• Ensure that TIB_msg-akd-core_2.1.0_linux_x86_64.tar.gz has been copied to the tibmsg_aks_files/akd/docker/kafka/bin/tar and tibmsg_aks_files/akd/docker/zookeeper/bin/tar directories.
• Navigate to the tibmsg_aks_files/akd/docker/zookeeper directory.
• Execute the make build command to build the Zookeeper Docker image. Alternatively, if you do not have the make utility installed, use the run_build.sh script. The Docker image tibco/zookeeper with the 2.1.0 tag will be created.
• Navigate to the tibmsg_aks_files/akd/docker/kafka directory.
• Execute the make build command to build the Kafka Docker image. Alternatively, if you do not have the make utility installed, use the run_build.sh script. The Docker image tibco/kafka with the 2.1.0 tag will be created.
• To test the Docker images, open a second terminal shell and navigate to the tibmsg_aks_files/akd/docker/zookeeper directory. Execute the make run command to start the Zookeeper Docker image. In the first terminal shell, navigate to the tibmsg_aks_files/akd/docker/kafka directory and execute the make run command to start the Kafka Docker image. Both should start successfully, and the Kafka broker should connect to Zookeeper.

6.2 Hosting the Docker Image

The new ACR repositories, tibco/zookeeper and tibco/kafka, were previously created to host the AKD Docker images. Refer to section 3.4 for details.

• Log in to the Azure ACR from the Azure CLI for Zookeeper.

Figure 31 - Login into the ACR


• Tag the Zookeeper Docker image for the ACR repository using the loginServer name noted earlier. Note: the name of the Docker image may differ depending on your setup.

• Push the Zookeeper Docker image to ACR, replacing the name of the loginServer.

Figure 32 – Tag and push Zookeeper Docker images to the ACR

• Repeat the three steps shown above for the Kafka Docker image; a combined sketch follows below. The Kafka images will go to the kafka ACR repository.
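A hedged sketch of the tag and push sequence for both images; the registry name and loginServer are placeholders for your setup:

> az acr login --name <your ACR name>
> docker tag tibco/zookeeper:2.1.0 <loginServer>/tibco/zookeeper:2.1.0
> docker push <loginServer>/tibco/zookeeper:2.1.0
> docker tag tibco/kafka:2.1.0 <loginServer>/tibco/kafka:2.1.0
> docker push <loginServer>/tibco/kafka:2.1.0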

6.3 Deploying the AKD Core on AKS

Once Zookeeper and Kafka containers have been “pushed” to ACR, they can be deployed into AKS with kubectl.

• Change directory to tibmsg_aks_files/akd/kubernetes.
• kafka.yaml requires some updates before use:
  o The image location must be updated to reference the ACR repository configured in the previous section.
  o The imagePullSecrets must be updated with the name of the kafka-acr-secret.
  o The number of replicas. The default is six (6). Adjust if required. Development environments can set this value to three (3).
  o The resources section requires a minimum of 12GB memory and 2 CPUs for each of the six Kafka containers, with a limit of 16GB memory and 4 CPUs per container. Adjust these values to use more or fewer resources, but ensure that the Kubernetes nodes have enough system resources to support the containers.
  o If not using tibmsg for the namespace, change accordingly. However, it is highly recommended to leave tibmsg as the namespace.
  o The source IP range for the load balancer, in the format x.x.x.x/x. Set this to a trusted IP range; 0.0.0.0/0 is “open to the world” and is not recommended.
• The tibmsg_aks_files/akd/kubernetes/zookeeper/zookeeper.yaml must be modified for:
  o The image location, to reference the ACR repository configured in the previous section.
  o The imagePullSecrets, with the name of the zookeeper-acr-secret.
• Execute the make build command to deploy Zookeeper and Kafka. Alternatively, if you do not have the make utility installed, issue the commands below manually. These create the Kubernetes namespace, the services, and a dedicated storage class for the Zookeeper and Kafka pods.

$ kubectl apply -f ./namespace.json
$ kubectl apply -f ./zookeeper/zookeeper-storage.yaml,\
    ./zookeeper/zookeeper-service.yaml,\
    ./zookeeper/zookeeper.yaml
$ kubectl apply -f ./kafka/kafka-storage.yaml,\
    ./kafka/kafka-service.yaml,\
    ./kafka/kafka.yaml


Note: It will take several minutes for this step to complete!

• Execute kubectl -n tibmsg get pods --watch to review the state of the cluster. Wait until all Zookeeper and Kafka instances are up and running. Again, this will take some time to complete; it can take ~40 minutes. Note: there will be errors/CrashLoopBackOff warnings! This is due to the time required for AKS to create the necessary volumes/network interfaces, and to the readiness probes. It takes ~5 minutes for the three Zookeeper pods to reach a “running” state, and ~8 minutes for the first Kafka pod to reach a “running” state. The first Kafka pod may also have ~6 restarts. No errors or warnings should occur after the first Kafka pod is running.

6.4 Stopping, Deleting, or Accessing the AKD Servers

To stop the Zookeeper or Kafka servers without deleting them, use the kubectl scale operation to set the number of replicas to 0. For example:
> kubectl scale --replicas=0 statefulset kafka -n tibmsg
> kubectl scale --replicas=0 statefulset zookeeper -n tibmsg

To start the Zookeeper cluster again, set its number of replicas back to 3:
> kubectl scale --replicas=3 statefulset zookeeper -n tibmsg

To start the Kafka cluster again, set its number of replicas back to 6:
> kubectl scale --replicas=6 statefulset kafka -n tibmsg

Figure 33 - To Stop and Start the Zookeeper and Kafka Statefulsets

To delete the AKD statefulsets and services entirely, use the kubectl delete operation, or make clean from the tibmsg_aks_files/akd/kubernetes directory. The corresponding pods will also be deleted. The PVC and PV will not be deleted, nor will the corresponding Zookeeper/Kafka data; the PV and PVC must be manually deleted to remove the data. Sometimes it is necessary to log in to a Zookeeper or Kafka container running in AKS. To access any of the pods, use the following:
> kubectl -n tibmsg exec -it <zookeeper/kafka>-<pod #> -- /bin/bash

6.5 Configuring AKD Core for the AKD Bridge

This section discusses configuring the necessary Kafka topics for use with the TIBCO Messaging – Apache Kafka Distribution – TIBCO FTL Bridge (AKD Bridge). If FTL, EMS, or the AKD Bridge are not being used, this section can be skipped. However, these steps can also be used to verify that the AKD Core is accessible.

• Connect to one of the Kafka pods and create two new topics.


> kubectl -n tibmsg exec -it kafka-0 -- /bin/bash

# kafka-topics.sh --create --zookeeper ${_KAFKA_ZOOKEEPER_CONNECT} --replication-factor 3 --partitions 10 --topic toftl --config min.insync.replicas=2

# kafka-topics.sh --create --zookeeper ${_KAFKA_ZOOKEEPER_CONNECT} --replication-factor 3 --partitions 10 --topic fromftl --config min.insync.replicas=2

• While still connected to the Kafka pod, try to send and receive messages. Please note that we are using the broker Kubernetes service, which is available only internally, inside the cluster.

# kafka-console-producer.sh --broker-list kafka-0.broker:9092,kafka-1.broker:9092,kafka-2.broker:9092 --topic toftl --request-required-acks all

# kafka-console-consumer.sh --bootstrap-server kafka-0.broker:9092 --from-beginning --topic toftl

If messages can be published and consumed, Zookeeper and Kafka are functioning correctly.


7 Configuring the AKD Bridge

The final TIBCO Messaging component to configure is the TIBCO Messaging – Apache Kafka Distribution – TIBCO FTL Bridge (AKD Bridge). If FTL is not being configured to connect to Kafka, this section can be skipped.

7.1 Creating the Docker Image

The content of the AKD Bridge Docker container that will run on AKS is derived from a Docker image that first needs to be created and then hosted in the Azure Container Registry (ACR). To create the Docker image, use the following:

• Use docker images to ensure Docker is installed and running.
• Change directory to the tibmsg_aks_files/akd/connectors/docker directory.
• Ensure that both the AKD Bridge and Core rpm files have been copied to the bin directory.
• Ensure the FTL 6.1 libraries are copied to the ftl_libs directory.
• The AKD Bridge Docker container is configured to use the following environment variables:
  o AKD Core version = 2.1.0
  o AKD Bridge version = 2.0.0
  o FTL_URL = "http://localhost:12220"
  o KAFKA_BROKERS = "http://localhost:9092"
  o KAFKA_SOURCE_TOPIC = "fromftl"
  o KAFKA_SINK_TOPIC = "toftl"
  o FTL_APP_NAME = "emsftl01"
  o FTL_SINK_ENDPOINT = "FTLqueuein"
  o FTL_SOURCE_ENDPOINT = "FTLqueueout"
  These are the default values for the Docker container. Unless you have Kafka and FTL servers running locally with these values, running the Docker container will only verify that it builds correctly. No other changes are required to the Dockerfile.
• The Docker image will be tibco/akdbridge:2.0.0.
• Use make build from the tibmsg_aks_files/akd/connectors/docker directory to create the AKD Bridge Docker image. If the make utility is not available, use the run_build.sh script. Once the Docker build has completed, test the build using docker run tibco/akdbridge:2.0.0. Note: the AKD Bridge container built for AKS is designed to connect to both a Kafka broker and an FTL server. The above test only shows that the Docker container starts; it will not become active, because it will try to connect to the Kafka broker. Terminate the process once it begins logging "Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available". This verifies the AKD Bridge Docker image can be pushed to ACR.


7.2 Push the AKD Bridge Docker Image to ACR

Once the Docker image is ready, it can be tagged and pushed to ACR. The new ACR repository, tibco/akdbridge, was previously created to host the AKD Bridge Docker image. Refer to section 3.4 for details.

• Log in to the Azure ACR from the Azure CLI, authenticating your Docker client with the registry.

• Tag the Docker image for the ACR repository using the loginServer name noted earlier. Note: the name of the Docker image may differ depending on your setup.

• Push the AKD Bridge Docker image to ACR, replacing the name of the loginServer. A sketch of the sequence follows the figure below.

Figure 34 – Tag and push AKD Bridge Docker images to the ACR
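A hedged sketch of the login, tag, and push sequence; the registry name and loginServer are placeholders for your setup:

> az acr login --name <your ACR name>
> docker tag tibco/akdbridge:2.0.0 <loginServer>/tibco/akdbridge:2.0.0
> docker push <loginServer>/tibco/akdbridge:2.0.0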

7.3 Deploying the AKD Bridge on AKS

Once the AKD Bridge container has been “pushed” to ACR, it can be deployed into AKS with kubectl.

• Change directory to tibmsg_aks_files/akd/connectors/kubernetes.
• akdbridge.yaml requires some updates before use: mainly just the URL of the ACR registry. If the defaults for the environment were changed, other changes to the file may be necessary.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: akdbridge (1)
  name: akdbridge (1)
spec:
  replicas: 1
  selector:
    matchLabels:
      name: akdbridge (1)
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: akdbridge (1)
      name: akdbridge (1)
    spec:
      containers:
      - name: akdbridge-container
        image: "<YOUR_ACR_URL>/akdbridge:latest" (2)
        imagePullPolicy: Always
        ports:
        - containerPort: 12220 (3)
          name: ftlout
        - containerPort: 9092 (4)
          name: kafkaout
        env:
        - name: KAFKA_BROKERS (5)
          value: "http://kafka-0.broker:9092,http://kafka-1.broker:9092,http://kafka-2.broker:9092,http://kafka-3.broker:9092,http://kafka-4.broker:9092,http://kafka-5.broker:9092"
        - name: FTL_URL (6)
          value: "http://ftl-cluster-0.ftlserver:12220|http://ftl-cluster-1.ftlserver:12220|http://ftl-cluster-2.ftlserver:12220"
        - name: KAFKA_SOURCE_TOPIC
          value: "fromftl" (7)
        - name: KAFKA_SINK_TOPIC
          value: "toftl" (8)
        - name: FTL_APP_NAME
          value: "emsftl01" (9)
        - name: FTL_SINK_ENDPOINT
          value: "FTLqueuein" (10)
        - name: FTL_SOURCE_ENDPOINT
          value: "FTLqueueout" (11)
        - name: POD_NAME   # use the generated pod name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - akdbridge (1)
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      imagePullSecrets: (15)
      - name: akdbridge-acr-secret
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: akdbridge (1)
  namespace: tibmsg (12)
  labels:
    app: akdbridge (1)
spec:
  type: NodePort
  ports:
  - port: 12220 (3)
    name: ftlout
    nodePort: 32080 (13)
  - port: 9092 (4)
    name: kafkaout
    nodePort: 32081 (14)
  selector:
    app: akdbridge (1)

Figure 35 - Akdbridge.yaml Example

(1): The name of the AKD Bridge deployment and service. If modifying, change ALL locations.
(2): The name and location of the Azure Container Registry (ACR) where the TIBCO AKD Bridge Docker image is located. Ensure the proper permissions are set. The image tag may be something other than latest, depending on how it was tagged in Docker.
(3): The FTL port of the FTL cluster nodes.
(4): The Kafka broker port of all Kafka brokers.
(5): The Kafka brokers URL. This should not need to be changed unless the Kafka service(s) are changed, which is not recommended.
(6): The FTL URL in the Kubernetes cluster. This should not need to be changed unless the FTL service(s) are changed, which is not recommended.
(7): The Kafka source topic. The default is fromftl. If another topic name is used in Kafka, change accordingly.
(8): The Kafka sink topic. The default is toftl. If another topic name is used in Kafka, change accordingly.
(9): The FTL application name. The default is emsftl01. This must match the name of the EMS deployment and the application name in FTL.
(10): The FTL sink endpoint. The default is FTLqueuein. This must match the endpoint used in FTL.
(11): The FTL source endpoint. The default is FTLqueueout. This must match the endpoint used in FTL.
(12): The name of the Kubernetes namespace used in the cluster. This must match what is used throughout the other components. The default is tibmsg.
(13): The nodePort number for FTL. The default is 32080. There is no need to change this unless the port is already in use.
(14): The nodePort number for kafkaout. The default is 32081. There is no need to change this unless the port is already in use.
(15): The ACR registry secret. The default is akdbridge-acr-secret.

7.4 Creating a Deployment and Service for the AKD Bridge

Execute the make build command from the tibmsg_aks_files/akd/connectors/kubernetes directory to deploy the AKD Bridge. Alternatively, if you do not have the make utility installed, issue the command below manually.


> kubectl apply -f akdbridge.yaml -n tibmsg

Check the results using the following:
> kubectl -n tibmsg get --selector name=akdbridge all
> kubectl -n tibmsg describe deploy/akdbridge

Figure 36 - Check Deployment Results

or in the Kubernetes Web Console (see section 1.6.2). To verify the AKD Bridge has started and connected to FTL and Kafka, first get the AKD Bridge pod name from the above check, then use the following with the appropriate pod name:
> kubectl -n tibmsg logs akdbridgexxxxxx

There should be several messages regarding the connection to FTL and Kafka, as well as the creation and connection of the sink and source connectors. There can be some warnings, but there should be no errors. If there are, resolve the issues before continuing.

7.5 Stopping or Deleting the AKD Bridge Connector

To stop the AKD Bridge without deleting it, use the kubectl scale operation to set its number of replicas to 0. For example:
> kubectl scale --replicas=0 deploy akdbridge -n tibmsg

To start it again, set its number of replicas back to 1:
> kubectl scale --replicas=1 deploy akdbridge -n tibmsg

To delete the AKD Bridge deployment and service entirely, use the kubectl delete operation, or just make clean from the tibmsg_aks_files/akd/connectors/kubernetes directory. For example:
> kubectl delete -f akdbridge.yaml -n tibmsg

Figure 37 - To Stop, Start, and Delete the AKD Bridge Deployment

The corresponding pod, service, and ReplicaSet will also be deleted. Since the AKD Bridge is stateless, it has no persisted data to delete.

7.6 Connecting to the AKD Bridge Container

The AKD Bridge container can be accessed directly through the following command. Substitute the name of the running AKD Bridge pod for <pod name>.
> kubectl -n tibmsg exec -it <pod name> -- /bin/bash


8 Accessing and Testing the Messaging Components

This section outlines testing the TIBCO Messaging components running in AKS. The access and testing steps assume all components are installed.

8.1 Accessing the TIBCO Components

8.1.1 Accessing the EMS Server on AKS

The EMS server running in AKS is accessed via the External-IP address and port created by the emsftl-lb service.

To get the load balancer External-IP address and Port, use:
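For example (a sketch; the service name and namespace match the defaults used in this guide):

> kubectl -n tibmsg get svc emsftl-lb

The EXTERNAL-IP and PORT(S) columns provide the values used below.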

To test access, use the EMS tibemsadmin CLI with the External-IP and port provided. For example:

./tibemsadmin -server 44.112.248.11:30726

Figure 38 - Connecting to EMS using tibemsadmin

If unable to connect, check that EMS is running and that the appropriate inbound and outbound access rules for the security group and load balancer are set up correctly; this is the usual issue. After connecting to the EMS server with tibemsadmin, verify that the ftlqueuein and ftlqueueout queues exist, as sketched below. They should have been created as part of section 5. These queues are required for the connection between FTL and EMS, and will also be used as part of the testing.
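For example, after connecting (show queues is a standard tibemsadmin command; the address is a placeholder):

> ./tibemsadmin -server <External-IP>:30726
> show queues

Both ftlqueuein and ftlqueueout should appear in the output.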

8.1.2 Accessing FTL on AKS

FTL access should have already been verified in section 4.4 when configuring FTL to communicate with EMS.

8.1.3 Accessing AKD on AKS

Accessing Zookeeper and Kafka was already established in section 6.4; however, that access was internal to the Kubernetes cluster. Accessing the Kafka brokers externally is similar to accessing the EMS server and the FTL realm service, through the Kubernetes load balancer. Use the same kubectl get svc operation to get the kafka 0-5 load balancers. Not all of the load balancers must be part of the test; the External-IP address and port from kafka-0 should suffice. To test access to Kafka, use the following with the appropriate External-IP and port:

./kafka-console-producer.sh --broker-list 23.444.444.11:32400 --topic toftl

>

Figure 39 - Successful connection to Kafka


If the connection to Kafka is successful, a “>” prompt appears to allow input of messages. Leave this window/connection open; it will be used in the next section to test the AKD Bridge between AKD/FTL/EMS. If unable to connect, check that the Kafka broker(s) are running, and that the appropriate inbound and outbound access rules for the security group and load balancer are set up correctly; this is the usual issue. With Kafka, another broker can also be tried.

8.2 Testing the TIBCO Components

After connectivity to the individual TIBCO Messaging components has been verified, connectivity between the components via the AKD Bridge can be tested. Since the Kafka producer console is already configured, that test will be done first. Two additional terminal windows will be needed: one for EMS, and the other for FTL. For FTL, the External-IP and port from the Kubernetes ftl-lb service are required. For EMS, the External-IP and port from the Kubernetes emsftl-lb service are required. For AKD, the Kafka console-producer script should already be running and connected to the Kafka broker.

8.2.1 Setting Up the Kafka Send Tests

For these tests, the Kafka topic toftl, the FTL endpoint FTLqueuein, and the EMS queue ftlqueuein will be used. Start the FTL receiver test program. In the following example, the FTL realm service is running at http://<your FTL LB>:<LB PORT>, the endpoint is FTLqueuein, the FTL application is emsftl01, the durable is tibems, and the message count to receive is 5.

cd $TIBCO_HOME/ftl/6.1/samples/bin
./tibrecvex -c 5 -e FTLqueuein -a emsftl01 -d tibems http://<your_FTL_LB>:<LB PORT>
#
# ./tibrecvex
#
# TIBCO FTL Version 6.1.0 V5
# Invoked as: ./tibrecvex -c 5 -e FTLqueuein -a emsftl01 -d tibems http://...
waiting for message(s)

Start the EMS consumer Java application. In the following example, the EMS server is running at tcp://<your ems service external ip>:<emsftl-lb service port>, and the queue is ftlqueuein.

cd $TIBCO_HOME/ems/8.4/samples/java
. ./setup.sh
java tibjmsMsgConsumer -server tcp://<your emsftl01 LB Ip address>:<emsftl-lb port> -queue ftlqueuein

Navigate back to the terminal window where the Kafka producer is running:

./kafka-console-producer.sh --broker-list <your Kafka LB IP address>:<kafka LB port> --topic toftl
>{"text":"message1"}
>{"text":"message2"}
>{"text":"message3"}
>{"text":"message4"}
>{"text":"message5"}


8.2.2 Expected Results

After the Kafka producer script has been used to send the five {"text":"..."} messages, both the TIBCO FTL receiver program and the TIBCO EMS consumer Java application should receive the five messages. The output should be similar to the following. If FTL does not receive the messages, EMS will not; check the connector properties for problems. If FTL receives the messages but EMS does not, check the EMS and FTL bridge configuration.

FTL:

Figure 40 – FTL results from Kafka producer

EMS:

Figure 41 - EMS results from Kafka producer

Stop the Kafka producer, and the FTL/EMS consumers. Leave all terminals open, as they will be needed for the next test.

8.2.3 Setting Up and Sending Messages from EMS

In this section, EMS will send messages via FTL to Kafka. The EMS sample Java application tibjmsMsgProducer, the FTL sample program tibrecvex, and the Kafka kafka-console-consumer.sh script will be used to test the bridge between the messaging components. For these tests, the Kafka topic fromftl, the FTL endpoint FTLqueueout, and the EMS queue ftlqueueout will be used. Start the FTL receiver test program. In the following example, the FTL realm service is http://<your FTL LB>:<LB PORT>, the endpoint is FTLqueueout, the FTL application is emsftl01, the durable is tibems, and the message count to receive is 5.

cd $TIBCO_HOME/ftl/6.1/samples/bin
./tibrecvex -c 5 -e FTLqueueout -a emsftl01 -d tibems http://<your_FTL_LB>:<LB PORT>
#
# ./tibrecvex
#
# TIBCO FTL Version 6.1.0 V5
# Invoked as: ./tibrecvex -c 5 -e FTLqueueout -a emsftl01 -d tibems http://...
waiting for message(s)

Start the Kafka console-consumer script to receive messages.

cd $TIBCO_HOME/akd/core/2.1/bin
./kafka-console-consumer.sh --bootstrap-server <your Kafka-0 LB IP address>:<kafka-0 LB port> --topic fromftl --from-beginning

Run the EMS producer Java application.

cd $TIBCO_HOME/ems/8.4/samples/java
. ./setup.sh
java tibjmsMsgProducer -server tcp://<your emsftl01 LB Ip address>:<emsftl-lb port> -queue ftlqueueout message message1 message message message111111

8.2.4 Expected Results

After the TIBCO EMS producer has sent the five messages, both the TIBCO FTL receiver program and the Kafka consumer script should receive the five messages. The output should be similar to the following. If FTL does not receive the messages, Kafka will not; check the EMS and FTL bridge configuration. If FTL receives the messages but Kafka does not, check the connector properties for problems.

FTL:

Figure 42 – FTL results from EMS producer


Kafka:

Figure 43 - Kafka results with EMS producer

This completes all configuration and tests. All test applications and terminals can be shut down.


Appendix A: Health Checks for the EMS Server

This appendix documents how EMS server container health checks are implemented using the sample TIB_ems_<version>_probes.zip. At this point, these rely on executing commands in the container. Note: in environments where large data stores are maintained, or where the EMS server must wait on the FTL server, the EMS container will not be able to start successfully if it takes over one minute for EMS to become active. To deploy EMS in such environments, it is recommended not to use the readiness or liveness probes. Without the probes, the container will start successfully and EMS will become active after FTL is available or after the data stores have been read. The tibemsd-wprobe_ftl-template.yaml is provided to deploy EMS with the probes. It is identical to tibemsd_ftl-template.yaml, except that it includes the probes section. Make changes to this file as outlined in section 5.4.1. Note: TIBCO AKD Core (Zookeeper and Kafka) also has similar readiness and liveness probes, with a much longer initialDelaySeconds (300 sec). Much of the information in this appendix can be used with Kafka and Zookeeper. FTL and the AKD Bridge do not have probes.

A.1 Liveness and Readiness Probes

The Kubernetes documentation describes health checks here:

https://docs.Kubernetes.com/container-platform/3.9/dev_guide/application_health.html
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes

For an EMS server container, a liveness health check helps detect when an EMS server stops servicing client heartbeats but somehow does not exit. When this health check fails a number of times in a row, the EMS server container is restarted. A readiness health check helps detect when an EMS server stops servicing new client requests but is still up and running. When this health check fails a number of times in a row, the EMS server endpoints are removed from its container, so that the server is made unreachable. This can be useful for deliberately hiding an EMS server while it handles a long internal operation (e.g., performing a compact) that prevents it from servicing client requests. As it may or may not fit your operations, it is up to you to decide whether you need the readiness health check. If it is not relevant to you, feel free to remove it from the template.

A.2 Implementation

Sample health checks are provided in TIB_ems_<version>_probes.zip, which includes a number of internal tools.


These rely on server heartbeats flowing from the EMS server to its clients, as well as a client timeout for missing server heartbeats. These are set through the following two EMS server properties:

• server_heartbeat_client (set to a value > 0)
• client_timeout_server_connection (set to a value > 0)

The implementation also relies on establishing an admin client connection to the EMS server. As a result, credentials for an admin user with at least the view-server administrator permission need to be configured. The admin user name and password are used to configure the liveness and readiness probes. Additionally, if the EMS server is TLS only, TLS credentials will also be required when configuring the probes.
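For reference, a minimal sketch of these two properties with the values recommended in section A.2.4, shown in classic tibemsd.conf syntax; in the JSON configuration used in this guide, set the equivalent properties in the server configuration file:

server_heartbeat_client = 5
client_timeout_server_connection = 20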

A.2.1 Liveness Health Check

The sample liveness probe is provided through the live.sh script configured in the deployment object (see section 5.4.1):

...
livenessProbe:
  exec:
    command:
    - live.sh
    - '-spawnTimeout'
    - '4'
    - '--'
    - '-server'
    - tcp://localhost:<EMS_INTERNAL_PORT>
    - '-user'
    - 'probeuser'
    - '-password'
    - 'probepassword'
  initialDelaySeconds: 1
  timeoutSeconds: 5
  periodSeconds: 6
...

Here the cluster will perform a periodic liveness check based on the live.sh script and the corresponding parameters. The -spawnTimeout parameter is an internal timeout used by the probe that should be set relative to the probe's periodSeconds setting (see the recommended settings in section A.2.4). The above example matches the EMS server sample configuration. It should be tailored to your target configuration using the following additional probe parameters, when relevant:

-server <server-url>: Connect to the specified server (default is tcp://localhost:7222).
-timeout <timeout>: Timeout of server request (in seconds) (default is 10).
-delay <delay>: Delay between server requests (in seconds) (default is 1).
-user <user-name>: Use this user name to connect to the server (default is admin).
-password <password>: Use this password to connect to the server (default is NULL).
-pwdfile <passwd file>: Use the password in the specified file (to hide it).
-module_path <path>: Path to find dynamic libraries such as SSL.
-ssl_trusted <filename>: File containing trusted certificate(s). This parameter may be entered more than once if required.
-ssl_identity <filename>: File containing the client certificate and optionally extra issuer certificate(s) and private key.
-ssl_issuer <filename>: File containing extra issuer certificate(s) for the client-side identity.
-ssl_password <password>: Private key or PKCS12 password.
-ssl_pwdfile <pwd file>: Use the private key or PKCS12 password from this file.
-ssl_key <filename>: File containing the private key.
-ssl_noverifyhostname: Do not verify the host name against the name in the certificate.
-ssl_hostname <name>: Name expected in the certificate sent by the host.
-ssl_trace: Show loaded certificates and certificates sent by the host.
-ssl_debug_trace: Show additional tracing.

A.2.2 Readiness Health Check

The sample readiness probe is provided through the ready.sh script configured in the deployment object:

...
readinessProbe:
  exec:
    command:
    - ready.sh
    - '-spawnTimeout'
    - '4'
    - '-responseTimeout'
    - '4'
    - '--'
    - '-server'
    - tcp://localhost:<EMS_INTERNAL_PORT>
    - '-user'
    - 'probeuser'
    - '-password'
    - 'probepassword'
  initialDelaySeconds: 1
  timeoutSeconds: 5
  periodSeconds: 6
...


Here the cluster will perform a periodic readiness check based on the ready.sh script and the corresponding parameters. The -spawnTimeout and -responseTimeout parameters are internal timeouts used by the probe that should be set relative to the probe's periodSeconds setting (see the recommended settings in section A.2.4). The above example matches the EMS server sample configuration. Just as with the liveness health check, it should be tailored to your target configuration using the same parameters.

A.2.3 Configuring Liveness and Readiness Probes

In addition to the above settings, Kubernetes provides settings to adjust the behavior of the probes:

initialDelaySeconds: Number of seconds after the container has started before the liveness or readiness probes are initiated.
timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

These settings are individually set on each probe and affect the timing of health check events.

A.2.4 Recommended Settings

livenessProbe:
  initialDelaySeconds: 1
  periodSeconds: 6 (= <PERIOD> used below)
  timeoutSeconds: <PERIOD> - 1

readinessProbe:
  initialDelaySeconds: 1
  periodSeconds: <PERIOD>
  timeoutSeconds: <PERIOD> - 1

live.sh:
  -spawnTimeout: <PERIOD> - 2

ready.sh:
  -spawnTimeout: <PERIOD> - 2
  -responseTimeout: <PERIOD> - 2

EMS server configuration:
  server_heartbeat_client: 5
  client_timeout_server_connection: 20

The tibemsd-wprobe_ftl-template.yaml sample template is already populated with the recommended values.