Table of Contents

Lab Overview - HOL-1935-01-NET - VMware Pivotal Container Service on VMware NSX-T - Getting Started
  Lab Guidance
  Advanced Networking and Security in a Cloud-Native World with VMware NSX Data Center
  VMware Tech Preview
Module 1 - Introduction to PKS on NSX-T (30 minutes)
  Introduction
  Hands-on Labs Interactive Simulation: Introduction to PKS on NSX-T
Module 2 - NSX-T Network Virtualization and Kubernetes Namespaces (45 minutes)
  Introduction
  Environment review
  Kubernetes Namespace creation and NSX-T routing / NAT review
  Kubernetes Service creation and NSX-T load-balancer for Kubernetes Service
  Creation of SNAT IP addresses for a Kubernetes Service
  Conclusion
  Lab clean-up
Module 3 - NSX-T Micro-segmentation and Network Services for Kubernetes (45 minutes)
  Introduction
  Pre-configure distributed firewall rules
  NSX-T ingress load-balancer for Kubernetes service
  Secure application with Kubernetes Network Policy
  Conclusion
  Lab clean-up
Module 4 - NSX-T Operational Tools and Visibility (30 minutes)
  Introduction
  Lab preparation
  NSX-T Traceflow for Kubernetes Pods
  NSX-T Port Mirroring for container ports
  Conclusion
HOL-1935-01-NET
Page 1HOL-1935-01-NET
Lab Overview - HOL-1935-01-NET - VMware Pivotal Container Service on VMware NSX-T - Getting Started
Lab Guidance

Note: It may take more than 90 minutes to complete this lab. You should expect to only finish 2-3 of the modules during your time. The modules are independent of each other, so you can start at the beginning of any module and proceed from there. You can use the Table of Contents to access any module of your choosing.

The Table of Contents can be accessed in the upper right-hand corner of the Lab Manual.
This lab will demonstrate how VMware NSX-T integrates with VMware Pivotal Container Service (PKS).

The lab will show how NSX-T automated network provisioning ties into PKS cluster and Kubernetes Namespace creation.

You will also see how Kubernetes Namespaces, Services, Ingress Rules and Network Policies relate to different NSX-T logical network constructs.

Finally, you will also explore some troubleshooting methods for Kubernetes container networks with the included NSX-T tools from a single management interface.
Lab Module List:
• Module 1 - Introduction to PKS on NSX-T (30 minutes) (Intermediate)
This module will provide an overview of the new NSX-T and Pivotal Container Service (PKS) integration. You will roll out a new Kubernetes cluster with PKS CLI commands and verify the correct startup of the cluster. Once the cluster is started, you will explore the automatically created NSX-T network components such as T1 routers, load-balancers and logical-switches.

• Module 2 - NSX-T network virtualization and Kubernetes namespaces (45 minutes) (Advanced)
In this module you will create Kubernetes Namespaces and related pods. You will verify the automatic creation of related NSX-T network constructs like T1 routers, logical-switches and virtual ports for Kubernetes pods. You will verify the creation of NAT rules for Kubernetes Namespaces as well as the correct propagation of routes for "no-NAT" Namespaces. Finally, you will create a Kubernetes Service and configure connectivity from the physical network by means of NSX-T load-balancing integration in Kubernetes. You will also explore how NSX-T is able to provide a SNAT IP address for a Kubernetes Service to identify it within the physical network, providing connectivity to a classic backend database.
• Module 3 - NSX-T Micro-segmentation and network services for Kubernetes (45 minutes) (Advanced)
This module will show you advanced NSX-T networking and security services for Kubernetes. You will configure a Kubernetes service and review the automatically created Layer 7 load-balancing rules in NSX-T. You will also secure your running pods by leveraging the NSX-T security framework, and see how Kubernetes Network Policies are translated into NSX-T firewall rules to provide visibility to the infrastructure administrator.

• Module 4 - NSX-T operational tools and visibility (30 minutes) (Intermediate)
This module will show you how to use the integrated monitoring and troubleshooting tools in NSX-T for a Kubernetes environment. You will use NSX-T Traceflow to verify connectivity between two pods in your environment and identify any potential issues. You will also set up a remote span session to your local machine to analyze pod network traffic with tools like Wireshark.
Lab Captains:

• Christoph Puhl, Network Virtualization and Security Architect EMEA, Germany
• Frank Snyder, Sr. Systems Engineer - NSBU, Mid-North, USA
This lab manual can be downloaded from the Hands-on Labs Document site found here:
http://docs.hol.vmware.com
This lab may be available in other languages. To set your language preference and have a localized manual deployed with your lab, you may utilize this document to help guide you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
Location of the Main Console
1. The area in the RED box contains the Main Console. The Lab Manual is on the tab to the right of the Main Console.
2. A particular lab may have additional consoles found on separate tabs in the upper left. You will be directed to open another specific console if needed.
3. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your work must be done during the lab session. But you can click EXTEND to increase your time. If you are at a VMware event, you can extend your lab time twice, for up to 30 minutes. Each click gives you an additional 15 minutes. Outside of VMware events, you can extend your lab time up to 9 hours and 30 minutes. Each click gives you an additional hour.
Alternate Methods of Keyboard Data Entry
During this module, you will input text into the Main Console. Besides directly typing it in, there are two very helpful methods of entering data which make it easier to enter complex data.
Click and Drag Lab Manual Content Into Console Active Window

You can also click and drag text and Command Line Interface (CLI) commands directly from the Lab Manual into the active window in the Main Console.
Accessing the Online International Keyboard
You can also use the Online International Keyboard found in the Main Console.
1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.
Click once in active console window
In this example, you will use the Online Keyboard to enter the "@" sign used in email addresses. The "@" sign is Shift-2 on US keyboard layouts.

1. Click once in the active console window.
2. Click on the Shift key.
Click on the @ key
1. Click on the "@" key.
Notice the @ sign entered in the active console window.
Activation Prompt or Watermark
When you first start your lab, you may notice a watermark on the desktop indicating that Windows is not activated.

One of the major benefits of virtualization is that virtual machines can be moved and run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the labs out of multiple datacenters. However, these datacenters may not have identical processors, which triggers a Microsoft activation check through the Internet.

Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft licensing requirements. The lab that you are using is a self-contained pod and does not have full access to the Internet, which is required for Windows to verify the activation. Without full access to the Internet, this automated process fails and you see this watermark.

This cosmetic issue has no effect on your lab.
Look at the lower right portion of the screen
Please check to see that your lab has finished all the startup routines and is ready for you to start. If you see anything other than "Ready", please wait a few minutes. If after 5 minutes your lab has not changed to "Ready", please ask for assistance.
Advanced Networking and Security in a Cloud-Native World with VMware NSX Data Center
Digital transformation is no longer just hype or a buzzword. Reality has caught up. In fact, 50 percent of CEOs expect their industries to be substantially or unrecognizably transformed by digital.[1] The businesses that not only survive, but truly thrive, are the ones that recognize the potential of digital transformation and embrace it. We live in a world where applications are increasingly at the center of everything a business does. They are powered by software and driving differentiation and innovation across industries. Apps are being built to transform the customer experience, provide new innovative services, increase the speed and agility of business, and drive efficiencies. By 2020, 50 percent of the Global 2000 will see the majority of their business depend on their ability to create digitally-enhanced products, services, and experiences.[2]

This digital transformation is driving the need for new application architectures that are radically different from those of the past. Most enterprise apps today are based on the 3-tier model (leveraging web, app, database servers), deployed in virtual machines (VMs), and developed on platforms that have been in use for years. For organizations to keep up with the rapid pace of app development and deployment, they are turning to
new app architectures, based on microservices, deployed in containers, and developed on cloud-native app platforms like Kubernetes, Pivotal Cloud Foundry (PCF), OpenShift, and others.
[1] GARTNER, “2016 CEO Survey: The Year of Digital Tenacity,” 20 April 2016
[2] http://www.idc.com/getdoc.jsp?containerId=prUS41888916
The Cloud-Native Networking Challenge
Businesses need faster time-to-market and more innovation, all while controlling costs and mitigating risk. Developers need higher productivity, greater speed and agility, improved operational efficiency, and to leverage infrastructure-as-code. IT teams need to ensure that apps and data are protected, gain visibility into costs, and improve operational control over the environments they manage.

As developers build and deploy cloud-native apps faster than ever to respond to the needs of the business, networking and security challenges have arisen. Developers need to get the application up and running as quickly as possible, but IT organizations are encountering challenges in keeping up with the pace at which applications are being developed, deployed, and iterated upon. The problem arises from the fact that traditional networking and security configuration is still a manual process, often on infrastructure hardware. In addition, because of the limited networking and security services in the cloud-native platforms themselves, provisioning these services on traditional network architectures can add days or weeks to the development cycle, bottlenecking not only app development but hindering the speed and agility of business as well.
How Do We Get There?
In order for the needs of developers, IT teams, and the businesses to be met, networking and security needs to be provisioned, managed, and monitored with a cloud-native app's level of speed and agility. This requires a networking and security model that is independent of the underlying infrastructure, with security that can be wrapped around containers, VMs, and microservices, and that applies to development and control across new app frameworks like Kubernetes, Red Hat OpenShift, and Pivotal Cloud Foundry. So how does this all come together? The answer is, with an infrastructure-independent and app-aware networking and security model.

This involves networking and security services running in software, with deep integration into new and existing app platforms. Networking and security services must be a consequence of the application and the developer's code, with policies that follow apps as they move in and between environments. This enables IT teams to provide guardrails that allow developers to move fast while providing advanced networking services, and ensuring security and compliance for the broader business. The sum total of these things is an organization where developers get the speed and agility they need, IT teams get the visibility and control they need, and the business gets the applications they need, in a fast and secure fashion.
How NSX Data Center Can Help
The VMware NSX® Data Center network virtualization and security platform can help organizations achieve the full potential of cloud-native apps and bring a number of benefits to the table. NSX Data Center enables advanced networking and security across any application framework, helps speed the delivery of applications by removing bottlenecks in developer and IT team workflows, enables micro-segmentation down to the microservice level, enhances monitoring and analytics for microservices, and has reference designs to help organizations get started. It enables a single network overlay and micro-segmentation for both VMs and containers as well as common monitoring and troubleshooting for traditional and cloud-native apps. NSX Data Center integrates with existing tools in the data center and public cloud for IT teams and plugs in to the Container Network Interface (CNI) to empower developers without slowing down or changing the workflows to which they are accustomed.
How NSX Data Center Can Help
NSX Data Center empowers both developers and IT teams to work together to the benefit of both, as well as the businesses they support, by enabling common networking, security, workflows, and management across any device, any app, any framework, and any infrastructure. Increased speed and agility for developers coupled with increased connectivity, security, visibility, and control for IT teams mean that the entire organization can operate in tandem to drive the digital transformation of their businesses forward.
VMware Tech Preview Disclaimer

This session may contain product features that are currently under development.

This session/overview of the new technology represents no commitment from VMware to deliver these features in any generally available product.

Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.

Technical feasibility and market demand will affect final delivery.

Pricing and packaging for any new technologies or features discussed or presented have not been determined.
Module 1 - Introduction to PKS on NSX-T (30 minutes)
Introduction

This module will provide an overview of the new NSX-T and Pivotal Container Service (PKS) integration.

You will roll out a new Kubernetes cluster with PKS CLI commands and verify the correct startup of the cluster.

Once the cluster is started, you will explore the automatically created NSX-T network components such as T1 routers, load-balancers and logical-switches.

• Review environment and pre-configured NSX-T objects
• Create PKS Kubernetes cluster with PKS CLI
• Review automatically created networks for PKS Kubernetes cluster
Lab Topology
This lab was partially pre-configured for you, so you can start right away with deploying VMware Pivotal Container Service (PKS) Kubernetes clusters.

1. A NSX-T Tier-0 router called t0-pks was pre-created for you and attached to the physical networks via BGP routing. PKS was also configured to use this Tier-0 router to attach the automatically created Tier-1 routers for the Kubernetes Namespaces.
2. A NSX-T Tier-1 router called t1-pks-mgmt was pre-created for you. This router homes all PKS control VMs like PKS itself, BOSH, Pivotal Ops Manager and Harbor Registry.
3. For every PKS Kubernetes cluster created, PKS will automatically create a Tier-1 routing instance and attach the PKS Kubernetes nodes to it. During this module you will create a PKS Kubernetes cluster and review how a Tier-1 router for the cluster was created and connected to its upstream Tier-0 router.
Hands-on Labs Interactive Simulation: Introduction to PKS on NSX-T

This part of the lab is presented as a Hands-on Labs Interactive Simulation. This will allow you to follow steps which are too time-consuming or resource intensive to do live in the lab environment. In this simulation, you can use the software interface as if you are interacting with a live environment.

1. Click here to open the interactive simulation. It will open in a new browser window or tab.
2. When finished, click the “Return to the lab” link to continue with this lab.

The lab continues to run in the background. If the lab goes into standby mode, you can resume it after completing the module.
You've finished Module 1
Congratulations on completing Module 1.
If you are looking for additional information on VMware PKS and NSX-T, try one of these:
• Read more about NSX Data Center on the official VMware product page
• Download the VMware NSX-T Reference Design Guide
• Read the Pivotal Container Service blog

Proceed to any module below which interests you most.

• Module 1 - Introduction to PKS on NSX-T
• Module 2 - NSX-T network virtualization and Kubernetes namespaces
• Module 3 - NSX-T Micro-segmentation and network services for Kubernetes
• Module 4 - NSX-T operational tools and visibility
How to End Lab
To end your lab click on the END button.
Module 2 - NSX-T Network Virtualization and Kubernetes Namespaces (45 minutes)
Introduction

In this module you will create Kubernetes Namespaces and related pods. You will verify the automatic creation of related NSX-T network constructs like T1 routers, logical-switches and virtual ports for Kubernetes pods.

You will verify the creation of NAT rules for Kubernetes Namespaces as well as the correct propagation of routes for "no-NAT" Namespaces.

Finally, you will create a Kubernetes Service and configure connectivity from the physical network by means of NSX-T load-balancing integration in Kubernetes.

You will also explore how NSX-T is able to provide a SNAT IP address for a Kubernetes Service to identify it within the physical network, providing connectivity to a classic backend database.
Lab Topology
This is the NSX-T networking topology you create in this module. Parts of it are already pre-created or have been provisioned in the previous module.

1. A NSX-T Tier-0 router called t0-pks was pre-created for you and attached to the physical networks via BGP routing. PKS was also configured to use this Tier-0 router to attach the automatically created Tier-1 routers for the Kubernetes Namespaces.
2. A NSX-T Tier-1 router called t1-pks-mgmt was pre-created for you. This router homes all PKS control VMs like PKS itself, BOSH, Pivotal Ops Manager and Harbor Registry.
3. This is the Tier-1 router for your PKS Kubernetes cluster, which was automatically provisioned during the PKS cluster deployment in module 1.
4. You will create this Tier-1 routing instance for a NATted Kubernetes Namespace named nsx during this module.
5. You will create this Tier-1 routing instance for a routed Kubernetes Namespace named no-nat-namespace during this module.
Environment review

In this lesson you will have a look at the environment and verify PKS has deployed all components within NSX-T correctly.

Open Chrome Browser from Windows Quick Launch Task Bar

1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
Navigate to NSX-T Manager
1. Use the bookmark called NSX-T Manager to connect to NSX-T Manager.
2. Login to NSX-T Manager using the following credentials:

Username: admin
Password: VMware1!

3. Click the Login button
Review existing virtual routers
1. Click on Routing to navigate to the existing virtual routers.
Here you can see all logical routers, which were automatically deployed by PKS during cluster instantiation.

These routers serve as the default gateways for the pods in your Kubernetes Namespaces and for your Kubernetes master and worker nodes, and they also provide load-balancing services for your environment.

Note that the UUIDs of the routers in your lab will differ from the ones in the screenshots provided.

Nevertheless, the suffixes of the router names should match, as these relate to the default Namespaces existing in your Kubernetes cluster created by VMware Pivotal Container Service (PKS).
Open Putty from Windows Quick Launch Task Bar
1. Click on the Putty icon on the Windows Quick Launch Task Bar.
Connect to cli-vm
1. Highlight the saved cli-vm session
2. Click on Load
3. Click Open to connect to the cli-vm via SSH
You will be automatically logged in to a terminal session.
Get Kubernetes credentials via PKS CLI
Use the following command to retrieve the login credentials for your Kubernetes cluster created by VMware Pivotal Container Service (PKS):
pks get-credentials my-cluster
Note: You will not need to switch between different PKS cluster configurations, as there is only one PKS Kubernetes cluster running in this lab.
Review existing Kubernetes Namespaces
1. Use the following command to retrieve a list of all existing Kubernetes Namespaces:
kubectl get ns
Also take a look at the mapping between the existing Kubernetes Namespaces and the NSX-T logical routers: for each Kubernetes Namespace, NSX-T automatically created a Tier-1 logical router and an associated logical-switch.

Remember what you learned in module 1:

The logical router starting with the prefix "lb-pks" is used for load-balancing services for the corresponding PKS Kubernetes cluster.

The logical router ending with the suffix "cluster-router" homes the management interfaces of the PKS Kubernetes cluster and was automatically created by PKS during creation of the cluster.
Kubernetes Namespace creation and NSX-T routing / NAT review

In this lesson you will see how NSX-T translates Kubernetes Namespace constructs into virtual network topologies.

You will create a Kubernetes Namespace leveraging source NAT to connect to networks outside of NSX-T network virtualization, which you will use throughout this lab to deploy Kubernetes applications and to use more advanced NSX-T networking services.

You will also create a Kubernetes Namespace which leverages direct routing via BGP between NSX-T network virtualization and the physical network. This will show you that NSX-T offers the freedom to choose, on a Namespace-by-Namespace basis, how you want to connect your NSX-T Kubernetes network to external networks.
Change directory in cli-vm
The pre-created .yaml files used in this lab are located in the 1935 directory. These .yaml files define different kinds of objects you will create during the lab, like Kubernetes Namespaces, applications running in Kubernetes pods, or Kubernetes Network Policies to secure your applications.
1. Use the following command to change the directory:
cd 1935
Kubernetes Namespace creation
In this step you will create a Kubernetes Namespace with the default settings.

NSX-T will automatically create a logical-switch for pods running in this Namespace and connect it to an automatically created Tier-1 router for this Namespace, which will serve as the default gateway for all Kubernetes pods residing in this Namespace.

As discussed in module 1, NSX-T Manager has an integrated IP address management solution which has a pre-defined IP block of 172.16.0.0/16 for Kubernetes Namespaces with default settings.

For each newly created Kubernetes Namespace with default settings, NSX-T will allocate a /24 subnet out of this range.

Finally, a source NAT (SNAT) entry will be created for this Namespace's 172.16.x.0/24 subnet on the Tier-0 router.
1. Create a Kubernetes Namespace called nsx with the command:
kubectl create ns nsx
2. Use the following command to verify that the Namespace was created:
kubectl get ns
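The imperative command above also has a declarative equivalent. A minimal manifest sketch (an illustration, not a file shipped with the lab) that produces the same default-settings Namespace:

```yaml
# Equivalent to "kubectl create ns nsx": a Namespace with default
# settings, so NSX-T allocates a /24 from 172.16.0.0/16 and adds a SNAT rule.
apiVersion: v1
kind: Namespace
metadata:
  name: nsx
```

You could apply such a file with kubectl create -f; this lab uses the imperative form for brevity.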
Verify NSX-T logical router
Switch back to your Chrome Browser and hit the refresh button at the bottom of the screen to get an updated view of the existing logical routers in NSX-T.

Notice that an additional Tier-1 router for your nsx Namespace was created automatically.
Verify IP subnet used for nsx Namespace
1. Click on the new Tier-1 router for the nsx Namespace (click on the hyperlink)
2. Click Configuration
3. Click Router Ports
4. Take a note of the IP subnet used for this Kubernetes Namespace, which was automatically pulled out of the IP address management system of NSX-T (as already discussed in module 1)

The actual IP subnet in your lab might differ from the one in the screenshot provided.
Verify SNAT entries on Tier-0 router
You will discover the automatically assigned SNAT IP address for this Kubernetes Namespace.

Every time a Kubernetes pod residing in this Namespace connects to an IP address outside of the NSX-T virtual network, it will be source NATted to this particular IP address.

With this, it is possible to identify different Kubernetes Namespaces even in the physical network based on their source IP addresses.

In a later lesson of this module, you will also learn how NSX-T can provide source NAT entries in an even more granular fashion for specific Kubernetes Services.

1. Click on t0-pks from the list of Logical Routers
2. Click on Services
3. Click on NAT
4. Discover the last entry created in the NAT table of the Tier-0 router (you might need to scroll down)

The Translated IP address is the IP address which will be used by the newly created nsx Namespace.
Note that the actual IP addresses in your lab might differ from the IP addresses provided in the screenshots.
Kubernetes routed Namespace
With NSX-T it is also possible to create routed Namespaces.

When creating a routed Namespace, NSX-T will not create a SNAT entry for the subnet assigned to the Namespace. Instead, it will announce the Namespace's IP prefix via BGP (or you could also use static routing) to allow for direct communication between Kubernetes pods and IP addresses outside of the NSX-T virtual network.

As discussed in module 1, NSX-T Manager has an integrated IP address management solution which has a pre-defined IP block of 172.100.0.0/16 for Kubernetes Namespaces which are created as routed Namespaces.

For each newly created routed Kubernetes Namespace, NSX-T will allocate a /24 subnet out of this range.

To declare a Kubernetes Namespace as a routed Namespace, a Kubernetes annotation is used.
Switch back to your Putty window.
To review the .yaml file you will use to create a routed (no-nat) Kubernetes Namespace, use the command:
more no-nat-namespace.yaml
Take a note of the annotation which informs NSX-T to create a routed Namespace:

annotations:
  ncp/no_snat: "true"
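Based on the annotation above, the full no-nat-namespace.yaml likely resembles the following sketch (only the ncp/no_snat annotation and the Namespace name no-nat-namespace are confirmed by this manual; the rest is standard Namespace boilerplate):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: no-nat-namespace
  annotations:
    # Tells the NSX-T integration to skip SNAT and route
    # the Namespace subnet directly (announced via BGP)
    ncp/no_snat: "true"
```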
Kubernetes routed Namespace creation
1. Create a routed (no-nat) Kubernetes Namespace with this .yaml file using the command:
kubectl create -f no-nat-namespace.yaml
2. Use the following command to verify that the Namespace was created:
kubectl get ns
Verify NSX-T logical router
Switch back to your Chrome Browser
1. Hit the refresh button at the bottom of the screen to get an updated view of the existing logical routers within NSX-T.
2. Notice that an additional Tier-1 router for your no-nat-namespace Namespace was created automatically.
Verify IP subnet used for no-nat-namespace Namespace
1. Click on the new Tier-1 router for the no-nat-namespace Namespace
2. Click Configuration
3. Click Router Ports
4. Take a note of the IP address used for this Kubernetes Namespace, which was automatically pulled out of the IP address management system of NSX-T (as already discussed in module 1)

The actual IP subnet in your lab might differ from the one in the screenshot provided.
Verify SNAT entry not created on Tier-0 router

Verify that no SNAT entry was created for the new routed / no-NAT Namespace.

1. Click on t0-pks
2. Click on Services
3. Click on NAT
4. Discover the last entry created in the NAT table of the Tier-0 router (you might need to scroll down)

You will notice that there is no newly created SNAT entry mapping to an IP address out of the 172.100.0.0/16 IP pool used for routed Namespaces.
Verify connectivity to routed Namespace
1. Open a command prompt
2. Use the following command to verify connectivity to your routed Namespace:

ping **IP address of Tier-1 router for no-nat-namespace Namespace**
Conclusion
In this lesson you have learned how NSX-T can provide NATted and routed connectivity to Kubernetes pods for different Kubernetes Namespaces.
Kubernetes Service creation and NSX-T load-balancer for Kubernetes Service

In this lesson you will deploy a tiny application and a Kubernetes Service of type LoadBalancer to see how NSX-T can automatically provide a load-balancing instance for a Kubernetes Service based on its type.

Here you will look at a Layer 4 load-balancer which will automatically be assigned a virtual IP address for the respective service. This type of load-balancer is mostly used for Kubernetes Services which are not HTTP(S) based and run on a different port than TCP 80 / 443.
Set Kubernetes context
Use the following command to set your working Namespace to nsx Namespace:
kubectl config set-context my-cluster --namespace nsx
Review .yaml file for app
You can review the .yaml file which describes the demo application used in this module with the command:
more nsx-demo-type-loadbalancer.yaml
The demo application will consist of two pods, which are load-balanced by an NSX-T Layer 4 load-balancing instance. This instance will be configured automatically due to the Kubernetes Service type: LoadBalancer.
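The Service definition itself is not printed in the lab guide, but a Kubernetes Service of type LoadBalancer generally looks like this minimal sketch (the name, selector and ports are assumptions):

```yaml
# Hypothetical sketch; the actual nsx-demo-type-loadbalancer.yaml may differ.
apiVersion: v1
kind: Service
metadata:
  name: nsx-demo
spec:
  type: LoadBalancer   # triggers automatic creation of an NSX-T L4 virtual server
  selector:
    app: nsx-demo      # matches the labels on the two demo pods
  ports:
  - port: 80
    targetPort: 80
```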
Create demo application in Kubernetes
1. To create the demo application, run the command:
kubectl create -f nsx-demo-type-loadbalancer.yaml
Check status of Kubernetes pods
1. Run the following command several times until the status of your pods moves from ContainerCreating to Running:
kubectl get pods
Spinning up your Kubernetes pods for the first time might take a short amount of time.
Review pod IP addressing
To also show the IP addresses of your pods, use the command:
kubectl get pods -o wide
You will see that these IP addresses are out of the subnet of your Tier-1 router for the nsx Namespace.
Describe service
As part of this demo application you also configured a Kubernetes Service of type: LoadBalancer.

In the next steps you will review this service in Kubernetes and verify the automatically created NSX-T load-balancer.

Run the command:
kubectl describe service nsx-demo
1. You will see the IP addresses of your pods as endpoints of this service, which map to load-balancer pool members in the NSX-T load-balancer
2. You will see an automatically assigned IP address called LoadBalancer Ingress which represents the virtual IP address of the NSX-T load-balancer for this service. You will verify this IP address in the next step.
Connect to service and verify load-balancing between the pods

You will now use curl to connect to the virtual IP address of the NSX-T load-balancer for this service to verify that load-balancing between the pods is working as desired.

Use the following command several times to verify that the connection is distributed across your pod endpoints:
curl **LoadBalancer Ingress IP Address**
Review NSX-T load-balancer for service
Switch back to your Chrome Browser with NSX Manager UI.
You will now review the automatically created NSX-T load-balancer configuration for the demo application.
1. Click on Load Balancing, which will automatically bring you to the Load Balancer menu
2. Click your pks-UUID load-balancer instance
3. Click Virtual Servers
4. Verify the virtual server for the nsx-demo application, which was automatically created for you. The IP address of the virtual server should be identical to the one you identified in Kubernetes!

A server pool called pks-UUID--nsx-nsx-demo-80 was automatically created and assigned to your virtual server.
Review Server Pool for load-balancer
To see the actual members of your backend load-balancer server pool, follow these steps:

1. Click on Server Pools
2. Click on your pks-UUID--nsx-nsx-demo-80 server pool
3. Click on Pool Members
4. Verify that both of your Kubernetes pods are part of the server pool

You should be familiar with this output, as it is the NSX-T representation of a Kubernetes command you used earlier:

kubectl describe service nsx-demo
Creation of SNAT IP addresses for a Kubernetes Service

In this lesson you will see how NSX-T can provide granular assignment of SNAT IP addresses to services residing in a NATted Kubernetes Namespace.

The default behavior of NSX-T is to assign one SNAT IP address to all Kubernetes pods residing in the same Kubernetes Namespace, so there is no possibility to decide (e.g. on a physical firewall) whether traffic originates from service A or service B.

To offer this differentiation, NSX-T is able to assign specific SNAT IP addresses to specific Kubernetes Services.

You will configure an SNAT IP address to Service mapping and verify the assignment with a packet capture.
Verify current SNAT IP for Kubernetes Namespace

Currently you have one SNAT IP address assigned for your nsx Kubernetes Namespace, which all of your pods are using when connecting to IP addresses outside of NSX-T.

First of all, verify which IP address is currently used as the SNAT IP by your pods.
Open Wireshark from Windows Quick Launch Task Bar
To verify which IP address is used as a source IP address for your pods, you will leverage Wireshark and do a packet capture on your Windows machine.
1. Click on the Wireshark icon on the Windows Quick Launch Task Bar.
Capture icmp packets with Wireshark
1. Set the capture filter in Wireshark to icmp.code == 0 and hit the Enter key
2. Start the packet capture by clicking on the blue fin button

icmp.code == 0
Open second Putty instance from Windows Quick Launch Task Bar
1. Click on the Putty icon on the Windows Quick Launch Task Bar.
Connect to cli-vm
1. Highlight the saved cli-vm session
2. Click on Load
3. Click Open to connect to the cli-vm via SSH
You will be automatically logged in to a terminal session.
Connect into container instance
To be able to send icmp requests out of a Kubernetes pod instance towards your Windows machine, you first need to connect into one of your running Kubernetes pods.
1. List your running Kubernetes pods:
kubectl get pods
2. Connect into a pod instance and get a shell:
kubectl exec -it nsx-demo-UUID /bin/bash
3. Notice how the CLI prompt of your Putty session changes, to verify you are connected to your pod.
Send icmp request towards Windows machine
Send an icmp request to your Windows machine from your Kubernetes pod by using the command:
ping 192.168.110.10 -c 3
Verify Wireshark capture
In your running Wireshark packet capture you should see icmp requests coming in from a source IP address out of subnet 10.40.14.32/27 (which maps to the NSX IP pool for SNAT IP addresses discussed in module 1).

You may need to adjust the different sections of Wireshark to have a better view.
At this point in time, you should not see any icmp packet in your capture.
Review pre-defined SNAT IP address for demo application
Switch back to your Chrome Browser with NSX Manager and navigate to the IP Pools section.
1. Click on Inventory
2. Click on IP Pools
3. Find the IP Pool called svc-snat-ip-pool-01
4. Click on 1 in the Subnets column
5. You can see that this IP Pool only contains a single IP address of 10.50.15.10, which you will assign as an SNAT IP to the demo application in the next step.
Review Kubernetes Service definition and annotation
You will now create a special Kubernetes Service that contains an annotation to inform NSX-T which SNAT IP address to use for this Kubernetes Service.
Use the following command to review the Kubernetes Service definition:
more nsx-demo-service-snat.yaml
Take particular note of the service annotation:
annotations:
  ncp/snat_pool: svc-snat-ip-pool-01
This instructs NSX to use IP Pool svc-snat-ip-pool-01 as SNAT IP Pool for this service.
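Put together, a Service carrying this annotation could look like the following sketch (the Service name, selector and port are assumptions; only the annotation and the pool name are taken from the lab):

```yaml
# Hypothetical sketch; the actual nsx-demo-service-snat.yaml may differ.
apiVersion: v1
kind: Service
metadata:
  name: nsx-demo-snat
  annotations:
    ncp/snat_pool: svc-snat-ip-pool-01   # NCP uses this IP Pool for SNAT
spec:
  selector:
    app: nsx-demo
  ports:
  - port: 80
    targetPort: 80
```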
Create service and assign specific SNAT IP address
Create the Kubernetes Service with the command:
kubectl create -f nsx-demo-service-snat.yaml
Send icmp request towards Windows machine
Switch back to your Putty window connected to the Kubernetes pod instance and send an icmp request to your Windows machine again by using the command:
ping 192.168.110.10 -c 3
Verify Wireshark capture
In your running Wireshark packet capture you should immediately see icmp requests coming in from a new source IP address: 10.50.15.10.
Verify new SNAT entries for service in NSX Tier-0 router
Switch back to your Chrome Browser with NSX Manager and navigate to the SNAT entries configured on the t0-pks Tier-0 router.
1. Click on Routing
2. Click on your t0-pks Tier-0 router
3. Click on Services
4. Click on NAT
5. Find the newly created SNAT entry for the demo application at the top of the NAT table
You will notice your familiar pod IP addresses as source IP addresses for the newly created NAT rule.
As this is a more explicit NAT rule than the default NAT rule for your whole Kubernetes Namespace, NSX-T created this rule with a higher priority and put it at the top of the NAT table.
NSX-T NAT options for Kubernetes Namespaces
You have seen that NSX-T offers you the freedom of choice whether you configure routed Namespaces, NAT entries for a complete Namespace, or granular NAT entries for specific Kubernetes Services.
Conclusion

You've finished Module 2
Congratulations on completing Module 2.
If you are looking for additional information on VMware PKS and NSX-T, try one of these:
• Read more about NSX Data Center on the official VMware product page
• Download the VMware NSX-T Reference Design Guide
• Read the Pivotal Container Service blog
Proceed to any module below which interests you most.
• Module 1 - Introduction to PKS on NSX-T
• Module 2 - NSX-T network virtualization and Kubernetes namespaces
• Module 3 - NSX-T Micro-segmentation and network services for Kubernetes
• Module 4 - NSX-T operational tools and visibility
How to End Lab
If you are NOT moving on to Module 3, you can end the lab here by clicking END as depicted.
If you are planning to continue on to Module 3, then proceed to the module clean-up procedures that follow.
Lab clean-up

In this lesson you will clean up the lab for the upcoming modules.
Lab clean-up
Please clean up your lab environment to free space and conserve resources for the upcoming modules.
Run the following commands to delete objects you will not need anymore:
kubectl delete -f nsx-demo-service-snat.yaml
kubectl delete -f nsx-demo-type-loadbalancer.yaml
kubectl delete -f no-nat-namespace.yaml
Verify Lab clean-up
You can check that all Kubernetes objects were deleted successfully with the command:
kubectl get all
Module 3 - NSX-T Micro-segmentation and Network Services for Kubernetes (45 minutes)
Introduction

This module will show you advanced NSX-T networking and security services for Kubernetes.
You will configure a Kubernetes service and review the automatically created Layer 7load-balancing rules in NSX-T.
You will also secure your running pods by leveraging the NSX-T security framework, and see how Kubernetes Network Policies are translated into NSX-T firewall rules to provide visibility to the infrastructure administrator.
Lab Topology
This is the NSX-T networking topology you will be working with in this module. All of it was already created during the previous modules.
1. An NSX-T Tier-0 router called t0-pks was pre-created for you and attached to the physical networks via BGP routing. PKS was also configured to use this Tier-0 router to attach the automatically created Tier-1 routers for the Kubernetes Namespaces.
2. An NSX-T Tier-1 router called t1-pks-mgmt was pre-created for you. This router homes all PKS control VMs like PKS itself, BOSH, Pivotal Ops Manager and Harbor Registry.
3. This is the Tier-1 router for your PKS Kubernetes cluster, which was automatically provisioned during PKS cluster deployment in module 1.
4. This is the Tier-1 routing instance for your NATted Kubernetes Namespace named nsx. You will deploy all demo applications within this Kubernetes Namespace and verify NSX-T networking objects within this Namespace.
Pre-configure distributed firewall rules

In this lesson you will pre-configure NSX-T NSGroups and configure NSX-T distributed firewall rules for the respective NSGroups.

You will see how this enables security administrators to pre-configure a firewall ruleset even for applications which do not exist yet at this point in time.

You will discover how NSX-T auto-discovers Kubernetes pods based on assigned Kubernetes Labels, which translate to NSX-T security tags, and provides your pre-configured firewall ruleset to these pods.

Also, you will see how NSX-T provides true Micro-segmentation between Kubernetes pods.
Open Chrome Browser from Windows Quick Launch Task Bar
1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
Navigate to NSX-T Manager
1. Use the bookmark called NSX-T Manager to connect to NSX-T Manager.
2. Login to NSX-T Manager using the following credentials:
Username: admin
Password: VMware1!
3. Click Login button
Navigate to NSGroup configuration
When connected to NSX Manager navigate to the NSGroups configuration section.
1. Click on Inventory and you will be automatically directed to the Groups section
Add NSGroup for Web frontend
Prepare an NSGroup which will match a Kubernetes Label.
Remember: A Kubernetes Label will be translated to a NSX security tag.
Any newly created Kubernetes pod matching this label will be automatically added as a member of this dynamic NSGroup.

1. Click Add to add a new NSGroup
2. Name the group yelb-web
Add dynamic membership criteria
To add a criterion for dynamic group membership, perform the following steps:

1. Switch to the Membership Criteria tab
2. Add Criteria
3. Set the criteria to Logical Port Tag Equals web Scope Equals secgroup as shown in the screenshot
4. Click Save

After saving your NSGroup definition, there will be no members. You will see the NSGroup populating its members as soon as you have rolled out the next demo application.
Add NSX firewall rule for NSGroup
You will now add a new NSX firewall rule for this NSGroup, which will be adjusted later to demonstrate how NSX-T is able to deliver true Micro-segmentation even for containerized workloads in a Kubernetes environment.
1. Click on Firewall to navigate to the NSX Firewall UI
2. Click on the three horizontal bars on the left hand side of the section called Admin Section
3. Click Add Rule
Define firewall rule
In the first step, you will add a rule allowing all traffic between the web frontend Kubernetes pods.

You will adjust this rule at a later point in time to test if Micro-segmentation is working as desired.

1. Click on the pencil icon in the Name column and call the rule Web-UI micro-segmentation
Add NSGroup as source and destination
You will need to perform the following steps in the sources and destinations section for this firewall rule!

1. Click on the pencil icon in the Sources / Destination column
2. Use the drop-down menu and navigate to NSGroup
3. Mark the yelb-web NSGroup
4. Use the right arrow to move yelb-web to the selected groups section
5. Click OK
6. Perform this step again for the other Sources / Destination column
Save firewall rule
Your firewall rule should now look like the one in the screenshot.
1. Click on Save
Open Putty from Windows Quick Launch Task Bar
1. Click on the Putty icon on the Windows Quick Launch Task Bar.
Connect to cli-vm
1. Highlight the saved cli-vm session
2. Click on Load
3. Click Open to connect to the cli-vm via SSH
You will be automatically logged in to a terminal session.
Get Kubernetes credentials via PKS CLI
If you started the lab with this module, please perform the following step.
If you are coming from the previous module, you can skip this step!
Use the following command to retrieve the login credentials for your Kubernetes cluster created by VMware Pivotal Container Service (PKS):
pks get-credentials my-cluster
You will not need to switch between different PKS cluster configurations, as there is only one PKS Kubernetes cluster running in this lab.
Change directory in cli-vm
The pre-created .yaml files used in this lab are located in the 1935 directory.
1. Use the following command to change the directory:
cd 1935
Kubernetes Namespace creation
If you started the lab with this module, please perform the following step.
If you are coming from the previous module, you can skip this step!
In this step you will create a Kubernetes Namespace with the default settings.
1. Create a Kubernetes Namespace called nsx:
kubectl create ns nsx
2. Verify that the Kubernetes Namespace was created:
kubectl get ns
Set Kubernetes context
If you started the lab with this module, please perform the following step.
If you are coming from the previous module, you can skip this step!
To set your working Namespace to nsx Namespace use the command:
kubectl config set-context my-cluster --namespace nsx
Review .yaml file for demo application
You can review the pre-configured .yaml file for the demo application used in this module with the command:
more yelb-app.yaml
This application consists of various pieces and an explanation for each and every one is outside of the scope of this course.
The interesting parts of the .yaml for this lab are the following:
The deployment specifications for the frontend UI pods will add a label called secgroup: web to the pods, which will be translated to an NSX-T security tag by the NSX container plugin.

With this label / security tag assigned, the frontend UI pods will be automatically placed in the NSGroup yelb-web configured earlier.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: yelb-ui
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: yelb-ui
        tier: frontend
        secgroup: web
...
Also, NSX-T will automatically configure a Kubernetes Ingress load-balancer for this application, which is a Layer 7 load-balancer and which will listen for HTTP requests on URL yelb.demo.corp.local and forward to the frontend pods of this application accordingly.
The ingress load-balancer will be reviewed during the next lessons.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: yelb-ui-ingress
spec:
  rules:
  - host: yelb.demo.corp.local
    http:
      paths:
      - backend:
          serviceName: yelb-ui
          servicePort: 80
...
Deploy demo application
Deploy an application by using the command:
kubectl create -f yelb-app.yaml
Check status of Kubernetes pods
Wait until all Kubernetes pods have moved from status ContainerCreating to status Running.
You can check for the status with the command:
kubectl get pods -o wide
It can take some minutes until the status changes to Running and you can proceed to the next step.

Also, take note of the names and IP addresses of your yelb-ui-UUID pods, as you will need them during the next steps.

The IP addresses in your lab will differ from the ones provided in the screenshots, as they are dynamically allocated by NSX-T.
Verify NSGroup membership of UI pods
Switch back to your Chrome browser with NSX Manager UI.
You will now verify that the pre-configured NSGroup auto-populated its members with your two yelb-ui pods.

1. Click on Inventory
2. Select your yelb-web NSGroup
3. Select Members
Verify NSGroup membership of UI pods
Use the drop-down menu to toggle between the different types of Member Objects.
Have a look at the Logical Port and IP Address Member Objects.
In the Logical Port section choose Effective members and notice the two yelb-ui ports being members of this group.

In the IP Address section you will see the IP addresses related to these logical ports, which are the IP addresses of your yelb-ui pods you saw in the previous steps.
Conclusion of NSGroups
You verified the capabilities of NSX-T to automatically add members to a pre-configured group based on Kubernetes Labels, which are translated to NSX-T security tags.

In the following steps, you will perform some basic connectivity tests to verify that NSX Micro-segmentation is working for your yelb-ui pods.
Connect to yelb-ui pod
Open your Putty session to find out the names and IP addresses of your yelb-ui pods with the command:
kubectl get pods -o wide
Take a note of the NAME of one of your yelb-ui-UUID pods.
Connect to yelb-ui pod
To connect to a yelb-ui pod of your choice use the command:
kubectl exec -it yelb-ui-UUID /bin/bash
You will notice how the command prompt changes as soon as you are connected to the pod instance.

The pod UUIDs in your lab will differ from the ones provided in the screenshots!
Perform basic connectivity test
Ping between your yelb-ui pods by using the command:
ping **IP address of other yelb-ui pod** -c 3
You will see how your other yelb-ui instance replies to your icmp requests.
Adjust firewall rule
Switch back to your Chrome browser with NSX Manager UI.
You will now change the pre-configured firewall rule to add Micro-segmentation between your frontend yelb-ui pods.

1. Click on Firewall and navigate to the Web-UI micro-segmentation rule
2. Change the firewall action to Reject by using the pencil icon next to the firewall action
3. Click Save to deploy the firewall changes

You could have also set a firewall action of Drop, but to see that the firewall is actually dropping packets, an action of Reject is used in this lab.
Re-perform basic connectivity test
Switch back to the Putty window with the connection into the pod instance and re-perform your Ping test to the other pod instance.

You will see that your icmp requests fail and the NSX-T distributed firewall is rejecting the packets, providing Micro-segmentation between your yelb-ui pods.
ping **IP address of other yelb-ui pod** -c 3
exit
Exit the pod instance after performing the Ping.
NSX-T ingress load-balancer for Kubernetes service

In the previous module you already discovered how NSX-T can provide Layer 4 load-balancing for a Kubernetes Service which does not necessarily need to listen on HTTP / HTTPS TCP ports.

In this lesson you will see how NSX-T can also provide Layer 7 load-balancing for Kubernetes Services of type ingress for HTTP / HTTPS based applications.

You will deploy two applications sharing the same virtual IP address on an NSX-T load-balancer and you will see how NSX-T is able to perform load-balancing based on Layer 7 HTTP requests to decide how to handle packets.
Check NSX-T Virtual Server IP addresses
Switch to your Chrome Browser with NSX Manager UI and navigate to the virtual servers.

1. Click on Load Balancing
2. Click on Virtual Servers
If the IP address in your virtual server configuration is not 10.40.14.36, you need to adjust a DNS entry to make your ingress load-balancer work!

Take note of your IP address if it is not 10.40.14.36!
Open DNS
If the IP address in your virtual server configuration for the Kubernetes ingress load-balancer is 10.40.14.36, you can skip the next steps and proceed with step Verify DNS entries.

1. Click on the Windows Start menu icon
2. Search for dns
3. Open DNS
Locate Forward Lookup Zone
1. Expand ControlCenter.corp.local
2. Expand Forward Lookup Zones
3. Click on demo.corp.local to see DNS entries for this zone
4. Double click on the * Host (A) 10.40.14.36 DNS entry
Adjust DNS entry
1. Adjust the IP address of the DNS entry to match your Kubernetes ingress load-balancer IP address
2. Click OK
Verify DNS entries
In this lesson you will review the automatically created NSX-T ingress load-balancer for the demo application you deployed earlier.
The lab is configured in a way that there is a wildcard DNS entry for *.demo.corp.local resolving to 10.40.14.36.

Your pre-created load-balancer, which you reviewed in the first module, is using this IP address of 10.40.14.36 to terminate an HTTP and HTTPS VIP and perform load-balancing based on the requested URL.

To show this capability of the NSX-T load-balancer, you will deploy a second demo application using the same IP address, but a different URL.

First verify the DNS entry by opening a command prompt and checking the DNS entry:
nslookup *.demo.corp.local
If you needed to adjust the DNS entry in the previous steps, your IP address will differ from the one in the screenshot!
Deploy second demo application
You can review the configuration details of the second application you will deploy in this step by using the command:
more nsx-demo-type-ingress.yaml
This application will also use an ingress load-balancer and uses a URL of nsx.demo.corp.local as you can see in the ingress definition.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nsx-demo-ingress
spec:
  rules:
  - host: nsx.demo.corp.local
    http:
      paths:
      - backend:
          serviceName: nsx-demo
          servicePort: 80
...
Deploy this application with the command:
kubectl create -f nsx-demo-type-ingress.yaml
Review NSX-T load-balancer configuration
Switch back to your Chrome Browser with NSX Manager UI and navigate to the virtual servers.

1. Click on Load Balancing
2. Click on Virtual Servers
You will notice two virtual servers using the same IP address of 10.40.14.36 for terminating HTTP (80) and HTTPS (443) requests.

These virtual servers were automatically created during PKS Kubernetes cluster creation and serve as the Kubernetes ingress load-balancer.

Note: You will also see a third virtual server listening on 10.40.14.34:8443. This is the virtual server for the Kubernetes masters' API, which was already discussed in module 1.

The load-balancer virtual server IP addresses may be different in your lab environment!
Locate load-balancer rules
As your demo applications both use the HTTP (TCP 80) protocol, navigate to the HTTP virtual server to review the auto-created load-balancing rules for your ingress load-balancer.

1. Choose the HTTP virtual server
2. Click on LB Rules
3. Use the Phase drop-down menu and choose Http Forwarding
Review load-balancer rules
You can see two HTTP forwarding rules for your demo applications.
Each rule distinguishes, based on URL, which backend pool to use to balance incoming HTTP requests.
Review load-balancer backend pools
To navigate to the auto-created backend load-balancer pools, click on Server Pools
Review pool members
You can see that each load-balancer server pool for the deployed applications has two members.
Review pool members
By clicking 2 in the Members column you can see the details about your pool members.

You will notice that the pool members match the names of your deployed Kubernetes pods for your applications.

Perform this task for both load-balancer pools.
Verify Kubernetes pod names
You can double-check the Kubernetes pod names by switching back to your Putty session and using the command:
kubectl get pods
Test demo applications
Switch back to your Chrome browser and open a new tab.
Click on the Yelb shortcut to open your demo application.
Click on the Vote buttons and reload the page several times to fill the backend database.
*There is a chance that DNS will not resolve properly inside the browser. If this happens, you will be presented with a search page rather than the application. In that case, to verify the app, go to the cli-vm and issue the following command:
curl yelb.demo.corp.local

This will return the HTML response from the page.
Test demo applications
Open another tab in your Chrome browser and click on the NSX Demo shortcut this time.
Notice that even though both applications are using HTTP (TCP 80) on the same IP address, the NSX-T load-balancer directs you to the right backend pools depending on which application URL you are requesting.

Click on the NSX Demo shortcut several times to see how NSX-T load-balancing works and see how your requests are redirected between your two backend pods each time you reload the page.
*There is a chance that DNS will not resolve properly inside the browser. If this happens, you will be presented with a search page rather than the application. In that case, to verify the app, go to the cli-vm and issue the following command:

curl nsx.demo.corp.local

This will return the HTML response from the page. This can be repeated to show both IPs being returned.
Secure application with Kubernetes Network Policy

In this lesson you will learn a second way to secure your Kubernetes container networks with the NSX-T distributed firewall.

In a previous lesson you learned how a security administrator can pre-configure NSX-T firewall rules for Kubernetes applications.

But it is also possible to allow a Kubernetes administrator to secure their own applications by means of Kubernetes Network Policy.

You will configure a Kubernetes Network Policy and see how it translates into NSX-T distributed firewall rules, giving the security administrator a single pane of glass to review what was configured by an application owner within Kubernetes.
Review NSX firewall markers for Kubernetes Network Policy
An NSX-T administrator can decide where NSX Container Plugin (NCP) will place translated Kubernetes Network Policies within the NSX-T firewall ruleset.
To define where NCP will place Kubernetes Network Policies, NSX tags are used for specific firewall sections.

Switch to your Chrome Browser with NSX Manager UI.

1. Click on Firewall to navigate to the NSX Firewall UI
2. Click on the three horizontal bars on the left hand side of the Admin Section
3. Click Manage Tags
Review NSX firewall markers for Kubernetes Network Policy
You will see that the Admin Section is tagged with Scope: ncp/fw_sect_marker and Tag: top.

This tag instructs NCP to put all Kubernetes Network Policies below this section, as it is the top section.
Review NSX firewall markers for Kubernetes Network Policy
Perform the same step for the Default Layer3 Section and you will notice it is tagged with Scope: ncp/fw_sect_marker and Tag: bottom.

With these two tagged firewall sections, NCP knows that it needs to put all Kubernetes Network Policies between these sections.
Click on Cancel to return to the NSX firewall management.
Review Kubernetes Network policy definition
Switch back to your Putty window and review the pre-defined Kubernetes Network Policy with the command:
more yelb-netpol.yaml
This policy secures the yelb demo application and uses Kubernetes Labels (which will be translated to NSX-T security tags) to automatically group the respective Kubernetes pods.
It also defines, for each application tier, which other application tiers are allowed to connect and on which TCP port.
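As a rough illustration of what such a policy can look like, here is a minimal NetworkPolicy sketch for the app tier. The label names are assumptions for illustration; the actual yelb-netpol.yaml in the lab may differ, so review it with the more command above:

```yaml
# Hedged sketch of a Kubernetes NetworkPolicy; labels are illustrative
# assumptions, not the exact content of yelb-netpol.yaml.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: yelb-app-allow-ui
spec:
  podSelector:
    matchLabels:
      app: yelb-appserver        # the tier being protected
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: yelb-ui       # only the web frontend may connect
      ports:
        - protocol: TCP
          port: 4567             # the app-tier port used later in this lab
```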
Deploy Kubernetes Network Policy
Deploy the Kubernetes Network Policy with the command:
kubectl create -f yelb-netpol.yaml
Review translated Kubernetes Network Policy
Switch back to your Chrome browser with the NSX Manager firewall UI and refresh the page by hitting F5 on your keyboard.*
You will notice that many new firewall sections were added between the Admin Section and the Default Layer3 Section. This is the Kubernetes Network Policy you just deployed, translated into NSX-T distributed firewall rules.
*The refresh button will not load the new sections. The browser page must be reloaded.
Review translated Kubernetes Network Policy
You can review the firewall rules by expanding each of them by clicking the + icon on the left.
Click on the different objects within the rule itself.
You will see the TCP ports as defined in the Kubernetes Network Policy.
Also have a look at the source and destination groups, and you will see the already familiar IP addresses of the web frontend pods of your demo application.
Conclusion
You have seen how NSX-T translates Kubernetes Network Policies and uses them to auto-populate the NSX-T firewall ruleset.
In the next module you will verify that the Kubernetes Network Policy you just deployed works as desired, by checking reachability of your frontend pods with an NSX-T integrated tool called Traceflow.
Conclusion
You've finished Module 3
Congratulations on completing Module 3.
If you are looking for additional information on VMware PKS and NSX-T, try one of these:
• Read more about NSX Data Center on the official VMware product page
• Download the VMware NSX-T Reference Design Guide
• Read the Pivotal Container Service blog
Proceed to any module below which interests you most.
• Module 1 - Introduction to PKS on NSX-T
• Module 2 - NSX-T network virtualization and Kubernetes namespaces
• Module 3 - NSX-T Micro-segmentation and network services for Kubernetes
• Module 4 - NSX-T operational tools and visibility
How to End Lab
If you are NOT moving on to Module 4, you can end the lab here by clicking the END button as depicted.
If you are planning to continue on to Module 4, then proceed to the module clean-up procedures described on the following page.
Lab clean-up
In this lesson you will clean up the lab for the upcoming modules.
Lab clean-up
Please clean up your lab environment to free space and conserve resources for the upcoming modules.
Run the following command to delete objects you will no longer need:
kubectl delete -f nsx-demo-type-ingress.yaml
Please do not delete the yelb demo application as this will still be used in the next module!
Module 4 - NSX-T Operational Tools and Visibility (30 minutes)
Introduction
This module will show you how to use the integrated monitoring and troubleshooting tools in NSX-T for a Kubernetes environment.
You will use NSX-T Traceflow to verify connectivity between two pods in your environment to identify any potential issues.
You will also set up a remote SPAN session to your local machine to analyze pod network traffic with tools like Wireshark.
Lab Topology
This is the NSX-T networking topology you will be working with in this module. All of it was already created during the previous modules.
1. An NSX-T Tier-0 router called t0-pks was pre-created for you and attached to the physical networks via BGP routing. PKS was also configured to use this Tier-0 router to attach the automatically created Tier-1 routers for the Kubernetes Namespaces.
2. An NSX-T Tier-1 router called t1-pks-mgmt was pre-created for you. This router hosts all PKS control VMs, such as PKS itself, BOSH, Pivotal Ops Manager, and Harbor Registry.
3. This is the Tier-1 router for your PKS Kubernetes cluster, which was automatically provisioned during the PKS cluster deployment in Module 1.
4. This is the Tier-1 routing instance for your NATted Kubernetes Namespace named nsx. You will deploy all demo applications within this Kubernetes Namespace and verify NSX-T networking objects within this Namespace.
Lab preparation
If you are starting your lab with this module, you will need to perform the following steps!
If you already worked through the previous module and have your yelb demo application still running, you can skip to the next lesson!
Open Putty from Windows Quick Launch Task Bar
1. Click on the Putty icon on the Windows Quick Launch Task Bar.
Connect to cli-vm
1. Highlight the saved cli-vm session
2. Click on Load
3. Click Open to connect to the cli-vm via SSH
You will be automatically logged in to a terminal session.
Get Kubernetes credentials via PKS CLI
Use the following command to retrieve the login credentials for your Kubernetes cluster created by VMware Pivotal Container Service (PKS):
pks get-credentials my-cluster
You will not need to switch between different PKS cluster configurations, as there is only one PKS Kubernetes cluster running in this lab.
Change directory in cli-vm
The pre-created .yaml files used in this lab are located in the 1935 directory. These .yaml files define the different kinds of objects you will create during the lab, such as Kubernetes Namespaces, applications running in Kubernetes pods, and Kubernetes Network Policies to secure your applications.
1. Use the following command to change the directory:
cd 1935
Kubernetes Namespace creation
In this step you will create a Kubernetes Namespace with the default settings.
1. Create a Kubernetes Namespace called nsx:
kubectl create ns nsx
2. Verify that the Kubernetes Namespace was created:
kubectl get ns
Set Kubernetes context
To set your working Namespace to the nsx Namespace, use the command:
kubectl config set-context my-cluster --namespace nsx
Deploy demo application
Deploy a demo application:
kubectl create -f yelb-app.yaml
Verify Kubernetes pod status
Wait until all pod instances move from the status ContainerCreating to the status Running.
You can check for the status with the command:
kubectl get pods -o wide
It can take a few minutes until the status changes to Running and you can proceed to the next step.
The IP addresses in your lab will differ from the ones shown in the screenshots, as they are dynamically allocated by NSX-T.
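If you prefer not to re-run the kubectl command by hand, a small polling loop can do the waiting for you. This is a sketch, not part of the lab files; the KUBECTL variable exists only so the loop can be exercised without a live cluster:

```shell
#!/bin/sh
# Sketch: poll until every pod reports Running.
# KUBECTL defaults to the real kubectl; override it for dry runs.
KUBECTL=${KUBECTL:-kubectl}

wait_for_running() {
  attempt=0
  while [ "$attempt" -lt 30 ]; do
    # Column 3 of "kubectl get pods --no-headers" is the STATUS field.
    not_running=$($KUBECTL get pods --no-headers | awk '$3 != "Running"' | wc -l)
    if [ "$not_running" -eq 0 ]; then
      echo "all pods Running"
      return 0
    fi
    attempt=$((attempt + 1))
    sleep 10
  done
  echo "timed out waiting for pods" >&2
  return 1
}
```

With a live cluster, simply call wait_for_running after defining the function.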
Deploy Kubernetes Network Policy
Deploy a Kubernetes Network Policy with the command:
kubectl create -f yelb-netpol.yaml
NSX-T Traceflow for Kubernetes Pods
NSX-T Traceflow allows you to inspect the path of a packet as it travels from one logical port on the logical network to another logical port on the network.
Traceflow traces the transport node-level path of a packet injected at a logical port.
The trace packet traverses the logical switch overlay, but is not visible to interfaces attached to the logical switch. In other words, no packet is actually delivered to the test packet's intended recipient.
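Traceflow can also be driven through the NSX-T Manager REST API rather than the UI. The request body below is only a sketch: the field names follow the general shape of the NSX-T Manager API, but the exact schema varies by NSX-T version, so verify it against the API reference for your release before use:

```json
{
  "lport_id": "<uuid-of-source-logical-port>",
  "timeout": 10000,
  "packet": {
    "resource_type": "FieldsPacketData",
    "transport_type": "UNICAST",
    "ip_header": {
      "src_ip": "172.16.1.2",
      "dst_ip": "172.16.2.2"
    },
    "transport_header": {
      "tcp_header": { "src_port": 1234, "dst_port": 80 }
    }
  }
}
```

The TCP ports 1234 and 80 mirror the values you will enter in the UI later in this lesson.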
Deploy a second demo application in a different Namespace
Remember that all Kubernetes pods in this lab belonging to the same Kubernetes Namespace share one logical switch.
This means that no IP routing happens when pods communicate with other pods in the same Kubernetes Namespace.
To show the full capabilities of the NSX-T Traceflow feature, including routing, you need to deploy a second demo application in a different Kubernetes Namespace.
Use the following command to deploy a second demo application in the default Kubernetes Namespace:
kubectl create -n default -f nsx-demo-type-ingress.yaml
Open Chrome Browser from Windows Quick Launch Task Bar
1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
Navigate to NSX-T Manager
1. Use the bookmark called NSX-T Manager to connect to NSX-T Manager.
2. Login to NSX-T Manager using the following credentials:
Username: admin
Password: VMware1!
3. Click Login button
Navigate to NSX-T Traceflow
1. Click on Tools
2. Click on Traceflow
Select Source for Traceflow
In this lesson you will choose source and destination Kubernetes pods to emulate network traffic.
1. Use the Type drop-down menu and select Logical Port
2. Under Port, start typing nsx-demo and choose one of the two Kubernetes pod ports which are part of the nsx demo application
Select Destination for Traceflow
Perform the same steps to select a destination for Traceflow
1. Use the Type drop-down menu and select Logical Port
2. Under Port, start typing yelb-ui and choose one of the two Kubernetes pod ports which are part of the yelb-ui web frontend application
Check Source and Destination IP and MAC
NSX-T will automatically populate the IP and MAC addresses associated with the logical Kubernetes pod ports.
Notice that the IP addresses are from different IP subnets, as the selected Kubernetes pods are members of different Kubernetes Namespaces.
Start Trace
1. Click on Trace to start the packet trace
Traceflow result
After some seconds the results for Traceflow will appear.
1. On the left side of the screen you see the logical network connection between the two selected logical ports. The logical network representation is on the top and the physical ESXi servers are on the bottom.
You can click on each of the components to drill down and get more detailed information.
2. On the right side of the screen you see the Traceflow results. You will notice that the packet NSX-T Traceflow sent out on behalf of the container was immediately dropped by the NSX-T distributed firewall. You can identify the firewall rule dropping the packet by matching the shown rule ID to the corresponding rule in the Firewall management section. This is the expected result of this trace: the default setting uses an ICMP echo request to perform the trace between the source and destination Kubernetes pods, but the Kubernetes Network Policy applied earlier to the destination pod allows only HTTP traffic.
Adjust Traceflow settings
1. Click on Edit to return to the Traceflow configuration
Adjust Traceflow settings and re-trace
1. Click on Advanced to show more options for Traceflow configuration
2. Use the drop-down menu next to Type and choose TCP as a protocol
3. Type in Source port 1234
4. Type in Destination port 80
5. Click on Trace to perform the Traceflow
Traceflow result
As you now used a legitimate packet type to perform the Traceflow, you will see that it succeeded this time.
1. The logical network connection diagram on the left side of the screen will be the same as in the previous Traceflow result, as you were working with the exact same Kubernetes pods.
2. The Traceflow results on the right side will now show you that the packet was successfully delivered to the destination logical port. Every logical network component within NSX-T reports back to NSX-T Manager how it handled the packet.
3. To get even more detailed information, you can download the Routing and ARP tables from the logical routers and the MAC and TEP tables from the logical switches in the packet path by clicking the download symbol next to the component name.
Conclusion
This lesson showed you how to use NSX-T Traceflow to troubleshoot connections between two endpoints connected to an NSX-T virtualized network, be it a container, a Kubernetes pod, or a virtual machine.
With NSX-T Traceflow it is not necessary to have access to a specific source or destination to test network connections between them.
NSX-T Traceflow synthesizes the desired packets for you and allows you to troubleshoot connections end to end without the help of the owner of the Kubernetes pod or virtual machine you want to troubleshoot.
NSX-T Port Mirroring for container ports
Logical port mirroring lets you replicate and redirect all of the traffic coming in or out of a logical switch port attached to a virtual port of a Kubernetes pod or a virtual machine.
The mirrored traffic is sent encapsulated within a Generic Routing Encapsulation (GRE) tunnel to a collector so that all of the original packet information is preserved while traversing the network to a remote destination.
Typically port mirroring is used in the following scenarios:
• Troubleshooting - Analyze the traffic to detect intrusions and to debug and diagnose errors on a network.
• Compliance and monitoring - Forward all of the monitored traffic to a network appliance for analysis and remediation.
Compared to physical port mirroring, logical port mirroring ensures that all of the pod or VM network traffic is captured. If you implement port mirroring only in the physical network, some of the pod or VM network traffic fails to be mirrored. This happens because communication between VMs or Kubernetes pods residing on the same host never enters the physical network and therefore does not get mirrored.
You can forward the traffic captured from a workload connected to a logical network and mirror that traffic to a collector.
Add a custom Port Mirroring Switching Profile
Switch to your Chrome browser with NSX Manager UI.
1. Click Switching in the navigation panel
2. Click the Switching Profiles tab
3. Click Add
4. Select Port Mirroring
Configure custom Port Mirroring Switching Profile
1. Name the switching profile HOL-port-mirroring-profile
2. Add IP address 192.168.110.10 as a Destination. This is the IP address of the Windows machine you are working on.
3. Click Save
Navigate to Logical Ports
1. Click on the Ports tab
Find Logical Port to mirror traffic from
1. Start typing yelb-app in the search bar on the top right. This will filter for the logical port of the yelb-app Kubernetes pod which will be monitored in this lesson.
Find IP address of container instance
1. Click on the port of the yelb-app Kubernetes pod (this should be the only visible logical port after using the search function in the previous step)
2. Click on Address Bindings
3. Take a note of the IP address of your Kubernetes pod associated with this logical port under Manual Bindings
The IP address in your lab might be different from the one shown in the screenshot.
Open Wireshark from Windows Quick Launch Task Bar
1. Click on the Wireshark icon on the Windows Quick Launch Task Bar.
Set capture filter
1. Set the filter in Wireshark to tcp.port == 4567 and not icmp and hit the Enter key (note that this is Wireshark display-filter syntax)
2. Start the packet capture by clicking on the blue fin button
tcp.port == 4567 and not icmp
You may need to resize the different sections of Wireshark to get a better view.
TCP port 4567 is used by the web frontend pods of the demo application to connect to the app-tier pods.
At this point in time, you should not see any packets in your capture, as the port mirroring profile is not yet assigned to the virtual port.
Apply Port Mirroring Profile
Switch back to your Chrome browser with NSX Manager UI.
1. Click Manage
2. Select Port Mirroring
Change Port Mirroring Profile
1. Click on Change to select another Port Mirroring Profile
Choose Port Mirroring Profile
1. Choose HOL-port-mirroring-profile from the drop-down menu
2. Click Save
Generate traffic
1. Open another tab in your Chrome Browser and press the Yelb shortcut button
2. Click on the Vote buttons several times to generate traffic between the web frontend pods and the app-tier pods
Verify Wireshark capture
In your running Wireshark packet capture you should see TCP port 4567 packets coming in for the IP address of the app-tier Kubernetes pod you discovered earlier.
Conclusion
In this lesson you have seen how NSX-T is able to remotely SPAN ports of Kubernetes pods, because every single Kubernetes pod is connected by its own logical interface directly to an NSX-T virtual network.
This is a big differentiator for NSX-T, as most other solutions on the market only connect Kubernetes nodes to the network and have no insight into the virtual Kubernetes network itself.
Conclusion
You've finished Module 4
Congratulations on completing Module 4.
This is the end of this lab.
If you are looking for additional information on VMware PKS and NSX-T, try one of these:
• Read more about NSX Data Center on the official VMware product page
• Download the VMware NSX-T Reference Design Guide
• Read the Pivotal Container Service blog
How to End Lab
To end your lab click on the END button.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit http://hol.vmware.com/ to continue your lab experience online.
Lab SKU: HOL-1935-01-NET
Version: 20200210-210231