Module 1: VMware vSphere Network: An Overview




To understand the vNetwork components in detail, it is important to understand the vSphere datacenter architecture.

A typical vSphere datacenter comprises basic physical building blocks such as x86 computing servers, storage networks and arrays, IP networks, a management server, and desktop clients. The computing servers are industry-standard x86 servers that run VMware ESXi™ Server on the bare-metal hardware. The virtual machines are hosted on an ESXi Server. All resources needed by a virtual machine are provided by the ESXi Server. Each computing server is referred to as a standalone host in the virtual environment. A number of similarly configured x86 servers can be grouped together with connections to the same network and storage subsystems to provide a pool of resources in the virtual environment.

Fibre Channel Storage Area Network or SAN arrays, Internet Small Computer Systems Interface or iSCSI SAN arrays, and Network-Attached Storage or NAS arrays are widely used storage technologies supported by vSphere to meet different datacenter storage needs. Sharing the storage arrays between groups of servers through the storage area networks allows the IT department to pool storage resources and provides more flexibility in provisioning storage resources for virtual machines.

Each computing server can have multiple gigabit Ethernet network interface cards to provide high bandwidth and reliable networking to the entire datacenter.

VMware vCenter™ provides a convenient single point of control for the datacenter. It runs on top of 64-bit versions of Windows Server 2003 and 2008 to provide many essential datacenter services such as access control, performance monitoring, and configuration. vCenter Server unifies resources from the individual computing servers so they can be shared among virtual machines in the entire datacenter. It accomplishes this by managing the assignment of virtual machines to computing servers and the assignment of resources to virtual machines within a given computing server, based on the policies set by the system administrator.

Computing servers continue to function even if vCenter Server becomes unreachable, for example, when the network is severed. Computing servers can be managed separately and continue to run the virtual machines assigned to them based on the resource assignment that was last set. After vCenter Server becomes reachable, it can manage the datacenter again.

vSphere provides a selection of interfaces for managing datacenters and accessing virtual machines. Users can select the interface that best meets their needs. This includes VMware vSphere® Client™, vSphere Web Client through a Web browser, or terminal services such as Windows Terminal Services or Xterm.

You will now learn how networking operates in the physical and virtual worlds.

Historically, network administrators have owned the core, distribution, and access layers of a physical network in a physical computing environment. In a physical world, each server has a dedicated network cable plugged into a physical port on a switch.


The access layer provides a good place to monitor network traffic and interpose on network traffic if the need arises.

For organizations that manage physical infrastructure with separate administrators for servers and networks, the switch port is the line of demarcation between servers and network.

Virtualization moves the edge of the network or the access layer into the realm of the server as virtual machines access the network layer through virtual switches. These virtual switches are configured either independently on each ESXi host or through a centrally configured virtual switch.

The vSphere infrastructure provides two types of virtual networking architecture, the standard virtual switch architecture and the distributed virtual switch architecture.

Standard virtual switches manage virtual machines and networking at the host level. This networking architecture is supported on all versions of vSphere. A distributed virtual switch manages virtual machines and networking at the datacenter level. Distributed virtual switches are not available in all versions of vSphere. VMware recommends that all networks be set up or migrated using the distributed virtual switch architecture, since it simplifies the datacenter by centralizing network configuration in addition to providing a more robust feature set.

Although the distributed network architecture is recommended for setting up virtual networks in vSphere 5.0, it is important to understand how the components of the standard virtual switch work so you can successfully either migrate components from this architecture to the distributed network architecture as required or support environments that only have standard virtual switches implemented.

The next few screens will explain each type of networking architecture in detail.

The components of the standard virtual switch architecture are configured at the host level. The standard virtual environment provides networking elements similar to those found in actual physical switches.

Like a physical machine, each virtual machine has one or more virtual network adapters, or virtual network interface cards (vNICs). The operating system and applications communicate with the vNIC through a standard device driver or a VMware-optimized device driver just as though the vNIC were a physical NIC. The vNIC has its own MAC address, can be configured with multiple IP addresses, and responds to the standard Ethernet protocol exactly as a physical NIC would. Nonetheless, an outside agent can determine that it is communicating with a virtual machine only if it checks the vendor identifier portion of the six-byte MAC address.
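As a quick illustration of that last point, the short Python sketch below checks whether a MAC address begins with an organizationally unique identifier (OUI) commonly assigned to VMware vNICs. The two prefixes listed are examples only, not an exhaustive or authoritative registry lookup.

# Illustrative only: checks a MAC against two OUIs commonly used for VMware vNICs.
VMWARE_OUIS = {"00:50:56", "00:0c:29"}   # example prefixes, not a complete list

def looks_like_vmware_vnic(mac):
    return mac.lower()[:8] in VMWARE_OUIS

print(looks_like_vmware_vnic("00:50:56:9a:bc:de"))   # True
print(looks_like_vmware_vnic("3c:22:fb:12:34:56"))   # False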

A standard virtual switch, also referred to as a vSwitch, operates much like a physical Layer 2 switch. It maintains a MAC address-to-port forwarding table and performs three important functions: looking up each frame’s destination MAC address when it arrives, forwarding the frame to one or more ports for transmission, and avoiding unnecessary deliveries.
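The following Python sketch (not VMware code; the port names and MAC addresses are made up) illustrates the forwarding behavior just described: learn the source MAC per port, forward known unicast frames to a single port, and flood only when the destination is unknown.

# Minimal conceptual model of Layer 2 learning and forwarding.
class TinyVSwitch:
    def __init__(self, ports):
        self.ports = set(ports)      # virtual port identifiers
        self.mac_table = {}          # MAC address -> port (the forwarding table)

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port            # learn/refresh the source MAC
        out_port = self.mac_table.get(dst_mac)
        if out_port == in_port:
            return []                                # avoid an unnecessary delivery
        if out_port is not None:
            return [out_port]                        # known destination: one port only
        return [p for p in self.ports if p != in_port]   # unknown or broadcast: flood

switch = TinyVSwitch(ports=["vnic0", "vnic1", "uplink0"])
switch.receive("vnic0", "00:50:56:aa:aa:aa", "ff:ff:ff:ff:ff:ff")           # broadcast floods
print(switch.receive("vnic1", "00:50:56:bb:bb:bb", "00:50:56:aa:aa:aa"))    # ['vnic0']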


Each host server can have multiple standard virtual switches. You can create up to 127 virtual switches on each ESXi host. Each standard virtual switch has two sides. On one side of the virtual switch you have port groups. Port groups connect virtual machines to the standard virtual switch. On the other side of the standard virtual switch you have what are known as uplink ports. Uplink ports connect the standard virtual switch to physical Ethernet adapters, which reside on the host. In turn, these physical Ethernet adapters connect to physical switches leading to the outside world.

A standard virtual switch can connect its uplink ports to more than one physical Ethernet adapter to enable NIC teaming. With NIC teaming, two or more physical adapters can be used for load balancing or to provide failover capabilities in the event of a physical adapter hardware failure or a network outage.

The virtual ports on a virtual standard switch provide logical connection points among and between virtual and physical devices. You can think of the virtual ports as virtual RJ-45 ports. Each virtual switch can have up to 1,016 virtual ports, with a limit of 4,096 ports on all virtual switches on a host. This system-wide limit includes eight reserved ports per standard virtual switch.

Virtual Ethernet adapters or vNICs connect to virtual ports when you power on the virtual machine on which the adapters are configured, when you take an explicit action to connect the device, or when you migrate a virtual machine using VMware vSphere® vMotion®.

A virtual Ethernet adapter updates the virtual switch port with the MAC filtering information when it is initialized and whenever it changes. A virtual port may ignore any requests from the virtual Ethernet adapter that would violate the Layer 2 security policy in effect for the port. For example, if MAC spoofing is blocked, the port drops any packets that violate this rule.

Let us now discuss port groups.

A port group is a unique concept in the virtual environment. A port group is a mechanism for setting policies that govern the network connected to it. Instead of connecting to a particular port on a standard virtual switch, a virtual machine connects its vNIC to a port group. All virtual machines that connect to the same port group belong to the same network inside the virtual environment.

Port groups can be configured to enforce a number of policies that provide enhanced network security, network segmentation, better performance, higher availability, and traffic management.

Just as port groups can be created to handle virtual machine traffic, a VMkernel connection type, or VMkernel port, can be created to provide network connectivity for the host and to handle VMware vMotion, IP storage, and Fault Tolerance traffic.

Moving a virtual machine from one host to another is called migration. Using vMotion you can migrate powered on virtual machines with no downtime. Please note that your VMkernel networking stack must be set up properly to accommodate vMotion.


IP storage refers to any form of storage that ESXi accesses over a TCP/IP network. Because these storage types are network based, they can use the same VMkernel interface and port group.

The main drawback of a standard virtual switch is that every ESXi host must have its own vSwitches configured on it. That means that virtual local area networks or VLANs, security policies, and teaming policies have to be configured individually on each and every ESXi host. If a policy needs to change, the vSphere administrator must change that policy on every host. While vCenter Server does allow the administrator to centrally manage the ESXi hosts, changes to standard virtual switches still have to be made on each ESXi host.

Another drawback is that when a virtual machine is migrated with VMware vMotion, the networking state of the virtual machine gets reset. This makes network monitoring and troubleshooting a more complex task in a virtual environment.

Distributed virtual switches, or dvSwitches, are configured and managed using vCenter Server. Distributed virtual switches are the recommended architecture to use when setting up a vNetwork.

A distributed virtual switch acts as a single virtual switch across all associated hosts. This allows virtual machines to maintain a consistent network configuration as they migrate across hosts. Like a standard virtual switch, a distributed virtual switch is a layer 2 network mechanism for virtual machines. A distributed virtual switch can route traffic internally between virtual machines or link to an external network.

Distributed switches exist across two or more clustered ESXi hosts. vCenter Server owns the configuration of distributed virtual switches, and the configuration is consistent across all hosts. The uplink ports on the distributed virtual switch connect to uplink ports on hidden standard virtual switches. The hidden standard virtual switch uplink ports connect to physical NICs, which then connect to physical switches in the outside world.

Be careful not to confuse a distributed virtual switch with a single switch spanning across several hosts. Two virtual machines on different hosts can communicate with each other only if both virtual machines have uplinks in the same broadcast domain with each other.

Each distributed switch includes distributed ports. A distributed port is a port on a vSphere distributed switch that connects to the VMkernel or to a virtual machine's network adapter. vCenter Server stores the state of distributed ports in the vCenter Server database, so networking statistics and policies migrate with virtual machines when moved across hosts. Migrating the state of a distributed port with vMotion is important when implementing state-dependent features, such as inline intrusion detection systems, firewalls, and third-party virtual switches.

Distributed port groups perform the same functions as port groups in standard virtual switches. They provide a way to logically group distributed ports to simplify configuration, and they inherit all distributed switch properties. A distributed port group does not constitute a means to segregate traffic within the distributed switch unless you use private VLANs.


Within a distributed virtual switch, the control and I/O planes are separate. The control plane resides in and is owned by vCenter Server. The control plane is responsible for configuring distributed switches, distributed port groups, distributed ports, uplinks, and NIC teaming. The control plane also coordinates the migration of the ports and is responsible for the switch configuration. For example, in the case of a conflict in the assignment of a distributed port (say, because a virtual machine and its template are powered on), the control plane is responsible for deciding what to do.

The I/O Plane is implemented as a hidden standard virtual switch inside the VMkernel of each ESXi host. The I/O plane manages the actual I/O hardware on the host and is responsible for forwarding packets.

The diagram on the screen shows the components of the I/O plane of a distributed virtual switch. On each host, an I/O plane agent runs as a VMkernel process and is responsible for communicating between the control and the I/O planes.

I/O filters are attached to the I/O chains connecting the vNICs to the distributed ports and the distributed ports to the uplinks. vNetwork Appliance APIs make it possible to define custom filters and apply them to the I/O chains. The APIs also provide the means to preserve filtering information for the virtual machine connected to each port, even after a vMotion migration.

Inside the I/O plane, the forwarding engine decides how to forward packets to other distributed ports. The engine can forward the packets towards other virtual machines on the same distributed switch or to an uplink, requiring it to make NIC teaming decisions. Forwarding functions can also be customized using the vNetwork Appliance APIs.

Network configuration at the datacenter level offers several advantages.

First, it simplifies datacenter setup and administration by centralizing network configuration. For example, adding a new host to a cluster and making it vMotion compatible is much easier.

Second, distributed ports migrate with their clients. So, when you migrate a virtual machine with vMotion, the distributed port statistics and policies move with the virtual machine, thus simplifying debugging and troubleshooting.

Finally, enterprise networking vendors can provide proprietary networking interfaces to monitor, control, and manage virtual networks.

vNetwork Appliance APIs allow third-party developers to create distributed switch solutions for use in a vSphere datacenter. Third-party solutions allow network administrators to extend existing network operations and management into the vSphere datacenter.

This diagram shows the basic way a third-party solution plugs in to the vNetwork architecture.

The Custom Control Plane is implemented outside of vCenter Server, for example it may be implemented as a virtual appliance.

The vSphere Client includes a plug-in to provide a management interface.


vCenter Server includes an extension to handle the communication with the control plane.

On the host, a custom I/O plane agent replaces the standard I/O plane agent and the I/O plane itself may be replaced for customization of forwarding and filtering.

An example of a third-party switch that leverages the vNetwork APIs is the Cisco Nexus 1000V. Network administrators can use this solution in place of the distributed switch to extend vCenter Server to manage Cisco Nexus and Cisco Catalyst switches.

This concludes the VMware vSphere Network: An Overview module. In summary:

- A typical vSphere datacenter comprises basic physical building blocks such as x86 computing servers, storage networks and arrays, IP networks, a management server, and desktop clients.
- The components of the standard virtual switch architecture are configured at the host level. The standard virtual switch provides networking capabilities similar to its physical counterparts in the real world.
- vCenter Server owns and stores the state of distributed ports in its database, so networking statistics and policies migrate with virtual machines when moved from host to host. This capability is important to supporting state-dependent features such as inline intrusion detection systems, firewalls, and third-party virtual switches.
- vNetwork Appliance APIs allow third-party developers to create distributed switch solutions for use in a vSphere datacenter. Third-party solutions allow network administrators to extend existing network operations and management into the vSphere datacenter.

Now that you have completed this module, feel free to review it until you are ready to start the next module. When you are ready to proceed, close this browser window to return to the course contents page.

A vNetwork provides different services to the host and virtual machines.

You can enable two types of network services in ESXi, VMkernel and virtual machines. The first type connects VMkernel services, such as NFS, iSCSI, or VMware vSphere® vMotion® to the physical network. The second type connects virtual machines to the physical network.

vNetwork comprises two types of networking architecture, the standard virtual switch and the distributed virtual switch. The next few screens explain how to create and configure each type.

Setting up a networking environment using standard virtual switches can be done using the Configuration tab in the Hosts and Clusters view in vSphere Client.

Port groups for virtual machine networking or for networking services, such as vMotion and iSCSI networking, are configured using the Add Network Wizard.

New standard virtual switches can be created during the port group creation process, or you can connect your new port group to an already existing standard virtual switch.

A virtual machine is connected to a virtual network by assigning the virtual machine’s NIC to that network’s port group.
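For administrators who prefer scripting over the vSphere Client wizard, the same setup can be automated. The sketch below uses the open-source pyVmomi Python bindings; the vCenter address, credentials, host name, switch name, uplink NIC, and port group name are placeholders, and it is a minimal illustration rather than a production script.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter Server (placeholder address and credentials).
si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the ESXi host by name (assumed to exist in the inventory).
host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.com", vmSearch=False)
net_sys = host.configManager.networkSystem

# Create a standard virtual switch with 128 ports and one uplink (vmnic1 assumed unused).
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"]))
net_sys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# Add a virtual machine port group ("Production", untagged) to the new vSwitch.
pg_spec = vim.host.PortGroup.Specification(
    name="Production", vlanId=0, vswitchName="vSwitch1",
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)

Disconnect(si)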


This demonstration will show you the steps for creating a standard vSwitch. It will also show you the steps for creating port groups. When you are ready to begin the demonstration, click the Play button.

In this demonstration, we will see how to create a standard virtual switch and port groups.


The virtual machine network connections on a standard virtual switch allow virtual machines to access the network. The Add Network Wizard guides you through the tasks required to create a virtual network to which virtual machines can connect. These tasks include choosing the connection type, adding the virtual network to a new or an existing standard virtual switch, and configuring the connection settings for the network label and the VLAN ID.

When a virtual machine is created, you can select the networks that you want to connect to and their options. You can select the number of NICs you want to connect to, the names of the networks, and whether you want to connect to them at power on.

The Network drop-down menu lists the port groups that are configured for virtual machine use on the host. If no virtual machine port groups are configured, a warning dialog box appears, and you are not allowed to configure any virtual network cards.

Notice the Adapter field on the image. Your selection here, and whether you install VMware Tools, determines the virtual adapter for the virtual machine.

After the virtual machine is created, you configure the IP setting in the guest operating systems for each virtual NIC configured for the virtual machine.

In the example displayed on the screen, the Microsoft Windows Virtual Machines’ TCP/IP properties are configured in the Windows operating system.

Once the network adapter has been created for the virtual machine, you can edit the connections of the virtual machine from the Virtual Machine Properties dialog in vSphere Client.

VMkernel networking is used to provide network access to the host and to handle vMotion, IP Storage, and VMware Fault Tolerance.

One of the uses for a VMkernel port is vMotion. vMotion uses VMkernel ports to migrate powered on virtual machines from one ESXi host to another. Dedicated Gigabit-switched paths are configured between ESXi hosts and this is considered a vMotion Network.

The virtual machine can then be migrated from one host to another with no downtime. vMotion requires a Gigabit Ethernet or GigE network between all vMotion-enabled hosts.

In addition to providing support for vMotion, the VMkernel TCP/IP stack provides networking support for software iSCSI and for access to the NFS datastores.

If you have two or more NICs available for iSCSI, you can configure iSCSI multipathing. ESXi supports only NFS version 3 over TCP/IP.
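As a hedged scripting example of the VMkernel networking just described, the pyVmomi sketch below adds a VMkernel adapter to an existing standard-switch port group and marks it for vMotion. The port group name and IP address are placeholders, the host object is assumed to have been retrieved as in the earlier sketch, and error handling is omitted for brevity.

from pyVmomi import vim

# Assumes "host" is a vim.HostSystem already retrieved from the inventory,
# and that a port group named "VMkernel-vMotion" already exists on the host.
net_sys = host.configManager.networkSystem

vnic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress="10.10.10.21",        # placeholder address
                         subnetMask="255.255.255.0"))

# Create the VMkernel adapter; the call returns the device name (for example, "vmk1").
vmk_device = net_sys.AddVirtualNic(portgroup="VMkernel-vMotion", nic=vnic_spec)

# Enable vMotion on the new adapter.
host.configManager.vmotionSystem.SelectVnic(vmk_device)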

Distributed virtual switches ease the management burden of every host virtual switch configuration by treating the network as an aggregate resource. In this configuration, individual host-level virtual switches are abstracted into a single large vNetwork distributed virtual switch that spans multiple hosts at the datacenter level.

Although VMware supports standard virtual switches, it is a best practice and recommended that the distributed virtual switch architecture be used for all virtual networking purposes, including the virtual machine connections and the VMkernel connections to the physical network for VMkernel services such as NFS, iSCSI, or vMotion. vSphere Distributed Switch version 5.0 supports only ESXi 5.0 and later.

With a standard virtual switch, a separate configuration in a separate management panel is required to maintain each ESXi host’s network configuration. So in the example on the screen, in order for an administrator to view the network configuration of the datacenter, the administrator would have to view the network configuration tab of each separate ESXi host.

With a distributed virtual switch, the administrator only has to view one management panel to view the network configuration for the entire datacenter.

There are different steps involved in creating a distributed switch.

To begin, create a distributed virtual switch in vSphere Client using the Create vNetwork Distributed Switch wizard. Then use the same wizard to add hosts or create a distributed port group.

Next, configure virtual network adapters to handle network services over the distributed switch. You can configure VMkernel network adapters for a host through an associated distributed virtual switch, either by creating new virtual adapters or by migrating existing virtual adapters on the host to the vNetwork distributed virtual switch.

Finally, connect virtual machine NICs to distributed virtual port groups. You can connect virtual machine NICs through individual machine NIC configuration or by migrating virtual machine networking from the distributed virtual switch itself.
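As an illustration of that last step, the following pyVmomi sketch reassigns the first network adapter of a virtual machine to a distributed port group. It assumes that vm (a vim.VirtualMachine) and dvs_pg (a vim.dvs.DistributedVirtualPortgroup) have already been looked up in the inventory; treat it as a sketch under those assumptions, not as the only supported method.

from pyVmomi import vim

# Assumes vm and dvs_pg were already retrieved, for example with a container view.
# Find the VM's first virtual Ethernet card.
nic = next(dev for dev in vm.config.hardware.device
           if isinstance(dev, vim.vm.device.VirtualEthernetCard))

# Point the NIC at the distributed port group by switch UUID and port group key.
nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
    port=vim.dvs.PortConnection(
        portgroupKey=dvs_pg.key,
        switchUuid=dvs_pg.config.distributedVirtualSwitch.uuid))

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=nic)])
task = vm.ReconfigVM_Task(spec=spec)   # apply the change; monitor the task as needed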

This demonstration will show you the steps for creating a Distributed Virtual Switch and port groups. When you are ready to begin the demonstration, click the Play button.

In this demonstration, we will see how to create a distributed virtual switch and port groups.

- After logging in to VMware vSphere Client, we will navigate to the Networking view from the vCenter Server inventory.
- Now, we will select a datacenter. For the purpose of this demo, we have used the datacenter named “My vPod”.
- Next, we will click the Networks tab.
- We will right-click the datacenter and select New vSphere Distributed Switch. Alternately, we can click the New vNetwork Distributed Switch icon in the toolbar.
- The Create vSphere Distributed Switch dialog box is displayed. Now, we will select the appropriate vSphere Distributed Switch Version. For the purpose of this demo, we will use vSphere Distributed Switch Version: 5.0.0.
- We will click Next.
- Then, we need to enter a display name for the switch and specify the maximum number of physical adapters per host. For the demo purpose, we will keep the default values.
- Then, we will click Next.
- We can either add hosts and their physical network adapters to the new vSphere distributed switch now or later. Note: We do not add hosts here. We will select Add later.
- Then, we will click Next.
- We can now choose to Automatically create a default port group. For the purpose of this demo, we will uncheck this option to add the port group later.
- We will click Finish.
- A new distributed switch appears under the datacenter with the default name dvSwitch.
- To add a port group, we will select the dvSwitch.
- We will then right-click the dvSwitch and click New Port Group.
- The Create Distributed Port Group wizard appears.
- We will provide a name for the port group and provide the number of ports. For the demo purpose, we will use the default name, dvPortGroup. Note that if you are using VLAN, you can provide the VLAN ID here.
- We will click Next.
- We will now verify the settings for the new port group.
- Finally, we will click Finish.

In this demonstration, we saw how to create a distributed virtual switch and port groups.

It is possible to migrate a group of virtual machines from standard virtual switches on the host to a distributed virtual switch. In this example, the virtual machines on Host2 are all connected to the production port group. On the distributed virtual switch, a port group that virtual machines will migrate to is created. Then the Migrate Distributed Virtual Machine Networking wizard is used to move the virtual machines from the standard to the distributed switch.

This demonstration will show you the steps to migrate virtual machines from a Standard vSwitch to a Distributed vSwitch.

When you are ready to begin the demonstration, click the Play button.

In this demonstration, we will see how to migrate virtual machines from a standard virtual switch to a distributed virtual switch.

- After logging in to VMware vSphere Client, we will navigate to the Networking view from the vCenter Server inventory.
- Then, we will select the distributed switch where we want our virtual machines to migrate to.
- We will right-click the distributed switch and select Migrate Virtual Machine Networking.
- The Migrate Virtual Machine Networking wizard appears.
- Now, we need to select the source and destination network for the virtual machine network adapters to migrate. For the purpose of this demo, we will migrate all virtual machines from VM Network to dvPortGroup on dvSwitch.
- First, we will select VM Network from the drop-down menu under Source Network.
- The dvPortGroup is selected by default in the Destination Network. We will keep the default selection.
- Remember to make your selection based on your setup.
- We will click Next.
- Next, we will select All Virtual Machines.
- We will click Next.
- We will review the summary and click Finish to start migrating the selected virtual machine network adapters.
- Now, we will click the Virtual Machines tab.
- We can now see the migrated virtual machines.

In this demonstration, we saw how to migrate virtual machines from a standard virtual switch to a distributed virtual switch.

A VMkernel port on a distributed virtual switch works similarly to one on a standard virtual switch. The VMkernel port is used by the VMkernel to provide services such as vMotion or access to storage such as NFS or software iSCSI.

This demonstration will show you the steps to create a VMkernel network adapter on a vSphere distributed switch. When you are ready to begin the demonstration, click the Play button.

In this demonstration, we will see how to create a VMkernel network adapter on a vSphere distributed virtual switch.

- After logging in to VMware vSphere Client, we will navigate to the Hosts and Clusters view from the vCenter Server inventory.
- We will select one of the hosts in the Inventory pane.
- Then, we will click the Configuration tab.
- Next, we will click Networking under the Hardware panel.
- Now, we will click the vSphere Distributed Switch view.
- Next, we will click the Manage Virtual Adapters link.
- We will click Add.
- Then, we will keep the default selection of New virtual adapter on the Creation Type screen.
- We will click Next.
- On the Virtual Adapter Type screen, VMkernel is selected. Click Next.
- We will choose a distributed port or distributed port group connection for the virtual adapter. For the purpose of the demo, we have selected dvPortGroup.
- In this demo, we are creating the virtual adapter for vMotion, so we will select the Use this virtual adapter for vMotion checkbox so that this virtual adapter can advertise itself to other ESXi hosts as the network connection where vMotion traffic is sent. Note that you can enable this property for only one vMotion and IP storage port group for each host. If this property is not enabled for any port group, migration with vMotion to this host is not possible.
- We will click Next.
- Now, we will specify the VMkernel IP Settings.
- We will click Edit to set the VMkernel default gateway for the VMkernel services, such as vMotion, NAS, and iSCSI. Note that we will keep the default gateway.
- On the Routing tab, we will use the existing default gateway information for VMkernel.
- On the DNS Configuration tab, the name of the host is entered by default. The DNS server addresses and domain that were specified during installation are also preselected.
- The Static IP settings option is the default. Do not use routing with software iSCSI multipathing configurations or dependent hardware iSCSI adapters. We will click OK.
- Here, we will enter the static IP address and its Subnet Mask.
- We will click Next.
- Further, we will click Finish to confirm the new virtual adapter configuration.
- Finally, we will click Close.
- Under dvSwitch in the Networking section, we can see the newly created adapter under VMkernel Ports.

In this demonstration, we saw how to create a VMkernel network adapter on a vSphere distributed virtual switch.


vSphere 5.0 supports both Internet Protocol version 4 or IPv4 and Internet Protocol version 6 or IPv6 environments. IPv6 has been designated by the Internet Engineering Task Force as the successor to IPv4. The adoption of IPv6, both as a standalone protocol and in a mixed environment with IPv4, is increasing. Most notably, the United States Federal Government requires that all new purchases include IPv6 support.

The most obvious difference between IPv4 and IPv6 is the address length. IPv6 uses 128-bit addresses rather than the 32-bit addresses used by IPv4. This combats the problem of address exhaustion that is present with IPv4 and eliminates the need for network address translation.


Other notable differences include link-local addresses that appear as the interface is initialized, addresses that are set by router advertisements, and the ability to have multiple IPv6 addresses on an interface. IPv6 is disabled by default.

An IPv6-specific configuration in vSphere involves providing IPv6 addresses, either by entering static addresses or by using DHCP for all relevant vSphere networking interfaces. IPv6 addresses can also be configured using stateless autoconfiguration sent by router advertisement.

IPv6 support is configured at the host level, and it is disabled by default. To enable or disable IPv6 support through the vSphere client, you must adhere to certain steps.

To begin, select the Hosts and Clusters view from the vSphere Client Home page. Next, select the host and click the Configuration tab.

Then, click the Networking link under Hardware.

In the vSphere Standard Switch view, click the Properties link.

Next, select Enable IPv6 support on the host system, and click OK.

Finally, reboot the host.

You can also configure IPv6 support through the command line. In either case, you must reboot the host for the change to take effect. Enabling IPv6 on the host does not disable IPv4. IPv4 and IPv6 can co-exist without any problems.

You can enable IPv6 support when you create or edit a virtual adapter to connect to either a standard or distributed virtual switch. After IPv6 is enabled you have the option to specify IPv4 or IPv6 addresses.

There are three ways to assign an IPv6 address to an adapter. The first way is automatically, using DHCP. The second way is also automatically, but using the IPv6 stateless auto-configuration. This option automatically generates a Link-Local IP address assigned to communicate with potential routers in the same link, for example through advertisement. The third way is by entering static IPv6 addresses.

You also have the option to set a unique default gateway.

This concludes the vNetwork: Configuring Virtual Switches module. In summary:

- Setting up a networking environment using standard virtual switches can be done on the Configuration tab of the host view in vSphere Client.
- Distributed virtual switches ease the management burden of every host virtual switch configuration by treating the network as an aggregate resource. In this configuration, individual host-level virtual switches are abstracted into a single large vNetwork distributed virtual switch that spans multiple hosts at the datacenter level.
- In a standard virtual switch configuration, a separate configuration in a separate management panel is required to maintain each ESXi host’s network configuration. In a distributed virtual switch configuration, the administrator would only have to view one management panel to view the network configuration for the entire datacenter.

Now that you have completed this module, feel free to review it until you are ready to start the next module. When you are ready to proceed, close this browser window to return to the course contents page.


Private VLANs support broader compatibility with existing networking environments using the private VLAN technology. Private VLANs enable users to restrict communication between virtual machines on the same VLAN or network segment, significantly reducing the number of subnets required for certain network configurations.

The next few screens will explain how to set up and configure private VLANs.

Private VLANs or PVLANs allow you to isolate traffic between virtual machines in the same VLAN. This allows PVLANs to provide additional security between virtual machines on the same subnet without exhausting the VLAN number space. PVLANs are useful on a DMZ where the server needs to be available to external connections and possibly internal connections, but rarely needs to communicate with the other servers on the DMZ. A PVLAN can be configured in a way that allows the servers to communicate only with the default gateway on the DMZ, denying communication between the servers. If one of the servers is compromised by a hacker, or infected with a virus, the other servers on the DMZ are safe.

The basic concept behind PVLANs is to divide an existing VLAN, now referred to as the primary PVLAN, into one or more segments. These segments are called secondary PVLANs. A PVLAN is identified by its primary PVLAN ID. A primary PVLAN ID can have multiple secondary PVLAN IDs associated with it. Primary PVLANs are promiscuous, so virtual machines on a promiscuous PVLAN are reachable by and can reach any node in the same promiscuous PVLAN, as well as any node in the primary PVLAN. Ports on secondary PVLANs can be configured as either isolated or community. Virtual machines on isolated ports communicate only with virtual machines on promiscuous ports, whereas virtual machines on community ports communicate with both promiscuous ports and other ports on the same secondary PVLAN.

Virtual machines in a promiscuous private VLAN are reachable by and can reach any node in the same promiscuous private VLAN, as well as any node in the primary PVLAN.

In the example depicted on the screen, virtual machines E and F are in the promiscuous private VLAN 5, so all virtual machines communicate with each other as well as with any nodes in the primary private VLAN 5.

Virtual machines in an isolated private VLAN cannot communicate with other virtual machines except those in the promiscuous private VLAN. In this example, virtual machines C and D are in isolated private VLAN 155, so they cannot communicate with each other. However, virtual machines C and D can communicate with virtual machines E and F.

Virtual machines in a community private VLAN can communicate with each other and with the virtual machines in the promiscuous private VLAN, but not with any other virtual machine. In this example, virtual machines A and B can communicate with each other and with E and F because they are in the promiscuous private VLAN. However, they cannot communicate with C or D because they are not in the community private VLAN.
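A small Python sketch may help make these reachability rules concrete. It is a conceptual model only, not how ESXi enforces them: all machines are assumed to be in the same primary private VLAN, the VM names and IDs 5 and 155 mirror the on-screen example, and the community ID 17 is an assumed placeholder.

# Conceptual model of the private VLAN rules described above; not ESXi code.
PROMISCUOUS, ISOLATED, COMMUNITY = "promiscuous", "isolated", "community"

# VM -> (secondary PVLAN ID, port type)
vms = {
    "E": (5, PROMISCUOUS), "F": (5, PROMISCUOUS),
    "C": (155, ISOLATED),  "D": (155, ISOLATED),
    "A": (17, COMMUNITY),  "B": (17, COMMUNITY),
}

def can_communicate(x, y):
    vlan_x, type_x = vms[x]
    vlan_y, type_y = vms[y]
    if PROMISCUOUS in (type_x, type_y):
        return True                                    # promiscuous reaches everyone
    if type_x == type_y == COMMUNITY and vlan_x == vlan_y:
        return True                                    # same community talks internally
    return False                                       # isolated, or different pools: blocked

print(can_communicate("C", "D"))   # False - both isolated
print(can_communicate("A", "B"))   # True  - same community
print(can_communicate("C", "E"))   # True  - promiscuous is always reachable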

Network packets originating from a community are tagged with the secondary PVLAN ID as they traverse the network.

There are a couple of things to note about how vNetwork implements private VLANs. First, vNetwork does not encapsulate traffic inside private VLANs. In other words, there is no secondary private VLAN encapsulated inside a primary private VLAN packet. Also, traffic between virtual machines on the same private VLAN, but on different ESXi hosts, moves through the physical switch. Therefore, the physical switch must be private VLAN-aware and configured appropriately so that traffic in the secondary private VLAN can reach its destination.

The Creating a Private VLAN demonstration will show you the steps to create a private VLAN. When you are ready to begin the demonstration, click the Play button.

In this demonstration, we will see how to create a private VLAN.

- After logging in to VMware vSphere Client, we will navigate to the Networking view from the vCenter Server Inventory.
- We will right-click the distributed switch in the inventory pane. For the purpose of this demo, we will select Production dvSwitch.
- Now, we will select Edit Settings.
- Then, we will select the Private VLAN tab.
- Here, we can see two sections, Primary private VLAN ID and Secondary private VLAN ID.
- Primary VLANs are Promiscuous, so that ports on a private VLAN can communicate with ports configured as the primary VLAN.
- Ports on a secondary VLAN can be either Isolated or Community. Isolated ports communicate only with Promiscuous ports, while Community ports communicate with both Promiscuous ports and other ports on the same secondary VLAN.
- To create a private VLAN, we will click [Enter a Private VLAN ID here] under Primary private VLAN ID.
- We will enter the number of the primary private VLAN as 100. Note that you can choose any number between 1 and 4094.
- Now, we will select the primary private VLAN that we just added.
- The Primary private VLAN appears under Secondary private VLAN ID.
- Now, under Secondary private VLAN ID, we will add VLAN IDs corresponding to the primary VLAN.
- For each VLAN, we will create two secondary VLANs. In the demo, for the VLAN with VLAN ID 100, we have created two secondary VLANs with VLAN IDs 101 and 102.
- After entering the VLAN IDs, we will select whether they should be Isolated or Community. We need to click the drop-down next to the VLAN ID and select Isolated for 101 and Community for 102. For the purpose of this demo, we have selected the port types as Isolated and Community respectively.

- Finally, we will click OK to close the wizard.

In this demonstration, we saw how to create a private VLAN.


You will now learn about the load balancing and failover policies and how they are configured.

Load balancing and failover policies allow you to determine how network traffic is distributed between adapters and how to re-route traffic in the event of a failure. You can edit your load balancing and failover policies by configuring the load balancing policy, failover detection, and network adapter order.

Load balancing and failover policies can be controlled at either the standard virtual switch level or at the port group level on a distributed virtual switch and can be set in the vSphere Client.

Although teaming can be configured on a standard virtual switch, load-based teaming is only available with distributed virtual switches. Initially, ports are assigned the way they are assigned in source port-based load balancing. The algorithm in load-based teaming regularly checks the load of all teaming NICs. If one NIC gets overloaded while another has bandwidth available, the distributed virtual switch reassigns the port-NIC mapping to reach a balanced status. Until the next check is performed, the mapping maintains a stable state.

The settings for load balancing enable you to specify how a physical uplink should be selected by the VMkernel.

The next few screens will take you through different load balancing policies. These include route based on the originating port ID and route based on IP or MAC hash.

Routing traffic based on the originating port ID balances the load based on the virtual port where the traffic enters the vSwitch or dvSwitch. Port ID-based assignments use fixed assignments.

In some cases, multiple heavily loaded virtual machines are connected to the same pNIC and the load across the pNICs is not balanced.

The image on the screen shows that pNIC1 is connected to two virtual machines with heavier load and is overloaded, whereas pNIC0 has only one virtual machine with a low load.

Route based on IP hash chooses an uplink based on a hash of the source and destination IP addresses of each packet. Evenness of traffic distribution depends on the number of TCP/IP sessions to unique destinations.


When using the route based on source MAC hash option on the drop-down menu in the vSphere Client, an uplink is selected based on a hash of the source Ethernet adapter’s MAC address.

When you use this setting, traffic from a given virtual Ethernet adapter is consistently sent to the same physical adapter unless there is a failover to another adapter in the NIC team. The replies are received on the same physical adapter as the physical switch learns the port association. This setting provides an even distribution of traffic if the number of virtual Ethernet adapters is greater than the number of physical adapters. A given virtual machine cannot use more than one physical Ethernet adapter at any given time unless it uses multiple source MAC addresses for traffic it sends.
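The three policies can be pictured with a small Python sketch. It is a conceptual illustration of how an uplink might be chosen, with placeholder uplink names and an arbitrary hash function, not the actual ESXi hashing algorithm.

import zlib

uplinks = ["vmnic0", "vmnic1"]          # placeholder physical adapters in the team

def by_originating_port_id(port_id):
    # Route based on the originating virtual port ID: a fixed, per-port assignment.
    return uplinks[port_id % len(uplinks)]

def by_source_mac_hash(src_mac):
    # Route based on source MAC hash: one uplink per virtual Ethernet adapter.
    return uplinks[zlib.crc32(src_mac.encode()) % len(uplinks)]

def by_ip_hash(src_ip, dst_ip):
    # Route based on IP hash: the uplink depends on each source/destination pair,
    # so one VM can spread sessions to different destinations across uplinks.
    return uplinks[zlib.crc32((src_ip + "-" + dst_ip).encode()) % len(uplinks)]

print(by_ip_hash("10.0.0.5", "192.168.1.20"))
print(by_ip_hash("10.0.0.5", "192.168.1.21"))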


Failover policies determine the method to be used for failover detection and how traffic is re-routed in the event of a physical adapter failure on the host. The failover policies that can be set are network failure detection, notify switches, failback, and failover order.

The next few screens will explain each policy in detail.

Network failover detection specifies the method to use for failover detection. The policy can be set to either the Link Status only option or the Beacon Probing option within the vSphere Client.

When the policy is set to Link Status only, failover detection will rely solely on the link status that the network adapter provides. This option detects failures, such as cable pulls and physical switch power failures. However, it does not detect configuration errors, such as a physical switch port being blocked by spanning tree protocol or misconfigured to the wrong VLAN or cable pulls on the other side of a physical switch.

The Beacon Probing option sends out and listens for beacon probes on all NICs in the team and uses this information, along with link status, to determine link failure. This option detects many failures that are not detected by link status alone.


When you use the notify switches policy, you must specify how the VMkernel communicates with the physical switches in the event of a failover. The notify switches can be set to either Yes or No. If you select Yes, whenever a virtual Ethernet adapter is connected to the vSwitch or dvSwitch or whenever that virtual Ethernet adapter’s traffic is routed over a different physical Ethernet adapter in the team due to a failover event, a notification is sent out over the network to update the lookup tables on physical switches. In almost all cases, this is desirable for the lowest latency when a failover occurs.

By default, NIC teaming applies a failback policy. This means that if a physical Ethernet adapter that had failed comes back online, the adapter is returned to active duty immediately, displacing the standby adapter that took over its slot. This policy is in effect when the Rolling Failover setting is set to No. If the primary physical adapter experiences intermittent failures, this setting can lead to frequent changes in the adapter in use.

Another approach is to set Rolling Failover to Yes. With this setting, a failed adapter is left inactive even after recovery until another currently active adapter fails and requires replacement.

You can use the Failover Order policy setting to specify how to distribute the work load for the physical Ethernet adapters on the host. You can place some adapters in active use, designate a second group as standby adapters for use in failover situations, and designate other adapters as unused, excluding them from NIC Teaming.

Please note that the Failover Order policy can be set in the vSphere Client.
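
These teaming and failover settings can also be applied programmatically. The following hedged sketch assumes the pyVmomi SDK and a vim.HostSystem object named host that was retrieved elsewhere; the switch and adapter names are examples only, and the property names should be verified against your SDK version.

    # Sketch (assumes pyVmomi and a vim.HostSystem object named "host" obtained
    # elsewhere; switch and adapter names below are examples only).
    from pyVmomi import vim

    net_sys = host.configManager.networkSystem

    teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    teaming.policy = "loadbalance_srcmac"   # route based on source MAC hash
    teaming.notifySwitches = True           # Notify Switches = Yes
    teaming.rollingOrder = False            # Rolling Failover = No, so failback applies
    teaming.failureCriteria = vim.host.NetworkPolicy.NicFailureCriteria(checkBeacon=True)
    teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic0"],               # adapters in active use
        standbyNic=["vmnic1"],              # adapters held for failover
    )

    # Reuse the existing vSwitch specification and change only the teaming policy.
    vswitch = next(s for s in net_sys.networkInfo.vswitch if s.name == "vSwitch0")
    spec = vswitch.spec
    spec.policy.nicTeaming = teaming
    net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)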

You will now learn about network I/O control and its architecture.

In environments that use 1 Gigabit Ethernet or GigE physical uplinks, it is not uncommon to see multiple physical adapters dedicated to certain traffic types.

1 GigE is rapidly being replaced by 10 GigE networks. While 10 GigE provides ample bandwidth for all traffic, it also creates a new challenge: traffic that was previously limited to the bandwidth of a single 1 GigE link can now consume up to 10 GigE. So, for optimum utilization of a 10 GigE link, there has to be a way to prioritize the network traffic by traffic flows. Prioritizing traffic ensures that latency-sensitive and critical traffic flows can access the bandwidth they require.

Network I/O control provides control to converge different kinds of traffic flows on a single pipe. It provides control to the administrator to ensure predictable network performance when multiple traffic types are flowing in the same pipe.

Page 33: Module 1 Vmware vSphere Network an Overview

Network I/O control enables the convergence of diverse workloads on a single networking pipe. It provides sufficient controls to the vSphere administrator in the form of limits and shares parameters to enable and ensure predictable network performance when multiple traffic types contend for the same physical network resources.

Network I/O control provides its users with different features. These include isolation, shares, and limits.

The next few screens will explain each of the features in detail.

Isolation ensures traffic isolation so that a given flow will never be allowed to dominate over others, thus preventing drops and undesired jitter.

When network I/O control is enabled, distributed switch traffic is divided into the following predefined network resource pools: VMware Fault Tolerance traffic, iSCSI traffic, management traffic, NFS traffic, virtual machine traffic, vMotion traffic, and vSphere Replication or VR traffic.

Shares allow flexible partitioning of networking capacity to help users deal with overcommitment when flows compete aggressively for the same resources. Network I/O control uses shares to specify the relative importance of traffic flows.

The system administrator can specify the relative importance of a given resource-pool flow using shares that are enforced at the dvUplink level. The underlying dvUplink bandwidth is then divided among resource-pool flows based on their relative shares in a work-conserving way. This means that unused capacity will be redistributed to other contending flows and won’t go to waste.

As shown in the image, the network flow scheduler is the entity responsible for enforcing shares and therefore is in charge of the overall arbitration under overcommitment. Each resource-pool flow has its own dedicated software queue inside the scheduler so that packets from a given resource pool are not dropped due to high utilization by other flows.

Limits specify an absolute bandwidth cap for a traffic flow. Traffic from a given flow is never allowed to exceed its limit. The limit is specified in megabits per second. Limits are useful when you do not want other traffic to be affected too much by events in a single flow.

The system administrator can specify an absolute shaping limit for a given resource-pool flow using a bandwidth capacity limiter. As opposed to shares that are enforced at the dvUplink level, limits are enforced on the overall vNetwork distributed switch or vDS set of dvUplinks, which means that a flow of a given resource pool will never exceed a given limit for a vDS on a given vSphere host.
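
To make the interaction of shares and limits concrete, here is a small conceptual Python sketch of the behavior just described (it is an illustration, not the actual network flow scheduler): shares divide the available link bandwidth proportionally among contending flows in a work-conserving way, while a flow never receives more than its demand or its limit.

    # Conceptual sketch of the described behavior (not the actual scheduler):
    # shares split the link proportionally, limits cap a flow absolutely, and
    # unused capacity is redistributed to the flows that still want bandwidth.
    def allocate(link_mbps, flows):
        """flows maps a name to {"shares": int, "limit": Mbps or None, "demand": Mbps}."""
        alloc = {name: 0.0 for name in flows}
        active = set(flows)
        remaining = float(link_mbps)
        while remaining > 0.001 and active:
            total_shares = sum(flows[n]["shares"] for n in active)
            still_wanting = set()
            for name in active:
                offer = remaining * flows[name]["shares"] / total_shares
                cap = flows[name]["demand"]
                if flows[name]["limit"] is not None:
                    cap = min(cap, flows[name]["limit"])
                alloc[name] += min(offer, cap - alloc[name])
                if alloc[name] < cap - 0.001:
                    still_wanting.add(name)     # gets a slice of any leftover capacity
            remaining = link_mbps - sum(alloc.values())
            active = still_wanting
        return alloc

    flows = {
        "vMotion":        {"shares": 50,  "limit": 3000, "demand": 8000},
        "virtualMachine": {"shares": 100, "limit": None, "demand": 8000},
        "iSCSI":          {"shares": 50,  "limit": None, "demand": 1000},
    }
    # On a 10 GigE uplink this yields roughly vMotion 3000, virtualMachine 6500, iSCSI 1000.
    print(allocate(10000, flows))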

Page 34: Module 1 Vmware vSphere Network an Overview

Consider an example where virtual machine and iSCSI traffic use nearly all the available bandwidth. vMotion starts and consumes a large percentage of the bandwidth. In this case, it might be a good idea to limit the bandwidth of vMotion.

In the Network Resource Pool Settings, you can configure a QoS priority tag for user-defined network resource pools. The QoS priority tag field allows you to select a priority code in the range of 1 to 7, where 1 is the lowest priority and 7 is the highest priority. When a physical switch is 802.1p enabled, it prioritizes received packets based on this tag.

Please note that priority tagging is available only with distributed switches.
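
As a hedged illustration, the sketch below shows how a resource pool's shares, limit, and QoS priority tag might be adjusted programmatically, assuming the pyVmomi SDK and an already retrieved distributed switch object named dvs; the pool key "vmotion" and all values are examples to verify against your environment and SDK version.

    # Sketch (assumes pyVmomi and an existing distributed switch object named
    # "dvs"; the pool key "vmotion" and all values are examples to verify).
    from pyVmomi import vim

    dvs.EnableNetworkResourceManagement(enable=True)   # turn on network I/O control

    alloc = vim.DVSNetworkResourcePoolAllocationInfo()
    alloc.limit = 3000                                  # absolute cap in Mbps
    alloc.shares = vim.SharesInfo(level="custom", shares=50)
    alloc.priorityTag = 5                               # 802.1p QoS priority tag (1-7)

    spec = vim.DVSNetworkResourcePoolConfigSpec()
    spec.key = "vmotion"                                # predefined vMotion resource pool
    spec.allocationInfo = alloc
    dvs.UpdateNetworkResourcePool(configSpec=[spec])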

vSphere 5.0 provides several improvements to the functionality of distributed switches. These include NetFlow, Cisco discovery protocol or CDP and link layer discovery protocol or LLDP, and port mirroring.

Please note that CDP is actually available in the earlier versions of vSphere. The next few screens will explain these enhancements in detail.

NetFlow is a common tool for analyzing network traffic. It is a specification for collecting types of network data for monitoring and reporting.

NetFlow has multiple uses, including network monitoring and profiling, billing, intrusion detection and prevention, networking forensics, and Sarbanes-Oxley compliance. It sends aggregated networking flow data to a third-party collector, such as an appliance or a server. The collector and analyzer reports on various information such as the current top flows consuming the maximum bandwidth in a particular distributed virtual switch, the IP addresses that are behaving irregularly, and the number of bytes a particular virtual machine has sent and received in the past 24 hours. NetFlow enables visibility into the virtual machine traffic for ESXi deployments.

vSphere 5.0 sends data in the format that adheres to the NetFlow specification. The collector and analyzer are not a part of vSphere and must come from a third party.
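
The following hedged sketch shows one way the NetFlow (IPFIX) settings of a distributed switch might be pointed at such a collector, assuming pyVmomi and an existing distributed switch object named dvs; the collector address, port, and timers are examples only.

    # Sketch (assumes pyVmomi and an existing distributed switch object named
    # "dvs"; the collector address, port, and timers are examples only).
    from pyVmomi import vim

    ipfix = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig()
    ipfix.collectorIpAddress = "192.0.2.10"   # third-party NetFlow collector
    ipfix.collectorPort = 2055
    ipfix.activeFlowTimeout = 60              # seconds before an active flow is exported
    ipfix.idleFlowTimeout = 15                # seconds before an idle flow is exported
    ipfix.samplingRate = 0                    # 0 = look at every packet (no sampling)
    ipfix.internalFlowsOnly = False           # also export flows that leave the host

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.ipfixConfig = ipfix
    dvs.ReconfigureDvs_Task(spec=spec)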

Port mirroring is a technology that duplicates the network packets of a switch port (the source) to another port (the mirrored port). The source’s network traffic can then be monitored at the mirrored port.

On physical switches, administrators are accustomed to being able to mirror the traffic to special ports in order to assist in troubleshooting. Mirrored traffic can also be used as an input for security and other network analysis appliances.

Page 35: Module 1 Vmware vSphere Network an Overview

Port mirroring overcomes the limitations of promiscuous mode. That is, if you enable promiscuous mode on a distributed port, this port will see all the network traffic going through the distributed switch. There is no way to select which traffic from a port or port group a particular promiscuous port is allowed to see; it can see all the traffic on the same broadcast domain. Port mirroring overcomes this limitation by allowing the administrator to control which traffic on the distributed virtual switch can be seen by the port enabled for promiscuous mode.
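
As an illustration, the sketch below outlines how a port mirroring (VSPAN) session might be created on a distributed switch, assuming pyVmomi, an existing distributed switch object named dvs, and example distributed port keys "100" and "101"; the exact object names should be verified against your SDK version.

    # Sketch (assumes pyVmomi, an existing distributed switch object named "dvs",
    # and example distributed port keys "100" and "101"; verify names in your SDK).
    from pyVmomi import vim

    VDS = vim.dvs.VmwareDistributedVirtualSwitch

    session = VDS.VspanSession()
    session.name = "troubleshooting-mirror"
    session.enabled = True
    session.sourcePortTransmitted = VDS.VspanPorts(portKey=["100"])   # mirror traffic sent by port 100
    session.sourcePortReceived = VDS.VspanPorts(portKey=["100"])      # mirror traffic received by port 100
    session.destinationPort = VDS.VspanPorts(portKey=["101"])         # analyzer attaches to port 101

    spec = VDS.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.vspanConfigSpec = [VDS.VspanConfigSpec(operation="add", vspanSession=session)]
    dvs.ReconfigureDvs_Task(spec=spec)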

vSphere 5.0 supports CDP and LLDP. CDP is available for both standard switches and distributed switches. LLDP is only available for distributed switches.

CDP is a Cisco protocol and is designed to interact with Cisco physical switches. LLDP is an industry standard protocol and is designed to interact with any physical switch that supports it.

CDP and LLDP are used by network devices to advertise their identity and capabilities on a network. When CDP or LLDP is enabled between peer virtual and physical switches, you can use the vSphere Client to view properties of the physical switch. These properties include such items as the device ID, the software version, the port number, and timeout values.

CDP was supported in vSphere 4.x; however, LLDP is available only in vSphere 5.0.
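
The following hedged sketch shows how the discovery protocol of a distributed switch might be switched to LLDP, assuming pyVmomi and an existing distributed switch object named dvs.

    # Sketch (assumes pyVmomi and an existing distributed switch object named "dvs").
    from pyVmomi import vim

    ldp = vim.host.LinkDiscoveryProtocolConfig()
    ldp.protocol = "lldp"     # "cdp" is also valid; LLDP requires a distributed switch
    ldp.operation = "both"    # advertise this switch's identity and listen for the peer's

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.linkDiscoveryProtocolConfig = ldp
    dvs.ReconfigureDvs_Task(spec=spec)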

Networking enhancements have been made in vSphere 5.0 that improve firewall management.

In classic ESX, the service console was protected with a firewall based on iptables to filter connections coming in and going out to the service console. ESXi does not have a service console, so it cannot take advantage of the capabilities of iptables.

The ESXi 5.0 management interface is protected by a new service-oriented and stateless firewall. It is enabled by default and at installation time, it is configured to block incoming/outgoing traffic, except for default services such as DNS Client, DHCP Client, and SNMP Server.

The vSphere Client interface for configuring the firewall on ESXi is similar to that used with the classic ESX firewall. This means that users who are familiar with the classic ESX firewall should not have any difficulty with the ESXi version.

Page 36: Module 1 Vmware vSphere Network an Overview

With this firewall, users have the ability to restrict access to specific services based on IP address and subnet mask.
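
As a hedged illustration, the sketch below shows how such a restriction might be applied programmatically, assuming pyVmomi, an already retrieved vim.HostSystem object named host, an example ruleset id of sshServer, and an example management subnet; verify the exact object names against your SDK version.

    # Sketch (assumes pyVmomi and a vim.HostSystem object named "host"; the
    # ruleset id "sshServer" and the subnet are examples only).
    from pyVmomi import vim

    fw = host.configManager.firewallSystem

    # Allow this service only from a management subnet instead of from any address.
    allowed = vim.host.Ruleset.IpList(
        allIp=False,
        ipNetwork=[vim.host.Ruleset.IpNetwork(network="10.0.0.0", prefixLength=24)],
    )
    fw.UpdateRuleset(id="sshServer", spec=vim.host.Ruleset.RulesetSpec(allowedHosts=allowed))

    # Entire services (rulesets) can also be switched on or off.
    fw.EnableRuleset(id="sshServer")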

This concludes the vNetwork: Advanced Topics module. In summary:

- Private VLANs support broader compatibility with existing networking environments using the private VLAN technology. Private VLANs enable users to restrict communication between virtual machines on the same VLAN or network segment, significantly reducing the number of subnets required for certain network configurations.

- Load balancing and failover policies allow you to determine how network traffic is distributed between adapters and how to re-route traffic in the event of a failure. You can edit your load balancing and failover policies by configuring the load balancing policy, failover detection, and network adapter order.

Page 37: Module 1 Vmware vSphere Network an Overview

- Network I/O control provides control to converge different traffic flows on a single pipe. It provides control to the administrator to ensure predictable network performance when multiple traffic types are flowing in the same pipe. In vSphere 5.0, various enhancements have been made to network I/O control. These include user-defined network resource pools, QoS tagging, and bandwidth control for vSphere replication. Network I/O control provides its users with different features. These include isolation, shares, limits, and load-based teaming.

- vSphere 5.0 provides several improvements to the functionality of distributed switches. These include NetFlow, LLDP, and port mirroring.

- NetFlow increases the visibility of traffic flowing between virtual machines, LLDP improves network monitoring, and port mirroring can assist in network troubleshooting and in many ways is similar to Cisco’s SPAN feature.

Page 38: Module 1 Vmware vSphere Network an Overview

- The ESXi 5.0 management interface is protected by a new service-oriented and stateless firewall. It is enabled by default and at installation time, it is configured to block incoming/outgoing traffic, except for default services such as DNS Client, DHCP Client, and SNMP Server.

Now that you have completed this course, feel free to review it. When you are ready to proceed, close this browser window to return to the myLearn Learning Management System.