Network Offload and Optimization Technologies in Windows Server 2016

There are many networking features in each release of Windows Server. Each of these features is intended to address the needs of customer workloads. In this document we’ll examine the different network offload and optimization features and discuss how they help make networking more efficient. We’ll wrap up by looking at the different networking stacks and feature sets for different applications of Windows Server 2016.

This is a living document; it was last updated on June 14, 2017.

Contents

1 Networking features
1.1 Types of networking features
1.2 Networking feature management
1.2.1 NIC Advanced Properties
2 The Features
2.1 Software Only (SO) Features
2.1.1 Access Control Lists (ACLs)
2.1.2 Extended ACLs
2.1.3 NIC Teaming
2.1.4 SDN ACLs
2.1.5 SDN QoS
2.1.6 SET
2.1.7 Software vRSS
2.1.8 vmQoS (Hyper-V QoS)
2.2 Software/Hardware Integrated Features
2.2.1 Converged NIC
2.2.2 Data Center Bridging (DCB)
2.2.3 Hyper-V Network Virtualization – v1 (HNVv1)
2.2.4 Hyper-V Network Virtualization – v2 NVGRE (HNVv2 NVGRE)
2.2.5 Hyper-V Network Virtualization – v2 VxLAN (HNVv2 VxLAN)
2.2.6 IPsec Task Offload (IPsecTO)
2.2.7 PVLAN
2.2.8 Remote Direct Memory Access (RDMA)
2.2.9 RSS
2.2.10 SR-IOV
2.2.11 TCP Chimney Offload
2.2.12 VLAN
2.2.13 VMQ
2.2.14 VMMQ
2.2.15 Progression of queuing in Windows Server: RSS to VMMQ
2.3 Hardware Only (HO) Features
2.3.1 Address Checksum Offload
2.3.2 Tips on using Address Checksum Offloads
2.3.3 Interrupt Moderation (IM)
2.3.4 Jumbo Frames
2.3.5 Large Send Offload (LSO)
2.3.6 RSC
3 Feature mutual compatibility
4 Windows Server 2016 Networking Stacks
5 References
5.1 BLOGS
5.2 VIDEOS
5.3 DOCUMENTATION AROUND SDN
5.4 GITHUB
5.5 DOCUMENTATION AROUND Windows Server 2016 NETWORKING

1 Networking features

1.1 Types of networking features

There are three categories of networking features that users of Windows Server should be aware of:

1. Software only (SO) features: These features are implemented as part of the OS and are independent of the underlying NIC(s). Sometimes these features require some tuning of the NIC for optimal operation. Examples include Hyper-V features such as vmQoS and ACLs, and non-Hyper-V features like NIC Teaming.

2. Software-Hardware (SH) integrated features: These features have both software and hardware components. The software is intimately tied to hardware capabilities that are required for the feature to work. Examples of these include VMMQ, VMQ, Send-side IPv4 Checksum Offload, and RSS.

3. Hardware only (HO) features: These hardware accelerations improve networking performance in conjunction with the software but are not intimately part of any software feature. Examples of these include Interrupt Moderation, Flow Control, and Receive-side IPv4 Checksum Offload.

SO features are generally available on all hardware architectures, regardless of NIC speed or capabilities. If the feature exists in Windows, it exists no matter what NICs are installed.

SH and HO features are available only if the installed NIC supports them. The feature descriptions below cover how to tell whether your NIC supports each feature.

1.2 Networking feature management

In general there are two ways to manage NICs and their features. The first is PowerShell; this guide uses PowerShell examples. All the features can also be managed through the Network Control Panel (ncpa.cpl), so to help readers who prefer the GUI, we include an example of how to access the Network Control Panel.

Documentation on the PowerShell cmdlets used to manage Network Adapters can be found at https://technet.microsoft.com/en-us/library/jj134956(v=wps.630).aspx.

1.2.1 NIC Advanced Properties

To determine whether the installed NIC supports a feature, you may need to look at the Advanced Properties of the NIC. There are two ways to get to the Advanced Properties:

1. PowerShell (Get-NetAdapterAdvancedProperty). The output of the same cmdlet run against two NICs of different makes and models is shown below:
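For example, to list the advanced properties (the adapter names below are illustrative; substitute the names reported by Get-NetAdapter on your system):

# List every advanced property each adapter's driver exposes
Get-NetAdapterAdvancedProperty -Name "*"

# Compare a single standardized keyword across two specific adapters
Get-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -RegistryKeyword "*JumboPacket"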

Figure 1 - Examples of Get-NetAdapterAdvancedProperty

Note the similarities and differences between these two NICs’ Advanced Properties lists.

2. Alternatively, right-click the NIC in the Network Control Panel.
a. To bring up the Network Control Panel, go to the Start menu, a Cmd window, or a PowerShell window and type ncpa.cpl. The Network Control Panel looks like this:

Figure 2 - Network Control Panel

Right-clicking on C1 presents this window:

Figure 3 - Network Control Panel, NIC Properties

Selecting “Configure” (highlighted above) presents this window:

Figure 4 - Network Control Panel, NIC Properties, General

Selecting the Advanced tab (highlighted in the figure above) brings up the window showing the advanced properties, as shown below. The items in this list correlate to the items in the Get-NetAdapterAdvancedProperty output.

2 The Features

2.1 Software Only (SO) Features

2.1.1 Access Control Lists (ACLs)

Access Control Lists: a Hyper-V and SDNv1 feature for managing security for a VM. See also http://www.aidanfinn.com/?p=12634.

This feature applies to the non-virtualized Hyper-V stack and the HNVv1 stack. For ACLs in the SDN stack see section 2.1.4.

Hyper-V switch ACLs are managed through the Add-VMNetworkAdapterAcl and Remove-VMNetworkAdapterAcl PowerShell cmdlets. Details and examples for the Add-VMNetworkAdapterAcl cmdlet can be found at https://technet.microsoft.com/en-us/library/hh848505.aspx.
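A minimal sketch of adding, listing, and removing a port ACL on a VM (the VM name and subnet are illustrative):

# Block all traffic to and from 10.0.0.0/8 on this VM's network adapter
Add-VMNetworkAdapterAcl -VMName "Tenant01" -RemoteIPAddress 10.0.0.0/8 -Direction Both -Action Deny

# Review the ACLs currently applied to the VM
Get-VMNetworkAdapterAcl -VMName "Tenant01"

# Remove the rule again
Remove-VMNetworkAdapterAcl -VMName "Tenant01" -RemoteIPAddress 10.0.0.0/8 -Direction Both -Action Deny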

2.1.2 Extended ACLs

Hyper-V switch extended ACLs enable the system administrator to configure the Hyper-V Virtual Switch extended port Access Control Lists (ACLs) to provide firewall protection and enforce security policies for the tenant VMs in their datacenters. Because the port ACLs are configured on the Hyper-V Virtual Switch rather than within the VMs, the administrator can manage security policies for all tenants in a multitenant environment.

Draft 0.98 Microsoft Confidential

Page 7: Networking features · Web viewFor ACLs in the SDN stack see section 2.1.4. Hyper-V switch ACLs are managed through the Add-VMNetworkAdapterAcl and RemoveVMNetworkAdapterAcl PowerShell

7

This feature applies to the HNVv1 stack. For ACLs in the SDN stack see section 2.1.4.

Hyper-V switch extended ACLs are managed through the Add-VMNetworkAdapterExtendedAcl and Remove-VMNetworkAdapterExtendedAcl PowerShell cmdlets. More details and examples can be found at https://technet.microsoft.com/en-us/library/dn375962.aspx.
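A minimal sketch of a firewall-style pair of extended ACL rules (the VM name, port, and weights are illustrative; higher-weight rules are intended to take precedence over lower-weight ones):

# Allow inbound TCP port 80 to the VM
Add-VMNetworkAdapterExtendedAcl -VMName "Tenant01" -Action Allow -Direction Inbound -LocalPort 80 -Protocol TCP -Weight 100

# Deny all other inbound traffic with a lower-priority catch-all rule
Add-VMNetworkAdapterExtendedAcl -VMName "Tenant01" -Action Deny -Direction Inbound -Weight 1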

2.1.3 NIC Teaming

NIC Teaming, sometimes called NIC bonding, is the aggregation of multiple NIC ports into an entity the host perceives as a single NIC port. This provides protection against the failure of a single NIC port (or the cable connected to it). It also aggregates traffic for better throughput. See https://www.microsoft.com/en-us/download/details.aspx?id=40319 for information on NIC Teaming in Windows Server 2012 R2.

With Windows Server 2016 there are two ways to do teaming. The teaming solution in Windows Server 2012 R2 carries forward into Windows Server 2016. Windows Server 2016 also has Switch Embedded Teaming (SET). See Section 2.1.6 for more information on SET.

The NIC Teaming deployment guide for Windows Server 2016 (“Deploying NIC Teaming and Switch Embedded Teaming”) can be downloaded from https://gallery.technet.microsoft.com/Windows-Server-2016-839cb607 (see Section 5.5).
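A minimal sketch of creating a switch-independent LBFO team from two physical adapters (the team and adapter names are illustrative):

# Create the team and verify it
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
Get-NetLbfoTeam -Name "Team1"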

2.1.4 SDN ACLs

The SDN extension in Windows Server 2016 has a new, improved way to support ACLs. In the SDNv2 stack this is used instead of ACLs and Extended ACLs. See the SDN documentation in Section 5.3 for details on how to set ACLs through the Network Controller.

2.1.5 SDN QoS

The SDN extension in Windows Server 2016 has a new, improved way to provide bandwidth control (egress reservations, egress limits, and ingress limits) on a 5-tuple basis. Typically these policies are applied at the vNIC or vmNIC level, but they can be made much more specific. In the SDNv2 stack this is used instead of vmQoS.

SDN QoS is managed through the Network Controller. See the SDN documentation in Section 5.3 for details on how to set QoS policies through the Network Controller.

2.1.6 SET

Switch Embedded Teaming (SET) is a new implementation of NIC Teaming in Windows Server 2016 that is integrated into the Hyper-V switch. Details on what SET can do and how to manage it can be found in the NIC Teaming Deployment Guide for Windows Server 2016.
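A minimal sketch of creating a SET team (the switch and adapter names are illustrative; supplying more than one adapter to New-VMSwitch creates the embedded team):

# Create a Hyper-V switch with an embedded team of two NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true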

2.1.7 Software vRSS

Software vRSS is used to spread incoming traffic destined for a VM across multiple logical processors (LPs) of the VM. This enables the VM to handle more networking traffic than a single LP could handle. See https://technet.microsoft.com/en-us/library/dn383582.aspx for more information.

2.1.8 vmQoS (Hyper-V QoS)

Virtual Machine Quality of Service (vmQoS) is a Hyper-V feature that allows the switch to set limits on the traffic generated by each VM. It also enables a VM to reserve an amount of bandwidth on the external network connection so that one VM can’t starve another VM of bandwidth. See the section on Hyper-V QoS at https://technet.microsoft.com/en-us/library/hh831679.aspx. vmQoS is replaced in the Windows Server 2016 SDNv2 stack by SDN QoS.

vmQoS has the ability to set egress limits and egress reservations. Egress reservations can be made based on absolute bandwidth or by relative weight. The egress reservation mode (weight or absolute) must be decided at the time the Hyper-V switch is created.

The egress reservation mode (relative weight or absolute bandwidth) is determined by the -MinimumBandwidthMode parameter on the New-VMSwitch PowerShell cmdlet.

Setting the value for the egress limit is done by setting the -MaximumBandwidth parameter on the Set-VMNetworkAdapter PowerShell cmdlet.

Setting the value for the egress reservation is done by setting either the -MinimumBandwidthAbsolute or -MinimumBandwidthWeight parameter, depending on the switch’s mode, as shown in the sketch after this list:

o If the -MinimumBandwidthMode parameter on the New-VMSwitch cmdlet is Absolute, set the -MinimumBandwidthAbsolute parameter on the Set-VMNetworkAdapter cmdlet.

o If the -MinimumBandwidthMode parameter on the New-VMSwitch cmdlet is Weight, set the -MinimumBandwidthWeight parameter on the Set-VMNetworkAdapter cmdlet.
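A minimal sketch using weight mode (the switch, adapter, and VM names are illustrative; -MaximumBandwidth is expressed in bits per second):

# Create a switch whose reservations are expressed as relative weights
New-VMSwitch -Name "TenantSwitch" -NetAdapterName "NIC1" -MinimumBandwidthMode Weight

# Cap this VM's egress at 1 Gbps and give it a relative reservation weight of 10
Set-VMNetworkAdapter -VMName "Tenant01" -MaximumBandwidth 1000000000 -MinimumBandwidthWeight 10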

Because of limitations in the algorithm used for this feature, Microsoft recommends that the highest weight or absolute bandwidth should not be more than 20 times the lowest weight or absolute bandwidth. If greater granularity of control is needed consider using the SDN stack and the SDN-QoS feature contained therein.

See https://technet.microsoft.com/en-us/library/hh848455.aspx for more information on the New-VMSwitch PowerShell cmdlet and its parameters.

See https://technet.microsoft.com/en-us/library/hh848457.aspx for more information on the Set-VMNetworkAdapter PowerShell cmdlet and its parameters.

2.2 Software/Hardware Integrated Features

2.2.1 Converged NIC

Converged NIC is a technology that allows virtual NICs in the Hyper-V host to expose RDMA services to host processes. (Converged NIC may also be referred to in some documentation as NDKPI Mode 2 operation.) See also the RDMA topic in section 2.2.8.

2.2.1.1 What Converged NIC can do

Prior to Windows Server 2016, customers using RDMA for their storage transport had to dedicate RDMA-capable NICs to that purpose. NICs placed in NIC Teaming teams and NICs bound to a Hyper-V Virtual Switch were blocked from exposing RDMA capabilities. The standard configuration for using RDMA looked like what’s shown in Figure 5.

Figure 5 - Using RDMA in Windows Server 2012 R2

In Windows Server 2016, separate NICs are no longer required for RDMA. The Converged NIC feature allows the virtual NICs in the host partition (vNICs) to expose RDMA to the host and to share the bandwidth of the physical NICs between RDMA traffic, VM traffic, and other TCP/UDP traffic in a fair and manageable manner. In Windows Server 2016 the picture changes to that shown in Figure 6.

Figure 6 - RDMA converged on the Hyper-V switch in Windows Server 2016

2.2.1.2 How Converged NICs are managed

Converged NIC operation may be managed through VMM or through PowerShell. The PowerShell cmdlets are those for RDMA (see section 2.2.8).

To use the Converged NIC capability:

1. Ensure that the host is set up as necessary for DCB. See DCB below.
2. Ensure RDMA is enabled on the NIC or, in the case of a SET team, on the NICs that are bound to the Hyper-V switch. See RDMA below.
3. Ensure RDMA is enabled on the vNICs that are designated to be used for RDMA in the host. See RDMA below.

Detailed instructions can be found at https://technet.microsoft.com/en-us/library/mt403349.aspx.
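A minimal sketch of steps 2 and 3, assuming a SET switch named "SETswitch" and a host vNIC named "SMB1" (all names are illustrative):

# Enable RDMA on the physical NICs that back the switch
Enable-NetAdapterRdma -Name "NIC1","NIC2"

# Create a host vNIC on the switch and enable RDMA on it
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB1"
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"

# Confirm which adapters are now RDMA-enabled
Get-NetAdapterRdma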

2.2.2 Data Center Bridging (DCB)

Data Center Bridging: a feature that provides hardware queue-based bandwidth management in a host, with cooperation from the adjacent switch. See https://technet.microsoft.com/en-us/library/hh849179.aspx for more information.

DCB is used to mean all four of the following technologies:

1. Priority-based Flow Control (PFC), standardized in IEEE 802.1Qbb. PFC is used to create a (nearly) lossless network fabric by preventing queue overflow within traffic classes.

2. Enhanced Transmission Selection (ETS), standardized in IEEE 802.1Qaz. ETS enables the division of the bandwidth into reserved portions for up to 8 classes of traffic. Each traffic class has its own transmit queue and, through the use of PFC, can start and stop transmission within a class.

3. Congestion Notification, standardized in IEEE 802.1Qau. Windows Server doesn’t make use of DCB’s congestion notification. This congestion notification is at layer 2 and as such is distinct from the ECN bits of the IP header.

4. Data Center Bridging Capabilities Exchange Protocol (DCBX), specified in IEEE 802.1Qaz. DCBX leverages functionality of the LLDP protocol (IEEE 802.1AB). While Windows Server may allow a NIC to consume and act on DCBX information from an adjacent switch, the normal practice is to tell Windows Server NICs to ignore any DCBX information received from the adjacent switch.

In summary, Windows Server is only interested in the PFC and ETS technologies of DCB.

In Windows Server 2012 R2:

1. DCB policy was only used on NICs that were not bound to the Hyper-V switch; and
2. DCB policy was applied the same on all NICs to which it could be applied (i.e., all those except the ones bound to Hyper-V switches).

In Windows Server 2016 DCB can be applied to any NIC individually and it can be applied to NICs bound to the Hyper-V switch.
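A minimal sketch of a typical PFC/ETS configuration for SMB Direct (RDMA) traffic on priority 3 (the adapter name and the 50% reservation are illustrative):

# Tag SMB Direct traffic with priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Make priority 3 lossless and leave the other priorities lossy
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve 50% of the bandwidth for the SMB traffic class (ETS)
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Ignore DCBX settings advertised by the adjacent switch and apply DCB to the NIC
Set-NetQosDcbxSetting -Willing $false
Enable-NetAdapterQos -Name "NIC1"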

2.2.3 Hyper-V Network Virtualization – v1 (HNVv1)

Hyper-V Virtual Networking (NVGRE): see https://technet.microsoft.com/en-us/library/jj134230.aspx.

2.2.4 Hyper-V Network Virtualization – v2 NVGRE (HNVv2 NVGRE)

Hyper-V Virtual Network v2 (NVGRE): HNVv2 is part of the SDN extension in Windows Server 2016. See the SDN documentation for more information on this feature.

HNVv2 (NVGRE) is managed through the Network Controller.

2.2.5 Hyper-V Network Virtualization – v2 VxLAN (HNVv2 VxLAN)

Hyper-V Virtual Network v2 (VxLAN): HNVv2 is part of the SDN extension in Windows Server 2016. See the SDN documentation for more information on this feature.

HNVv2 (VxLAN) is managed through the Network Controller.

2.2.6 IPsec Task Offload (IPsecTO)

IPsec task offload: a NIC feature that enables the operating system to use the processor on the NIC for the IPsec encryption work. See https://technet.microsoft.com/en-us/library/dd125367(v=ws.10).aspx for a full description and instructions on how to enable IPsecTO.

There are no changes to IPsecTO in either Windows Server 2012 R2 or in Windows Server 2016.

This feature is presently available on very few NICs.

2.2.7 PVLAN

Private VLAN. The Hyper-V stack and the SDN stacks support PVLAN isolated port mode only. See, e.g., https://en.wikipedia.org/wiki/Private_VLAN.

For more information on PVLANs in Hyper-V see https://blogs.technet.microsoft.com/scvmm/2013/06/04/logical-networks-part-iv-pvlan-isolation/.

There were no changes to PVLANs in Windows Server 2016.

2.2.8 Remote Direct Memory Access (RDMA)

Remote Direct Memory Access is a technology that allows two hosts to move data between their memories without incurring the network-stack processing overhead in the host. See https://msdn.microsoft.com/en-us/library/windows/hardware/dn163544(v=vs.85).aspx for more general knowledge about RDMA.

Terms:

RDMA-capable: This means the NIC (physical or virtual) is capable of exposing RDMA to an RDMA client.

RDMA-enabled: This means an RDMA-capable NIC is exposing the RDMA interface up the stack.

The PowerShell cmdlets of interest to this feature include the Disable-NetAdapterRdma, Enable-NetAdapterRdma, Get-NetAdapterRdma, and Set-NetAdapterRdma cmdlets.

To see whether or not a NIC is RDMA-capable, execute the Get-NetAdapterRdma cmdlet. If the NIC is listed in the response it is RDMA-capable. The “Enabled” field shows whether it is enabled or not. See Figure 7.

Figure 7 - Get-NetAdapterRdma output example

To make an RDMA-capable NIC into an RDMA-enabled NIC use the Enable-NetAdapterRdma cmdlet.

Figure 8 - RDMA-enabling NICs

As can be seen in the above examples, both physical NICs and virtual NICs can be RDMA-capable and RDMA-enabled.

2.2.9 RSS

Receive Side Scaling is a NIC feature that segregates different sets of streams and delivers them to different processors for processing. This parallelizes the networking processing, enabling a host to scale to very high data rates. For more information on RSS see https://technet.microsoft.com/en-us/library/hh997036.aspx and the additional information at https://msdn.microsoft.com/library/windows/hardware/ff556942(v=vs.85).aspx.
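For example, to inspect and constrain RSS on an adapter (the adapter name and processor range are illustrative):

# Show RSS capability and current settings
Get-NetAdapterRss -Name "NIC1"

# Enable RSS and limit it to 8 processors starting at logical processor 2
Enable-NetAdapterRss -Name "NIC1"
Set-NetAdapterRss -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 8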

2.2.10 SR-IOV

Single-Root Input/Output Virtualization is a feature that allows traffic for a VM to move directly from the NIC to the VM without passing through the Hyper-V host. See https://msdn.microsoft.com/en-us/library/windows/hardware/hh440148(v=vs.85).aspx for more information.

SR-IOV delivers a dramatic performance improvement for a VM. It suffers, however, from the host’s inability to manage that pipe. SR-IOV should only be used when the workload is well behaved, trusted, and generally the only VM on the host.

Traffic that uses SR-IOV bypasses the Hyper-V switch. That means that any policies (ACLs, etc.) or bandwidth management won’t be applied. SR-IOV traffic also can’t be passed through any network virtualization capability, so NVGRE or VxLAN encapsulation can’t be applied. Since the host policies, bandwidth management, and virtualization technologies can’t be used, this is a tool only for very well trusted workloads in very specific situations.
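A minimal sketch of enabling SR-IOV for a VM, assuming the NIC, firmware, and platform support it (the switch, adapter, and VM names are illustrative):

# The switch must be created with IOV enabled; this cannot be changed after creation
New-VMSwitch -Name "IovSwitch" -NetAdapterName "NIC1" -EnableIov $true

# Request a Virtual Function for the VM's network adapter
Set-VMNetworkAdapter -VMName "Tenant01" -IovWeight 100

# List SR-IOV-capable physical adapters and their VF counts
Get-NetAdapterSriov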

However, there is hope. Two technologies are expected to appear in future NICs that would allow SR-IOV to be used more generally. Generic Flow Tables (GFT) and Hardware QoS Offload (bandwidth management in the NIC) are two technologies under active consideration for future releases of Windows – once the NICs in our ecosystem support them. The combination of these two technologies would make SR-IOV useful for all VMs, would allow policies, virtualization, and bandwidth management rules to be applied, and could result in great leaps forward in the general application of SR-IOV.

2.2.11 TCP Chimney Offload

TCP Chimney Offload, a form of TCP Offload Engine (TOE), is a technology that allows the host to offload all TCP processing to the NIC. Because the Windows Server TCP stack is almost always more efficient than the TOE engine, Windows discourages the use of TCP Chimney. See https://technet.microsoft.com/en-us/library/gg162709(v=ws.10).aspx for more information.
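A sketch of checking and explicitly disabling it (the global offload setting shown here also covers other offloads):

# Show global offload settings, including Chimney
Get-NetOffloadGlobalSetting

# Explicitly disable TCP Chimney Offload
Set-NetOffloadGlobalSetting -Chimney Disabled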

2.2.12 VLAN

Virtual Local Area Network: an extension to the Ethernet frame header that enables partitioning of a Local Area Network into multiple VLANs, each using its own address space. See, e.g., https://en.wikipedia.org/wiki/Virtual_LAN for more general information.

In Windows Server 2016 VLANs are set on ports of the Hyper-V switch or by setting team interfaces on NIC Teaming teams. More information about the latter can be found in the NIC Teaming Deployment Guide for Windows Server 2016 at https://technet.microsoft.com/en-us/windows-server-docs/networking/technologies/nic-teaming/nic-teaming?f=255&MSPPError=-2147217396 .

To set a VLAN on a Hyper-V Switch port use the Set-VMNetworkAdapterVlan cmdlet. VLANs can also be set through the Hyper-V manager.

More information on the Set-VMNetworkAdapterVlan cmdlet can be found at https://technet.microsoft.com/en-us/library/hh848475.aspx.

There is a second method of setting VLANs in Windows Server 2016. The Set-VMNetworkAdapterIsolation cmdlet can also be used to set VLANs on Hyper-V switch ports.

CAUTION: If a VLAN is set using the Set-VMNetworkAdapterVlan cmdlet it will not show up if queried with the Get-VMNetworkAdapterIsolation cmdlet. If the VLAN is set using the Set-VMNetworkAdapterIsolation cmdlet it will not show up if queried with the Get-VMNetworkAdapterVlan cmdlet. Use one, and only one, of these PowerShell methods for setting and querying VLAN information.
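For example, to put a VM’s port in access mode on VLAN 10 and verify it (the VM name and VLAN ID are illustrative):

# Assign the VM's port to VLAN 10 in access mode
Set-VMNetworkAdapterVlan -VMName "Tenant01" -Access -VlanId 10

# Confirm the setting
Get-VMNetworkAdapterVlan -VMName "Tenant01"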

2.2.13 VMQ

Virtual Machine Queues is a NIC feature that allocates a queue for each VM. The NIC divides the incoming traffic into the appropriate queues. Interrupts for each queue are mapped to different processors.

Over the past few years the architecture of NICs has been evolving. At the time VMQ was first implemented, NICs had queues and the ability to filter traffic into queues based on, e.g., the MAC address and VLAN tag. This created the ability to map a VM’s vmNIC to a queue so that all incoming traffic for that vmNIC was delivered through the same queue. Each queue would interrupt a processor independent of other queues, so those interrupts and subsequent packet processing could be spread across multiple processors in the host.

With the advent of SR-IOV capable NICs came the presence of an Ethernet switch embedded in the NIC (NIC Switch). As with any Ethernet switch, the NIC Switch maps its ports to specific Ethernet addresses (MAC/VLAN). The Windows Server software can detect NICs that have the NIC Switch architecture. When a NIC Switch is found, instead of setting up a filtered queue in the NIC, the operating system has the NIC map a NIC Switch port (vPort) to a queue. There is one queue per vPort and, to the user, it looks just like VMQ has always looked.

Windows Server 2012 R2 uses the NIC Switch architecture when an SR-IOV capable vSwitch is created. This is because in Windows Server 2012 R2 the NIC Switch and SR-IOV were coupled together.

Windows Server 2016 splits the NIC Switch away from the SR-IOV feature. SR-IOV still requires a NIC Switch as do other Windows Server 2016 features (e.g., VMMQ), but the NIC Switch can be used and the benefits thereof realized without requiring SR-IOV to be enabled.

Take-away: Early versions of VMQ used filters to assign traffic to queues in the NIC. Later versions of VMQ, including Windows Server 2016, use NIC Switch vPorts with a single queue assigned to the vPort to provide the same functionality.

Planning: If you expect to have fewer VMs on a host than your NIC has queues (or vPorts), you can ignore this section. If you expect to have as many or more VMs than your NIC has queues, then you need to do some planning. Keep in mind that the use of NIC Teaming may result in the number of queues available being the least number of queues of any team member (MIN_QUEUES mode) or the sum of all queues from all team members (SUM_QUEUES mode). See the NIC Teaming documentation for more information.

The number of queues available on your NIC can be found by running the PowerShell cmdlet Get-NetAdapterVmq and looking at the column NumberOfReceiveQueues as shown in Figure 9.

Figure 9 - Get-NetAdapterVmq output

Since the number of VMQs in a NIC is finite and historically fairly small (8 to 125), and some customers are putting lots of VMs on the host (20 to 150), providing a queue for each VM can be an impossible goal. All traffic for VMs that don’t get their own queue gets dumped into the same “default queue”, shares the same CPU, and lacks the optimizations that presorted traffic gives the other queues.

If you find yourself in this situation you should enable VMQ selectively. Look at your VMs and decide which ones are the high-bandwidth, important VMs from a networking perspective. Enable VMQ on those VMs only, ensuring that the number of VMs with VMQ enabled is fewer than the number of queues on the NIC. The Enable-NetAdapterVmq and Disable-NetAdapterVmq PowerShell cmdlets will help.

In the documentation for Set-VMNetworkAdapter there is reference to a -VmqWeight parameter to be used with VMQ. Presently that parameter is only used to indicate Enabled (non-zero) or Disabled (zero). This value can be used to determine which VMs (really, Hyper-V vSwitch ports) are eligible to get a VMQ and which ones are not. There is no difference between different non-zero values.
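For example (the adapter and VM names are illustrative):

# Check queue counts and current VMQ state on the physical NIC
Get-NetAdapterVmq -Name "NIC1"

# Make a low-priority VM ineligible for a queue and keep an important VM eligible
Set-VMNetworkAdapter -VMName "LowPriorityVM" -VmqWeight 0
Set-VMNetworkAdapter -VMName "SqlVM" -VmqWeight 100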

More information on the Disable-NetAdapterVmq cmdlet can be found at https://technet.microsoft.com/en-us/library/jj130870(v=wps.630).aspx.

More information on the Enable-NetAdapterVmq cmdlet can be found at https://technet.microsoft.com/en-us/library/jj130870(v=wps.630).aspx.

More information on the Get-NetAdapterVmq cmdlet can be found at https://technet.microsoft.com/en-us/library/jj130881(v=wps.630).aspx.

More information on the Set-VMNetworkAdapter cmdlet can be found at https://technet.microsoft.com/en-us/itpro/powershell/windows/hyper-v/set-vmnetworkadapter.

2.2.14 VMMQ

Virtual Machine Multi-Queue is a Windows Server 2016 NIC feature that allows traffic for a VM to be spread across multiple queues, each processed by a different physical processor. The traffic is then passed to multiple LPs in the VM as it would be in vRSS. This allows very large networking bandwidth to be delivered to the VM.

Read Section 2.2.13. VMMQ is the evolution of VMQ with Software vRSS. Whereas in VMQ the NIC Switch vPort is mapped to a single queue, VMMQ assigns multiple queues to the same vPort. RSS hashing (see section 2.2.9) is used to spread the incoming traffic between the queues assigned to the vPort. The result is effectively a hardware offload version of vRSS (See section 2.1.7).

The first benefit of VMMQ is that the default queue can now be a set of queues assigned to the default vPort. NICs supporting VMMQ typically have a good number of queues available that can be distributed across the vPorts. While the number of vPorts may be limited, the number of queues is much less so.

This doesn’t mean that VMMQ is always better than VMQ. Or rather, VMMQ with more than one queue per vPort doesn’t always mean better performance. It often does, but not always.

One of the benefits of VMQ is the batching of packets for a VM at each interrupt. If the number of packets arriving for a queue between interrupts is small, the amount of time spent processing the interrupt per packet received becomes proportionally larger. Since VMMQ enables multiple queues per vPort and each queue interrupts the host independently, low traffic VMs may find that the overhead of interrupt processing will result in higher CPU loads (more cycles/byte) with VMMQ than with VMQ. High traffic VMs, on the other hand, will benefit from the CPU load spreading that multiple queues can provide.

Guidance: Assign one VMMQ queue for a VM for every 3-4 Gbps of incoming networking traffic that the VM requires. If you have more VMs than vPorts, assign one queue to the default queue for every 2 Gbps of aggregate bandwidth that the VMs sharing the default queue will require. (Some NICs that support VMMQ may only be able to support the same number of queues on every vPort. This may require some testing to find the right balance between load spreading and interrupt processing.)

To manage VMMQ queues for VMs, use the Set-VMNetworkAdapter PowerShell cmdlet with the -VmmqEnabled and -VmmqQueuePairs parameters. (A queue is technically a queue pair: send and receive.)

To manage the number of queues assigned to the default vPort use the Set-VMSwitch PowerShell cmdlet with the -DefaultQueueVmmqEnabled and -DefaultQueueVmmqQueuePairs parameters.
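For example (the VM and switch names are illustrative; the queue counts follow the guidance above):

# Give a high-bandwidth VM four queue pairs
Set-VMNetworkAdapter -VMName "SqlVM" -VmmqEnabled $true -VmmqQueuePairs 4

# Give the default vPort (VMs without their own vPort) eight queue pairs
Set-VMSwitch -Name "TenantSwitch" -DefaultQueueVmmqEnabled $true -DefaultQueueVmmqQueuePairs 8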

More information on the Set-VMNetworkAdapter cmdlet can be found at https://technet.microsoft.com/en-us/library/hh848457.aspx.

More information on the Set-VMSwitch cmdlet can be found at https://technet.microsoft.com/en-us/library/hh848515.aspx.

2.2.15 Progression of queuing in Windows Server: RSS to VMMQ

Around 2005 the more advanced NICs began to offer queues that could be independently affinitized to different processors within the CPU(s). Windows Server began to look at how these queues could be used to accelerate packet processing in the operating system. At that time a single CPU could keep up with more than a gigabit per second of packet processing, but NICs were starting to be able to deliver up to 10 Gbps of packets. It was important to find a way to bring more CPU power to bear on the network processing workload.

The first use of queues to accelerate processing was done with RSS technology as shown in Figure 10.

Figure 10 - RSS (where the figure says “Interrupt processor”, the processor interrupt is moderated by the Interrupt Moderation setting; see Section 2.3.3)

As Hyper-V virtualization began to need acceleration of packets to the VMs, the queues were converted to a new usage, VMQ, as shown in Figure 11. Now, instead of using Toeplitz hash calculations to direct packets to queues, a filter was inserted containing the MAC address and VLAN of a given port on the Hyper-V switch. All the traffic destined to that port was now delivered to the queue whose filter matched the packet header. Any traffic not matching a filter was delivered to a queue known as the default queue.

Figure 11 – VMQ using filters

The technology kept advancing. NICs, especially those designed for use with SR-IOV, started to have embedded Ethernet switches. This meant that the NIC could more efficiently route the packets to a port on the embedded Ethernet switch. The switches had programmable forwarding tables. The operating system could detect whether the NIC supported filter-based VMQ or instead had an embedded Ethernet switch and would program the VMQs accordingly. The switch-based architecture is shown in Figure 12.

Figure 12 – VMQ using embedded Ethernet switch

Both variants of VMQ suffer from a common challenge. Since all the packets going to a single VM interrupt on a single processor, the maximum bandwidth of a VM is limited to what one processor can handle. This problem gets solved with a technology known as virtual RSS (vRSS).

vRSS is RSS over VMQ. VMQ is used in the hardware to segregate traffic for a given VM. When the packets arrive at the Hyper-V switch, the switch distributes the arriving packets across multiple processors for calculation of the Toeplitz hash and other switch processing. Each processor, based on the calculated hash, places the packet on a vmBus sub-channel such that it flows to a particular logical processor in the VM. This variant of RSS results in the packets being spread across a VM’s logical processor set based on the RSS parameters enabled in the VM. See Figure 13.

vRSS offers considerable acceleration over VMQ alone. Typically VMQ alone will result in a data rate of around 5Gbps/VM (assuming no other workload is running in that processor). Adding vRSS can result in data rates up to near 20Gbps/VM if the VM has sufficient logical processors and the host processors are available to do the spreading.

A few important points:

The number of processors used in the host to do the vRSS spreading is independent of and unrelated to the number of logical processors used in the VM.

The number of vmBus sub-channels used is equal to the number of RSS processors set in the VM.

Each processor used to do the Toeplitz hash calculations can place the packet on any vmBus sub-channel and will do so to match the hash table received from the VM.

Figure 13 - vRSS

vRSS acceleration is sufficient for many workloads, but there are always those VMs that want even more. If we could move the RSS queues into the hardware with the VMQ, we’d get even faster speeds.

So with Windows Server 2016 we introduce VMMQ. VMMQ is VMQ plus vRSS at its core, but now we use both the embedded switch’s ports and the hardware RSS queues we started with. The multiple processors of software vRSS are replaced (mostly) by the queues in the NIC hardware. The Toeplitz hash calculation takes place in the NIC hardware as well, so the work to put a packet in the appropriate vmBus sub-channel is minimized.

The VMMQ queue architecture is shown in Figure 14. Notice that each of the different switch ports can have a different number of queues assigned to it based on the traffic volume expected by the associated VM. (There is at least one brand of NIC in the market that presently can only assign the same number of queues to every port, but most NICs that implement VMMQ can support a variable number of queues per port.) Data rates in excess of 45 Gbps/VM have frequently been observed using VMMQ.

Figure 14- VMMQ

As an aside, in the embedded Ethernet switch model some ports may be dedicated to Virtual Functions (VFs), an SR-IOV concept. All the ports in the VMQ/VMMQ models are ports within the Physical Function (PF).

2.3 Hardware Only (HO) Features

2.3.1 Address Checksum Offload

2.3.1.1 What Address Checksum Offloads do

Address checksum offloads are a NIC feature that offloads the calculation of address checksums (IP, TCP, UDP) to the NIC hardware, for both send and receive.

On the receive path the checksum offload calculates the checksums in the IP, TCP, and UDP headers (as appropriate) and indicates to the OS whether the checksums passed, failed, or weren’t checked. If the NIC asserts that the checksums are valid, the OS accepts the packet unchallenged. If the NIC asserts the checksums are invalid or weren’t checked the IP/TCP/UDP stack internally calculates the checksums again. If the computed checksum fails the packet is discarded.

On the send path the checksum offload calculates and inserts the checksums into the IP, TCP, and/or UDP header as appropriate.

Disabling checksum offloads on the send path does not disable checksum calculation and insertion for packets sent to the miniport driver using the Large Send Offload (LSO) feature. To disable all checksum offload calculations the user must also disable LSO.

2.3.1.2 How Address Checksum Offloads are managed

In the Advanced Properties there are several distinct properties:

IPv4 Checksum Offload
TCP Checksum Offload (IPv4)
TCP Checksum Offload (IPv6)
UDP Checksum Offload (IPv4)
UDP Checksum Offload (IPv6)

By default all of these are enabled. Microsoft strongly recommends that all of these offloads remain enabled at all times.

The Checksum Offloads can be managed using the Enable-NetAdapterChecksumOffload and Disable-NetAdapterChecksumOffload cmdlets. For example, the following cmdlet enables the TCP (IPv4) and UDP (IPv4) checksum calculations:

Enable-NetAdapterChecksumOffload –Name * -TcpIPv4 -UdpIPv4

2.3.2 Tips on using Address Checksum Offloads

Address Checksum Offloads should ALWAYS be enabled no matter what workload or circumstance. This most basic of all offload technologies will always improve your network performance. Checksum offloading is also required for other stateless offloads to work, including receive side scaling (RSS), receive segment coalescing (RSC), and large send offload (LSO).

2.3.3 Interrupt Moderation (IM)

Historically NICs interrupted the operating system every time a packet was received. As bitrates increased this led to lots of interrupts, enough to become a drain on the CPUs used by the operating system.

The solution was to have NICs buffer multiple received packets before interrupting the operating system. Now when a NIC receives a packet it starts a timer. When the buffer is full or the timer expires, whichever comes first, the NIC interrupts the operating system.

Many NICs support more than just on/off for Interrupt Moderation. Most NICs support the concepts of a low, medium, and high rate for IM. The different rates will represent shorter or longer timers and appropriate buffer size adjustments to reduce latency (low interrupt moderation) or reduce interrupts (high interrupt moderation).

There is a balance to be struck between reducing interrupts and excessively delaying packet delivery. Generally packet processing will be more efficient with Interrupt Moderation enabled. High performance or low latency applications may need to evaluate the impact of disabling or reducing Interrupt Moderation.
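Interrupt Moderation is typically exposed as an advanced property; a sketch using the standardized *InterruptModeration keyword (the adapter name is illustrative, and some drivers expose vendor-specific moderation-rate keywords instead):

# Show the current Interrupt Moderation setting
Get-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*InterruptModeration"

# Disable Interrupt Moderation for a latency-sensitive workload (0 = disabled, 1 = enabled)
Set-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*InterruptModeration" -RegistryValue 0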

2.3.4 Jumbo Frames

Jumbo frames is a NIC and network feature that allows an application to send frames that are much larger than the default 1500 bytes. Typically the limit on jumbo frames is about 9000 bytes, but it may be smaller. See https://en.wikipedia.org/wiki/Jumbo_frame for general information on jumbo frames.

There were no changes to jumbo frame support in Windows Server 2012 R2.
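Jumbo frames are usually configured through the *JumboPacket advanced property; a sketch (the adapter name and the 9014-byte value are illustrative, and the accepted values vary by driver):

# Check which jumbo packet sizes the driver accepts
Get-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*JumboPacket" | Format-List DisplayValue, ValidRegistryValues

# Set an approximately 9 KB frame size
Set-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014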

In Windows Server 2016 there is a new offload: MTU_for_HNV. This new offload works with jumbo frame settings to ensure encapsulated traffic doesn’t require segmentation between the host and the adjacent switch. This new feature of the SDN stack has the NIC automatically calculate what MTU to advertise and what MTU to use on the wire. These values for MTU will be different if any HNV offload is in use. (In the feature compatibility table, Table 1, MTU_for_HNV would have the same interactions as the HNVv2 offloads, since it is directly related to the HNVv2 offloads.)

2.3.5 Large Send Offload (LSO)

Large send offload, also known as large segment offload (and as LSOv2, to distinguish it from the earlier LSOv1 during the period when both were available), allows an application to pass a large block of data to the NIC, whereupon the NIC breaks the data into packets that fit within the Maximum Transmission Unit (MTU) of the network. See https://en.wikipedia.org/wiki/Large_segment_offload for general information on LSO.

There were no changes to LSO in Windows Server 2016.
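For example (the adapter name is illustrative):

# Show LSO state for IPv4 and IPv6
Get-NetAdapterLso -Name "NIC1"

# Enable LSO for both IP versions
Enable-NetAdapterLso -Name "NIC1" -IPv4 -IPv6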

2.3.6 RSC

Receive segment coalescing, also known as Large Receive Offload, is a NIC feature that takes packets belonging to the same stream that arrive between network interrupts and coalesces them into a single packet before delivering them to the OS.

RSC is not available on NICs bound to the Hyper-V switch. There were no changes to RSC in Windows Server 2016.

See https://technet.microsoft.com/en-us/library/hh997024.aspx for more information.
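For example (the adapter name is illustrative):

# Show whether RSC is enabled and operational for IPv4 and IPv6
Get-NetAdapterRsc -Name "NIC1"

# Enable RSC for both IP versions
Enable-NetAdapterRsc -Name "NIC1" -IPv4 -IPv6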

3 Feature mutual compatibility

With all the networking features in Windows Server 2016, it’s inevitable that some features don’t work with others; it’s surprising how many features do work together. The table that follows points out the compatibility of each pair of features. The table is easiest to read in color.

Table 1 - Feature compatibility matrix

4 Windows Server 2016 Networking Stacks

Windows Server 2016 supports different networking stacks depending on what the customer is using the server to do. Specifically:

1. Native (bare metal). This is a physical server workload (e.g., a file server).
   a. A Windows VM is a variant of this with minor differences (excluding nested Hyper-V).
2. Hyper-V server with SDNv1. This stack first shipped in Windows Server 2012 and continues to exist in Windows Server 2016.
3. Hyper-V server with SDNv2. This stack ships new in Windows Server 2016.
   a. The MAS stack is a variant of this with only management differences.

The feature sets exposed in each of these scenarios are different, yet there is a large amount of overlap between them. Table 2 is an attempt to show how features map to scenarios in Windows Server 2016.

Keep in mind that not all features work with all other features so two features that are available in the same stack may not be available at the same time (e.g., NVGRE and VxLAN are not able to be used in the same host at the same time).

Table 2 - Feature/Stack compatibility table

Table 2 is a color-coded matrix (key: green = available, red = not possible, purple = not relevant) mapping each feature to the Native, Hyper-V/SDNv1, and SDNv2 stacks. Its rows are grouped into: NIC offloads (address checksum, DCB, IPsecTO, jumbo frames, LSO, NVGRE task offload, RDMA/Converged NIC, RSC, RSS, SR-IOV, TCP Chimney Offload, VMMQ, VMQ, VxLAN task offload); Stack (NIC Teaming as LBFO, LBFO or SET, or SET; switch-independent and LACP teaming modes; Software vRSS); HNV (HNVv1/NVGRE, HNVv2/NVGRE, HNVv2/VxLAN); Hyper-V (ACLs/Extended ACLs, SDN ACLs, vmQoS, SDN QoS, 3rd-party switch extensions, VLANs/PVLANs); and Management (PowerShell, PowerShell with Network Controller, VMM fabric only, Network Controller, RPs and Portal).

5 References

5.1 BLOGS

1. 4 datacenter challenges and how Windows Server 2016 software defined networking can help. https://blogs.technet.microsoft.com/hybridcloud/2015/11/04/4-datacenter-challenges-and-how-windows-server-2016-software-defined-networking-can-help/

2. Zero to SDN in under five minutes. https://blogs.technet.microsoft.com/windowsserver/2016/02/04/zero-to-sdn-in-under-five-minutes/

3. What’s New In Windows Server 2016 Standard Edition Part 6 – Networking https://blogs.technet.microsoft.com/ausoemteam/2016/09/01/whats-new-in-windows-server-2016-standard-edition-part-6-networking/

5.2 VIDEOS

1. Microsegment and secure your networks with the Azure-inspired Software Defined Networking. https://myignite.microsoft.com/videos/2872
2. Explore Windows Server 2016 Software Defined Datacenter. https://myignite.microsoft.com/videos/2975
3. Deploy complex workloads with Azure Agility - from zero to SDN in 60 minutes. https://myignite.microsoft.com/videos/2873
4. Dig into cloud networking performance, monitoring, and diagnostics. https://myignite.microsoft.com/videos/3117

5.3 DOCUMENTATION AROUND SDN

1. Software Defined Networking (SDN). https://technet.microsoft.com/en-us/windows-server-docs/networking/sdn/software-defined-networking--sdn-
2. Network Controller. https://technet.microsoft.com/en-us/windows-server-docs/networking/sdn/technologies/network-controller/network-controller

5.4 GITHUB

1. SDN documentation and scripts; switch configuration information. https://github.com/Microsoft/sdn

5.5 DOCUMENTATION AROUND Windows Server 2016 NETWORKING

1. Networking. https://technet.microsoft.com/en-us/windows-server-docs/networking/networking?f=255&MSPPError=-2147217396
2. Windows Server 2016 Supported Networking Scenarios. https://technet.microsoft.com/en-us/windows-server-docs/networking/windows-server-2016-supported-networking-scenarios

3. Deploying NIC Teaming and Switch Embedded Teaming. https://gallery.technet.microsoft.com/Windows-Server-2016-839cb607
