Windows Server 2012 Hyper-V Deep Dive
Transcript of Windows Server 2012 Hyper-V Deep Dive
Régis Laurent, Director of Operations, Global Knowledge
Competencies include:
Gold Learning
Silver System Management
Windows Server 2012 Hyper-V Deep Dive
Jeff Woolsey (#wsv_guy)
Windows Server & Cloud
Hyper-V in Windows Server 2012
Scale, Performance and Density: Mission-Critical, Scale-Up Workloads
Storage: Investments in File & Block
More Secure Multi-Tenancy
Flexible Infrastructure: Continuous Availability
VM Mobility
Windows Server 2008 R2 Editions…
                                        Standard      Enterprise    Datacenter
Maximum Processors                      Up to 4       Up to 8       Up to 64
Maximum Memory                          Up to 32 GB   Up to 2 TB    Up to 2 TB
Failover Clustering and Multi-Path IO   No            Yes           Yes
Virtualized Server Guest OS Instances   1             4             Unlimited
Licensed                                Per Server    Per Server    Per Processor
Windows Server 2012: Simplicity
Windows Server 2012 Standard and Datacenter Editions share the same capabilities
Scale: up to 64 physical processors, up to 4 TB physical memory
Up to 64 virtual processors per VM, up to 1 TB memory per VM
Roles: Active Directory, File Server, Hyper-V, IIS, Remote Access, all there…
Features: BranchCache, BitLocker, Failover Clustering, MPIO, all there…
The only difference is virtualization licensing:
Standard: two instances of Windows Server
Datacenter: unlimited instances of Windows Server
Windows Server 2012 Editions…
                                        Standard        Datacenter
Maximum Processors                      Up to 64        Up to 64
Maximum Memory                          Up to 4 TB      Up to 4 TB
Failover Clustering and Multi-Path IO   Yes             Yes
Windows Server Guest OS Instances       2               Unlimited
Licensed                                Per Processor   Per Processor
Mission Critical Workloads
Scaling up: Physical NUMA
NUMA (non-uniform memory access) helps hosts scale up the number of cores and the amount of memory
Partitions cores and memory into “nodes”
Allocation cost and latency depend on the memory location relative to a processor
High-performance applications detect NUMA and minimize cross-node memory access
[Diagram: a host with two NUMA nodes, each pairing processors with local memory]
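The preference the slides describe, keeping a thread's memory in its own NUMA node and spilling to a remote node only when the local one is exhausted, can be sketched as a toy allocator (hypothetical code, not Hyper-V internals; node numbers and sizes are invented):

```python
# Hypothetical sketch (not Hyper-V code): prefer allocating memory from the
# NUMA node a thread runs on, falling back to the roomiest remote node.

def pick_node(thread_node, free_mb_per_node, needed_mb):
    """Return the NUMA node to allocate from, preferring the local node."""
    if free_mb_per_node.get(thread_node, 0) >= needed_mb:
        return thread_node  # local allocation: no cross-node hop
    # Fall back to the remote node with the most free memory.
    candidates = {n: f for n, f in free_mb_per_node.items()
                  if n != thread_node and f >= needed_mb}
    if not candidates:
        raise MemoryError("no node can satisfy the request")
    return max(candidates, key=candidates.get)

free = {1: 512, 2: 8192, 3: 4096, 4: 0}
print(pick_node(1, free, 256))   # local node 1 has enough -> 1
print(pick_node(4, free, 1024))  # node 4 has no local memory -> remote node 2
```

This mirrors the "worst case" on the next slide: a thread on node 4, which has no local memory, is forced to take a node hop for every allocation.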
Scaling up: Physical NUMA
This is optimal:
Memory allocation and thread allocation within the same NUMA node
Memory populated in each NUMA node
[Diagram: four balanced NUMA nodes, each with its own processors and local memory]
Scaling up: Physical NUMA
This isn’t optimal: the system is imbalanced
Memory allocation and thread allocation across different NUMA nodes
Multiple node hops
Node 2: odd number of DIMMs
Node 3: insufficient memory
Node 4: no local memory (worst case)
[Diagram: four imbalanced NUMA nodes; node 4 has processors but no local memory]
Scaling Up: Guest NUMA
Guest NUMA: presenting NUMA topology within the VM
Guest operating systems & apps can make intelligent NUMA decisions about thread and memory allocation
Guest NUMA nodes are aligned with host resources
Policy driven per host: best effort, or force alignment
[Diagram: each VM’s vNUMA nodes A and B aligned onto host NUMA nodes 1–4]
Hyper-V Scale Comparison
                               Windows Server 2008        Windows Server 2008 R2     Windows Server 2012
HW Logical Processor Support   16 LPs                     64 LPs                     320 LPs
Physical Memory Support        1 TB                       1 TB                       4 TB
VM Virtual Processor Support   Up to 4 VPs                Up to 4 VPs                Up to 64 VPs
VM Memory                      Up to 64 GB                Up to 64 GB                Up to 1 TB
Live Migration                 Yes, one at a time         Yes, one at a time         Yes, with no limits (as many as hardware will allow)
Live Storage Migration         No; Quick Storage          No; Quick Storage          Yes, with no limits (as many as hardware will allow)
                               Migration via SCVMM        Migration via SCVMM
Servers in a Cluster           16                         16                         64
Cluster Scale                  16 nodes, up to 1,000 VMs  16 nodes, up to 1,000 VMs  64 nodes, up to 8,000 VMs
Linear Performance & Scale
ESG Labs Validation: Windows Server 2012 with SQL Server 2012
The number of transactions processed per second and the average response time for the
10 transaction types were monitored as virtual CPUs were added from 4 to 64. The
OLTP workload and concurrent user counts remained constant. The number of brokerage
transactions per second scaled linearly up to 64 virtual processors.
Customer input on Storage
Windows Server 2008 R2: 250,000 IOPS
Windows Server 2012: 1,000,000+ IOPS
Hyper-V: Over 1 Million IOPS from a Single VM
Industry Leading IO
Performance
• VM storage performance on par with native
• Performance scales linearly with increase in virtual processors
• Windows Server 2012 Hyper-V can virtualize over 99% of the world’s SQL Server workloads.
VHD Stack
Without Offloaded Data Transfer (ODX)
Traditional data copy model:
Server issues read request to SAN
Data is read into memory
Data is written from memory to SAN
Problems:
Increased CPU & memory utilization
Increased storage traffic
Inefficient for the SAN
VHD Stack
Offloaded Data Transfer (ODX)
Offload-enabled data copy model:
Server issues offload read request to SAN
SAN returns a token representing the request
Server issues write request to SAN using the token
SAN completes the data copy internally
SAN confirms the data was copied
Reduced maintenance time: merge, mirror, VHD/VHDX creation
Increased workload performance: VMs are fully ODX-aware and enabled
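The offload-enabled copy model above can be sketched as a toy token exchange (illustrative Python, not a real storage API; the San class and LUN names are invented). Only the token crosses the wire between server and SAN; the bulk data never touches the server's CPU or memory:

```python
# Illustrative model of the ODX handshake the slide lists: the server
# exchanges a token with the SAN instead of moving the data itself.
import secrets

class San:
    def __init__(self):
        self.luns = {}        # lun name -> bytes
        self.tokens = {}      # token -> (lun, offset, length)

    def offload_read(self, lun, offset, length):
        token = secrets.token_hex(8)
        self.tokens[token] = (lun, offset, length)
        return token          # only the token crosses the wire

    def offload_write(self, token, dest_lun, dest_offset):
        src_lun, off, length = self.tokens.pop(token)
        data = self.luns[src_lun][off:off + length]
        dst = bytearray(self.luns.get(dest_lun, b""))
        dst[dest_offset:dest_offset + length] = data  # copy happens inside the SAN
        self.luns[dest_lun] = bytes(dst)
        return length         # confirmation: bytes copied

san = San()
san.luns["vhdx-src"] = b"virtual disk contents"
token = san.offload_read("vhdx-src", 0, 12)
copied = san.offload_write(token, "vhdx-dst", 0)
print(copied, san.luns["vhdx-dst"])  # 12 b'virtual disk'
```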
Hyper-V ODX Support
Secure Offloaded data transfer
Fixed VHD/VHDX Creation
Dynamic VHD/VHDX Expansion
VHD/VHDX Merge
Live Storage Migration
Just one example…
Creation of a 10 GB fixed disk: ~3 minutes on an average desktop vs. <1 second with ODX
Virtual Fibre Channel
Extends Fibre Channel into VMs:
High-performance workloads
Guest clustering
Exposes SAN functionality
Uses NPIV functionality
Support:
Guest: Windows Server 2008 and later
Host: Windows Server 2012 with an updated NPIV HBA driver
Live migration just works
[Diagram: VM WWNs mapped through an NPIV-capable HBA]
Virtual Fibre Channel and Live Migration
Shared Storage
WWPN Set A: C0:03:FF:78:22:A0:00:14
WWPN Set B: C0:03:FF:78:22:A0:00:15
[Diagram: source and destination hosts alternate between the two WWPN sets against shared storage over Fibre Channel during live migration]
Tips:
Requires Windows Server 2008 and later for the guest OS
Verify latest drivers & firmware for FC adapter
Verify NPIV is enabled on the FC adapter
Verify NPIV is enabled on the FC switch port
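Each virtual Fibre Channel adapter carries two WWPN sets; on live migration the destination host logs in with whichever set is currently idle, so source and destination never contend for the same port name on the fabric. A minimal sketch (hypothetical helper; the WWPNs are the example addresses from the slide):

```python
# Sketch of the two-WWPN-set scheme: during live migration the destination
# host brings up the *other* set, so connectivity never drops and the two
# hosts never fight over one port name.

def next_set(active):
    """Alternate between WWPN set 'A' and set 'B' on each migration."""
    return "B" if active == "A" else "A"

wwpns = {
    "A": "C0:03:FF:78:22:A0:00:14",
    "B": "C0:03:FF:78:22:A0:00:15",
}

active = "A"
for migration in range(3):
    target = next_set(active)
    print(f"migration {migration}: destination logs in with set {target} "
          f"({wwpns[target]})")
    active = target
```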
Hyper-V Storage: No Limits & Dynamic
                           Windows Server 2008   Windows Server 2008 R2   Windows Server 2012
Live Storage Migration     No; Quick Storage     No; Quick Storage        Yes, with no limits (as many as hardware will allow)
                           Migration via SCVMM   Migration via SCVMM
VMs on File Storage        No                    No                       Yes, SMB 3.0
Guest Fibre Channel        No                    No                       Yes
Virtual Disk Format        VHD up to 2 TB        VHD up to 2 TB           VHD up to 2 TB
                                                                          VHDX up to 64 TB
VM Guest Clustering        Yes, via iSCSI        Yes, via iSCSI           Yes, via iSCSI or FC
Native 4K Disk Support     No                    No                       Yes
Live VHD Merge             No, offline           No, offline              Yes
Live New Parent            No                    No                       Yes
Secure Offloaded Data
Transfer (ODX)             No                    No                       Yes
Storage Documentation Publicly Available
SMB 3.0 Protocol: http://msdn.microsoft.com/en-us/library/cc246482(v=prot.20).aspx
Offloaded Data Transfer: http://msdn.microsoft.com/en-us/library/windows/desktop/hh848056(v=vs.85).aspx
VHDX Format: http://www.microsoft.com/en-us/download/details.aspx?id=34750
Windows Server 2012 & SDN
NIC Teaming
Multiple modes: switch dependent and switch independent
Hashing modes: port and 4-tuple
Active/active and active/standby
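The 4-tuple hashing mode above can be sketched as: hash the flow's (src IP, dst IP, src port, dst port) and use the hash to pick a team member, so every packet of a given flow stays on one NIC (illustrative sketch; the CRC32 hash and NIC names are invented, not the teaming driver's actual algorithm):

```python
# Hedged sketch of 4-tuple hashing for NIC teaming: a flow's
# (src IP, dst IP, src port, dst port) maps to one team member,
# keeping each flow on one NIC while spreading flows across the team.
import zlib

def pick_nic(flow, team):
    """Map a 4-tuple flow to a team member; same flow -> same NIC."""
    key = "|".join(map(str, flow)).encode()
    return team[zlib.crc32(key) % len(team)]

team = ["NIC1", "NIC2", "NIC3"]
flow = ("10.1.0.5", "10.1.0.9", 49152, 443)
assert pick_nic(flow, team) == pick_nic(flow, team)  # stable per flow
print(pick_nic(flow, team))
```

Port hashing is the same idea with the Hyper-V switch port as the key instead of the 4-tuple.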
Multi-Tenant Network Requirements
Tenant wants to easily move VMs to/from the cloud
Hoster wants to place VMs anywhere in the data center
Both want: Easy Onboarding, Flexibility & Isolation
Cloud Data Center
Woodgrove Bank
Blue 10.1.0.0/16
Contoso Bank
Red 10.1.0.0/16
Scale beyond VLANs with Hyper-V network virtualization
How network virtualization works
• Two IP addresses for each virtual
machine
• Generic Routing Encapsulation (GRE)
• IP address rewrite
• Policy management server
Problems solved
• Removes VLAN constraints
• Eliminates hierarchical IP address
assignment for virtual machines
Scale beyond VLANs with Hyper-V network virtualization
How GRE works
• Defined by RFC 2784 and RFC 2890
• One customer address per virtual machine
• One provider address per host
• Tenant network ID
• MAC header
Benefits
• Lower burden on switches
• Allow traffic analysis, metering, and
control
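The addressing scheme above (one customer address per VM, one provider address per host, a tenant network ID carried in the GRE key) can be modelled as a lookup plus wrap. This is a toy model: the policy table and provider addresses are invented, reusing the Blue/Red 10.1.0.0/16 example from the earlier slide:

```python
# Toy model of Hyper-V network virtualization addressing: each VM keeps its
# customer address (CA); the host wraps packets in a provider-address (PA)
# header carrying a tenant network ID, so two tenants can reuse 10.1.0.0/16
# without VLANs.

policy = {
    # (tenant_id, customer_address) -> provider_address of the hosting server
    ("Blue", "10.1.0.5"): "192.168.10.11",
    ("Red",  "10.1.0.5"): "192.168.10.12",  # same CA, different tenant
}

def encapsulate(tenant_id, src_ca, dst_ca, payload):
    dst_pa = policy[(tenant_id, dst_ca)]
    # Outer PA header + GRE key (tenant ID) + inner CA packet
    return {"outer_dst": dst_pa, "gre_key": tenant_id,
            "inner": (src_ca, dst_ca, payload)}

pkt_blue = encapsulate("Blue", "10.1.0.4", "10.1.0.5", b"hello")
pkt_red = encapsulate("Red", "10.1.0.4", "10.1.0.5", b"hello")
print(pkt_blue["outer_dst"], pkt_red["outer_dst"])  # different hosts
```

The policy management server's job is keeping that (tenant, CA) → PA table consistent across every host as VMs move.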
Hyper-V Extensible Switch
[Diagram: the Extensible Switch in the root partition connects the physical NIC, the host NIC, and the VM NICs of VM1 and VM2 through extension protocol and miniport layers]
Capture extensions can inspect traffic and generate new traffic for reporting purposes
Capture extensions do not modify existing Extensible Switch traffic
Example: sFlow by InMon
Windows Filtering Platform (WFP) extensions can inspect, drop, modify, and insert packets using WFP APIs
Windows antivirus and firewall software uses WFP for traffic filtering
Example: Virtual Firewall by 5nine Software
Forwarding extensions direct traffic, defining the destination(s) of each packet
Forwarding extensions can capture and filter traffic
Examples:
– Cisco Nexus 1000V and UCS
– NEC ProgrammableFlow’s vPFS OpenFlow
[Diagram: the extension stack: capture extensions (NDIS), the Windows Filtering Platform (BFE service, firewall callout, filtering engine), and forwarding extensions (NDIS)]
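The three extension classes can be modelled as stages of a pipeline: a capture extension inspects, a WFP-style filter may drop, and a forwarding extension picks the destination port. This is a conceptual sketch only, not the NDIS extension API; all names and packet fields are invented:

```python
# Conceptual pipeline: packets pass capture, filter (WFP-style), then
# forwarding extensions, in that order.

def capture(pkt, log):
    log.append(pkt["id"])       # inspect only; never modifies traffic
    return pkt

def firewall(pkt, blocked):
    return None if pkt["dst_port"] in blocked else pkt  # may drop

def forward(pkt, table):
    return table.get(pkt["dst_mac"], "flood")  # picks the destination port(s)

def switch(pkt, log, blocked, table):
    pkt = capture(pkt, log)
    pkt = firewall(pkt, blocked)
    if pkt is None:
        return "dropped"
    return forward(pkt, table)

log, blocked, table = [], {23}, {"aa:bb": "vm2-port"}
print(switch({"id": 1, "dst_port": 443, "dst_mac": "aa:bb"}, log, blocked, table))  # vm2-port
print(switch({"id": 2, "dst_port": 23, "dst_mac": "aa:bb"}, log, blocked, table))   # dropped
```

Note the ordering matches the diagram: the capture extension sees even the packet the firewall later drops, which is what makes capture extensions useful for monitoring.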
Windows Server 2012 Networking: It’s All There
                          Windows Server 2008   Windows Server 2008 R2   Windows Server 2012
NIC Teaming               Yes, via partners     Yes, via partners        Windows NIC Teaming in box
VLAN Tagging              Yes                   Yes                      Yes
MAC Spoofing Protection   No                    Yes, with R2 SP1         Yes
ARP Spoofing Protection   No                    Yes, with R2 SP1         Yes
SR-IOV Networking         No                    No                       Yes
Network QoS               No                    No                       Yes
Network Metering          No                    No                       Yes
Network Monitor Modes     No                    No                       Yes
IPsec Task Offload        No                    No                       Yes
VM Trunk Mode             No                    No                       Yes
Highly scalable infrastructure for the private cloud
Increased scale out and scale up:
8x scale over Windows Server 2008 R2
Scale out to 64 nodes
Scale up to 8,000 VMs per cluster
Up to 1,024 VMs per node
Robust management tools
[Diagram: scaling up within nodes and scaling out across a cluster]
Resource Placement in your Cloud
Virtual Machine Priority:
Start the most important VMs first
Ensure the most important VMs are running: preemption shuts down low-priority VMs to free up resources so higher-priority VMs can start
Enhanced Failover Placement:
Each VM is placed on the node with the best available memory resources
Memory requirements are evaluated on a per-VM basis; Non-Uniform Memory Access (NUMA) aware
Priority levels: High, Medium, Low
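The two placement rules above, picking the node with the best available memory and letting High-priority VMs preempt Low-priority ones, can be sketched as follows (hypothetical code; node names, memory sizes, and the PRIORITY map are invented):

```python
# Sketch of priority-aware failover placement: start VMs in priority order
# on the node with the most free memory; preempt Low-priority VMs when a
# High-priority VM cannot otherwise fit.
PRIORITY = {"High": 2, "Medium": 1, "Low": 0}

def place(vms, nodes, placement=None):
    """vms: [(name, priority, mem_gb)], nodes: {node: free_mem_gb}."""
    placement = {} if placement is None else placement
    for name, prio, mem in sorted(vms, key=lambda v: -PRIORITY[v[1]]):
        node = max(nodes, key=nodes.get)          # best available memory
        if nodes[node] < mem and prio == "High":
            # Preempt Low-priority VMs on that node to free resources.
            for victim, (vnode, vprio, vmem) in list(placement.items()):
                if vnode == node and vprio == "Low":
                    nodes[node] += vmem
                    del placement[victim]
        if nodes[node] >= mem:
            placement[name] = (node, prio, mem)
            nodes[node] -= mem
    return placement

nodes = {"N1": 8}
running = place([("scratch", "Low", 6)], nodes)        # Low VM fills N1
running = place([("sql", "High", 6)], nodes, running)  # High VM preempts it
print(sorted(running))  # ['sql']
```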
Node Maintenance Mode:
Simple single-click operation to drain all roles off a node
Generic in-box infrastructure previously only available through SCVMM
Simplifies maintenance and patching of cluster nodes
Scriptable with PowerShell: Suspend-ClusterNode -Drain
Supports all cluster roles and is intelligent about the type of move supported:
Leverages live migration for VMs
VMs can be configured to use Quick or Live Migration based on priority
Configured via the VM resource type private property NodeEvacuationMoveTypeThreshold
Traditional move group for workloads like SQL Server or File Server
Draining a node:
• Node is paused, preventing new groups from moving to that node
• All groups are issued a move
• VMs are queued up and live migrated off based on priority
Resuming a node:
• Resume-ClusterNode –Failback invokes failback policies to return groups to the node when it is brought out of Maintenance Mode
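The drain sequence can be sketched as: pause the node, then live migrate its VMs off highest-priority-first (toy model; the state dictionary, node names, and VM names are invented):

```python
# Sketch of draining a node: pause it so nothing new lands there, then
# migrate its VMs to the remaining active nodes in priority order.
def drain(node, cluster_state, paused):
    paused.add(node)                      # no new groups placed here
    vms = cluster_state.pop(node, [])
    rank = {"High": 0, "Medium": 1, "Low": 2}
    moved = sorted(vms, key=lambda vm: rank[vm[1]])   # High first
    targets = [n for n in cluster_state if n not in paused]
    for i, vm in enumerate(moved):
        cluster_state[targets[i % len(targets)]].append(vm)
    return [vm[0] for vm in moved]

state = {"node1": [("web", "Low"), ("sql", "High"), ("app", "Medium")],
         "node2": []}
order = drain("node1", state, set())
print(order)  # ['sql', 'app', 'web']
```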
Cluster-Aware Updating
Simple automated updating of clusters
Coordinator updates nodes in the cluster:
Coordinates with the Windows Update Agent (WUA)
Updates in a rolling fashion, one node at a time
Serially steps through all nodes
The coordinator can itself be made clustered, for Self-Updating mode
Workflow:
1. Scan nodes to identify the updates needed
2. Identify the node with the fewest workloads
3. Drain that node
4. Call WUA to patch (which leverages WSUS or Windows Update)
5. Verify success
6. Repeat steps 2–5 on the next node
7. Repeat on the remaining nodes
[Diagram: the admin initiates Cluster-Aware Updating; the Update Coordinator then rolls the updates through the cluster nodes]
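Steps 2–6 of the workflow amount to a loop that always updates the least-loaded remaining node (sketch under the assumption that draining, patching via WUA, and verifying are abstracted into one callable; all names are invented):

```python
# Sketch of the Cluster-Aware Updating loop: one node at a time, always
# picking the remaining node that hosts the fewest workloads.
def cau_run(workloads, patch_node):
    """workloads: {node: vm_count}. patch_node: callable(node) -> bool."""
    order = []
    remaining = dict(workloads)
    while remaining:
        node = min(remaining, key=remaining.get)  # fewest workloads first
        # drain, patch via WUA, verify -- all modelled by patch_node()
        if not patch_node(node):
            raise RuntimeError(f"update failed on {node}")
        order.append(node)
        del remaining[node]
    return order

order = cau_run({"n1": 30, "n2": 5, "n3": 12}, lambda n: True)
print(order)  # ['n2', 'n3', 'n1']
```

Because only one node is ever out at a time, the cluster keeps quorum and the workloads stay up throughout the run.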
So You’re a Building a Cloud…
I have good processes in place, but
what other safeguards can I use to
protect my data?
HIPAA Breach: Stolen Hard Drives
March 2012: a large medical provider in Tennessee pays $1.5 million to the US Dept. of Health & Human Services
Theft of 57 hard drives that contained electronic protected health information (ePHI) for over 1 million individuals
Secured by:
Security patrols
Biometric scanner
Keycard scanner
Magnetic locks
Keyed locks
“71% of health care organizations have suffered at least one data breach within the last year”
-Study by Veriphyr
Critical Safeguard for the Cloud
BitLocker encrypted cluster disks
Support for traditional failover disks
Support for Cluster Shared Volumes
Cluster Name Object (CNO) identity used
to lock and unlock Clustered volumes
Enables physical security for deployments
outside of secure datacenters
Branch office deployments
Volume level encryption for compliance
requirements
Negligible (<1%) performance impact
Customer Thoughts on VM Mobility
Don’t provide new features
that preclude Live
Migration.
I want to be able to securely
move any part of a VM
anywhere at anytime. No
Limits.
No Downtime Servicing
SAN Upgrades/Migrations
When VMs migrate, move
the historical data with the
VM
Fully Leverage hardware to
speed migrations
Virtual Machine Mobility
Live Migration with High Availability
SMB Live Migration
Live Storage Migration
Wouldn’t it be great if you could Live Migrate a VM with nothing but an Ethernet
cable?
We think so too…
Introducing: Shared Nothing Live Migration
Shared Nothing Live Migration
Just a network
No clusters, no SANs
Enables new mobility options
Live Migrate into a cluster
Live Migrate out of a cluster
Live Migrate between clusters
VM Mobility
Live Migration with High Availability
Live Migrate among servers in a failover cluster
SMB Live Migration
Live Migrate VMs among servers with SMB storage
Live Storage Migration
Live Migrate VM storage from one volume to another without
downtime
Shared Nothing Live Migration
Live migrate VMs among servers with nothing but an Ethernet connection
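The deck lists the mobility options without their mechanics. The standard technique behind memory live migration is iterative pre-copy: copy all memory once, then repeatedly re-copy the pages dirtied during the previous pass, until the remaining dirty set is small enough to move during a brief switchover. A rough model (the numbers and the fixed dirty-rate assumption are invented, not Hyper-V internals):

```python
# General iterative pre-copy idea: each pass re-copies the pages the VM
# dirtied while the previous pass ran; the set shrinks until a short
# blackout can carry the remainder.
def live_migrate(total_pages, dirty_rate=0.1, threshold=16, max_passes=10):
    to_copy = total_pages
    passes = 0
    while to_copy > threshold and passes < max_passes:
        passes += 1
        to_copy = int(to_copy * dirty_rate)  # pages dirtied during this pass
    return passes, to_copy  # the final to_copy moves during the switchover

passes, final = live_migrate(total_pages=100_000)
print(passes, final)  # converges in a handful of passes
```

Shared Nothing Live Migration adds a storage copy before the memory passes; the same converge-then-switch idea applies to the VHD.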
Hyper-V Replica
Disaster Recovery Scenarios:
Planned, Unplanned and Test Failover
Pre-configuration of IP settings for the primary and remote locations
Key Features:
RPO/RTO in minutes
Seamless integration with Hyper-V and Clustering
Automatically handles all VM mobility scenarios
(e.g. Live migration)
Supports heterogeneous storage between the primary and recovery sites
Integrates with the Volume Shadow Copy Service (VSS)
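The "RPO in minutes" bullet reflects log shipping: changed blocks accumulate in a change log that is sent to the replica on a fixed interval (5 minutes in Windows Server 2012). A toy model (the class and block numbers are invented; real replication tracks VHD writes in a log file):

```python
# Rough model of the Hyper-V Replica cycle: writes accumulate in a change
# log; shipping the log each interval brings the replica up to date, so the
# RPO is bounded by the interval.
class ReplicatedVhd:
    def __init__(self):
        self.primary, self.replica, self.log = {}, {}, {}

    def write(self, block, data):
        self.primary[block] = data
        self.log[block] = data        # track changed blocks only

    def ship_log(self):               # runs once per replication interval
        self.replica.update(self.log)
        shipped = len(self.log)
        self.log.clear()
        return shipped

vhd = ReplicatedVhd()
vhd.write(1, b"a"); vhd.write(2, b"b"); vhd.write(1, b"c")  # block 1 rewritten
shipped = vhd.ship_log()
print(shipped, vhd.replica == vhd.primary)  # 2 True
```

Note that rewriting block 1 twice ships only its latest contents: the log coalesces writes, which is why replication traffic stays proportional to the changed working set, not to total write volume.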
Hyper-V Replica: Complements Array Based Replication
Hyper-V Replica (Microsoft)
  Cost: flexible storage options available; unlimited VM replication included
  Management: VM granularity; open APIs provide extensibility and interoperability and prevent vendor lock-in
  Performance: 5-minute RPOs; application-level consistency; file-level consistency

Storage-Based Replication (NetApp, HP, Fujitsu, IBM, Hitachi, FalconStor, 3Par, EMC, LSI, Compellent, EqualLogic and more…)
  Cost: high-end replicating storage; additional replication software required
  Management: LUN-to-VM layout; coordination with the storage team
  Performance: synchronous replication; high data volumes
Key Hyper-V Replica Takeaways
Easy to set up: via wizard, or via PowerShell
Works with your current hardware
All you need is two connected servers running
Windows Server 2012
Transparent to Guest
Microsoft Committed to Interoperability
July 2009: Microsoft contributes Linux drivers under GPL v2
March 2012: “Microsoft appeared in the top 20 contributors for a kernel release”
Q2 2012: Hyper-V drivers in the mainline Linux kernel (storage, networking, VMBus, input, utilities, etc.)
SUSE, CentOS, Red Hat, Ubuntu and others include Hyper-V drivers in the distro
Linux on Hyper-V
Linux workloads can be consolidated into VMs running on a Microsoft hypervisor at no cost
Hyper-V hosted Linux VMs can leverage high-end enterprise features:
High Availability
Live Migration, Shared Nothing Live Migration
VM Replication with Hyper-V Replica
Linux VMs can be managed centrally from System Center
VM scale improvements (CPU, memory, disk, etc.)
Microsoft LIS Distribution Approach
[Diagram: Microsoft developers contribute LIS for Hyper-V to the mainline Linux kernel via the Linux community; distro vendors ship Linux distros with LIS to customer servers running Hyper-V, or customers install LIS from the Microsoft download center]
Complete List of Supported Guests
Windows: XP SP3, Server 2003 SP2 and later…
Linux guests: RHEL, CentOS, SUSE, openSUSE, Ubuntu
Link: http://technet.microsoft.com/library/hh831531.aspx
Microsoft Workloads Run Best on Hyper-V
The most tested platform: on an average day, the Server division (alone) creates and then tests over 25,000 virtual machines
Common Engineering Criteria (CEC): per CEC, Microsoft workloads test, validate and provide guidance for running atop Hyper-V
Integration and partnership in Windows Server 2012:
Active Directory/Hyper-V delivering the first supported and safe way to deploy cloned DCs in VMs
BitLocker/Hyper-V/Cluster delivering encrypted shared storage
Microsoft Workloads Run Best on Hyper-V
Exchange recommends Hyper-V for virtualization
SQL Server recommends Hyper-V for virtualization
Dynamics recommends Hyper-V for virtualization
SharePoint recommends Hyper-V for virtualization
Windows Azure uses Hyper-V
The best supported platform for Microsoft workloads: Microsoft can best determine how to troubleshoot a Microsoft workload running on a Microsoft OS in a VM on Hyper-V.
Windows Server 2012 for Cloud
Most Feature Rich, All Server Editions include:
1. Hyper-V Extensible Virtual Switch
2. Hyper-V Replica
3. Live Storage Migration
4. QoS
5. Shared Nothing Live Migration
6. SR-IOV (with Live Migration)
More…
7. Storage Virtualization
8. Hyper-V Offloaded Data Transfer
9. GPU Accelerated VM Video
10. …and Network Virtualization
Detailed VMware vs. Hyper-V Analysis?
Windows Server 2012 Hyper-V over VMware vSphere 5.1 here: http://t.co/R25yZMOX
© 2013 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational
purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft,
and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
Thank you for coming! Feedback can be given via mobile or laptop through the techdays.fi seminar schedule.
#td2013fi