Describe the basics of the Hyper-V over SMB scenario, including the main reasons to implement it. Enumerate the most common performance bottlenecks.


Page 1

Page 2

File Storage Strategies for Private Cloud
Jose Barreto, Principal Program Manager
File Server and Clustering team, Microsoft

WS-B309 Blog: http://smb3.info

Page 3

Session Objectives

• Describe the basics of the Hyper-V over SMB scenario, including the main reasons to implement it.

• Enumerate the most common performance bottlenecks in Hyper-V over SMB configurations.

• Outline a few Hyper-V over SMB configurations that can provide continuous availability, including details on networking and storage.

Page 4

Agenda

• Hyper-V over SMB - Overview

• Basic Configurations

• Performance Considerations

• Sample Configurations

Page 5

Overview

Page 6


Hyper-V over SMB

What is it?
• Store Hyper-V files in shares over the SMB 3.0 protocol (including VM configuration, VHD files, snapshots)
• Works with both standalone and clustered servers (file storage used as cluster shared storage)

Highlights
• Increases flexibility
• Eases provisioning, management and migration
• Leverages converged network
• Reduces CapEx and OpEx

Supporting Features
• SMB Transparent Failover - Continuous availability
• SMB Scale-Out - Active/Active file server clusters
• SMB Direct (SMB over RDMA) - Low latency, low CPU use
• SMB Multichannel - Network throughput and failover
• SMB Encryption - Security
• VSS for SMB File Shares - Backup and restore
• SMB PowerShell - Manageability

[Diagram: a Hyper-V cluster running Hyper-V, SQL Server, IIS and VDI desktop workloads, storing its files on a file server cluster backed by shared storage]

Page 7


SMB Transparent Failover

Failover transparent to the server application
• Zero downtime - small IO delay during failover

Supports planned and unplanned failovers
• Hardware/Software Maintenance
• Hardware/Software Failures
• Load Rebalancing

Resilient for both file and directory operations

Requires:
• File Servers configured as a Windows Failover Cluster
• Windows Server 2012 on both the servers running the application and the file server cluster nodes
• Shares enabled for "continuous availability" (default configuration for clustered file shares)
• Works for both classic file server clusters (cluster disks) and scale-out file server clusters (CSV)

[Diagram: (1) normal operation against \\fs\share on File Server Node A; (2) failover of the share - connections and handles are lost and IO stalls temporarily; (3) connections and handles are auto-recovered on File Server Node B and application IO continues with no errors]
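A minimal PowerShell sketch (path and account names are illustrative) for creating a continuously available share on a clustered file server and verifying the setting:

New-SmbShare -Name VMShare1 -Path C:\ClusterStorage\Volume1\VMs -FullAccess Dom\HVAdmin -ContinuouslyAvailable $true
Get-SmbShare -Name VMShare1 | Select-Object Name, ContinuouslyAvailable

Clustered file shares are continuously available by default, so the second line is mostly useful as a quick check.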

Page 8

SMB Scale-Out

Targeted for server app storage
• Example: Hyper-V and SQL Server
• Increase available bandwidth by adding nodes
• Leverages Cluster Shared Volumes (CSV)

Key capabilities:
• Active/Active file shares
• Fault tolerance with zero downtime
• Fast failure recovery
• CHKDSK with zero downtime
• Support for app consistent snapshots
• Support for RDMA enabled networks
• Optimization for server apps
• Simple management

[Diagram: a Hyper-V cluster (up to 64 nodes) connected over the datacenter network (Ethernet, InfiniBand or a combination) to a file server cluster (up to 8 nodes) presented as a single logical file server (\\FS\Share) with a single file system namespace over Cluster Shared Volumes]
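As a sketch (role and share names are illustrative), a Scale-Out File Server role is added to an existing failover cluster and a share is then created on a CSV path:

Add-ClusterScaleOutFileServerRole -Name FS
New-SmbShare -Name VMShare -Path C:\ClusterStorage\Volume1\VMs -FullAccess Dom\HV1$, Dom\HV2$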

Page 9

[Diagram: SMB Client and SMB Server stacks (application, user and kernel mode, NTFS, SCSI, disk) exchanging data through R-NICs over a network with RDMA support]

SMB Direct (SMB over RDMA)

Advantages
• Scalable, fast and efficient storage access
• High throughput with low latency
• Minimal CPU utilization for I/O processing
• Load balancing, automatic failover and bandwidth aggregation via SMB Multichannel

Scenarios
• High performance remote file access for application servers like Hyper-V, SQL Server, IIS and HPC
• Used by File Server and Cluster Shared Volumes (CSV) for storage communications within a cluster

Required hardware
• RDMA-capable network interface (R-NIC)
• Three types: iWARP, RoCE and InfiniBand
• RDMA NICs should not be teamed (use SMB Multichannel instead)
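A quick way to confirm that RDMA-capable interfaces are detected on a host (a sketch; the RdmaCapable property name is quoted from memory and may display as "RDMA Capable"):

Get-NetAdapterRdma
Get-SmbClientNetworkInterface | Where-Object { $_.RdmaCapable }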

Page 10

SMB Multichannel

Full Throughput
• Bandwidth aggregation with multiple NICs
• Multiple CPU cores engaged when the NIC offers Receive Side Scaling (RSS)

Automatic Failover
• SMB Multichannel implements end-to-end failure detection
• Leverages NIC teaming (LBFO) if present, but does not require it

Automatic Configuration
• SMB detects and uses multiple paths

Sample Configurations
[Diagram: SMB Client/SMB Server pairs illustrating a single 10GbE RSS-capable NIC, multiple 1GbE NICs, multiple 10GbE NICs in an LBFO team, and multiple RDMA NICs (10GbE or InfiniBand), each path going through the corresponding 1GbE, 10GbE or 10GbE/IB switches. Vertical lines are logical channels, not cables.]
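To see the channels SMB Multichannel has established, or to pin it to a specific interface, something like the following can be used (a sketch; the server name and interface alias are illustrative):

Get-SmbMultichannelConnection
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
New-SmbMultichannelConstraint -ServerName FS1 -InterfaceAlias "RDMA1"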

Page 11

SMB Encryption

End-to-end encryption of SMB data in flight
• Protects data from eavesdropping or snooping attacks on untrusted networks

Zero new deployment costs
• No need for IPSec, specialized hardware, or WAN accelerators

Configured per share or for the entire server

Can be turned on for a variety of scenarios where data traverses untrusted networks
• Application workload over unsecured networks
• Branch Offices over WAN networks

[Diagram: Client and Server exchanging SMB traffic protected by SMB Encryption]
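An illustrative sketch of turning encryption on per share or for the whole server (the share name is an example):

Set-SmbShare -Name VMShare -EncryptData $true
Set-SmbServerConfiguration -EncryptData $true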

Page 12

VSS for SMB File Shares

Application consistent shadow copies for server application data stored on Windows Server 2012 file shares

Backup and restore scenarios

Full integration with VSS infrastructure

[Diagram: Application Server, File Server and Backup Server coordinating through the Volume Shadow Copy Service, the File Share Shadow Copy Agent and the File Share Shadow Copy Provider (steps A-G): the backup agent requests a shadow copy, the request is relayed to the file server, a shadow copy of \\fs\foo is created as \\fs\foo@t1, and the backup reads from the shadow copy share]
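Assuming the standard role service name (quoted from memory, so treat this as a sketch), the agent that enables remote shadow copies is installed on the file server with:

Get-WindowsFeature FS-VSS-Agent
Install-WindowsFeature FS-VSS-Agent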

Page 13

Basic Configurations

Page 14

File Server Configurations

Single-node File Server
• Lowest cost for shared storage
• Shares not continuously available

Dual-node File Server
• Low cost for continuously available shared storage
• Limited scalability (up to a few hundred disks)

Multi-node File Server
• Highest scalability (up to thousands of disks)
• Higher cost, but still lower than connecting all Hyper-V hosts with FC

[Diagram: (A) Hyper-V hosts (Parent 1..N, each with child VM configuration and VHD files) using a single-node file server with local disks; (B) the same hosts using a dual-node file server cluster backed by shared SAS storage; (C) the same hosts using a multi-node file server cluster (FS 1-4) backed by a Fibre Channel storage array]

Page 15

Network Configurations

[Diagram: four sample options (A-D) combining 1GbE networks, mixed 1GbE/10GbE, and 10GbE or InfiniBand networks between clients, Hyper-V hosts (Hyper-V 1 and 2) and file servers (File Server 1 and 2)]

Page 16

Permissions for Hyper-V over SMB

• Full permissions on the NTFS folder and the SMB share for:
  • the Hyper-V Administrator
  • the Computer Accounts of the Hyper-V hosts
  • if Hyper-V is clustered, the Hyper-V Cluster Account (CNO)

1. Create Folder
MD F:\VMFolder

2. Create Share
New-SmbShare -Name VMShare -Path F:\VMFolder -FullAccess Dom\HAdmin, Dom\HV1$, Dom\HV2$, Dom\HVC$

3. Apply Share permissions to the NTFS Folder permissions
(Get-SmbShare -Name VMShare).PresetPathAcl | Set-Acl
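To double-check the result, the share ACL and the folder ACL can be listed (illustrative, using the names above):

Get-SmbShareAccess -Name VMShare
(Get-Acl F:\VMFolder).Access | Format-Table IdentityReference, FileSystemRights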

Page 17

How to use it: simply type a UNC path

New-VHD -Path \\FS1\VMShare\VM1.VHDX -Dynamic -SizeBytes 100GB
New-VM -Name VM1 -Path \\FS1\VMShare -VHDPath \\FS1\VMShare\VM1.VHDX -MemoryStartupBytes 4GB
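Existing VMs can also be moved onto the share (a sketch; VM and share names are illustrative):

Move-VMStorage -VMName VM1 -DestinationStoragePath \\FS1\VMShare
Get-VMHardDiskDrive -VMName VM1 | Select-Object Path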

Page 18

Important notes on Hyper-V over SMB

• Hyper-V supports SMB version 3.0 only
  o The Hyper-V Best Practices Analyzer (BPA) will check the version of SMB
  o Third-party SMB 3.0 implementations are available from storage partners like EMC and NetApp

• Continuously Available shares are recommended

• Active Directory is required
  o Computer accounts, which are required for configuring proper permissions, only exist in a domain

• Loopback configurations are not supported
  o File Server and Hyper-V must be separate servers
  o If using Failover Clusters, File Server and Hyper-V must be on separate clusters

• Virtual Machine Manager 2012 SP1 supports Hyper-V over SMB
  o Available since January 2013

• Remote Management
  o Use PowerShell
  o Use Server Manager (for file shares)
  o Use Remote Desktop (RDP)
  o Use VMM 2012 SP1
  o If using Hyper-V Manager remotely, Constrained Delegation is required
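One way to confirm from the Hyper-V host that the connection to the file server negotiated SMB 3.0 (a sketch):

Get-SmbConnection | Select-Object ServerName, ShareName, Dialect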

Page 19

Performance

Page 20

Typical Configuration for Hyper-V over SMB

[Diagram: clients reach the Hyper-V cluster through a router; each Hyper-V host runs VMs (vNICs and vDisks) behind a virtual switch and NIC teaming, and its SMB 3.0 client talks over R-NICs and dedicated switches to a two-node file server cluster; each file server runs the SMB 3.0 server and Storage Spaces and connects through SAS HBAs to shared SAS JBODs (SAS modules and disks); a separate management network carries DHCP, DC/DNS, file server and management traffic]

Page 21

Performance considerations

Every layer of the end-to-end path is a potential bottleneck. Factors to consider include:
• VMs: VMs per host, virtual processors per VM, RAM per VM
• Hyper-V hosts: number of hosts, cores per host, RAM per host
• Hyper-V host networking: R-NICs per host, NICs per host, speed of those NICs
• File servers: R-NICs per file server and their speed, SAS HBAs per file server, SAS speed
• Storage: SAS ports per module, SAS speed, disks per JBOD, disk speed
• Storage Spaces: number of spaces, columns per space, CSV cache configuration
• Clients: number of clients, speed of client NICs

[Diagram: the typical Hyper-V over SMB configuration annotated with these factors at each component]
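For example, the CSV cache called out above is configured through cluster properties. The property names below are the Windows Server 2012 ones (they changed in later releases) and the volume name is illustrative, so treat this as a sketch:

(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 12288
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1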

Page 22

Designing a solution

• The workload
  • 500 VDI VMs, 2GB RAM, 1 virtual processor each
  • ~50GB per VM, ~30 IOPS per VM

• Some building blocks
  • Disks: 900GB HDD @ 10Krpm, ~140 IOPS
  • JBOD: 24 or 60 disks per JBOD
  • Hyper-V host: 16 cores, 128GB RAM

• Working out the storage
  • IOPS: 30 * 500 / 140 = ~107 disks
  • Capacity: 50GB * 2 * 500 / 900GB = ~56 disks

• Rounding up
  • 107 disks for IOPS, which is twice the required capacity
  • 2 JBODs x 60 disks = 120 disks (some spares)

• Working out the Hyper-V hosts
  • 2GB per VM / 128GB per host ~ 50 VMs per host (some RAM left for the host)
  • 50 VMs * 1 virtual processor / 16 cores ~ 3:1 ratio
  • 500 VMs / 50 ~ 10 hosts (+1 as a spare)

• Networking
  • 500 VMs * 30 IOPS * 64KB = 937 MB/sec required
  • Single 10GbE = 1,100 MB/sec; 2 for fault tolerance
  • Single 4-lane 6Gbps SAS = 2,200 MB/sec; 2 for FT

• File Server
  • 500 * 25 = 12,500 IOPS; a single file server is enough; 2 for FT
  • RAM = 64GB, good-sized CSV cache (up to 20% of RAM)

• Now let's draw this out…
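The same arithmetic as a tiny PowerShell sketch (all inputs are the sample numbers above, not recommendations):

$vms = 500; $iopsPerVm = 30; $iopsPerDisk = 140
$gbPerVm = 50; $gbPerDisk = 900; $mirrorCopies = 2
$disksForIops     = [math]::Round($vms * $iopsPerVm / $iopsPerDisk)               # ~107 disks
$disksForCapacity = [math]::Round($gbPerVm * $mirrorCopies * $vms / $gbPerDisk)   # ~56 disks
$hyperVHosts      = [math]::Round($vms / 50) + 1                                  # 10 hosts + 1 spare
$requiredMBps     = $vms * $iopsPerVm * 64KB / 1MB                                # ~937 MB/sec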

Page 23

VDI workload (sample only, your requirements may vary)

• 500 VMs total, 2GB per VM, 50GB VHD per VM, 50 VMs per host
• 11 Hyper-V hosts, 16 cores/host, 128GB RAM/host
• 2 R-NICs @ 10Gbps and 2 NICs @ 10Gbps per Hyper-V host; 2 R-NICs @ 10Gbps per file server
• 2 SAS HBAs @ 6Gbps with 2 SAS ports/HBA per file server; 4 SAS ports @ 6Gbps per JBOD
• 60 disks/JBOD, 120 disks total, 900GB @ 10Krpm
• 8 mirrored spaces, 16 columns/space, 12GB CSV cache
• 500 clients, 1Gbps NICs
• Aggregate bandwidth: ~4.4 GB/sec over 2 x 10GbE x 2 between Hyper-V hosts and file servers; ~8.8 GB/sec over 2 x 6Gb SAS x4 x 2 between file servers and JBODs

[Diagram: the typical Hyper-V over SMB configuration annotated with these numbers]

Page 24

Speeds and Feeds - Maximum Theoretical Throughput

NIC Throughput
• 1Gb Ethernet: ~0.1 GB/sec
• 10Gb Ethernet: ~1.1 GB/sec
• 40Gb Ethernet: ~4.5 GB/sec
• 32Gb InfiniBand (QDR): ~3.8 GB/sec
• 56Gb InfiniBand (FDR): ~6.5 GB/sec

HBA Throughput
• 3Gb SAS x4: ~1.1 GB/sec
• 6Gb SAS x4: ~2.2 GB/sec
• 4Gb FC: ~0.4 GB/sec
• 8Gb FC: ~0.8 GB/sec
• 16Gb FC: ~1.5 GB/sec

Bus Slot Throughput
• PCIe Gen2 x4: ~1.7 GB/sec
• PCIe Gen2 x8: ~3.4 GB/sec
• PCIe Gen2 x16: ~6.8 GB/sec
• PCIe Gen3 x4: ~3.3 GB/sec
• PCIe Gen3 x8: ~6.7 GB/sec
• PCIe Gen3 x16: ~13.5 GB/sec

Memory Throughput
• DDR2-400 (PC2-3200): ~3.4 GB/sec
• DDR2-667 (PC2-5300): ~5.7 GB/sec
• DDR2-1066 (PC2-8500): ~9.1 GB/sec
• DDR3-800 (PC3-6400): ~6.8 GB/sec
• DDR3-1333 (PC3-10600): ~11.4 GB/sec
• DDR3-1600 (PC3-12800): ~13.7 GB/sec
• DDR3-2133 (PC3-17000): ~18.3 GB/sec

Intel QPI Throughput
• 4.8 GT/s: ~9.8 GB/sec
• 5.86 GT/s: ~12.0 GB/sec
• 6.4 GT/s: ~13.0 GB/sec
• 7.2 GT/s: ~14.7 GB/sec
• 8.0 GT/s: ~16.4 GB/sec

Only a few common configurations are listed. Numbers are rough approximations; actual throughput in real life will be lower than these theoretical maximums.
Numbers are for one-way traffic (double for full duplex) and for one interface/port only. Numbers use base 10 (1 GB/sec = 1,000,000,000 bytes per second).
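As a rough worked example of where these figures come from: 10Gb Ethernet is 10,000,000,000 bits/sec / 8 = 1.25 GB/sec of raw signaling, and after encoding and protocol overhead roughly ~1.1 GB/sec remains usable; the other entries follow the same pattern (lane count x per-lane rate x encoding efficiency, minus protocol overhead).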

Page 25

Potential Variations

• Regular NICs instead of RDMA NICs
• Fibre Channel or iSCSI instead of SAS
• Third-party SMB 3.0 NAS instead of a Windows File Server Cluster
• Traditional SAN instead of JBODs

[Diagram: the typical Hyper-V over SMB configuration with callouts showing where each variation applies]

Page 26

Demo

Hyper-V over SMB

Page 27

Demo: Summary

Workload                                      BW (GB/sec)   IOPS (IOs/sec)   %CPU (Privileged)
Physical host, 512KB IOs, 100% read, 2t, 12o  ~16.8         ~32K             ~16%
Physical host, 32KB IOs, 100% read, 8t, 4o    ~10.9         ~334K            ~52%
12 VMs, 4VP, 512KB IOs, 100% read, 2t, 16o    ~16.8         ~32K             ~12%
12 VMs, 4VP, 32KB IOs, 100% read, 4t, 32o     ~10.7         ~328K            ~62%

[Diagram: Hyper-V host running VM1-VM12, connected by three RDMA NIC pairs to a file server (SMB 3.0) running Storage Spaces over six SAS HBAs, each attached to a JBOD of eight SSDs]

Results from an EchoStreams FlacheSAN2 server with 2 Intel CPUs at 2.40 GHz and a Hyper-V server with 2 Intel CPUs at 2.70 GHz, both using three Mellanox ConnectX-3 network interfaces on PCIe Gen3 x8 slots.

Data goes to 6 LSI SAS adapters and 48 Intel SSDs attached directly to the EchoStreams server. The VM configuration in this demo could still be optimized further - work in progress.

Page 28

Sample Configurations

Page 29

Sample Configurations

Sharing a collection of sample configurations coming from internal testing, demo systems and conferences:
• All are using SMB, not all of them use Hyper-V
• Includes standalone, classic cluster and scale-out clusters
• Includes both SAS and Fibre Channel
• Includes both Storage Spaces and SAN
• Includes performance details when available

Page 30

Standalone, 1GbE, FC Array

• Random IO bandwidth actually below 1Gbps
• SMB only 3% slower than DAS (in the last 19 min)
• 1GbE nearly meets 2 x 4Gb FC; the bottleneck is the disk
• Details in Dan Lovinger's SDC 2012 presentation
• White paper coming soon

                 Client                               Server
CPU              2 sockets, 8 cores total, 2.26 GHz   2 sockets, 8 cores total, 2.26 GHz
Memory           24 GB RAM                            24 GB RAM
Network          1 x 1GbE NIC (onboard)               1 x 1GbE NIC (onboard)
Storage adapter  N/A                                  1 FC adapter, 2 x 4Gbps links
Disks            N/A                                  24 x 10Krpm HDD (20 used for data, 2 used for log)

Page 31

Dell Servers, Standalone, 10GbE, SAS, Storage Spaces

• 2 Dell R910 servers running Windows Server 2012, 10GbE Chelsio NICs, 6Gbps LSI SAS HBA
• 1 RAID Inc. JBOD with 60 x 600GB 15K SAS HDDs, configured as mirrored Storage Spaces
• 8 Hyper-V VMs running Windows Server 2012, 4 vCPUs, 12GB of RAM, SQL Server (OLTP workload)
• Remote (over SMB 3.0) compares well to local (~5% difference); the bottleneck is the HDDs
• Details in a white paper by ESG

VMs   Local IOPS   Remote IOPS   Remote/Local
1     900          850           94.4%
2     1,750        1,700         97.1%
4     3,500        3,350         95.7%
6     5,850        5,600         95.7%
8     7,000        6,850         97.9%

Page 32

Intel, Standalone, 56Gb IB, FusionIO

Results from two Intel Romley machines with 2 sockets each, 8 cores/socket.
Both client and server use a single port of a Mellanox network interface in a PCIe Gen3 x8 slot.
Data goes all the way to persistent storage, using 4 FusionIO ioDrive 2 cards.

Workload: 512KB IOs, 8 threads, 8 outstanding

Configuration                    BW (MB/sec)   IOPS (512KB IOs/sec)   %CPU (Privileged)
Non-RDMA (Ethernet, 10Gbps)      1,129         2,259                  ~9.8
RDMA (InfiniBand QDR, 32Gbps)    3,754         7,508                  ~3.5
RDMA (InfiniBand FDR, 54Gbps)    5,792         11,565                 ~4.8
Local                            5,808         11,616                 ~6.6

[Diagram: SMB Client running an IO micro-benchmark connected over SMB 3.0 + RDMA (InfiniBand FDR) to an SMB Server backed by four Fusion-IO cards]

Page 33

SuperMicro, Standalone, 2 x 56Gb IB, SAS, LSI RAID

Results from SuperMicro servers, each with 2 Intel E5-2680 CPUs at 2.70 GHz.
Both client and server use two Mellanox ConnectX-3 network interfaces on PCIe Gen3 x8 slots.
Data goes to 4 LSI 9285-8e RAID controllers and 4 LSI JBODs, each with 8 OCZ Talos 2 R SSDs.
Data in the table is based on a 60-second average; Performance Monitor data is an instant snapshot.

Workload: 256KB IOs, 2 threads, 16 outstanding

Configuration    BW (MB/sec)   IOPS (IOs/sec)   %CPU (Privileged)   Latency (ms)
1 - Local        10,090        38,492           ~2.5%               ~3 ms
2 - Remote       9,852         37,584           ~5.1%               ~3 ms
3 - Remote VM    10,367        39,548           ~4.6%               ~3 ms

[Diagram: a VM running SQLIO on a Hyper-V host (SMB 3.0) connected over two RDMA NIC pairs to a file server (SMB 3.0) with four SAS RAID controllers, each attached to a JBOD of eight SSDs]

Shown at TechEd 2012

Page 34

EchoStreams, Standalone, 3 x 56Gb IB, Storage Spaces

Results from an EchoStreams FlacheSAN2 server with 2 Intel E5-2620 CPUs at 2.00 GHz.
Both client and server use three Mellanox ConnectX-3 network interfaces on PCIe Gen3 x8 slots.
Data goes to 6 LSI SAS adapters and 48 Intel SSDs attached directly to the EchoStreams server.
Data in the table is based on a 60-second average; Performance Monitor data is an instant snapshot.

Workload                        BW (MB/sec)   IOPS (IOs/sec)   %CPU (Privileged)   Latency
512KB IOs, 100% read, 2t, 8o    16,778        32,002           ~11%                ~2 ms
8KB IOs, 100% read, 16t, 2o     4,027         491,665          ~65%                < 1 ms

[Diagram: File Client (SMB 3.0) running SQLIO connected over three RDMA NIC pairs to a File Server (SMB 3.0) using Storage Spaces over six SAS HBAs, each attached to a JBOD of eight SSDs]

Shown via TechNet Radio. White paper coming.

Page 35

X-IO, Standalone, FC SAN, 3 x 56Gb IB

Full-sized server rack with:
• 10 X-IO ISE-2 storage units (FC attached)
• 200 x 10Krpm 2.5" SAS HDDs (20 per unit)
• 2 HP DL380 G8 + 2 HP DL360 G7 (3 servers, 1 client)
• 2 QLogic 5800 8Gb FC switches
• 6 QLogic QLA2564 (quad-port) HBAs (2 per server)
• 6 Mellanox ConnectX-3 adapters (56Gbps)

Performance highlights:
• Over 15 GBytes/sec throughput
• Over 120,000 I/Os per second
• Details in this partner team blog post

Page 36

Wistron, Cluster-in-a-box, 10GbE, SAS, Storage Spaces

• 2 file server cluster nodes in a single enclosure
• LSI 6Gbps SAS controllers, one per node
• Internal shared SAS JBOD, 24 x SAS HDDs
• Storage Spaces
• Dual 10GbE ports, plus dual 1GbE ports
• Shown in a TechEd 2012 demo

Page 37

HP StoreEasy 5000, Cluster-in-a-box, 10GbE, SAS

• 2 cluster nodes in a single enclosure (right side)
  • Intel Xeon at 2.4 GHz, 4 cores (per node)
  • 24GB RAM, upgradable to 96GB (per node)
• Internal shared SAS JBOD tray (left side)
  • 16 x 3.5" SAS drives or 36 x 2.5" SAS drives
  • External SAS JBOD option for additional drives
• HP controller with Failover Cluster support
• Two 10GbE ports, plus 1GbE ports (per blade)
• HP iLO (Integrated Lights-Out) for management

Page 38

Quanta, Cluster-in-a-box, 56Gb IB, SAS, LSI HA-DAS

• Quanta cluster-in-a-box, 2 Intel Romley CPUs, 16GB RAM
• LSI HA-DAS MegaRAID and SAS controllers on PCIe Gen2 x8 slots
• Mellanox ConnectX-3 network interfaces on PCIe Gen3 x8 slots
• 24 OCZ Talos 2 SSDs in a Quanta JBOD
• Around 2.5 GBytes/sec throughput
• Shown at a TechEd 2012 demo

Page 39

Violin Memory Prototype, Cluster-in-a-box, 56Gb IB

Configuration
• Cluster-in-a-box configuration
• Fits in only 3 rack units (3U)
• Up to 32TB of raw flash capacity
• 2 x 56Gbps InfiniBand network ports

Results using real SQL Server workloads
• Over 4.5 GBytes/sec with 256KB IOs (loading a database into memory)
• Over 200,000 IOPS with 8KB IOs (transaction processing)
• Very low latency
• Shown at TechEd 2012

Page 40

VDI boot storm, Scale-Out (File Server test team)

Comparison 1:
• Local: 8 hosts running Hyper-V directly
• Remote: 8 hosts using Hyper-V over SMB
• 2,560 VMs (320 VMs per host), 8GB CSV cache

Comparison 2:
• 16 hosts using Hyper-V over SMB
• Comparing 8GB CSV cache enabled vs. disabled
• 5,120 VMs (320 VMs per host)

On both setups, a Hitachi SAN back-end is used: 2 x 8Gb FC server to fabric, 4 x 4Gb FC fabric to SAN.
For the Hyper-V over SMB configurations, 2 x 10GbE connect the Hyper-V cluster nodes and the file server cluster nodes.
All setups use a parent VHDX with differencing VHDX files. Time is measured from VM state change to user logon complete.

Page 41

holSystems: Dell Servers, Cluster, 10GbE, FC Arrays

[Diagram: 50 Hyper-V hosts (HV 1-50) connected through four 10GbE switches to five file servers (FS 1-5, forming Failover Clusters 1 and 2), which connect through two FC switches to seven FC arrays]

Configuration
• 50 Hyper-V hosts (Dell, 72GB-400GB RAM each, 10TB total)
• 5 file servers (Dell: 1 standalone, 2 x two-node clusters)
• 7 Fibre Channel arrays of varying sizes, 100TB total
• 10GbE between Hyper-V hosts and file servers (120 ports)
• 8Gb FC between file servers and FC arrays (2 switches)

Workload
• Running Windows Server 2012 Virtual Labs and HOL/ILL labs for major Microsoft events including TechEd, TR, Lync Ignite, SharePoint Ignite, SQL Server Labs and Convergence
• Each user spins up a new set of VMs in just seconds
• Commonly used by 500 users at 16GB each
• Capacity tested for up to 7,000 VMs
• Over 800,000 VMs in just the last 3 years
• A new set of VMs deployed every 5-7 minutes (on average)
• Try it yourself using the link above

Thanks to Corey Hynes for the details!

Page 42

Microsoft IT: HP Servers, Cluster, 6 x 10GbE, FC Array

[Diagram: 28 Hyper-V hosts (HV 1-28) connected through 10GbE switches to a 4-node failover cluster of file servers (FS 1-4), which connects through two FC switches to an EMC VMAX array]

Single Stamp Configuration

• Microsoft IT Scale Unit v3 (Network and Infrastructure Services team)

• 28 Hyper-V Hosts (HP BL660, 1TB RAM each, 28TB total)

• 4 File Servers (HP BL660, 1TB RAM, 4-node scale-out file server)

• EMC VMAX 40K in a 2+1 configuration (400 disks, 3 tiers)

Networking

• 2 x 10GbE Broadcom between Hyper-V and File Servers

• 4 x 10GbE Broadcom between every server and the outside world

• 2 x Cisco 3064 for East-West traffic, 2 x Cisco 5548 for North-South traffic

• 2 x 8GbFC Emulex HBAs per server, 24 FC ports + 8 iSCSI to VMAX

• 2 x Brocade 6510 FC Switches with 48 ports (up to 16GbFC)

Workload

• Private cloud environment with LOB apps (SQL, IIS, others)

• Average VM size is 16 virtual processors with 32GB of RAM

• Largest VM supported is 32 virtual processors and 128GB of RAM

• Built for high performance networking, including SR-IOV support

• 1,000 VMs per stamp projected

Thanks to Jeromy Statia for the details!

Page 43

NTTX: Dell Servers, Clusters, 10GbE, SAS JBODs

Hardware
• 48 Hyper-V hosts (10 R910, 18 R720, 24 M1000e)
• Hyper-V hosts divided into 7 distinct clusters across 2 sites
• 10 file servers, 5 clusters across 2 sites (4 R710, 6 Dell R320)
• 38 JBODs (Dell MD1200, 12 drives each)
• Mostly 15Krpm 600GB SAS HDDs, plus some 7.2Krpm 2TB SAS for DPM; ~300TB total raw (~150TB usable in mirrored spaces)

Networking
• 1GbE: 4 x PowerConnect 5548; 10GbE: PowerConnect 8164
• Site 1 Scale-Out File Servers use 8 x 1GbE NICs in a team
• Site 2 Scale-Out File Servers use 4 x 10GbE NICs in a team
• Between sites, 4 NetASQ U450 (2 per site), 400 Mbps

Workload
• Mixed workload (hosting) including SQL, VDI, Exchange, Lync, SharePoint, CRM, other app servers and backup
• ~3,000 VMs total with varying storage, RAM and processors
• Hyper-V Replica between the 2 sites, heavily customized

Highlights
• Cost reduction, efficiency, high scalability, full HA (CA)
• Delivers on varying and evolving workloads
• Built on commodity hardware; 100% Microsoft stack

Thanks to Philip Moss for the details!

[Diagram: Site 1 and Site 2, each with Hyper-V clusters (Dell R910, R720 and M1000e), scale-out file server pairs (FS1-FS10) attached to 6-8 SAS JBODs (72-96 HDDs) each, 1GbE and 10GbE switches, and NetASQ U450 appliances linking the two sites]

Page 44

In Review: Session Objectives

• Describe the basics of the Hyper-V over SMB scenario, including the main reasons to implement it.

• Enumerate the most common performance bottlenecks in Hyper-V over SMB configurations.

• Outline a few Hyper-V over SMB configurations that can provide continuous availability, including details on networking and storage.

Page 45

Evaluation

Complete your session evaluations today and enter to win prizes daily. Provide your feedback at a CommNet kiosk or log on at www.2013mms.com. Upon submission you will receive instant notification if you have won a prize. Prize pickup is at the Information Desk located in Attendee Services in the Mandalay Bay Foyer. Entry details can be found on the MMS website.

We want to hear from you!

Page 46

Resources

http://channel9.msdn.com/Events

Access MMS Online to view session recordings after the event.

Page 47

© 2013 Microsoft Corporation. All rights reserved. Microsoft, Windows and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.