Service Provider Datacenter Architecture
Philip Moss, NTTX – Managing Partner IT
DCIM-B211
Introduction
About NTTX
Service Provider
Next generation managed services
Public Cloud (true multi-tenant)
Private Cloud
Hybrid Cloud
Delivering the highest levels of user experience
Full multi-point High-Availability
Geo-location agnostic access
Business drivers
Reducing cost to service
Providing a feature set that meets our customers need
Make Money
Key drivers
Workloads
Domain Controllers
DNS (internal and public)
Exchange
SharePoint
Lync
SQL
WDS
File Servers
App-V
UE-V
RDSH
VDI
DPM
DHCP
Bespoke client line-of-business applications
Engineering goals
Support for multiple diverse workloads
Full end-to-end high-availability
100% virtualisation
100% automation
Sub-system scale-out
Storage
Networking
Compute
Cost to serve reduction
Removal of middleware
Hardware platform agnostic
Just-in-time hardware provisioning
Architecture
Logical architecture
Storage Spaces
Scale-out CA file-server
SMB Transport
Hyper-V Cluster – General Workloads
Hyper-V Cluster – PVMs (WARP)
Hyper-V Cluster – PVMs (virtual GPU)
DCs, Exchange, Lync, RDSH
SQL, DPM, DHCP
RDS, SharePoint, WDS
DNS
Storage
Networking
Compute
Datacentre topology
DataCentre A
Fault-tolerant data storage
Highly-available data delivery platform
Data transport fabric
Virtualisation Compute Fabric (hypervisor clusters)
Perimeter Security
DataCentre B
Fault-tolerant data storage
Highly-available data delivery platform
Data transport fabric
Virtualisation Compute Fabric (hypervisor clusters)
Perimeter Security
Data replication
Storage
Data Delivery – Scale-Out File Server
Storage Spaces – Windows Server as the storage controller
SMB 3 as data transport
Replaces iSCSI and Fibre Channel
Cheap generic JBODs
Multi-point highly available
Continuous availability
Full scale-out
Removes requirement for a SAN
[Diagram: four SoFS nodes attached to four JBODs]
Scale-Out File Server costs around 50% to 60% of a traditional SAN.
Introduced in 2012 R2
SSD layer used for high-IO data
Data moved to SSD via "heat" logic
1 MB data chunks – not all of a large file needs to fit on SSD
Gained write-back cache
Pinning allows files to be locked onto the SSD layer
Working with the CSV cache
Heat does not work with CSV
Does not work with redirected IO
Planning considerations
A Space using tiering without CSV could be slower than a non-tiered Space using CSV
Can still pin files to SSD
Spaces 2012 R2 – tiering
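The heat-based promotion described above can be sketched as a toy model. This is a hypothetical illustration (the function name, heat values, and greedy placement are assumptions, not the Windows Server tiering engine): data is tracked in 1 MB slabs and the hottest slabs are promoted to the SSD tier, with pinned data always kept there.

```python
# Toy model of heat-based tiering (illustrative only, not the real engine).
SLAB_SIZE_MB = 1  # tiering granularity: 1 MB chunks, so only the hot
                  # portions of a large file need to fit on SSD


def plan_ssd_placement(slabs, ssd_capacity_mb, pinned=()):
    """Return the set of slab ids that land on the SSD tier.

    slabs: dict of slab_id -> access count ("heat")
    pinned: slab ids locked onto SSD regardless of heat
    """
    placed = [s for s in pinned]
    budget = ssd_capacity_mb - len(placed) * SLAB_SIZE_MB
    # Promote the remaining slabs hottest-first until the SSD tier is full.
    for slab_id, _heat in sorted(slabs.items(), key=lambda kv: -kv[1]):
        if slab_id in placed:
            continue
        if budget < SLAB_SIZE_MB:
            break
        placed.append(slab_id)
        budget -= SLAB_SIZE_MB
    return set(placed)


heat = {"a": 90, "b": 5, "c": 40, "d": 70}
print(plan_ssd_placement(heat, ssd_capacity_mb=2))                 # hottest two
print(plan_ssd_placement(heat, ssd_capacity_mb=2, pinned=["b"]))   # pin wins
```

Pinning displaces hot data: in the second call the cold slab "b" consumes half the SSD budget, which is why the slides warn about planning pinned files carefully.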
Write-back cache on SSD
Dramatically increases write performance
Limited to 1 GB
Dynamic rebuild using spare capacity
No longer a requirement for dedicated hot-spares
Simply leave unallocated headroom in the disk pool
Spaces 2012 R2
Larger column counts are important
Column count defines how many disks are written across for any given write operation
Read operations use all copies of data, giving a significant performance increase
Column count is shared between SSD and HDD tiers
SSDs can become the limiting factor
Larger pools are more efficient
But they increase disk-failure planning complexity
Storage Spaces - design considerations
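The column-count arithmetic above reduces to simple integer maths. These are hypothetical helper names, not a Storage Spaces API; the model assumes a mirror where each data copy is striped across the column count, which is shared between tiers:

```python
# Illustrative column-count arithmetic for mirrored Storage Spaces.

def max_columns(disk_count, copies):
    """Largest column count `disk_count` disks can support when keeping
    `copies` copies of the data (mirror layout)."""
    return disk_count // copies


def tiered_columns(ssd_count, hdd_count, copies):
    """In a tiered Space the column count is shared between tiers, so the
    smaller tier (usually the SSDs) limits the whole Space."""
    return min(max_columns(ssd_count, copies), max_columns(hdd_count, copies))


print(max_columns(8, 2))         # 8 disks, 2-way mirror -> 4 columns
print(tiered_columns(4, 20, 2))  # only 4 SSDs -> the SSD tier caps it at 2
```

The second call shows the slide's point: twenty HDDs could support ten columns, but four SSDs cap the tiered Space at two.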
Data integrity
2-way mirror provides only limited disk-failure protection
Suitable solution if using application HA
3-way mirror gives a good level of disk-failure tolerance
Very costly in disk usage (66% raw capacity loss)
Parity Spaces now supported for clusters
Performance is not good
Enclosure awareness
Provides protection against entire JBOD failure
Setup considerations
3 JBODs for a 2-way mirror: single enclosure failure
3 JBODs for a 3-way mirror: single enclosure failure
5 JBODs for a 3-way mirror: dual enclosure failure
Storage Spaces – design considerations
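The capacity trade-offs above are straightforward arithmetic under a simplified model (usable capacity = raw capacity divided by the number of data copies; the function name is illustrative):

```python
# Simple capacity model for mirrored Storage Spaces.

def raw_capacity_loss_pct(copies):
    """Percent of raw capacity lost to mirroring with `copies` data copies."""
    return int((1 - 1 / copies) * 100)


print(raw_capacity_loss_pct(2))  # 2-way mirror: 50 (half of raw capacity)
print(raw_capacity_loss_pct(3))  # 3-way mirror: 66 (as stated above)

# Enclosure-awareness guidance from the slide, captured as data:
jbods_needed = {
    ("2-way", "single enclosure failure"): 3,
    ("3-way", "single enclosure failure"): 3,
    ("3-way", "dual enclosure failure"): 5,
}
```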
De-duplication
Best suited for VDI workloads
Tiering is key to a de-dup deployment
De-dupped data: the chunk store IO will be massive
Chunk store cannot be pinned
Use heat
CPU and RAM considerations
Can now run on hot (open) VHDx files; consumes resources
Per volume, therefore planning required so that CPU and RAM are not exhausted
[Chart: disk usage (GB) for 100 VDI clients, with and without de-duplication]
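Conceptually, the chunk store behind de-duplication works like this toy model (fixed-size chunks and tiny sizes for readability; the real feature uses variable-size chunking plus compression, and the names here are illustrative):

```python
# Minimal model of a de-duplication chunk store: files are split into
# chunks, identical chunks are stored once, and each file becomes a list
# of chunk references.
import hashlib

CHUNK = 4  # bytes; absurdly small so the example is readable


def dedup(files):
    store = {}      # chunk hash -> chunk bytes (the "chunk store")
    manifests = {}  # file name -> list of chunk hashes
    for name, data in files.items():
        refs = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            h = hashlib.sha256(chunk).hexdigest()
            store.setdefault(h, chunk)  # identical chunks stored once
            refs.append(h)
        manifests[name] = refs
    return store, manifests


files = {"vdi1.vhdx": b"AAAABBBBCCCC", "vdi2.vhdx": b"AAAABBBBDDDD"}
store, manifests = dedup(files)
print(len(store))  # 4 unique chunks instead of 6: that is the space saving
```

This also shows why chunk-store IO is so hot on VDI: near-identical desktop images share most of their chunks, so reads from many VMs converge on the same store.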
Design considerations – SoFS nodes
SMB client connection redirection
Reduces load/requirements for the CSV network
Applies to 2012 R2 / Windows 8.1 and later
Incoming connection "moved" to the node that owns the storage
Increased RAM and CPU overhead
Requirement driven by heat and de-dup overheads
2012: a single physical processor and 12 GB of RAM was fine
2012 R2: dual CPUs and 128 GB-plus of RAM
Networking
Plan for SMB Multichannel
If using RDMA, no teaming options
As a clustered solution, a separate IP is required for each NIC interface
Planning considerations on Hyper-V hosts
LACP is an option, but with potential challenges
Distribution hash settings on switches
Network
Networking
Let Windows do the work
Data delivery via standard protocol: SMB 3.0
Load-balancing and failover: teaming (switch agnostic)
Load aggregation and balancing: SMB Multichannel
Commodity L2 switching: cost-effective networking (Ethernet)
RJ45 / QSFP
Quality of Service: multiple levels
Host workload overhead reduction: RDMA
Easily scale
"Decouple storage from compute"
Lose the complexity of iSCSI or FC
To be, or not to be (converged) – that is the question
Converged networking = big wins
Also known as software-defined networking
Reduces complexity and cost
Increases flexibility
Fully converged
Single network, no dedicated service networks
Use Windows networking capability to define the system
NIC Teaming
Hyper-V vSwitch QoS
SMB QoS
Universal vSwitch binding
Parent loopback for SMB data to the host
Gain complete control over QoS
Excellent resource utilisation, managing networking resources between workloads
SMB 3.0
VM traffic
Live Migration
Semi-converged
Dedicated NICs for SMB 3.0
Dedicated (teamed) NICs for VM traffic
Critical for RDMA deployments
RDMA can't work via the vSwitch
No teaming support
Gen 1: 1 Gbps using multiple connections
Very cheap on NICs and switch ports
Attractive once teaming / SMB Multichannel in Windows made this viable
Cabling nightmares
Gen 2: 10 Gbps using multiple connections
Cost viable due to NIC and port cost reductions
Significant throughput achievable with 4 connections in each server
Cabling challenges remain
Deployment over very cost-effective RJ45
Gen 3: 40 Gbps
NIC and port costs are still high, but the available speed makes the trade-off acceptable
Very high performance from only 2 ports
Cabling issues removed
Avoiding VM teaming mitigates many vRSS challenges
vRSS places significant overhead on the host
Makes very high-performance VMs simpler to deploy, with increased flexibility
Network Speed Choices
Do you need fault-tolerant NIC configurations?
"NICs and switch ports are costly. Make the server the fault domain."
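The generational trade-off above is essentially aggregate bandwidth versus cable and port count. A quick comparison (the Gen 2 and Gen 3 port counts come from the slides; the Gen 1 count of six is an illustrative assumption):

```python
# Aggregate server bandwidth vs. cable count for the three NIC generations.

def aggregate_gbps(link_speed_gbps, ports):
    """Total server bandwidth from `ports` links of `link_speed_gbps` each."""
    return link_speed_gbps * ports


generations = [
    ("Gen 1: 1GbE x6", 1, 6),    # many cheap ports, cabling nightmare
    ("Gen 2: 10GbE x4", 10, 4),  # big throughput, RJ45 keeps costs down
    ("Gen 3: 40GbE x2", 40, 2),  # only two ports, cabling issues removed
]
for name, speed, ports in generations:
    print(name, "->", aggregate_gbps(speed, ports), "Gbps over", ports, "cables")
```

Two 40 GbE ports deliver double the aggregate bandwidth of four 10 GbE ports with half the cabling, which is the argument for accepting the higher per-port cost.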
NTTX takes this approach in our next-generation DCs
Compute
Hyper-V 3.0
64-node clusters
8,000-VM limit prevents clusters approaching this node count
Modern hardware allows for huge VM counts per node
Cross-version live migration between clusters
Key for 2012 to 2012 R2 migrations
SMB 3.0 support
Dynamic Memory
vGPU support
Dynamic quorum selection
Introduced in 2012 R2
Very useful for clusters that grow over time
Compute – Hyper-V
Generation 2 VMs
UEFI based
Secure Boot support
WDS support without using the legacy NIC
No support for IDE-attached VHDx
Dynamic VHDx resize
Enables dynamic increase or decrease of VHDx size without taking the VM offline
Key feature for IaaS clients
Hyper-V 2012 R2
Introduced in 2012 R2
Enables a 100% VHDx-based VM storage solution
Removes reliance on synthetic iSCSI or FC for shared storage in a VM
Primary workloads
HA file servers
Legacy SQL Servers
Bespoke line-of-business applications requiring shared disk
Considerations
No support for Hyper-V Replica
Therefore no Hyper-V Recovery Manager support
Stretch clusters cannot be created
VM-based clusters using shared VHDx
vRSS support now available on vNICs
Addresses the limitation of a vNIC being bound to one CPU core and therefore maxing out
Allows for very high-performance VMs
vRSS puts significant load on the host CPU
No vRSS to the parent; not viable for driving high network bandwidth into the parent
New teaming algorithm: Dynamic
Combines Hyper-V port with address hash
Recommended setting
VM-based teaming
Driving vNICs above the wire speed of the physical NIC is very difficult
Avoid teaming through the use of higher-speed physical NICs
SR-IOV – choices and trade-offs
Key for low-latency / high-performance VM workloads
Limits VM deployment options, as it requires hosts with dedicated spare NICs
Dedicated NIC requirements increase if the VM requires HA NIC capability
For a service provider, this level of rigidity creates considerable challenges
Networking – Hyper-V
Quality of Service – Hyper-V
Storage QoS
Define storage IOPS limits on a per-VM basis
The vSwitch is your friend
Define QoS behaviours on a per-VM basis
If using parent loopback, define QoS to control SMB traffic
QoS helps you deal with "noisy neighbour" syndrome
No solution for CPU or RAM challenges today
SMB contains its own channel prioritisation logic
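Per-VM IOPS capping of the kind Storage QoS provides can be sketched with a token bucket. This is a conceptual model only, not Hyper-V's implementation; the class and parameter names are assumptions:

```python
# Token-bucket sketch of a per-VM storage IOPS limit: each VM gets
# `limit_iops` IO tokens per one-second window; IOs beyond that budget
# are throttled, which contains "noisy neighbours".

class IopsLimiter:
    def __init__(self, limit_iops):
        self.limit = limit_iops
        self.tokens = limit_iops
        self.window = None  # current one-second window

    def allow(self, now_sec):
        # Refill the bucket when a new one-second window starts.
        if now_sec != self.window:
            self.window = now_sec
            self.tokens = self.limit
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False  # the noisy neighbour's IO gets delayed


noisy_vm = IopsLimiter(limit_iops=500)
served = sum(noisy_vm.allow(now_sec=0) for _ in range(800))
print(served)  # 500: the remaining 300 IOs are throttled in that second
```

The same shape applies to the vSwitch QoS mentioned above, with bytes per second in place of IOs per second.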
Upgrade considerations
Migration 2012 – 2012 R2
No in-place upgrade of Scale-Out File Server
Drain/clear all storage, wipe and rebuild
Requires significant headroom in storage capacity
Must use storage migration
RDMA highly recommended between SoFS nodes to increase storage migration performance
No in-place upgrade of the compute cluster
Live migration between 2012 and 2012 R2 Hyper-V hosts
If short of host capacity: use evict, upgrade, rejoin
PowerShell highly recommended
Services
RDP 8.1
UDP support
vGPU
Audio / video
Touch remoting
USB bus redirection
Touch and audio/video performance improvements
Connection/reconnection performance improvements
Dynamic screen/resolution resize
Remote Desktop Services
Hyper-V Replica
Failover and DR solution for VMs
Supports point-in-time replication of VMs
Planned and unplanned failover support
Off-network target support
Remote network IP injection (into the vNIC)
2012 R2 improvements
Reduction in IO overhead
Support for a tertiary replication location
Choice of replication interval
IO overhead increases with replication frequency
Multi-VM applications are potentially complex to manage
Consider Hyper-V Recovery Manager as a good solution
Hyper-V Network Virtualization
Tenant "bring your own subnet"
Introduced in 2012
Removes the requirement for vLAN tagging
Mitigates the 4096-vLAN ceiling
Introduced in 2012
Perimeter breakout was challenging
Required a 3rd-party solution
Complex to deploy and manage
Multi-tenant site-to-site gateway: 2012 R2
Allows edge routing of the virtualized network
Full support for layer-3 routing
Full support for IPsec site-to-site gateways
Implemented as an NVGRE-aware router inside the RRAS service
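The 4096 ceiling above comes straight from the field widths: an 802.1Q VLAN tag carries a 12-bit ID, while NVGRE's Virtual Subnet ID (VSID) is 24 bits:

```python
# Why NVGRE removes the VLAN ceiling: address-space arithmetic.
VLAN_ID_BITS = 12    # 802.1Q VLAN identifier field
NVGRE_VSID_BITS = 24  # NVGRE Virtual Subnet ID field

max_vlans = 2 ** VLAN_ID_BITS   # tenant networks with VLAN tagging
max_vsids = 2 ** NVGRE_VSID_BITS  # tenant networks with NVGRE

print(max_vlans)  # 4096
print(max_vsids)  # 16777216 (~16.7 million)
```

For a multi-tenant provider this is the difference between a hard per-fabric limit of a few thousand tenant subnets and effectively no practical limit.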
High-availability considerations
System availability – in the real world
Hyper-V clusters ARE NOT a true high-availability solution; closer to "near-time DR"
Hyper-V Replica has potential weaknesses with complex application types
The classic IP-injection challenges, which are only mitigated through NVGRE
Application HA is the only real solution
Requires end-to-end design and planning
Host dies: VM dies
VM fails over to a new host
User loses session and data
Application HA trade-offs
Considerable amount of data duplication
Disk is cheap, especially when using commodity technology
Inherent headroom required in application VMs
CPU is cheap and constantly getting cheaper
Careful planning and deep application knowledge required
Pure IaaS relies on the client's buy-in to end-to-end design
This can be challenging; HRM is a reasonable workaround
There is no "real" HA solution for true VDI
The broker is not Hyper-V Replica aware
The broker is not multi-site aware
Demo – The stack in action
Philip Moss
A user's personal virtual desktop
Running on clustered Hyper-V
VHDx over SMB 3.0
Software delivered via App-V
User settings delivered via UE-V
Advanced graphics driven by vGPU
Services: Exchange, SharePoint, Lync
Delivered from VMs
Running on clustered Hyper-V
VHDx over SMB 3.0
Securely accessed via Remote Desktop Services
Over the Internet
Animation, video, 3D driven by RDP 8.1
The technology behind the service
End-to-end stack
Storage Spaces
Scale out file server
SMB 3.0
Hyper-V Cluster
HA VM File Server
VM – Windows Client
App-V
UE-V
Virtual GPU
Exchange
Lync
SharePoint
Remote Desktop Services
High-level design
Storage – SoFS
Network – SMB and converged software-defined networking
Compute – Hyper-V
2012 R2 changes and improvements
Deployment and upgrade considerations
High availability
Design considerations
Services
Hyper-V Network Virtualization (NVGRE)
Hyper-V Replica
Remote Desktop Services
VM-based clusters
Summary
Questions
Breakout Sessions
DCIM-B346 Best Practices for Deploying Tiered Storage Spaces in Windows Server 2012 R2 - Alex Hsieh, Bryan Matthew
DCIM-B349 Software-Defined Storage in Windows Server 2012 R2 and Microsoft System Center 2012 R2 - Elden Christensen, Hector Linares, Jose Barreto, Tobias Klima
DCIM-B354 Failover Clustering: What's New in Windows Server 2012 R2 - Elden Christensen, John Marlin
DCIM-B337 File Server Networking for a Private Cloud Storage Infrastructure in Windows Server 2012 R2 - Jose Barreto
DCIM-B378 Converged Networking for Windows Server 2012 R2 Hyper-V - Don Stanwyck, Taylor Brown
Related content
Come Visit Us in the Microsoft Solutions Experience!
Look for Datacenter and Infrastructure Management
TechExpo Level 1 Hall CD
For More Information
Windows Server 2012 R2: http://technet.microsoft.com/en-US/evalcenter/dn205286
Microsoft Azure: http://azure.microsoft.com/en-us/
System Center 2012 R2: http://technet.microsoft.com/en-US/evalcenter/dn205295
Azure Pack: http://www.microsoft.com/en-us/server-cloud/products/windows-azure-pack
Resources
Learning
Microsoft Certification & Training Resources
www.microsoft.com/learning
msdn
Resources for Developers
http://microsoft.com/msdn
TechNet
Resources for IT Professionals
http://microsoft.com/technet
Sessions on Demand
http://channel9.msdn.com/Events/TechEd
Complete an evaluation and enter to win!
Evaluate this session
Scan this QR code to evaluate this session.
© 2014 Microsoft Corporation. All rights reserved. Microsoft, Windows, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.