© 2012 VMware Inc. All rights reserved
Storage Enhancements in vSphere 5.x
&
Storage Futures Tech Preview
Cormac Hogan
Technical Marketing
VMware
2 Virtual Machine User Group – November 2012
Agenda
vSphere 5.x Storage Features & Storage Futures
• Introduction
• VMFS-5 & VOMA
• VAAI
• SIOC, Storage DRS & Storage vMotion
• Protocol Enhancements
• IO Device Management & SSD Monitoring
• Space Efficient Sparse Virtual Disks
• Storage Futures – vFlash
• Storage Futures – Virtual Volumes
• Storage Futures – Distributed Storage
Disclaimer
•This presentation may contain product features that are
currently under development.
•Features are subject to change, and must not be included in contracts,
purchase orders, or sales agreements of any kind.
•Technical feasibility and market demand will affect final delivery.
•Pricing and packaging for any new technologies or features discussed or
presented have not been determined.
• In other words, VMware in no way promises to deliver on any of the
products or features shown in the following presentation.
•And just to be clear, neither does Cormac Hogan.
Introduction
•vSphere 5.1 builds on the storage features introduced in vSphere 5.0.
• More scalability
• Increased performance
• Increased interoperability between VMware products & features
•The purpose of this presentation is to quickly highlight the major storage
enhancements in vSphere 5.0 and what improvements have been made
to storage features in vSphere 5.1.
•We will also take a look at some of the storage features which were tech
previewed at VMworld 2012.
VMFS-5 Upgrade Considerations
•A live, non-disruptive upgrade mechanism is available to upgrade from
VMFS-3 to VMFS-5 (with running VMs) but you do not get the full
complement of features.
•Best Practice: If you have the luxury of doing so, create a brand new
VMFS-5 datastore, and use Storage vMotion to move your VMs to it.
Feature          Upgraded VMFS-5                          New VMFS-5
Maximum files    30720 (inherited from VMFS-3)            130689
File Block Size  1, 2, 4 or 8MB (inherited from VMFS-3)   1MB
Sub-Blocks       64KB (inherited from VMFS-3)             8KB
ATS Complete     No (same as VMFS-3)                      Yes
Increasing VMFS-5 File Sharing Limits in vSphere 5.1
• In previous versions of vSphere, the maximum number of hosts which
could share a read-only file (linked clone base disk) on VMFS was 8.
• In vSphere 5.1, this has been increased to 32.
•VMFS is now as scalable as NFS for linked-clones.
VOMA - vSphere On-Disk Metadata Analyzer
•VOMA is a VMFS meta-data consistency checker tool which will be made
available in the CLI of vSphere 5.1 ESXi systems.
• It has the ability to check various on-disk metadata structures on a given
VMFS datastore (both versions 3 & 5) and report any inconsistencies.
•VOMA is not a data recovery tool!
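For illustration, a VOMA check can be invoked from the ESXi shell along these lines. The device identifier below is a placeholder (find your own via `esxcli storage vmfs extent list`), and the datastore should be quiesced first, e.g. unmounted or with all its VMs powered off:

```shell
# Run VOMA in VMFS mode against the partition backing the datastore.
# The naa. identifier is a placeholder -- substitute your own device,
# discovered with: esxcli storage vmfs extent list
voma -m vmfs -f check -d /vmfs/devices/disks/naa.60a98000572d54724a34655733506751:1
```

VOMA reports the metadata errors it finds, but remember: it is a checker, not a repair or data recovery tool.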
VAAI Primitives
Primitive                            vSphere 4.1  vSphere 5.0  vSphere 5.1
ATS (Atomic Test & Set)              Yes          Yes          Yes
XCOPY (Clone)                        Yes          Yes          Yes
Write Same (Zero)                    Yes          Yes          Yes
Full File Clone (NAS)                No           Yes          Yes
Fast File Clone (NAS)                No           Yes          Yes
Reserve Space (NAS)                  No           Yes          Yes
Extended Statistics (NAS)            No           Yes          Yes
Thin Provisioning OOS Alarm/VM Stun  No           Yes          Yes
Thin Provisioning UNMAP              No           Yes*         Yes*
* manual reclamation only, via vmkfstools
A note about UNMAP - Dead Space Reclamation
•Dead space is previously written
blocks that are no longer used, for
instance, after a Storage vMotion
operation on a VM.
•Through VAAI, the storage system
can now reclaim the dead blocks.
•Although the objective is to make
this procedure automated, this
mechanism is currently only
supported via a manual
vmkfstools command in vSphere
5.0 & 5.1.
•More detail on the VAAI UNMAP
primitive can be found here –
http://kb.vmware.com/kb/2007427
[Diagram: a Storage vMotion moves a VM from VMFS volume A to VMFS
volume B; the VM's file data blocks on volume A are released through a
manually issued vmkfstools command]
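As a sketch of the manual procedure (the datastore name is a placeholder), the reclaim is run from the datastore's root directory:

```shell
# Reclaim dead space on the datastore holding the current directory.
# The argument is the percentage of the free space that vmkfstools may
# temporarily fill with a balloon file while issuing UNMAP commands.
cd /vmfs/volumes/Shared-VMFS5
vmkfstools -y 60
```

Running with a high percentage can briefly exhaust free space on the datastore, which is one reason the operation is left as a manual step in vSphere 5.0 & 5.1.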
VAAI NAS Support for vCloud Director
•vSphere 5.0 introduced the offloading of linked clones for VMware View to
native snapshots on the array via NAS VAAI primitives.
•vSphere 5.1 will allow storage array based snapshots to be used by
vCloud Director vApps, leveraging the VAAI Fast File Clone primitive.
• vCloud Director vApps are based on linked clones.
• This will minimize CPU & memory usage on the hosts, and network
bandwidth consumption, in vCloud Director deployments using NFS.
• This will also require a special VAAI NAS plug-in from vendors.
Storage I/O Control Revisited
[Diagrams: “What you see” – without SIOC, one datastore's I/O is
dominated by a data mining workload at the expense of the online store
and Microsoft Exchange VMs. “What you want to see” – with SIOC, the
datastore's I/O is shared fairly across all three workloads]
Storage I/O Control Enhancements in vSphere 5.1
•Stats Only Mode
• SIOC is now turned on in stats-only mode automatically.
• It doesn't enforce throttling, but it does gather statistics.
• This gives more granular performance statistics in the vSphere client.
• Storage DRS can also use these statistics to characterize new datastores added to a datastore cluster.
•Automatic Threshold Computation
• A new automatic latency threshold detection mechanism has been added.
• The default SIOC latency threshold in previous versions was 30 msecs, and we relied on customers selecting the appropriate threshold.
• The latency threshold is now set automatically using device modeling (an I/O injector mechanism) rather than a fixed default.
SIOC Automatic Threshold Detection in vSphere 5.1
•Through device modeling, SIOC determines
the peak throughput of the device.
• It first measures the peak latency value when
the throughput is at its peak.
•The latency threshold is then set (by default)
to 90% of this value.
•Admin still has the option to:
• Change % value.
• Manually set congestion threshold.
[Graphs: latency (La) and throughput (Ta) plotted against load; Lpeak
is the latency measured at the peak throughput Tpeak]
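The threshold computation can be sketched as follows. This is a conceptual model only, not VMware's actual code, and the sample values are invented:

```python
# Conceptual sketch of SIOC automatic threshold detection: take
# (throughput, latency) samples at increasing load, find the latency
# at peak throughput, and set the threshold to a percentage of it.
def sioc_threshold(samples, percentage=90):
    """samples: list of (throughput_iops, latency_ms) pairs measured at
    increasing load levels. Returns the congestion threshold in ms."""
    # Locate the sample where throughput peaks (Tpeak) and read off
    # the latency observed at that point (Lpeak).
    t_peak, l_peak = max(samples, key=lambda s: s[0])
    # The threshold defaults to 90% of Lpeak.
    return l_peak * percentage / 100.0

samples = [(1000, 2.0), (4000, 5.0), (7000, 12.0), (6800, 20.0)]
print(sioc_threshold(samples))  # 10.8
```

The `percentage` parameter mirrors the admin's option to change the 90% value; the admin can also bypass the model entirely and set the congestion threshold manually.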
Storage DRS Revisited
•Storage DRS was introduced in vSphere 5.0, and has since become
recognised as one of VMware’s more innovative features.
•Benefits of Storage DRS:
• Automatic selection of the best datastore for your initial VM placement, avoiding hot-spots, disk space imbalances & I/O imbalances
• Advanced balancing mechanism to avoid storage performance bottlenecks or “out of space” problems using Storage vMotion
• Smart Placement Rules which allow the placing of VMs with a similar task on different datastores, as well as keeping VMs together on the same datastore when required
•Storage DRS works on VMFS-5, VMFS-3 & NFS datastores.
Storage DRS Enhancements in vSphere 5.1 (1 of 2)
•vCloud Director Interoperability/Support
• The major enhancement in Storage DRS in vSphere 5.1 is to have interoperability with vCloud Director
• vCloud Director will use Storage DRS for the initial placement of vCloud vApps during Fast Provisioning
• vCloud Director will also use Storage DRS for the on-going management of space utilization and I/O load balancing
Storage DRS Enhancements in vSphere 5.1 (2 of 2)
•SDRS introduces a new datastore correlation detector.
• Datastore correlation means datastores are backed by the same disk spindles.
• If we see latency increase on other datastores when load is placed on
one datastore, we assume the datastores are correlated.
•Anti-Affinity rules (keeping VMs or VMDKs apart on different datastores)
can also use correlation to ensure the VMs/VMDKs are on different
spindles.
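A simple heuristic along these lines can be sketched as follows. This is an illustrative simplification, not VMware's actual algorithm, and the latency figures are invented:

```python
# Toy correlation detector: flag two datastores as correlated when
# injecting load on one raises the observed latency on the other.
def correlated(idle_latency_ms, loaded_latency_ms, factor=1.5):
    """Latency on datastore B measured while datastore A is idle vs.
    while A is under injected load. If B's latency rises by more than
    `factor`, assume both sit on the same disk spindles."""
    return loaded_latency_ms > idle_latency_ms * factor

print(correlated(4.0, 9.0))  # True  -> likely shared spindles
print(correlated(4.0, 4.5))  # False -> independent backing disks
```

Anti-affinity placement would then prefer pairs of datastores for which this check returns False.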
Storage vMotion 5.1 Enhancements
• In vSphere 5.1 Storage vMotion performs up to 4 parallel disk migrations
per Storage vMotion operation.
• In previous versions, Storage vMotion used to copy virtual disks serially.
• This does not impact the ability to do concurrent Storage vMotion operations
per datastore.
1: Software FCoE Adapter
•vSphere 5.0 introduced a new software FCoE adapter.
•A software FCoE adapter is software code that performs some of the
FCoE processing & can be used with a number of NICs that support
partial FCoE offload.
•The software adapter needs to be activated, similar to Software iSCSI.
•In vSphere 5.1, Boot from Software FCoE enables an ESXi host to boot
from an FCoE LUN, using a Network Interface Card with FCoE boot
capabilities and VMware's software FCoE driver.
2: Support 16Gb FC HBAs
• VMware introduced support for 16Gb FC HBA with vSphere 5.0.
However the 16Gb HBA had to be throttled to work at 8Gb.
• vSphere 5.1 introduces support for 16Gb FC HBAs running at 16Gb.
• There is no 16Gb end-to-end support for FC in vSphere 5.1, so to get full bandwidth, you will need to zone to multiple 8Gb FC array ports as shown below.
[Diagram: a 16Gb FC HBA on the host zoned to multiple 8Gb ports on the
array]
Advanced IO Device Management (IODM)
•New commands in vSphere 5.1 to help administrators monitor &
troubleshoot issues with I/O devices and fabrics.
•Enables diagnosis and querying of Fibre Channel, FCoE, iSCSI & SAS
protocol statistics.
•The commands provide layered statistic information to narrow down
issues to ESXi, HBA, Fabric and Storage Port.
• Includes framework to log frame loss and other critical events.
• Includes options to initiate an HBA reset.
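As of vSphere 5.1 these surface through the `esxcli storage san` namespace. The adapter name below is an example, and the exact sub-commands are best confirmed on your build with `esxcli storage san --help`:

```shell
# List Fibre Channel adapter attributes (WWPN, port state, speed)
esxcli storage san fc list
# Show logged FC events such as link up/down or frame loss
esxcli storage san fc events get
# As a last resort, reset a misbehaving HBA (vmhba2 is an example)
esxcli storage san fc reset -A vmhba2
```

Equivalent namespaces exist for the other fabrics (`fcoe`, `iscsi`, `sas`).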
Advanced IO Device Management (IODM)
[Screenshot: some of the detail you can get from ESXi with the new
IODM feature]
SSD Monitoring
•VMware provides a default plugin for monitoring certain SSD attributes in
vSphere 5.1:
• Media Wearout Indicator
• Temperature
• Reallocated Sector Count
•Enables customers to query SMART details for SAS and SATA SSD.
• SMART - Self Monitoring, Analysis And Reporting Technology
• A monitoring system for hard disk drives
• Works on non-SSD drives too
•VMware provides a mechanism for other SSD vendors to provide their
own plugins for monitoring additional statistics.
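For example, SMART details can be pulled per device from the ESXi shell (the naa. identifier below is a placeholder):

```shell
# Identify devices first; the listing flags which devices are SSDs
esxcli storage core device list
# Query SMART attributes for one device: media wearout indicator,
# temperature, reallocated sector count, etc.
esxcli storage core device smart get -d naa.500253825000a91c
```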
Space Efficient Sparse Virtual Disks (1 of 2)
•A new Space Efficient Sparse Virtual Disk aims to address certain
limitations with Virtual Disks.
1. A variable block allocation unit size.
• Currently, linked clones have a 512-byte block allocation size, which
leads to alignment and partial-write issues.
• SE Sparse disks have variable block allocation sizes, tuned to suit the
applications running in the Guest OS and the storage arrays.
2. Stale/stranded data in the Guest OS filesystem/database.
• SE Sparse disks add an automated mechanism for reclaiming stranded space.
•A future release of VMware View will be required to use SE Sparse
Disks. This is the only use case defined thus far.
Space Efficient Sparse Virtual Disks (2 of 2)
1. VMware Tools scans the Guest OS filesystem for unused space.
2. Initiate Wipe: the VMkernel is informed about the unused blocks via
SCSI UNMAP, through the ESXi vSCSI layer.
3. The vSCSI layer reorganises the SE Sparse disk to create contiguous
free space at the end of the disk.
4. Initiate Shrink: a SCSI UNMAP command is issued and the blocks are
reclaimed on the array.
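The wipe/shrink flow can be modelled in miniature. This is a toy model only, simplified well beyond the actual on-disk format:

```python
# Toy model of the SE Sparse wipe/shrink flow: the wipe identifies
# grains the guest no longer uses, the vSCSI layer compacts the live
# grains toward the start of the disk, and the shrink releases the
# contiguous free tail back to the array via SCSI UNMAP.
def wipe_and_shrink(grains, in_use):
    """grains: list of grain payloads in disk order; in_use: set of
    grain indices the guest filesystem still references (as reported
    by the VMware Tools filesystem scan)."""
    # Wipe + reorganise: keep only live grains, now contiguous at the
    # front of the disk.
    live = [g for i, g in enumerate(grains) if i in in_use]
    # Shrink: the tail of the disk is reclaimed on the array.
    reclaimed = len(grains) - len(live)
    return live, reclaimed

disk = ["a", "b", "c", "d", "e"]
live, freed = wipe_and_shrink(disk, {0, 2, 4})
print(live, freed)  # ['a', 'c', 'e'] 2
```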
Introducing Virtual Flash
Flash Infrastructure
• Integrate solid state storage into
the vSphere storage stack
•Permitting flash storage
consumers to reserve, access,
and use flash storage in a
flexible manner
•A mechanism to insert 3rd party
flash services into vSphere stack
Cache software
•VM-transparent - sharing a pool
of flash resources based on
reservations, shares and limits.
•VM-aware – a dedicated chunk
of cache is assigned to the VM.
[Diagram: flash as a new tier in vSphere – cache software layered on
top of the Flash Infrastructure]
Caching Modes
[Diagram: three caching modes on the Flash Infrastructure layer – a
virtual machine without local flash cache; a virtual machine with
VM-transparent flash cache (the cache software runs below the VM); and
a virtual machine with VM-aware flash cache, where the cache is
presented as a block device to the VM]
Per VM Data Services on storage systems
•Challenge: a granularity mismatch between vSphere and storage systems.
• Data management on the storage array is at LUN or Volume granularity.
• Data management on vSphere is at the VMDK level.
•Goals:
• Provide customers the option to use per-VM data operations on the storage array.
• Build a framework to offload per-VM data operations to the storage array.
Introducing Virtual Volumes...
•A Virtual Volume (VVOL) is a VMDK (or its derivative – a clone, snapshot
or replica) stored natively inside a storage array.
•The storage array is now involved in the VM lifecycle by virtue of
managing VM storage natively.
• Application/VM requirements can now be conveyed to storage system
• Policies set at Virtual Volume granularity
How do vSphere hosts access these VMDK objects?
Is this model scalable?
Scalable Connectivity for Virtual Volumes
•A Protocol Endpoint (PE) is an IO
channel from the host to the
entire storage system.
• A PE is a SCSI LUN or an NFS mount point, but holds no data itself.
• VMDKs are not visible on the network.
• The VM admin configures multipathing, path policies, etc., once per PE.
[Diagram: a traditional storage system receives I/Os to each LUN or
volume; a VVOL-enabled storage system receives I/Os through a single
Protocol Endpoint (PE). Remaining questions: what about capacity
management, access control and storage capabilities?]
Capacity Management for Virtual Volumes
•Storage Container is a logical
entity which describes:
• How much physical space can be
allocated for VMDKs
• Access Control
• A set of data services offered on
any part of that storage space
• The storage container can span
the entire data center.
• It is created and managed by the
storage administrator, and used by
the vSphere administrator to store
VMs.
[Diagram: a Storage Container inside a VVOL-enabled storage system,
reached through a PE; it manages capacity and access control on the
storage system, and defines storage capabilities (snapshot, clone,
replication, etc.)]
Distributed Storage Technology is…
•Many things
• A new VMware developed Storage Solution
• A Storage Solution that is fully integrated with vSphere
• A platform for Policy Based Storage to simplify Virtual Machine deployments decisions
• A Highly Available Clustered Storage Solution
• A Scale-Out Storage System
• A Quality of Service implementation (for its storage objects)
Distributed Storage Hardware Requirements Summary
•Server on the vSphere HCL, with at least 1 of each:
• SAS/SATA RAID Controller (with “passthru” or “HBA” mode)
• SAS/SATA SSD
• SAS/SATA HDD
• 10G NIC (recommended)
• Not every node in a Distributed Storage cluster needs to bear storage
• The expected overhead of the Distributed Storage s/w itself is ~10%
Distributed Storage Design Principles
• Distributed Storage aggregates
locally attached storage on each
ESXi host in the cluster.
• The storage is a combination of
SSD & spinning disks.
• Datastores consist of multiple
storage components distributed
across the ESXi hosts in the
cluster.
• Storage Policy Profiles are built
with certain desired capabilities
(Availability, Reliability, &
Performance)
• The VMDK is then instantiated
through the policy profile settings
(based on VM requirements).
[Diagram: a Distributed Storage cluster of ESXi hosts presenting a
single datastore; a virtual machine's virtual disk is instantiated as
a RAIN-1 pair of replicas (replica-1, replica-2) spread across hosts]
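Instantiating a VMDK from a policy profile can be sketched as follows. This is a minimal illustration with an invented policy shape, not the actual placement engine; host names and capacities are made up:

```python
# Sketch of policy-driven replica placement: to tolerate N host
# failures, a RAIN-1 style layout needs N+1 replicas, each on a
# different host with enough free capacity.
def place_replicas(host_free_gb, vmdk_gb, failures_to_tolerate=1):
    """host_free_gb: mapping of host name -> free capacity in GB.
    Returns the hosts chosen to hold the replicas."""
    replicas = failures_to_tolerate + 1
    candidates = [h for h, free in host_free_gb.items() if free >= vmdk_gb]
    if len(candidates) < replicas:
        raise ValueError("not enough hosts satisfy the policy")
    # Simple heuristic: spread replicas over the emptiest hosts.
    candidates.sort(key=lambda h: host_free_gb[h], reverse=True)
    return candidates[:replicas]

cluster = {"esx1": 500, "esx2": 120, "esx3": 800, "esx4": 40}
print(place_replicas(cluster, 100))  # ['esx3', 'esx1']
```

Raising `failures_to_tolerate` in the policy simply asks for more replicas, which is why not every node needs to bear storage as long as enough hosts can satisfy the profile.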
Distributed Storage Datastore
•The object is laid out across the cluster based on the storage policy of the
VM and the optimization goals.
•The replica may end up on any host and any storage.
[Diagram: the Distributed Storage Datastore spans the SSDs and hard
disks of all hosts in the Distributed Storage cluster; Replica 1 and
Replica 2 of a virtual disk land on different hosts]
Conclusion
•vSphere 5.1 has many new compelling storage features.
• VMFS Scalability and a new consistency checking tool
• VAAI Enhancements for View & vCloud Director
• vCloud Director interoperability with Storage DRS & Profile Driven Storage
• Storage I/O Control, Storage DRS & Storage vMotion enhancements
• Additional protocol features (FC, FCoE & iSCSI)
• More visibility into low level storage behaviours with IODM & SSD Monitoring
• A new Space-Efficient Sparse Virtual Disk with granular block allocation size and space reclaim mechanism.
•VMware has many additional storage initiatives underway to provide even greater integration with the underlying hardware.
Questions?
http://CormacHogan.com
http://blogs.vmware.com/vSphere/Storage
@VMwareStorage