
Reduxio Solution for

VMware vSphere 5.x/6.x

For more information, refer to the Reduxio website at http://www.reduxio.com. If you have comments about this documentation, submit your feedback to [email protected].

Revisions:
October 28, 2015 - Initial version.
January 27, 2016 - Added network configurations, vDisk recovery.
May 10, 2016 - Correction to recommended path selection policy.
July 22, 2016 - Added required iSCSI parameters.
March 23, 2017 - General updates, added claim rules, updated networking best practices.
July 5, 2017 - Added RDM.

© 2017 Reduxio Systems Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of Reduxio. Reduxio™, the Reduxio logo, NoDup®, BackDating™, Tier-X™, StorSense™, NoRestore™ and NoMigrate™ are trademarks or registered trademarks of Reduxio in the United States and/or other countries. Linux is a registered trademark of Linus Torvalds. Windows is a registered trademark of Microsoft Corporation. UNIX is a registered trademark of The Open Group. ESX and VMWare are registered trademarks of VMWare, Inc. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. The Reduxio system hardware, software, user interface and/or information contained herein is Reduxio Systems Inc. proprietary and confidential. Any and all rights including all intellectual property rights associated therewith are reserved and shall remain with Reduxio Systems Inc. Rights to use, if any, shall be subject to the acceptance of the End User License Agreement provided with the system. Information in this document is subject to change without notice. Reduxio Systems, Inc. 111 Pine Avenue South San Francisco, CA, 94080 United States www.reduxio.com

Contents

Preface
    Intended Audience
    Technical Support
Overview
    Introduction
    The Business Challenge
    Reduxio Solution for VMware vSphere
Solution Architecture
    Overview
    Supported Configurations
    Hardware Configuration
    Software Configuration
Configuration
    Storage Provisioning
    iSCSI Adapter Installation
    Network Configurations
    iSCSI Port Binding
    Target Discovery
    CHAP Authentication
    iSCSI Parameters
    Multipathing
    Maximum no. of Paths
    Raw Device Mapping (RDM)
    Improving Cloning Performance
    Claim Rules
    Expanding Capacity
BackDating™
    Overview
    Concepts
    Recovering Datastores
    Recovering VMs
    Recovering Disks
Integration of Reduxio and vSphere
    vSphere Features
    Reduxio Features
Conclusion
References

Preface Reduxio Solution for VMware vSphere 5.x/6.x describes the Reduxio solutions available for VMware vSphere virtualization customers. It provides instructions for the installation, configuration and management of Reduxio HX Series storage systems in a VMware vSphere 5.x/6.x environment.

Intended Audience Reduxio Solution for VMware vSphere 5.x/6.x is intended for anyone who needs to configure Reduxio storage systems for hosting a VMware vSphere solution.

This information is written for experienced system and storage administrators.

Technical Support For additional support, refer to https://support.reduxio.com.

Overview

Introduction Implementing a virtual infrastructure in an organization raises various business challenges. Selecting a storage infrastructure that is cost-effective but at the same time provides high performance and high availability is key to a successful deployment.

The Business Challenge The abundance of virtualized environments is evidence of their many benefits. Customers are finding that virtualization improves the availability, manageability and agility of their IT systems. However, what about performance, scalability and data protection? Virtualization, which once served only a portion of the environment, is now the core platform for nearly all IT environments – anywhere from the virtual desktop to the data center's most mission-critical applications. This market shift towards massive consolidation has created new challenges for storage administrators:

• Performance scalability – Consolidating many applications onto a single system creates a storage bottleneck. The aggregated I/Os from all hypervisors are sent to the storage concurrently, in a highly random workload. For example, a specific application server produces a thousand IOPS. Virtualizing 200 servers with this load requires a storage system that can handle 200,000 IOPS.

• Storage efficiency – A major benefit of virtualization is that it simplifies the introduction of servers and applications. This creates a high demand for capacity required to store the set of operating systems and applications. For example, a typical application server with a set of installed applications would typically require 50 gigabytes or more. Virtualizing just 200 servers at this capacity would already require 10 terabytes stored on high performance media. A typical enterprise nowadays implements hundreds to thousands of virtual machines, amounting to tens of terabytes required just to host the virtual machines.

• VM availability – A consolidated server environment is only as available as its VM storage layer. Customers expect 100% availability for their virtual infrastructure. This requires high availability not only at the service level, but also at the data level – VM datastores must remain accessible even after major hardware and software component failures. This challenges existing storage architectures, which provide only infrequent protection for datastores.

Reduxio Solution for VMware vSphere

Overview Reduxio Storage is the most intuitive solution for virtualization infrastructure based on VMware vSphere. The solution consists of the following components:

• Reduxio Storage System – The Reduxio flash hybrid storage arrays, based on Reduxio's TimeOS™ storage operating system, allow you to recover application data to any second in the past, eliminate most of the complexity associated with managing storage, and provide exceptional performance and efficiency, far exceeding anything available today. They deliver considerably higher performance and more effective capacity for VMs than a traditional SAN, and provide VM and datastore recoverability to any second in the past without upfront administration.

• vStorage APIs for Array Integration (VAAI) Support – Reduxio storage supports the VAAI primitives, and provides rapid VM provisioning, cloning and migration. Refer to vStorage APIs for Array Integration (VAAI) for more information.

• Reduxio Storage Manager for VMware vSphere (RSMV) – a Reduxio-supplied vSphere Web Client plugin for vCenter management and monitoring. Using RSMV, it is possible to create VMware datastores on Reduxio volumes right from the vSphere Web Client, monitor capacity savings and performance, and recover datastores from any point in time in their history using Reduxio's BackDating™ feature.

Use Cases There are various use cases for Reduxio Storage in VMware vSphere environments:

Server Virtualization A highly efficient, high performance storage platform for vSphere server virtualization environments. Virtual machine operating system, application and user data are all stored on a Reduxio storage system.

Desktop Virtualization A highly efficient, high performance storage platform for VMware Horizon View desktop virtualization environments. Virtual desktop operating system, application and user data are all stored on a Reduxio storage system.

Business Continuity for Virtualization

A highly efficient, high performance storage platform for recovery sites.

Solution Benefits Reduxio Storage Systems provide many compelling benefits for VMware vSphere customers:

Support 5x more VMs, with ultimate storage efficiency and high performance combined Reduxio offers unparalleled storage efficiency and density. Reduxio's unique NoDup® engine stores duplicate data only once, in a compressed format. This data reduction is performed before the host writes are stored in cache. To the NoDup engine, virtual servers are no more than a set of many duplicate blocks.

Operating system files, applications and duplicate user data are noduped into a single unique, compressed set of blocks. For example, the same 200 Windows servers previously mentioned would be stored in no more than 50 gigabytes. Since the Reduxio system provides 256 gigabytes of DRAM cache, host reads from an entire set of VMs, for example during a boot storm, are served from system memory. In addition, Reduxio is a flash-first system, effectively operating as if it were an all-flash array.

Enhanced for VMs, supports general-purpose workloads

Reduxio’s architecture is fully optimized for VMware vSphere, offering full integration of VMware VAAI for rapid cloning and datastore zeroing, and a vCenter plug-in for seamless storage management from within vCenter. However, unlike niche solutions tailored for VMware server virtualization or VDI, the Reduxio solution provides support for other applications, virtualized and non-virtualized, such as Microsoft Exchange, SQL Server, Oracle, Windows and Linux file servers and more.

Total high availability Reduxio storage solutions offer unparalleled total end-to-end high availability.

• High availability controllers – The system comes with dual controllers and dual power supplies.

• N+2 Protection for SSD & HDD tiers – For ultimate protection against drive failures.

• High availability networking – Support for native VMware multipathing.

Efficient Business Continuity

Combined with vSphere VM Replication or other 3rd-party solutions, Reduxio storage solutions offer highly efficient storage for secondary sites. Replicated VM capacity is deduped and compressed as it is written to the secondary site, dramatically reducing the required physical capacity. During a site failover, the system provides high performance, unlike traditional implementations where performance is degraded during a disaster.

Meeting Customer Challenges Table 1 - Customer Challenges in Virtualized Infrastructures lists common challenges in virtualized infrastructure environments, and how Reduxio storage solutions resolve them.

Table 1 - Customer Challenges in Virtualized Infrastructures

The Challenge The Solution

Capacity for storing VMs keeps growing rapidly

Typically, an environment containing hundreds of VMs with the same operating system and applications would require hundreds of copies of the same VM disks.

Extensive in-line data reduction reduces capacity by storing only unique blocks and compressing them further, effectively storing only a single copy of each operating system and application, and drastically reducing the storage requirements of a large virtual machine environment.

Since both deduplication and compression are used, the observed savings are tremendous – anywhere from 2x-25x more capacity than comparable systems.

For example, a single Windows 2012 Server VM template consumes 30 gigabytes.

Storing a hundred VMs in a traditional storage system:

100x Windows 2012 Server = 3,000 GB

In Reduxio:

100x Windows 2012 Server = 30 GB

The marginal cost for cloning more VMs with similar operating systems and applications is dramatically lower than traditional storage systems.

Cloning VMs is complex and time consuming

Reduxio's support for VMware vStorage APIs for Array Integration (VAAI) provides amazingly fast cloning, performed natively using vCenter standard cloning procedures or PowerCLI scripting, with the ability to fully customize the cloned VMs.

Booting a virtualized infrastructure impacts the performance of other VMs and applications

Operating system data is stored in deduped and compressed format. Frequently accessed operating system files are stored in the SSD tier and cached in the Reduxio system’s RAM.

Therefore, most of the data blocks read during a boot cycle of an entire population of virtual machines are effectively served from either the Reduxio system cache or the SSD tier, speeding up the boot time while reducing the impact on other applications.

Reorganizing the physical structure of a virtual environment is a tedious and time-consuming task

As storage pools run out of capacity or performance, storage administrators are finding themselves constantly migrating data from one pool to another.

With Reduxio, the location of virtual machine files in the various datastores is of no importance. The configuration of VMs and datastores does not impact capacity or performance, since all writes sent to a Reduxio system are acknowledged in cache and immediately stored in the SSD tier (if found unique), and most application reads are served from the SSD tier as well. This in turn means that reorganizing the physical structure of VMs, datastores and Reduxio volumes serves no purpose in optimizing either performance or capacity.

Even in cases where the administrator is interested in such a reorganization, it is both fast and simple to perform migrations of virtual machines between hosts and datastores using vSphere Storage vMotion. During migrations occurring between datastores in the same Reduxio system, no data is physically copied.

Solution Architecture

Overview The Reduxio HX Series storage systems are configured with one or more high capacity vSphere datastores. Virtual machine files are globally "NoDuped" – deduped and compressed, expanding the total usable capacity of the system. Operating system and application binaries are stored once and kept in memory. This leaves ample room – almost the entire system capacity – available for application and user data.

The solution can be managed using:

Reduxio StorApp for VMware vSphere (RSVV)

Storage management is performed centrally from RSVV, which is accessible from the vSphere Web Client:

• Datastore configuration.
• BackDating – Datastore recovery to any second.
• System monitoring – overall status, savings ratio and performance statistics.

RSVV is the recommended choice, as fewer steps are required to perform common tasks.

Native tools Storage management is performed using native tools:

• Reduxio system - Reduxio Storage Manager (RSM) or ReduxioCLI.

• ESXi host – vSphere Web Client, vSphere Client, PowerCLI etc.

This document describes the native management tools. Refer to the RSMV Installation and Administration Guide for more information on management using RSMV.

Supported Configurations Reduxio supports various storage configurations for VMware vSphere environments:

Hypervisor Servers running VMware ESXi v5.5, v6.0 and v6.5.

Management Software Reduxio StorApp for VMware vSphere supports vSphere Web Client v5.5 and higher running on Windows Server 2012 R2.

Storage VMFS3, VMFS5 and VMFS6 datastores.

Datastores are globally noduped and compressed, expanding the total usable capacity of the system.

Hardware Configuration The following hardware components were used in this reference architecture:

No. | Hardware component | Description
1 | Reduxio HX550 dual-controller, 256GB RAM, 8x 800GB MLC SSDs, 16x 2TB 7.2k RPM SATA disks | Reduxio HX550 storage system
2 | Dell R620, dual-socket Intel Xeon E5-2650 v2 @ 2.60GHz (total 32 virtual cores), 256GB RAM | VMware ESXi servers
1 | Mellanox 10GbE switch | Interconnect between ESXi hosts and Reduxio HX550
1 | Lenovo ThinkPad X1 Carbon | Administration host
1 | Apple iPad (2nd generation) | Mobile administration

Software Configuration The following software components were used in this reference architecture:

No. | Software component | Description
3 | VMware vSphere 5.5, 6.0 and 6.5 | ESXi hypervisors – 2 for virtual servers, 1 for infrastructure and management VMs
1 | VMware VM Replication 5.5.1 | VMware's replication virtual appliance
1 | Reduxio StorApp for VMware vSphere v1.1 | Reduxio plug-in for VMware vSphere Web Client integrated storage management

Configuration The following section describes how to set up a Reduxio solution for VMware using Reduxio Storage Manager and VMware vSphere Web Client. Refer to the Reduxio StorApp for VMware vSphere (RSVV) Administration Guide for setup instructions using RSVV.

To set up ESXi with Reduxio, the following steps are required:

1. Provision storage on Reduxio and assign it to the ESXi hosts.
2. Install the iSCSI Software Adapter on the ESXi hosts.
3. Configure ESXi networking in a standard or distributed switch configuration.
4. Configure iSCSI port binding in vSphere.
5. Discover the Reduxio targets.
6. Configure vSphere multipathing.

Storage Provisioning First, configure host groups, hosts and volumes in the Reduxio system itself:

1. Create a host for each ESXi server.

To create a host using Reduxio Storage Manager:

1. Click the HOSTS & VOLUMES icon in the icon bar. Hosts and host groups are listed together on the left side, and volumes are listed on the right side.

2. Create a new host for each ESXi host. Click the NEW HOST button to open the new host dialog box.

3. Enter the following information:
NAME – Enter a meaningful name, e.g. sfo_esx1.
ISCSI NAME – Copy/paste the IQN obtained previously.
BLOCK SIZE – Select 512.
DESCRIPTION – Enter a description, e.g. "ESXi 5.5 Rack 3".

VMware ESXi v5.5/6.0/6.5 supports SCSI devices with 512-byte sectors. For proper system behavior, configure the volumes used for ESXi with a 512-byte block size.

4. Click OK to create the host. Repeat this for all the ESXi servers.

To create the hosts using ReduxioCLI:

rdxadmin@reduxio:/ ➜ # hosts create esx1 --iscsi-name iqn.1998-01.com.vmware:esx1-411749c5 --description "ESXi 5.5 Rack 3"

2. Create a host group and add all ESXi hosts.

To create a host group using Reduxio Storage Manager:

1. Click the HOSTS & VOLUMES icon in the icon bar to open the hosts and volumes screen.

2. Click the NEW GROUP button. The new host group dialog box will open up.

3. Enter a host group name in the NAME field.
4. Click OK to create the host group.
5. Within the host group panel, select the HOSTS tab.
6. Drag and drop each host onto the drop zone below the tab buttons. Repeat this for each host created in step 1.

To create a host group using ReduxioCLI:

rdxadmin@reduxio:/ ➜ # hostgroups create esx_clus1 --description "ESXi Cluster1 Rack 3"

To add the hosts to the host group using ReduxioCLI:

rdxadmin@reduxio:/ ➜ # hostgroups add-host esx_clus1 --host esx1

rdxadmin@reduxio:/ ➜ # hostgroups add-host esx_clus1 --host esx2

3. Create a volume for the datastore.

To create a volume used for the datastore using Reduxio Storage Manager:

1. Select HOSTS & VOLUMES.

2. Create a new volume for the datastore:
NAME – Enter a meaningful name, e.g. reduxstor1.
SIZE – The volume size, in either gigabytes or terabytes.
BLOCK SIZE – Select 512 – ESXi expects devices with 512-byte sectors.

To create a volume used for the datastore using ReduxioCLI:

rdxadmin@reduxio:/ ➜ # volumes create reduxstor1 --size 1024 --blocksize 512

4. Assign the volume to the ESXi cluster.

To assign the volume to the ESXi cluster host group using Reduxio Storage Manager:

1. Select the host group created in step 2.

2. Within the host group panel, select the VOLUMES tab.

Drag and drop the volume created in step 3 onto the drop zone below the tab buttons.

5. Identify Reduxio iSCSI IP addresses.

To locate the data interface IP addresses using Reduxio Storage Manager:

1. Select SETTINGS.
2. Select NETWORK CONFIGURATION.
3. Four iSCSI IP addresses are listed in the CONTROLLER 1 & 2, PORT 1 & 2 fields.

iSCSI Adapter Installation Install the ESXi iSCSI Software Adapter in each ESXi host:

1. Add iSCSI Software Adapter to each ESXi server.

To initially configure iSCSI Software Adapter on a VMware vSphere 5.5 host or higher:

1. In vSphere Web Client, select the ESXi host.

2. Click on Manage > Storage.

Tip: How to choose a volume size?

All the volumes in the Reduxio storage solution are thin provisioned, and stored in highly compacted form on physical media.

Because of the high savings typically observed when using the Reduxio storage solution in VMware environments, the physical capacity consumed by datastores is much lower than the total space of the datastores from the vSphere point of view. For example, to store a hundred (100) VMs with 30GB each – total ~3TB – typically only a third or less will be required, e.g. ~1TB.

It is therefore recommended to create relatively large volumes for datastore storage to provide room to grow without the need to expand capacity. This best practice is compelling since there is no additional capacity consumed by creating large datastores upfront.

Some examples are provided as a reference:

To store 100 VMs x 30GB (3TB) create a 10TB datastore.

To store 500 VMs x 30GB (15TB) create a 64TB datastore (max size in ESXi 5.x).

3. Click on the + icon.

4. Select Software iSCSI adapter.

5. A dialog pops up. Click OK to confirm.

6. The Software iSCSI adapter vmhbaXX is created.

7. The adapter iSCSI IQN is displayed. Select and copy (CTRL-C) the IQN. In the example shown: iqn.1998-01.com.vmware:dhcp-172-17-41-155-411749c5

2. VMware Documentation To set up iSCSI adapters and storage in VMware vSphere 5.5, refer to Configuring iSCSI Adapters and Storage.

To set up iSCSI adapters and storage in VMware vSphere 6.0, refer to Configuring iSCSI Adapters and Storage.

To set up iSCSI adapters and storage in VMware vSphere 6.5, refer to Configuring iSCSI Adapters and Storage.
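For hosts managed from the command line, the software iSCSI adapter can also be enabled and its IQN retrieved from the ESXi shell. This is a minimal sketch only; the adapter name varies per host and should be verified in your environment:

~ # esxcli iscsi software set --enabled=true
~ # esxcli iscsi software get
~ # esxcli iscsi adapter list

The UID column of the adapter list output contains the software initiator's IQN, matching the value shown in the vSphere Web Client in step 7 above.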

Network Configurations The following configurations are recommended by Reduxio:

Virtual Standard switches (vSS)

This configuration is the most straightforward and simple to set up, but nevertheless provides high availability:

• Single physical adapter per ESXi host – A single path to each Reduxio controller. A single link failure will trigger a vSphere HA failover.

• Dual physical adapters per ESXi host – Redundant paths to each controller. A single link failure will be covered by failing over to the remaining link.

In this configuration, each physical adapter is configured with its own VMkernel and vSwitch.

Virtual Distributed switches (vDS)

This configuration provides consolidated networking management across many ESXi hosts.

• Single physical adapter per ESXi host – A single path to each Reduxio controller. A single link failure will trigger a vSphere HA failover.

• Dual physical adapters per ESXi host – Redundant paths to each controller. A single link failure will be covered by failing over to the remaining link.

In this configuration, a distributed switch functions as a single virtual switch across all associated hosts. A distributed switch allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts.

Standard Switches Recommended Configuration A virtual switch models a physical Ethernet switch. When two or more virtual machines are connected to the same virtual switch, network traffic between them is routed locally. If an uplink adapter is attached to the virtual switch, each virtual machine can access the external network that the adapter is connected to. This section discusses the configuration of a Reduxio system in a standard switch environment. The recommended configuration with Reduxio storage is to create two standard switches: one for management traffic and another for iSCSI traffic. vSwitch0 contains two virtual port groups: one Virtual Machine Port Group and one VMkernel Port Group. vSwitch1 contains three virtual port groups: one Virtual Machine Port Group and two VMkernel Port Groups.

Figure 1. Minimum Standard Switches Configuration per ESXi Server

Figure 2. Example of Standard Switches Configuration in a Virtual Environment with Reduxio
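As a reference, the building blocks of the configuration shown in Figure 1 can also be created from the ESXi shell. The following sketch is illustrative only; the switch, port group, uplink, VMkernel and IP values (vSwitch1, iSCSI-1, vmnic1, vmk1, 10.0.1.11) are placeholder assumptions that must be adapted to your environment:

~ # esxcli network vswitch standard add --vswitch-name=vSwitch1
~ # esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
~ # esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-1
~ # esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
~ # esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=10.0.1.11 --netmask=255.255.255.0

Repeat the port group and VMkernel commands for the second iSCSI path (e.g. iSCSI-2 on vmk2) when using dual physical adapters.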

Configure iSCSI with Standard Switches To configure a standard switch and iSCSI with two ports per host:

1. Add Reduxio Data interface IPs as a target.

To configure the Reduxio iSCSI IP addresses as new targets in ESXi using vSphere Web Client:

1. Select the ESXi host.
2. Select Manage > Storage > Storage Adapters.
3. In the Storage Adapters panel, select the iSCSI Software Adapter.
4. Under Adapter Details, select the Targets tab.
5. Select Dynamic Discovery.
6. Click Add…
7. Add a Reduxio iSCSI IP address or a hostname that resolves to it and click OK. Any of the iSCSI ports that are up can be used. A command-line alternative for target discovery and rescanning is sketched after this procedure.

2. Rescan devices. To rescan the devices using vSphere Web Client:

1. Select the ESXi host.
2. Select Manage > Storage > Storage Adapters.
3. Click the Rescan all storage adapters icon ( ).
4. Repeat steps 1 to 3 for each ESXi host in the cluster.

3. Create a new datastore.

To create a new datastore using vSphere Web Client:

1. Click the New Datastore icon.
2. Select the ESXi cluster or host that will access this datastore.
3. Click Next.
4. Click Next to confirm a VMFS datastore.
5. Enter a name in the Datastore name field.
6. Select the LUN in each host.
7. Click Next.
8. Click Next to confirm a VMFS5 datastore.
9. Click Next to confirm the use of the entire LUN space.
10. Review the settings and click Finish to create the datastore.

4. Creating VMkernel Port Groups for iSCSI

1. Create a virtual VMkernel adapter for each physical network adapter that will be used for iSCSI.

2. Bind iSCSI and VMkernel Adapters (Refer to iSCSI Port Binding in VMware vSphere documentation).
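The target discovery and rescan steps above can also be performed from the ESXi shell. A minimal sketch, assuming vmhba37 is the software iSCSI adapter and 10.0.1.100 stands in for one of the Reduxio iSCSI IP addresses:

~ # esxcli iscsi adapter discovery sendtarget add -A vmhba37 -a 10.0.1.100:3260
~ # esxcli iscsi adapter discovery sendtarget list
~ # esxcli storage core adapter rescan -A vmhba37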

Distributed Switches Recommended Configuration

Overview A distributed switch functions as a single virtual switch across all associated hosts. A distributed switch allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts. A distributed switch can forward traffic internally between virtual machines or link to an external network by connecting to uplink adapters. Each distributed switch can have one or more distributed port groups assigned to it. Distributed port groups group multiple ports under a common configuration and provide a stable anchor point for virtual machines that are connecting to labeled networks. Each distributed port group is identified by a network label, which is unique to the current datacenter.

This section discusses working in a distributed switch environment. The recommended configuration using Reduxio storage is creating two Distributed Port Groups and two Uplinks. The Distributed Port Groups are only for the iSCSI traffic. Standard switches can still be used for the management traffic.

Figure 3. Example of Distributed Switches Configuration

Best Practices The recommended configuration is dual physical adapters per host. Each ESXi host is configured with two physical ports used for iSCSI traffic. This provides redundancy within the ESXi host, such that a single link failure will be covered by automatically failing over to a path from the remaining port to the active port on the Reduxio system.

Install and Configure a DVSwitch

1. Creating a VMware Distributed Switch

To install and configure a distributed switch on a VMware vSphere 5.5 host or higher:

1. Connect to the vSphere Web Client.
2. Create standard switches according to Figure 1.
3. Create a new Distributed Switch: Home > Networking > right-click the Datacenter name > Distributed Switch > New Distributed Switch.

4. Enter a name for the Distributed Switch > Next.

5. Select your Distributed Switch Version > Next.

6. Select 2 uplinks and clear the Create a default port group checkbox > Next.

7. Review the changes > Finish.

8. Right-click the Distributed Switch > Settings > Edit Settings.

9. Go to the Advanced tab and change the MTU to 9000 > OK. (An MTU of 9000 assumes jumbo frames are enabled end-to-end on the physical network.)

2. Creating Two Distributed Port Groups

1. Right-click the Distributed Switch > Distributed Port Group > New Distributed Port Group.

2. Enter a name for the new Distributed Port Group > Next.

3. Change the number of ports according to the number of your existing ESXi servers in your environment (in our example we have 3 ESXi servers) and check the Advanced option “Customize default policies configuration” > Next.

4. Go to Teaming and Failover tab.

5. Go to Teaming and Failover tab > Failover order > Move Uplink 2 to “Unused uplinks”.

6. Review the changes > Finish.

7. Right-click the Distributed Switch > Distributed Port Group > New Distributed Port Group.

8. Enter a name for the new Distributed Port Group > Next.

9. Change the number of ports according to the number of your existing ESXi servers in your environment (in our example we have 3 ESXi servers) and check the Advanced option “Customize default policies configuration” > Next.

10. Go to Teaming and Failover tab.

11. Go to Teaming and Failover tab > Failover order > Move Uplink 1 to “Unused uplinks”.

12. Review the changes > Finish.

3. Migrating Physical Network Adapters (for iSCSI) to DVswitch

1. Right-click the Distributed Switch > Add and Manage Hosts.

2. Select Add hosts > Next.

3. Go to +Add hosts > Next.

4. Select your ESXi servers > OK.

5. Click Next.

6. Select “Manage physical adapters” and “Manage VMkernel Adapters” > Next.

7. Go to the first ESXi Server, choose the first vmnic which is connected to vSwitch1 > Assign Uplink.

8. Choose Uplink 1 > OK.

9. Choose the second vmnic which is connected to vSwitch1 > Assign Uplink.

10. Choose Uplink 2 > OK.

11. Go to the next ESXi Server, choose the first vmnic which is connected to vSwitch1 > Assign Uplink.

12. Choose Uplink 1 > OK.

13. Choose the second vmnic which is connected to vSwitch1 > Assign Uplink.

14. Choose Uplink 2 > OK.

15. You will be able to see that both vmnics which are assigned to vSwitch1 are assigned to Uplink 1 and Uplink 2.

16. Repeat steps 11-15 for all other ESXi servers in the same cluster. After assigning all your vSwitch1 vmnics to uplink 1 and uplink 2 > Next.

4. Migrating VMkernel Network Adapters (for iSCSI) to DVswitch

1. Go to the first ESXi Server, choose the first VMkernel which is connected to iSCSI Lab > Assign Port Group.

2. Choose the first Port Group > OK.

3. Choose the VMkernel which is connected to iSCSI Lab 2 > Assign Port Group.

4. Choose the second Port Group > OK.

5. Go to the next ESXi Server, choose the VMkernel which is connected to iSCSI Lab > Assign Port Group.

6. Choose the first port group > OK.

7. Choose the VMkernel which is connected to iSCSI Lab 2 > Assign Port Group.

8. Choose the second port group > OK.

9. Repeat steps 5-8 for all other ESXi servers in the same cluster. After assigning all your vSwitch1 VMkernel adapters to the new Port Groups > Next.

10. Review the impact on the servers > Next.

11. Review the changes > Finish.

12. Make sure that the Tasks have finished successfully.

13. The final configuration will appear as follows:

5. iSCSI Port Binding 1. Bind iSCSI and VMkernel Adapters (refer to iSCSI Port Binding).

iSCSI Port Binding Port binding is used in VMware’s iSCSI configuration when multiple VMkernel ports for iSCSI reside in the same broadcast domain and IP subnet to allow multiple paths to an iSCSI array that broadcasts a single IP address. The required iSCSI port NICs are bound to an iSCSI adapter.

1. Configuring iSCSI Port Binding

To install and configure iSCSI Port Binding on a VMware vSphere 5.5 host or higher:

1. Go to ESXi Host > Storage > Storage Adapters > iSCSI Software Adapter > Edit.

2. Select your iSCSI Port Groups > OK.

3. Validate that VMkernel Adapters are bound to the Port Group.

4. Perform a Rescan.

5. Keep the default > OK.

6. Validate that the Port Status is Active.
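Port binding can also be configured from the ESXi shell. A minimal sketch, assuming vmhba37 is the software iSCSI adapter and vmk1/vmk2 are the iSCSI VMkernel adapters:

~ # esxcli iscsi networkportal add -A vmhba37 -n vmk1
~ # esxcli iscsi networkportal add -A vmhba37 -n vmk2
~ # esxcli iscsi networkportal list -A vmhba37

The list output should show both VMkernel adapters bound to the adapter, matching the Active port status described in step 6.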

Target Discovery There are two methods to configure a new iSCSI target in VMware:

Static Discovery Targets are pre-configured in the ESXi host, and the initiator uses the IP address, target iSCSI name and CHAP (if configured) to login to the target.

Dynamic Discovery Also called Send Targets discovery. Once a target IP address is added, the initiator sends a Send Targets request to the server – a query to get the complete list of target IP addresses.

The target responds with the list of available targets, which then appear on the Static Discovery tab.

Note: Hosts configured with CHAP cannot be configured using dynamic discovery. It is recommended to configure first without CHAP, then update the Reduxio host and the ESXi target properties.

CHAP Authentication To configure CHAP authentication, first perform all the configuration steps described in the previous sections, and then perform the following steps:

1. Configure CHAP authentication in the Reduxio system

To configure CHAP authentication using Reduxio Storage Manager:

1. Click the HOSTS & VOLUMES icon in the icon bar. Hosts and host groups are listed together on the left side, and volumes are listed on the right side.

2. Select the desired ESXi host.

3. Select the Enable CHAP checkbox.

4. Enter a CHAP name in the CHAP USER field. As a best practice, enter the ESXi iSCSI software initiator’s IQN.

5. Enter a CHAP password in the CHAP PASSWORD field. The CHAP password must be 12 to 16 characters long.

6. Click OK.

7. Repeat steps 2 to 6 for each ESXi host configured in the Reduxio system.

2. Configure CHAP authentication in ESXi hosts

To configure CHAP authentication in ESXi using vSphere Web Client, perform the following in each ESXi host:

1. Select the ESXi host.
2. Select Manage, Storage, Storage Adapters.
3. In the Storage Adapters panel, select the iSCSI Software Adapter.
4. Under Adapter Details, select the Targets tab.
5. Select the relevant target for the Reduxio system.
6. Click Authentication…
7. Unselect Inherit settings from parent.
8. Select Use unidirectional CHAP.
9. If a non-default CHAP name was configured in the Reduxio system, enter it in the Name field.
10. Enter the CHAP secret configured in the Reduxio system.
11. Click OK.
12. Repeat steps 1 to 11 for each ESXi host configured in the Reduxio system.

Figure 4 - ESXi CHAP Authentication
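For scripted deployments, unidirectional CHAP can also be configured at the adapter level from the ESXi shell. The following is a hedged sketch; the adapter name, CHAP name and secret are placeholders, and the per-target procedure above remains the reference:

~ # esxcli iscsi adapter auth chap set -A vmhba37 --direction=uni --level=required --authname=iqn.1998-01.com.vmware:esx1-411749c5 --secret=<chap-secret>
~ # esxcli iscsi adapter auth chap get -A vmhba37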

iSCSI Parameters To configure VMware vSphere hosts with Reduxio, the following settings must be updated:

• LoginTimeout – Time in seconds the initiator will wait for the Login response to finish. Must be updated to 30 seconds for proper high availability with Reduxio TimeOS™ v2.6 and higher.

• RecoveryTimeout – Time in seconds the initiator will wait before placing a path into a DEAD_STATE when the path was active, but no PDUs are now being sent or received. Must be updated to 60 seconds for proper high availability with Reduxio TimeOS™ v2.6 and higher.

To find the iSCSI software initiator adapter name, run the following from the ESXi command-line:

~ # esxcli iscsi adapter list
Adapter  Driver     State   UID                                    Description
-------  ---------  ------  -------------------------------------  ----------------------
vmhba37  iscsi_vmk  online  iqn.1998-01.com.vmware:itsko-73760733  iSCSI Software Adapter

To change the settings, run the following from the ESXi command-line on the iSCSI software adapter (typically named vmhba37):

~ # esxcli iscsi adapter param set -A vmhba37 -k LoginTimeout -v 30
~ # esxcli iscsi adapter param set -A vmhba37 -k RecoveryTimeout -v 60

To view the current settings, run the following from the ESXi command-line, using the iSCSI software adapter name (typically vmhba37):

~ # esxcli iscsi adapter param get -A vmhba37

Multipathing Reduxio storage supports the Asymmetric Logical Unit Access (ALUA) standard, providing native path failover and load balancing in vSphere.

Reduxio storage is supported by the VMware default VMkernel multipathing plug-in called the Native Multipathing Plug-In (NMP). Various path selection plug-ins (PSPs) are provided. The recommended mode is VMW_PSP_RR:

VMW_PSP_MRU The host selects the path that it used most recently. When the path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for most active-passive storage devices.

The VMW_PSP_MRU ranking capability allows you to assign ranks to individual paths. To set ranks to individual paths, use the "esxcli storage nmp psp generic pathconfig set" command. For details, see the VMware knowledge base article at http://kb.vmware.com/kb/2003468.

The policy is displayed in the client as the Most Recently Used (VMware) path selection policy.

VMW_PSP_FIXED The host uses the designated preferred path, if it has been configured. Otherwise, it selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it manually. Fixed is the default policy for most active-active storage devices.

Note:

If the host uses a default preferred path and the path's status turns to Dead, a new path is selected as preferred. However, if you explicitly designate the preferred path, it will remain preferred even when it becomes inaccessible.

Displayed in the client as the Fixed (VMware) path selection policy.

VMW_PSP_RR The host uses an automatic path selection algorithm rotating through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays. RR is the default for a number of arrays and can be used with both active-active and active-passive arrays to implement load balancing across paths for different LUNs.

To set the multipathing on all Reduxio volumes to VMW_PSP_RR create and run the following script from ESX command-line:

#!/bin/sh
for dev in `esxcli storage nmp device list | grep REDUXIO | awk '{print $7}' | sed -r 's/[()]//g'`; do
    echo "Changing the multipathing policy of $dev"
    esxcli storage nmp device set --device $dev --psp VMW_PSP_RR
done

To change the default multipathing policy for all standard ALUA devices (such as Reduxio):

~ # esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA

To configure multipathing with VMware NMP from vSphere Web Client:

1. Select path selection plug-in.

1. Browse to the host and select it.
2. Select Manage, Storage, Storage Devices.
3. Select a device (iSCSI Disk) from the storage device list.
4. In the Device Details pane, select the Properties tab.
5. The multipathing policy is listed in the Multipathing Policies section.
6. To change the policy, click the Edit Multipathing… button.
7. Select the desired policy from the drop-down menu.
8. Click OK to change the policy.

2. View and manage paths.
1. Browse to the host and select it.
2. Select Manage, Storage, Storage Devices.
3. Select a device (iSCSI Disk) from the storage device list.
4. In the Device Details pane, select the Paths tab.
5. Click on a path, then click the Enable or Disable button to change the path state.
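A quick way to verify the active policy on all Reduxio devices from the ESXi shell (a simple grep-based check; output formatting varies between ESXi versions):

~ # esxcli storage nmp device list | grep -E "REDUXIO|Path Selection Policy:"

Each Reduxio device should report Path Selection Policy: VMW_PSP_RR after the change.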

Maximum no. of Paths Reduxio HX550 supports a maximum of 64 paths. Connections beyond 64 paths are automatically rejected.

The following configurations are recommended:

• 4 paths - ESXi host with a single 10GbE port, connected to 4 ports on the HX550.
• 8 paths - ESXi host with two 10GbE ports, connected to 4 ports on the HX550.

Raw Device Mapping (RDM) To configure Reduxio volumes as RDM devices, create volumes with a block size of 512-bytes. 4K volumes cannot be used with current ESXi versions.
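For example, a volume intended for RDM use can be created with a 512-byte block size using ReduxioCLI, mirroring the earlier volume creation syntax (the volume name and size below are arbitrary examples):

rdxadmin@reduxio:/ ➜ # volumes create rdm_vol1 --size 500 --blocksize 512

After assigning the volume to the ESXi hosts and rescanning, the device can be attached to a VM as an RDM from the vSphere Web Client, or a mapping file can be created manually with vmkfstools (device and datastore paths are placeholders; -z creates a physical compatibility mapping, -r a virtual one):

~ # vmkfstools -z /vmfs/devices/disks/<naa.id> /vmfs/volumes/<datastore>/<vm>/<vm>-rdm.vmdk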

Improving Cloning Performance VAAI performance can be improved by increasing the maximum transfer size of the VAAI XCOPY primitive.

To display the current value of the maximum XCOPY transfer size in ESXi:

# esxcfg-advcfg -g /DataMover/MaxHWTransferSize

To set the recommended XCOPY transfer size in ESXi:

# esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize

Claim Rules The following claim rules control how ESXi claims Reduxio devices and set the default path selection policy for Reduxio volumes to round robin (VMW_PSP_RR):

# esxcli storage core claimrule add -r 65430 -t vendor -V REDUXIO -M TCAS -P NMP -c VAAI -a -s -m 256

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "NETAPP" -P "VMW_PSP_RR" -O "iops=1" -c tpgs_on -o "reset_on_attempted_reserve"

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "REDUXIO" —M "" P "VMW_PSP_RR" -O "iops=1" -c tpgs_on -o "reset_on_attempted_reserve"

[root@localhost:~] esxcli storage nmp satp list
Name                 Default PSP    Description
-------------------  -------------  -------------------------------------------------------
VMW_SATP_ALUA        VMW_PSP_MRU    Supports non-specific arrays that use the ALUA protocol
VMW_SATP_MSA         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AA  VMW_PSP_FIXED  Supports non-specific active/active arrays
VMW_SATP_LOCAL       VMW_PSP_FIXED  Supports direct attached devices

esxcli storage nmp satp rule add -s "VMW_SATP_RDX" -V "REDUXIO" -P "VMW_PSP_RR" -O "iops=1" -c tpgs_on -o "reset_on_attempted_reserve"

esxcli storage nmp satp generic deviceconfig get -d
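After adding the rules, a quick check that they were registered (the filtering is illustrative; rule numbers and output columns vary by ESXi version):

~ # esxcli storage core claimrule list -c VAAI
~ # esxcli storage nmp satp rule list | grep REDUXIO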

Expanding Capacity Datastores can be expanded using the Reduxio volume resize capability.

To increase the capacity of a datastore:

1. Expand the volume. To expand a volume:

1. Select the HOSTS & VOLUMES icon to open the hosts and volumes screen.
2. Select the volume to be expanded.
3. Enter a new volume size. The size must be larger than the existing volume size.
4. Click OK.
5. The volume is immediately expanded.

2. Rescan devices. To rescan the devices using the vSphere Web Client:

1. Select the ESXi host.
2. Select Manage, Storage, Storage Adapters.
3. Click the Rescan all storage adapters icon ( ).
4. Repeat steps 1 to 3 for each ESXi host in the cluster.

3. Increase the datastore capacity.

To increase the datastore capacity using the vSphere Web Client:

1. Select the datastore (from a host, select the Related Objects tab, then Datastores, then the datastore itself).

2. Click the Increase datastore capacity icon ( ).
3. Select the relevant volume. Note that the original volume size appears in this list at this point.
4. Click Next.
5. In Partition Configuration, select Use 'Free space X.00 TB' to expand the datastore.
6. Click Next to approve an expansion to the largest size possible when using the expanded volume.
7. Click Finish to perform the VMFS expansion.

BackDating™

Overview Backdating enables the administrator to revert or clone volumes to any point in time in their history. Unlike legacy storage designs, Reduxio operates as a "data recorder" and keeps track of both location – where the data is stored (LUN and offset) – and time – when the data was written (timestamp).

BackDating is the evolution of snapshots. Snapshots only provide a set of pre-defined point-in-time (PIT) copies per volume, at the cost of complex planning, scheduling and application-aware backup software. Backdating replaces all of that with a simple to use timeline. It is possible to select any point on this timeline and clone a volume based on it.

Backdating™ enables the vSphere storage administrator to recover VMs and datastores from any point in time in the past.

Concepts

History Timeline A history timeline can be thought of as a continuous timeline of seconds from the creation of a volume to the current time, as seen in Figure 5 - History Timeline.

In such a timeline, it would have been possible to keep the entire history of a volume, such that it is recoverable to every second in the past – from the current time back to the volume's creation time. However, typically as data ages, its history becomes less relevant. The volume's timeline was therefore designed with multiple granularity levels of history defined by a policy. The most recent part of history still maintains the continuous timeline of seconds; however, moving towards the past, there are portions where only minutes, hours, days and weeks are kept. Each volume is assigned to a single history policy that dictates its history.

Bookmarks Bookmarks are user-initiated labels for specific time points in a volume that have significance to the user. For example, a bookmark can be created for a certain application volume before a major change such as a service pack update, or a large data processing batch job. These bookmarks would enable the administrator to identify these times more easily, and recover from failures by cloning or reverting volumes to the bookmarked time.

History is deleted according to the history policies. However, bookmarks can be used to guarantee that certain timestamps will not be deleted. For example, use a bookmark to maintain a baseline version of a database. Database clones can be created using the bookmark, without the risk of the baseline version being deleted from the system. All the data blocks required for its timestamp remain until the bookmark itself is deleted.

Figure 5 - History Timeline

There are two types of bookmarks:

Automatic The bookmark is automatically deleted based on the history policy assigned to the volume.

For example, a volume is configured for storing a database. The default policy is configured. An automatic bookmark is then created for a database backup.

This bookmark will be automatically deleted after the maximum retention level is reached for this volume. For example, when using the Default-Apps policy, automatic bookmarks will be deleted after one year.

Manual The bookmark is kept by the system until it is manually deleted. This is especially useful for keeping important time points from being deleted accidentally. However, to avoid filling up the system, manual bookmarks should eventually be deleted.

History Policies The system maintains a certain amount of history for each volume, referred to as the retention period. The amount is defined by setting a history policy, which dictates the deletion of past blocks. Only blocks that have no future reference are deleted.

Clones Clones provide an advanced functionality that is highly beneficial to various recovery, test and development use cases.

Clones are writeable and independent

A cloned volume is 100% equivalent to the volume it is based on, at the chosen timestamp. However, the clone and the source volume are entirely independent from each other. The source volume can be deleted without affecting its clone/s. Clones are standard read/write volumes. This enables the administrator to clone a certain application, and keep running from the clone.

One-to-many A single volume can be cloned to many volumes. Note that the clones are similar to other volumes and are accounted together towards the maximum number of volumes.

For example, if the maximum number of volumes is 1,000, there can be a single volume and 999 clones.

Multi-level Cascading Cloned volumes can be recursively cloned for versioning purposes. For example, vol1 can be cloned to vol2, which can then be cloned to vol3.

Automatic Consistency Data is consistent across volumes at every point in the timeline. This provides inherent support for consistency groups. As long as a set of related volumes maintains the same timestamps (by configuring the same or a similar history policy), reverting or cloning the volumes to the same timestamps will provide a consistent view.

For example, db1 and log1 volumes are used to store a database’s data and log files respectively. When cloned or reverted to the same timestamp, the resulting volumes will be consistent to that timestamp.

Recovering Data Volumes can be reverted or cloned from any point-in-time in the history timeline of a volume:

Clone Create a new volume that is based on another volume from a selected timestamp.

For example, consider the following timeline:

Jan 19, 2015 9:34:00am - volume db1 created. Jan 19, 2015 10:05:00am - volume db1_clone created, based on db1, at timestamp 9:55:04am.

The cloned volume db1_clone will contain the contents of db1 as they were at 9:55:04am. Concurrently with volume db1_clone, volume db1 continues to function as before – including all updates it received since the moment of the cloning.

Revert Revert a volume to a selected timestamp. The volume contents will be changed in-place; any changes beyond the timestamp will be lost.

For example, consider the following timeline:

Jan 19, 2015 9:34:00am - volume db1 created. Jan 19, 2015 10:05:00am - volume db1 is reverted to timestamp 9:55:04am. All changes beyond 9:55:04am are lost.

Note: A volume must be dismounted from the host before a revert operation, as the host may contain cache information or file system metadata that was not yet committed to storage, which may conflict with the reverted volume. Once the revert is completed, the volume can be safely remounted.

Recovering Datastores

Overview Datastores can be recovered in one of the following methods:

• Cloning the datastore – Create a cloned volume from a past timestamp, and connect it to the vSphere hosts. This results in an additional datastore, allowing the administrator to manage both the original and the recovered (backdated) datastores together. The VMs from the original datastore can be removed from the inventory if necessary, and the recovered ones added as needed.

• Reverting the datastore – Disconnect the original datastore from vSphere, revert it to a past timestamp, and reconnect it. This provides a faster recovery since it requires fewer steps. However, all VMs located on the original datastore must be powered off and removed from the inventory.

Duplicate VMFS Signatures VMware vSphere identifies duplicate datastores using a signature stored in VMFS. When a backdated datastore is detected, a dialog box is displayed offering multiple options:

• When reverting the datastore – Select Keep existing signature. Prior to this, the original datastore must be unmounted to eliminate conflicts between the two datastores.

• When cloning a datastore - Select Assign a new signature so that the recovered datastore will be connected as an additional datastore.

Figure 6 - Datastore Signature

Note: If the same datastore is cloned more than once, and these clones are assigned and already visible to the ESXi hosts, both Keep existing signature and Assign new signature selections will be grayed out, and only Format the disk is available for selection. This is a known issue. Unassign the additional clones such that at most only the original datastore and a single clone are accessible to the hosts, then retry the Add Datastore operation.

For more information on duplicate signatures, refer to the Managing Duplicate VMFS Datastores section of the VMware vSphere 6.0 Documentation Center.
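If the ESXi shell is preferred, unresolved VMFS copies can also be listed and then resignatured (for a clone) or mounted with the existing signature (for a revert). The datastore label below is a placeholder; unmount the original datastore first when keeping the existing signature:

~ # esxcli storage filesystem list
~ # esxcli storage filesystem unmount -l <original-datastore-label>
~ # esxcli storage vmfs snapshot list
~ # esxcli storage vmfs snapshot resignature -l <original-datastore-label>
~ # esxcli storage vmfs snapshot mount -l <original-datastore-label>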

Recover a Datastore using Cloning To recover an entire datastore using a clone:

1. Identify the volume to clone.

To identify the relevant datastore volume to clone:

1. Select Manage, Storage, Storage Devices.
2. Select each device (iSCSI Disk) from the storage device list – the Name value starts with REDUXIO.
3. In the Device Details pane, select the Properties tab.
4. Identify the volume using its Identifier (WWID) value. The identifier can be compared with the one displayed in the Reduxio Storage Manager (click the flip-over button in the volume panel).

2. Clone the volume. To clone a volume using Reduxio Storage Manager:

1. Click the HOSTS & VOLUMES icon in the icon bar.
2. Select the volume to be cloned.
3. Click the CLONE/REVERT button.
4. Select the required timestamp to clone from. Any time from the available history can be selected. The time selection is performed by either entering the desired time and date, or clicking the time/date fields and using the keyboard arrow keys or scrolling the mouse to move the time and date forward or backward.
5. Click the CLONE button.

3. Assign the volume to the ESXi cluster.

To assign the volume to the ESXi cluster host group using Reduxio Storage Manager:

1. Select the host group for the ESXi cluster.

2. Within the host group panel, select the VOLUMES tab.

3. Drag and drop the volume clone created in step 2 onto the drop zone below the tab buttons.

4. Rescan devices.

To rescan the devices using the vSphere Web Client (a CLI alternative is sketched after this list):

1. Select the ESXi host.
2. Select Manage, Storage, Storage Adapters.
3. Click the Rescan all storage adapters icon and click OK.
4. Repeat steps 1 to 3 for each ESXi host in the cluster.
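Alternatively, assuming ESXi Shell or SSH access is enabled on the hosts, the rescan can be triggered from the command line. This is a generic ESXi command rather than a Reduxio-specific step:

# Rescan all storage adapters on this host; repeat on every ESXi host in the cluster.
esxcli storage core adapter rescan --all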

5. Connect to the datastore clone.

To connect to the datastore clone using the vSphere Web Client (a CLI alternative is sketched after this list):

1. Select the ESXi host (if this is a cluster, select any of the hosts in the cluster).

2. Select Related Objects, Datastores.
3. Click the Create a new datastore icon.
4. Select VMFS and click Next.
5. Select the Reduxio clone volume. The Snapshot Volume column will contain the original datastore name. Click Next.
6. Select Assign a new signature to avoid a conflict with the original datastore. Click Next.
7. Click Finish.
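Where the hosts are managed from the command line, the clone can also be detected and resignatured from the ESXi Shell. This is an illustrative sketch only; the volume label passed to the resignature command is the original datastore name as reported by the list command:

# List VMFS volumes that ESXi detects as snapshots/clones of existing datastores.
esxcli storage vmfs snapshot list
# Mount the clone with a new signature (the CLI equivalent of "Assign a new signature").
esxcli storage vmfs snapshot resignature -l <original-datastore-name>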

6. Add VMs to inventory.

To browse the datastore clone and add VMs to the inventory using the vSphere Web Client (a CLI alternative is sketched after this list):

1. Right-click the datastore, then select Browse Files from the drop-down menu.
2. For each relevant VM, select its .vmx file (the Type column equals Virtual Machine), right-click it and select Register VM…
3. Select a name and location, and then click Next.
4. Select a host or a cluster, and then click Next.
5. Select a host in the cluster, and then click Finish.
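Where only ESXi Shell access is available, a VM can also be registered directly on a host with vim-cmd. The paths below are placeholders rather than names from this guide:

# Register the recovered VM from the cloned datastore (placeholder path).
vim-cmd solo/registervm /vmfs/volumes/<clone-datastore>/<vm-folder>/<vm-name>.vmx
# Confirm that the VM now appears in the host inventory.
vim-cmd vmsvc/getallvms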

Recover a Datastore using Revert
To recover an entire datastore using revert:

1. Identify the volume to revert.

To identify the relevant datastore volume:

1. Select Manage, Storage, Storage Devices.
2. In the storage device list, select each iSCSI Disk device whose Name value starts with REDUXIO.
3. In the Device Details pane, select the Properties tab.
4. Identify the volume using its Identifier (WWID) value. The identifier can be compared with the one displayed in Reduxio Storage Manager (click the flip-over button in the volume panel).

2. Unmount the datastore.

To unmount the datastore (a CLI alternative is sketched after this list):

1. Power off all VMs using the datastore.
2. Remove these VMs from the inventory.
3. Unmount the datastore.
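Assuming ESXi Shell access, the datastore can also be unmounted per host from the command line; <datastore-name> below is a placeholder for the actual datastore name:

# Unmount the datastore from this host; repeat on every host where it is mounted.
esxcli storage filesystem unmount -l <datastore-name>
# Verify that the datastore no longer shows as mounted.
esxcli storage filesystem list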

3. Revert the volume.

To revert a volume using Reduxio Storage Manager:

1. Click HOSTS & VOLUMES in the icon bar.
2. Select the volume to be reverted.
3. Click the CLONE/REVERT button.
4. Select the required timestamp to revert to. Any time from the current time back to the volume's creation time can be selected. The time can be selected either by typing the desired date and time, or by clicking the time/date fields and using the keyboard arrow keys or the mouse scroll wheel to move the date and time forward or backward.

5. Click the REVERT button.

4. Assign the volume to the ESXi cluster.

To assign the volume to the ESXi cluster host group using Reduxio Storage Manager:

1. Select the host group for the ESXi cluster.

2. Within the host group panel, select the VOLUMES tab.

3. Drag and drop the volume reverted in step 3 onto the drop zone below the tab buttons.

5. Rescan devices.

To rescan the devices using the vSphere Web Client:

1. Select the ESXi host.
2. Select Manage, Storage, Storage Adapters.
3. Click the Rescan all storage adapters icon and click OK.
4. Repeat steps 1 to 3 for each ESXi host in the cluster.

6. Connect to the reverted datastore.

To connect to the reverted datastore (a CLI alternative is sketched after this list):

1. Select the ESXi host (if this is a cluster, select any of the hosts in the cluster).

2. Select Related Objects, Datastores.
3. Click the Create a new datastore icon.
4. Select VMFS and click Next.
5. Select the reverted Reduxio volume. The Snapshot Volume column will contain the original datastore name. Click Next.
6. Select Keep existing signature; since the original datastore was unmounted in step 2, there is no conflict. Click Next.
7. Click Finish.
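If, after the rescan, a host reports the reverted volume as an unresolved VMFS snapshot, it can also be mounted with its existing signature from the ESXi Shell. This is an illustrative sketch; the label is the original datastore name:

# List volumes detected as VMFS snapshots.
esxcli storage vmfs snapshot list
# Mount the reverted volume keeping its existing signature
# (the CLI equivalent of "Keep existing signature").
esxcli storage vmfs snapshot mount -l <original-datastore-name>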

7. Add VMs to inventory.

To browse the reverted datastore and add VMs to the inventory:

1. Right-click the datastore, then select Browse Files from the drop-down menu.
2. For each relevant VM, select its .vmx file (the Type column equals Virtual Machine), right-click it and select Register VM…
3. Select a name and location, and then click Next.
4. Select a host or a cluster, and then click Next.
5. Select a host in the cluster, and then click Finish.

Recovering VMs

Overview
BackDating is a volume-level operation. To recover individual VMs, the datastore containing the VM must first be cloned. The recovered VM must then be added to the inventory and, if needed, migrated back to the original datastore.

Recover a VM using Cloning
To recover a VM using a datastore clone:

1. Recover the datastore. Perform the steps listed in Recover a Datastore using Cloning.

2. Register the VM from the datastore clone.

To browse the datastore clone and add VMs to the inventory:

1. Right-click the datastore, then select Browse Files from the drop-down menu.
2. For each relevant VM, select its .vmx file (the Type column equals Virtual Machine), right-click it and select Register VM…
3. Select a name and location, and then click Next.
4. Select a host or a cluster, and then click Next.
5. Select a host in the cluster, and then click Finish.

When registering recovered VMs, a dialog box asking whether the VM was moved or copied is displayed. Select I copied it.

3. Power on the recovered VM.

If necessary, power off the original VM to avoid MAC address conflicts.

Then, power on the recovered VM.

4. Migrate recovered VM to original datastore.

If necessary, the VM can be migrated back to the original datastore. Note that this migration will not consume additional space, since most of the data in the clone is a duplicate of the original VM's data.

Recovering Disks

Overview
BackDating is a volume-level operation. To recover individual disks, the datastore containing the disk must first be cloned. The disk can then be added to the VM, either as an additional disk or as a replacement for the original one.

In-place Disk Recovery
To recover a disk by replacing it with a previous version from the datastore clone:

1. Recover the datastore. Perform the steps listed in Recover a Datastore using Cloning.

2. Remove the current disk from the VM.

To remove the existing disk using VMware vSphere Web Client:

1. Right-click the VM and select Edit Settings.
2. Select the disk, then click the X next to it to remove it from the VM's configuration.
3. Select Remove it from inventory.
4. Click OK to apply the change.

3. Add the recovered disk from the cloned datastore.

To add the recovered disk using VMware vSphere Web Client:

1. Right-click the VM and select Edit Settings.
2. Select Add Existing Disk, then click Add.
3. Select the datastore clone, browse to the same disk removed earlier, and select it.
4. Click OK to apply the change.

Note: Running applications may require the disk to be available. Consider the implications before removing the disk; the removal may require shutting down the VM.

Recover a Disk using Cloning
To recover a disk by adding a previous version from the datastore clone alongside the original disk:

1. Recover the datastore. Perform the steps listed in Recover a Datastore using Cloning.

2. Modify the clone disk UUID.

By default, a cloned virtual disk has the same unique identifier (UUID) as the original disk. ESXi does not allow duplicate disk UUIDs to be configured on a single system. The clone disk's UUID must therefore be made unique before the disk is added to the relevant VM.

To change the disk database (ddb) UUID of the clone disk to make it unique, run from the ESXi Shell:

vmkfstools -J setuuid <vmname>.vmdk

For more information, refer to:

VMware KB: Duplicate VMDK UUIDs are created when virtual machines are deployed from a template (2006865).
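As an illustration, assuming placeholder datastore and folder names (not names from this guide), the disk's current ddb UUID can be inspected and then replaced from the ESXi Shell:

# Show the current ddb UUID of the cloned disk (placeholder path).
vmkfstools -J getuuid /vmfs/volumes/<clone-datastore>/<vm-folder>/<disk-name>.vmdk
# Assign a new, unique UUID before adding the disk to the VM.
vmkfstools -J setuuid /vmfs/volumes/<clone-datastore>/<vm-folder>/<disk-name>.vmdk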

3. Add the recovered disk from the cloned datastore.

To add the recovered disk using VMware vSphere Web Client:

1. Right-click the VM and select Edit Settings.
2. Select Add Existing Disk, then click Add.
3. Select the datastore clone, browse to the clone disk, and select it.
4. Click OK to apply the change.

Integration of Reduxio and vSphere
Reduxio's rich storage functionality integrates well with vSphere's virtualization infrastructure. The following sections describe various integrations between Reduxio and vSphere features.

vSphere Features

vDisk Provisioning

Overview
ESXi supports several virtual disk provisioning types. Note that the Reduxio system is thin-provisioned by design: fewer physical blocks are required to store the data than a standard, fully-provisioned legacy storage system would use. For this reason, regardless of the provisioning type selected, the physical space consumed is reduced. The differences are in disk creation performance and in the displayed savings ratio, as described in Table 2.

Table 2 - vDisk Provisioning on Reduxio

Thick-provisioned Eager Zero
Description: All blocks are zeroed at disk creation time.
When stored on Reduxio: The additional zeroing temporarily increases the savings ratio, as all zeroed blocks are deduped into a single block. As more data is written to the system, the savings ratio eventually goes back down.

Thick-provisioned Lazy Zero
Description: Blocks are zeroed on demand, as data is written to the system.
When stored on Reduxio: Zeroed blocks are deduped into a single block, but the increase in the savings ratio is much lower than with eager zeroing and is typically not noticeable, since only blocks that are actually written to by the VM are zeroed.

Thin-provisioned
Description: Blocks are allocated on demand; no zeroing is required.
When stored on Reduxio: There is no performance impact when using thin provisioning on Reduxio. This is therefore the recommended vDisk type.
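For reference, the three provisioning types can also be selected explicitly when creating a virtual disk from the ESXi Shell with vmkfstools. This is a minimal sketch with placeholder datastore, folder and size values, not part of the Reduxio-documented procedure:

# Thin-provisioned (recommended on Reduxio): space allocated on demand.
vmkfstools -c 40G -d thin /vmfs/volumes/<datastore>/<vm-folder>/data_thin.vmdk
# Thick-provisioned lazy zeroed: space reserved, blocks zeroed on first write.
vmkfstools -c 40G -d zeroedthick /vmfs/volumes/<datastore>/<vm-folder>/data_lazy.vmdk
# Thick-provisioned eager zeroed: space reserved and zeroed at creation time.
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/<datastore>/<vm-folder>/data_eager.vmdk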

vStorage APIs for Array Integration (VAAI)
Reduxio storage fully supports the following VAAI primitives in a highly optimized manner:

• Atomic Test & Set (ATS) – SCSI Compare and Write. Provides accelerated file locking in VMFS, increasing the scalability of VMFS in multi-host environments. ATS is used by ESXi hosts to safely perform metadata updates, and is used for heartbeats as well.
• Clone Blocks/Full Copy (XCOPY) – Provides accelerated data copy or migration within the same storage system. Reduxio TimeOS has a unique, highly optimized implementation of XCOPY: when the NoDup engine receives an XCOPY request, the resulting blocks are considered duplicates and only metadata is copied. This provides both performance and capacity benefits:
  o Accelerated VM cloning and migration – virtual machine data is cloned or migrated without actually copying data blocks.
  o Copy without capacity requirements – virtual machine data is cloned or migrated without consuming additional physical space.
XCOPY can operate within the same datastore or across datastores.
• Zero Blocks (WRITE SAME) – Enables rapid zeroing of large empty regions. On the Reduxio HX550, a WRITE SAME operation is "noduped" – no data is written to the media and, as a result, the operation is very fast.
• Thin Provisioning (UNMAP) – Currently not supported. This has no effect on actual IO to the system.

To view the VAAI support status, run the following in the ESXi console:

[root@appsesx3:~] esxcli storage core device vaai status get -d naa.6f4032f0003a00000000000000000001
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported
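The VAAI primitives themselves are controlled by host-level advanced settings, which are enabled by default. If the status output above is unexpected, they can be checked as follows (a value of 1 means enabled); this is a generic ESXi check, not a Reduxio-specific step:

# 1 = enabled, for XCOPY, WRITE SAME and ATS respectively.
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking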

Both XCOPY and WRITE SAME IOs are counted as full IOs in Reduxio statistics: XCOPY operations are counted as full reads and writes, and WRITE SAME operations as full writes. As a result, the statistics graphs may display very high throughput with relatively low IOPS during operations such as VM cloning, datastore creation and disk formatting.

For more information on VAAI, refer to the following:

• Frequently Asked Questions for vStorage APIs for Array Integration (1021976)
• VMware vSphere 5.5 Storage Guide

vCenter
Reduxio Storage Manager for vSphere (RSMV) is a single-pane-of-glass management tool for vSphere and Reduxio storage that provides integrated configuration and datastore recovery.

vSphere vMotion
Virtual machines stored on a Reduxio datastore can be migrated between ESXi cluster hosts. To simplify storage management tasks, an ESXi cluster can be represented in Reduxio Storage Manager and ReduxioCLI as a host group. Volumes are then assigned to the host group, i.e. to the entire cluster of ESXi hosts.

vSphere Storage vMotion
Virtual machines stored on a Reduxio datastore can be migrated between shared datastores. Migrating a VM between datastores on the same Reduxio system essentially writes the same set of blocks again; leveraging Reduxio's NoDup feature, the second copy of the VM does not require additional capacity.

vSphere VM Replication
VMs can be replicated between source and target sites in various use cases:

• Reduxio systems in both sites – In this scenario, the replicated VMs benefit from Reduxio's capacity savings and performance at both the source and target sites.
• Reduxio in the target site only – This enables administrators to maintain the existing storage infrastructure while reducing the capacity required at the target site and increasing performance at failover time.

Reduxio Features

NoDup

NoDup is Reduxio's unique data reduction technology that eliminates duplicates the instant data is written by a host to the system, effectively multiplying the capacity of every tier (including DRAM).

NoDup integrates well with virtualization environments for several reasons:

• Very high capacity savings – Virtualized environments contain many copies of the same data blocks, be it operating system files, application binaries or user data. These blocks are deduped into a single instance. For example, cloning a Windows VM or installing the same application on multiple VMs consumes only marginal additional capacity. In addition, NoDup applies compression to the blocks, which reduces the required physical capacity even further.
• Frequently accessed data is served from RAM – Since all data in Reduxio is "NoDuped" (i.e. deduped and compressed), including what is stored in RAM, in a virtualization environment the most frequently accessed parts of VM operating system files and application binaries are stored as a single copy in main memory. When hosts access these blocks, regardless of which VM they belong to, they are served from memory at close to zero latency. In traditional storage systems, even those that provide deduplication, when many VMs boot or the same application is opened across the VM farm, the storage system must read the data from the media and load it into RAM multiple times.

Conclusion
Reduxio storage solutions are built on a new, innovative storage operating system. Reduxio TimeOS was designed from scratch to address current storage challenges by leveraging the latest advances in processing power and high-speed networks.

The Reduxio HX550, based on TimeOS, offers breakthroughs in efficiency, performance and unique data management capabilities far exceeding anything in the market today.

Reduxio Storage Manager for vSphere (RSMV) provides single-pane-of-glass management for vSphere and Reduxio storage, with integrated configuration and datastore recovery.

Reduxio storage solutions offer tremendous value to existing and new VMware vSphere implementations. Reduxio storage provides a new architecture, which is highly optimized for virtualization environments. It is therefore complementary to VMware vSphere and integrates well with many of its features.

References

Reduxio Documentation
• Reduxio Administration Guide

VMware Documentation
• Best Practices for Running VMware vSphere® on iSCSI
• VMware Online Documentation
• VMware vSphere® Storage APIs – Array Integration (VAAI)