
    Dell Compellent Storage Center

    XenServer 6.x Best Practices



Document revision

Date        Revision  Description
2/16/2009   1         Initial 5.0 documentation
5/21/2009   2         Documentation update for 5.5
10/1/2010   3         Document revised for 5.6 and iSCSI MPIO
12/21/2010  3.1       Updated iSCSI information
8/22/2011   4.0       Documentation updated for 6.0
11/29/2011  4.1       Update for software iSCSI information

THIS BEST PRACTICES GUIDE IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

© 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the DELL logo, the DELL badge, and Compellent are trademarks of Dell Inc. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names


    or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than

    its own.


Contents

Document revision
Contents
General syntax
    Conventions
Preface
    Audience
    Purpose
    Customer support
Introduction
XenServer Storage Overview
    XenServer Storage Terminology
    Shared iSCSI Storage
    Shared Fibre Channel Storage
    Shared NFS
    Volume to Virtual Machine Mapping
    NIC Bonding vs. iSCSI MPIO
    Multi-Pathing
        Enable Multi-pathing in XenCenter
Software iSCSI
    Overview
    Open iSCSI Initiator Setup with Dell Compellent
    Multipath with Dual Subnets
        Configuring Dedicated Storage NIC
        To Assign NIC Functions Using the XE CLI
        XenServer Software iSCSI Setup
        Login to Compellent Control Ports
        Configure Server Objects in Enterprise Manager
        View Multipath Status
    Multi-path Requirements with Single Subnet
        Configuring Bonded Interface
        Configuring Dedicated Storage Network
        To Assign NIC Functions Using the XE CLI
        XenServer Software iSCSI Setup
        Configure Server Objects in Enterprise Manager
    Multi-path Requirements with Dual Subnets, Legacy Port Mode
        Log in to Dell Compellent iSCSI Target Ports
        View Multipath Status
    iSCSI SR Using iSCSI HBA
Fibre Channel
    Overview
    Adding a FC LUN to XenServer Pool
    Data Instant Replay to Recover Virtual Machines or Data
        Overview
        Recovery Option 1 - One VM per LUN
        Recovery Option 2 - Recovery Server
Dynamic Capacity
    Dynamic Capacity Overview
    Dynamic Capacity with XenServer
Data Progression
    Data Progression on XenServer
Boot from SAN
VM Metadata Backup and Recovery
    Backing Up VM Metadata
    Importing VM Metadata
Disaster Recovery
    Replication Overview
    Test XenServer Disaster Recovery
    Recovering from a Disaster
    Replication Based Disaster Recovery
        Disaster Recovery Replication Example
    Live Volume
        Overview
Appendix 1 - Troubleshooting
    XenServer Pool FC Mapping Issue
    Starting Software iSCSI
        Two Ways to Start iSCSI
    Software iSCSI Fails to Start at Server Boot
    Wildcard Doesn't Return All Volumes
    View Multipath Status
    XenCenter GUI Displays Multipathing Incorrectly
    Connectivity Issues with a Fibre Channel Storage Repository


General syntax

Figure 1, Document Syntax

Item                                              Convention
Menu items, dialog box titles, field names, keys  Bold
Mouse click required                              Click:
User input                                        Monospace font
User typing required                              Type:
Website addresses                                 http://www.compellent.com
Email addresses                                   [email protected]

    Conventions

    Notes are used to convey special information or instructions.

    Timesavers are tips specifically designed to save time or reduce the number of steps.

    Caution indicates the potential for risk including system or data damage.

    Warning indicates that failure to follow directions could result in bodily harm.


    Preface

Audience
The audience for this document is system administrators who are responsible for the setup and maintenance of Citrix XenServer and associated storage. Readers should have a working knowledge of the installation and management of Citrix XenServer and the Dell Compellent Storage Center.

Purpose
This document provides best practices for the setup, configuration, and management of Citrix XenServer with Dell Compellent Storage Center. This document is highly technical and intended for storage and server administrators as well as information technology professionals interested in learning more about how Citrix XenServer integrates with Compellent Storage Center.

Customer support
Dell Compellent provides live support at 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week, 365 days a year. For additional support, email Dell Compellent at [email protected]. Dell Compellent responds to emails during normal business hours.

Additional information on XenServer 6.0 can be found in the Citrix XenServer 6.0 Administration Guide located on the Citrix download site. Information on Dell Compellent Storage Center is located on the Dell Compellent Knowledge Center.


Introduction
This document provides configuration examples, tips, recommended settings, and other storage guidelines a user can follow while integrating Citrix XenServer with the Dell Compellent Storage Center. This document has been written to answer many frequently asked questions with regard to how XenServer interacts with the Dell Compellent Storage Center's various features such as Dynamic Capacity, Data Progression, Replays, and Remote Instant Replay. This document focuses on XenServer 6.0; however, most of the concepts apply to XenServer 5.x unless otherwise noted.

Dell Compellent advises customers to read the XenServer documentation, which is publicly available on the Citrix XenServer knowledge base documentation pages, for additional information on installation and configuration.

This document assumes the reader has had formal training or has advanced working knowledge of the following:

Installation and configuration of Citrix XenServer
Configuration and operation of the Dell Compellent Storage Center
Operating systems such as Windows or Linux
The Citrix XenServer 6.0 Administrator's Guide

NOTE: The information contained within this document is based on general circumstances and environments. Actual configurations may vary in different environments.


    XenServer Storage Overview

XenServer Storage Terminology
In working with XenServer 6.0, there are four object classes that are used to describe, configure, and manage storage:

Storage Repositories (SRs) are storage targets containing homogeneous virtual disks (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain. A storage repository is a persistent, on-disk data structure, so the act of "creating" a new SR is similar to that of formatting a disk -- for single LUN-based SR types, i.e. LVM over iSCSI or Fibre Channel, the creation of a new SR involves erasing any existing data on the specified LUN. SRs are long-lived, and may in some cases be shared among XenServer Hosts, or moved between them. The interface to storage hardware allows VDIs to be supported on a large number of SR types. With built-in support for IDE, SATA, SCSI and SAS drives locally connected, and iSCSI and Fibre Channel remotely connected, the XenServer host SR is very flexible. Each XenServer host can access multiple SRs in parallel of any type. When hosting direct attached shared Storage Repositories on a Dell Compellent Storage Center, there are two options: an iSCSI connected LUN or a Fibre Channel connected LUN.

Physical Block Devices (PBDs) represent the interface between a physical server and an attached SR. PBDs are connector objects that allow a given SR to be mapped to a XenServer Host. PBDs store the device configuration fields that are used to connect to and interact with a given storage target. PBD objects manage the run-time attachment of a given SR to a given XenServer Host.

Virtual Disk Images (VDIs) are an on-disk representation of a virtual disk provided to a VM. VDIs are the fundamental unit of virtualized storage in XenServer. Similar to SRs, VDIs are persistent, on-disk objects that exist independently of XenServer Hosts.

Virtual Block Devices (VBDs) are connector objects (similar to the PBDs described above) that allow mappings between VDIs and Virtual Machines (VMs). In addition to providing a mechanism to attach (or plug) a VDI into a VM, VBDs allow fine-tuning of parameters regarding QoS (quality of service), statistics, and the bootability of a given VDI.
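For reference, each of these object classes can be inspected from the XenServer CLI:

xe sr-list     # Storage Repositories visible to the pool
xe pbd-list    # Physical Block Devices connecting hosts to SRs
xe vdi-list    # Virtual Disk Images held in the SRs
xe vbd-list    # Virtual Block Devices attaching VDIs to VMs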

Shared iSCSI Storage
Citrix XenServer on Dell Compellent Storage provides support for shared SRs on iSCSI attached LUNs. iSCSI is supported using the open-iSCSI software initiator or a supported iSCSI Host Bus Adapter (HBA).

Shared iSCSI support is implemented based on a Logical Volume Manager (LVM). LVM-based storage is high-performance and allows virtual disks to be dynamically resized. Virtual disks are fully allocated as an isolated volume on the underlying physical disk and so there is a minimum of storage virtualization overhead imposed. As such, this is a good option for high-performance storage.

Below is a diagrammatic representation of using shared storage with iSCSI HBAs in XenServer. The second diagram illustrates shared storage with the open iSCSI initiator.


    Figure 2, Shared iSCSI Storage with iSCSI HBA

    Figure 3, Shared iSCSI with Software Initiator

Shared Fibre Channel Storage
XenServer hosts with Dell Compellent Storage support Fibre Channel SANs using Emulex or QLogic host bus adapters (HBAs). Logical unit numbers (LUNs) are mapped to the XenServer host as disk devices. Like HBA iSCSI storage, Fibre Channel storage support is implemented based on the same Logical Volume Manager with the same benefits as iSCSI storage, just utilizing a different data I/O path.


Figure 4, Shared Fibre Channel Storage

Shared NFS
XenServer supports NFS file servers, such as the Dell NX3000 with Dell Compellent storage, to host SRs. NFS storage repositories can be shared within a resource pool of XenServers. This allows virtual machines to be migrated between XenServers within the pool using XenMotion.

Attaching an NFS storage repository requires the hostname or IP address of the NFS server. The NFS server must be configured to export the specified path to all XenServers in a pool or the reading of the SR will fail.

Using an NFS share is a relatively simple way to create an SR and doesn't involve the complexity of iSCSI or the expense of Fibre Channel. There are some limitations that must be considered before implementing NFS, however. An NFS SR will utilize a similar network infrastructure as iSCSI to support redundant paths to the NFS share. The main difference is that iSCSI uses MPIO to support multipathing and load balancing between multiple paths, while NFS is limited to one network interface per SR. Redundancy in an NFS environment can be accomplished by using XenServer bonded interfaces. Bonded interfaces are active/passive and won't provide load balancing across both physical adapters such as iSCSI can provide.


    Figure 5, Shared NFS SR

A new feature with XenServer 6.0 is the ability to provide a high availability (HA) quorum disk on an NFS volume. However, the XenServer 6.0 Disaster Recovery feature can only be enabled when using LVM over HBA or software iSCSI. The underlying protocol choice for SRs is a business decision that will be unique to each environment. Given the performance benefits and the Disaster Recovery requirement, Dell Compellent recommends using iSCSI or FC HBAs, or software iSCSI, rather than NFS.

Volume to Virtual Machine Mapping

XenServer is fully capable of deploying a many-to-one VM-to-volume (LUN) deployment. The number of VMs on a volume is dependent on the workload and IOPS requirements of the VMs. When multiple virtual disks share a volume they also share the disk queue for that volume on the host. For this reason, care should be taken to prevent a bottleneck condition on the volume. Additionally, replication and DR become a factor when hosting multiple VMs on a volume. This is because replication and recovery take place on a per-volume basis.

NIC Bonding vs. iSCSI MPIO

NIC bonds can improve XenServer host resiliency by using two physical NICs as if they were one. If one NIC within the bond fails, the host's network traffic will automatically be routed over the second NIC. NIC bonding supports active/active mode, but only supports load-balancing of VM traffic across the physical NICs. Any given virtual network interface will only use one of the links in the bond at a time. Load-balancing is not available for non-VM traffic.

MPIO also provides host resiliency by using two physical NICs. MPIO uses round robin to balance the storage traffic between separate targets on the Dell Compellent Storage Center. By spreading the load between multiple Dell Compellent iSCSI targets, bottlenecks can be avoided while providing network adapter, subnet, and switch redundancy.

If all Front End iSCSI ports on the Dell Compellent system are on the same subnet, then NIC bonding is the better option since XenServer iSCSI MPIO requires at least two separate subnets. In this configuration all iSCSI connections will use the same physical NIC because bonding does not support


active/active connections for anything but VM traffic. For this reason, it is recommended that front end iSCSI ports be configured across two subnets. This allows load balancing across all NICs and failover with MPIO.

Multi-Pathing
Multi-Pathing allows for failures in HBAs, switch ports, switches, and SAN IO ports. It is recommended to utilize Multi-Pathing to increase availability and redundancy for critical systems such as production deployments of XenServer when hosting critical servers.

XenServer supports Active/Active Multi-Pathing for iSCSI and FC protocols for I/O datapaths. Dynamic Multi-Pathing uses a round-robin mode load balancing algorithm, so both routes will have active traffic on them during normal operations. Multi-Pathing can be enabled via XenCenter or on the command line. Please see the XenServer 6.0 Administrator's Guide for information on enabling Multi-Pathing on XenServer hosts. Enabling Multi-Pathing requires a server restart and should be enabled before storage is added to the server. Only use Multi-Pathing when there are multiple paths to the Storage Center.
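As a sketch of the CLI method (the Administrator's Guide remains the authoritative reference; the host UUID below is a placeholder, and the host should be in Maintenance Mode with its storage PBDs unplugged first), multipathing can typically be enabled by setting the host's other-config keys:

xe host-param-set other-config:multipathing=true uuid=<host_uuid>
xe host-param-set other-config:multipathhandle=dmp uuid=<host_uuid>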

    Enable Multi-pathing in XenCenter

    1. Right click on the server in XenCenter and select Enter Maintenance Mode

    2. Right click on the server and select Properties

    3. In the Properties window, select Multipathing

    4. Check the Enable Multipathing on this server box and click OK

5. The server will need to be restarted for Multipathing to take effect

    Figure 6, Enable Multipathing


    Software iSCSI

Overview
XenServer supports shared Storage Repositories (SRs) on iSCSI LUNs. iSCSI is implemented using the open-iSCSI software initiator or by using a supported iSCSI HBA. XenServer iSCSI Storage Repositories are supported with Dell Compellent Storage Center running in either Legacy mode or Virtual Port mode.

    Shared iSCSI using the software iSCSI initiator is implemented based on the Linux Volume Manager

    (LVM) and provides the same performance benefits provided by LVM on local disks. Shared iSCSI SRs

    using the software-based host initiator are capable of supporting VM agility. Using XenMotion, VMs can

    be started on any XenServer host in a resource pool and migrated between them with no noticeable

    interruption.

    iSCSI SRs utilize the entire LUN specified at creation time and may not span more than one LUN. CHAP

    support is provided for client authentication, during both the data path initialization and the LUN

    discovery phases.

NOTE: Use dedicated network adapters for iSCSI traffic. The default connection can be used; however, it is always best practice to separate iSCSI and other network traffic.

    All iSCSI initiators and targets must have a unique name to ensure they can be identified on the

    network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address.

    Collectively these are called iSCSI Qualified Names, or IQNs.

    XenServer hosts support a single iSCSI initiator which is automatically created and configured with a

    random IQN during host installation. iSCSI targets commonly provide access control via iSCSI initiator

    IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow

access by the host's initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.

    iSCSI targets that do not provide access control will typically default to restricting LUN access to a

    single initiator to ensure data integrity. If an iSCSI LUN is intended for use as a shared SR across

    multiple XenServer hosts in a resource pool ensure that multi-initiator access is enabled for the

    specified LUN.

    It is strongly suggested to change the default XenServer IQN to one that is consistent with a naming

    schema in the iSCSI environment. The XenServer host IQN value can be adjusted using XenCenter, or

    via the CLI with the following command when using the iSCSI software initiator:

xe host-param-set uuid=<host_uuid> other-config:iscsi_iqn=<new_iqn>
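For example (the UUID and IQN values shown are placeholders only), the IQN can be set and then verified from the CLI:

xe host-param-set uuid=<host_uuid> other-config:iscsi_iqn=iqn.2011-11.com.example:xenserver01
xe host-param-get uuid=<host_uuid> param-name=other-config param-key=iscsi_iqn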

    Caution: It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN

    identifier is used, data corruption and/or denial of LUN access can occur.

    Caution: Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures

    connecting to new targets or existing SRs.


    Open iSCSI initiator Setup with Dell Compellent

    Caution: Issues have been identified with the Citrix implementation of multipathing and Storage

    Center in virtual port mode. It is strongly recommended to use iSCSI HBAs when implementing

    XenServer with Storage Center in virtual port mode.

    When planning iSCSI it is important that networks used for software-based iSCSI have separate

    switching and different subnets from those used for management. The use of separate subnets ensures

    that management and storage traffic flows over the intended interface and avoids complex

    workarounds that may compromise reliability or performance.

    If planning to utilize iSCSI storage with Multi-Pathing, it is important to ensure that none of the

    redundant paths reported by iSCSI are within the same subnet as the management interface. If this

    occurs the iSCSI initiator may not be able to successfully establish a session over each path because the

management interface comes up separately from the storage interface(s).

    There are three options when implementing the XenServer software iSCSI initiator to connect to Dell

    Compellent storage. They are:

    Multipath with dual subnets, virtual port mode - In this configuration the Storage Center is set

    to Virtual Port mode and the front end controller ports are on two separate subnets. This

    option uses MPIO for multipathing. This is the recommended option when HA is required.

    Multipath with single subnet - In this configuration the Storage Center is set to Virtual Port

    mode and all controller front end ports are on the same subnet. This option uses NIC Bonding

    for path failover. This is also an option when the servers have a single iSCSI Storage NIC and HA

    is not required.

    Multipath with dual subnets, Legacy port mode - This is the option for HA when the Storage

    Center is set to Legacy Port mode.

Multipath with Dual Subnets
The requirements for software iSCSI multi-pathing with dual subnets and Compellent Storage Center in virtual port mode are as follows:

XenServer 6.0

iSCSI using 2 unique dedicated storage NICs/subnets

o Citrix best practices state that these 2 subnets should be different from the XenServer management network.

Multi-pathing enabled on all XenServer pool hosts

iSCSI Target IP addresses for the Storage Center Front End Control ports

o In the example below the iSCSI FE Control ports on the Storage Center Controller are assigned IP addresses 10.25.0.10/16 and 10.26.0.10/16

    In this configuration the Storage Center is set to virtual port mode and the iSCSI Front End ports are

    on two separate subnets different from the management interface. The Storage Center is

    configured with two control ports, one for each subnet. Multipathing is controlled through MPIO.


    Figure 7, Dual Subnet, MPIO

    Configuring Dedicated Storage NIC

    XenServer allows use of either XenCenter or the XE CLI to configure and dedicate a NIC to specific

    functions, such as storage traffic.

    Assigning a NIC to a specific function will prevent the use of the NIC for other functions such as host

    management, but requires that the appropriate network configuration be in place to ensure the NIC is

    used for the desired traffic. For example, to dedicate a NIC to storage traffic the NIC, storage target,

    switch, and/or VLAN must be configured so the target is only accessible over the assigned NIC.

    Ensure that the dedicated storage interface uses a separate IP subnet which is not routable from the

    main management interface. If this is not enforced, storage traffic may be directed over the main

    management interface after a host reboot due to the order in which network interfaces are initialized.

To Assign NIC Functions using the XE CLI

1. Ensure that the Physical Interface (PIF) is on a separate subnet, or routing is configured to suit your network topology in order to force the desired traffic over the selected PIF.

2. Get the PIF UUID for the interface
2.1. If on a stand-alone server, use xe pif-list to list the PIFs on the server
2.2. If on a host in a resource pool, first type xe host-list to retrieve a list of the hosts and UUIDs
2.3. Use the command xe pif-list host-uuid=<host_uuid> to list the host PIFs


3. Setup an IP configuration for the PIF, adding appropriate values for the mode parameter and, if using static IP addressing, the IP, netmask, gateway, and DNS parameters:

xe pif-reconfigure-ip mode=<mode> uuid=<pif_uuid>

Example: xe pif-reconfigure-ip mode=static ip=10.0.0.10 netmask=255.255.255.0 gateway=10.10.0.1 uuid=<pif_uuid>

4. Set the PIF's disallow-unplug parameter to true:

xe pif-param-set disallow-unplug=true uuid=<pif_uuid>

5. Set the Management Purpose of the interface:

xe pif-param-set other-config:management_purpose="Storage" uuid=<pif_uuid>

6. Repeat this process for each eth interface in the XenServer host that will be dedicated for storage traffic. For iSCSI MPIO configurations this should be a minimum of two eth interfaces that are on separate subnets.
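Taken together, the sequence for the first dedicated storage NIC might look like the following (the UUIDs are placeholders, and the server address 10.25.0.50 is simply an assumed example address on the first storage subnet used in this section):

xe pif-list host-uuid=<host_uuid>
xe pif-reconfigure-ip mode=static ip=10.25.0.50 netmask=255.255.0.0 uuid=<pif1_uuid>
xe pif-param-set disallow-unplug=true uuid=<pif1_uuid>
xe pif-param-set other-config:management_purpose="Storage" uuid=<pif1_uuid>

Repeat with an address on the second storage subnet (for example 10.26.0.50) for the second storage NIC.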

    For more information on this topic see the Citrix XenServer 6.0 Administrator Guide.

XenServer Software iSCSI Setup

A server object on the Dell Compellent Storage Center can be created once the XenServer has been configured for iSCSI traffic.

NOTE: Best practice recommendation is to change the XenServer IQN from the randomly assigned IQN to one that identifies the system on the iSCSI network. The IQN must be unique to avoid data corruption or loss.

Gather Dell Compellent iSCSI Target Info

Within Storage Center Manager, go to Controllers, IO Cards, iSCSI and note the IP addresses of the two control ports. These should be on the same IP subnets as the server's storage NICs.

    Figure 8, Control Port IP Addresses

    In this example the IP addresses are:

    10.25.0.10/16

    10.26.0.10/16

    Login to Compellent Control Ports

    In this step the iscsiadm command will be utilized in the XenServer CLI to discover and login to all the

    Compellent iSCSI targets.

1. From the XenServer console run the following command for each iSCSI control port.


iscsiadm -m discovery --type sendtargets --portal <control_port_ip>:3260

Example: iscsiadm -m discovery --type sendtargets --portal 10.25.0.10:3260

    Figure 9, Discover Storage Center Ports

    NOTE: If problems are encountered while running the iscsiadm commands, see the iSCSI

    Troubleshooting section at the end of this document.

    2. Repeat the discovery process for each Dell Compellent Control Port.

3. Once all target ports are discovered, run iscsiadm with the Login parameter:

iscsiadm -m node --login

    Figure 10, Log into Storage Center Ports
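Taken together, the discovery and login sequence for the two example control ports shown in Figure 8 is:

iscsiadm -m discovery --type sendtargets --portal 10.25.0.10:3260
iscsiadm -m discovery --type sendtargets --portal 10.26.0.10:3260
iscsiadm -m node --login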

    The server objects can be configured in the Storage Center now that the server has logged in.

Configure Server Objects in Enterprise Manager

Follow the steps below to configure the server object for access to the Storage Center:

1. In Enterprise Manager, go to Storage Center and select Storage Management
2. In the object tree, right click on Servers and select Create Server
3. Complete all options as specified in the Compellent Administrator's Guide
4. Uncheck the Use iSCSI Name box
5. Select both connections listed under WWName and click OK to finish

NOTE: Unchecking the Use iSCSI Name box will aid in identifying the status of MPIO paths.


    Figure 11, Create Server, Enterprise Manager

    NOTE: Starting in Storage Center version 5.5.x, the steps listed above must be completed using

    Enterprise Manager. It is not possible to create server objects with the Use iSCSI Names box

    unchecked when connected directly to the Storage Center.

    After creating the server object the volumes can be created and mapped to the server. In a server

    pool, map the LUN to all servers specifying the same LUN number. See the Dell Compellent

    documentation for detailed instructions on creating and mapping volumes.

    NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.

Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the CLI. Below are the steps for adding storage using XenCenter. The steps for adding storage through the CLI can be found in the XenServer 6.0 Administrator's Guide.

1. Select the server or pool in XenCenter and click on New Storage
2. Select the Software iSCSI option under virtual disk storage, click Next

Figure 12, Add iSCSI Disk

3. Give the new Storage Repository a name and click Next
4. Enter one of the Dell Compellent control ports in the Target Host field, click Discover IQNs
5. Click Discover LUNs
6. Select the LUN to add under Target LUN and click Finish


    Figure 13, Add iSCSI SR

NOTE: When the Storage Center is in virtual port mode and adding storage with the wildcard option, an incomplete list of volumes mapped to the server may be returned. This is a known issue with the XenCenter GUI. To work around the issue, cycle through the Control Ports in the Target Host field using the (*) wildcard Target IQNs until the Target LUN appears. This is a GUI issue and will not affect multipathing.

    The SR should now be available to the server. Repeat the steps for mapping and adding storage for any

    additional SRs.
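If the CLI is preferred, an equivalent SR can be created with the xe sr-create command; the sketch below assumes the target IQN and SCSI ID of the mapped LUN have already been identified (for example with xe sr-probe), and all bracketed values are placeholders. See the XenServer 6.0 Administrator's Guide for the full CLI procedure.

xe sr-create host-uuid=<host_uuid> content-type=user shared=true type=lvmoiscsi name-label="Compellent iSCSI SR" device-config:target=10.25.0.10 device-config:targetIQN=<target_iqn> device-config:SCSIid=<scsi_id>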

View Multipath Status
To view the status of the multipath connections, use the following command:

    mpathutil status

    Figure 14, Multipath Status


Multi-path Requirements with Single Subnet
The process for configuring multi-pathing in a single subnet environment is similar to that of a dual subnet environment. The key difference is that redundancy is handled by the bonded network adapters. The requirements for software iSCSI multi-pathing with the Compellent Storage Center in a single subnet are as follows:

XenServer 6.0

iSCSI using 2 bonded NICs

o Citrix best practices state that these 2 NICs should be bonded through the XenCenter GUI.

iSCSI Target IP addresses for the Storage Center Front End Control ports

o In this example the IP address for the Control port will be 10.35.0.10

1 network storage interface on XenServer on the bonded interface.

    Figure 15, Single Subnet

    Configuring Bonded Interface

    In this configuration redundancy to the network is provided by two bonded NICs. Bonding the two NICs

    will create a new bonded interface that network interfaces will be associated with. This will create

    multiple paths with one storage IP address on the server.


NOTE: The process of configuring a single-path, non-redundant connection to a Dell Compellent Storage Center is the same except for excluding the steps to bond the two NICs.

    NOTE: Create NIC bonds as part of the initial resource pool creation, prior to joining additional hosts to

    the pool. This will allow the bond configuration to be replicated to new hosts as they join the pool.

    The steps below outline the process of creating a NIC bond in XenServer 6.0

    1. Go into Citrix XenCenter, select the server and go to the NIC tab.

2. At the bottom of the NIC window is the option to create a bond. Select the NICs you would like to bond and click Create.

    Figure 16, Add Bonded Interface

    3. Once complete, there will be a new bonded NIC displayed in the list of NICs.

    Figure 17, Bonded Interface
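A bond can also be created from the CLI; the sketch below assumes a new network has been created to carry the bond and that the UUIDs of the two physical PIFs are known (all bracketed values are placeholders):

xe network-create name-label="Bond 0+1"
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif1_uuid>,<pif2_uuid>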

    Configuring Dedicated Storage Network

    XenServer allows use of either XenCenter or the XE CLI to configure and dedicate a network to specific

    functions, such as storage traffic. The steps below outline the process of creating a dedicated storage

    network interface through the CLI.

    Assigning a network to storage will prevent the use of the network for other functions such as host

    management, but requires that the appropriate configuration be in place in order to ensure the

    network is used for the desired traffic. For example, to dedicate a network to storage traffic the NIC,

    storage target, switch, and/or VLAN must be configured such that the target is only accessible over the

    assigned NIC. This allows use of standard IP routing to control how traffic is routed between multiple

    NICs within a XenServer.


    Before dedicating a network interface as a storage interface for use with iSCSI SRs, ensure that the

    dedicated interface uses a separate IP subnet which is not routable from the main management

    interface. If this is not enforced, then storage traffic may be directed over the main management

    interface after a host reboot, due to the order in which network interfaces are initialized.

To assign NIC functions using the XE CLI:

1. Ensure that the Bond PIF is on a separate subnet, or routing is configured to force the desired traffic over the selected PIF.

2. Get the PIF UUID for the Bond interface
2.1. If on a stand-alone server, use xe pif-list to list the PIFs on the server
2.2. If on a host in a resource pool, first type xe host-list to retrieve a list of the hosts and UUIDs
2.3. Use the command xe pif-list host-uuid=<host_uuid> to list the host PIFs

3. Setup an IP configuration for the PIF identified in the previous step, adding appropriate values for the mode parameter and, if using static IP addressing, the IP parameters:
3.1. xe pif-reconfigure-ip mode=<mode> uuid=<pif_uuid>
Example: xe pif-reconfigure-ip mode=static ip=10.0.0.10 netmask=255.255.255.0 gateway=10.10.0.1 uuid=<pif_uuid>

4. Set the PIF's disallow-unplug parameter to true:
4.1. xe pif-param-set disallow-unplug=true uuid=<pif_uuid>

5. Set the Management Purpose of the interface:
5.1. xe pif-param-set other-config:management_purpose="Storage" uuid=<pif_uuid>

    For more information on this topic see the Citrix XenServer 6.0 Administrator Guide.

XenServer Software iSCSI Setup

Once the XenServer has been configured for iSCSI traffic, a server object on the Dell Compellent Storage Center can be created.

NOTE: Best practice is to change the XenServer IQN from the randomly assigned IQN to one that identifies the system on the iSCSI network. The IQN must be unique to avoid data corruption or loss.

1. To gather the Storage Center iSCSI target info from Storage Center, go to Controllers, IO Cards, iSCSI and note the IP address of the control port. It should be on the same IP subnet as the server's storage NICs.


    Figure 18, Control Port IP address

    In this example the IP address is:

    10.35.0.10/16

2. Login to Compellent Control Ports. In this step the iscsiadm command will be utilized in the XenServer CLI to discover and login to all the Dell Compellent iSCSI targets.

3. From the XenServer console, run the following command for the iSCSI control port.

iscsiadm -m discovery --type sendtargets --portal <control_port_ip>:3260

Example: iscsiadm -m discovery --type sendtargets --portal 10.35.0.10:3260

    Figure 19, Discover Storage Center Ports

    NOTE: If problems are encountered while running the iscsiadm commands, see the iSCSI

    troubleshooting section at the end of this document.

    4. Once all target ports are discovered, run iscsiadm with the Login parameter:

iscsiadm -m node --login

    Figure 20, log into Storage Center Ports

    5. Now that the server has logged in the server objects can be configured in the Storage Center.


Configure Server Objects in Enterprise Manager

Follow the steps below to configure the server object for access to the Storage Center:

1. In Enterprise Manager, go to Storage Center and select Storage Management
2. In the object tree, right click on Servers and select Create Server. Complete all options as specified in the Compellent Administrator's Guide, including server name and operating system.
3. Select the server IQN listed under WWName and click OK to finish

    Figure 21, Create Server in Enterprise Manager

After creating the server object the volumes can be created and mapped to the server. In a server pool, be sure the LUNs are mapped to the servers with the same LUN number. See the Dell Compellent Admin Guide for detailed instructions on creating and mapping volumes.

NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.

Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the CLI. Below are the steps for adding storage using XenCenter. Steps for adding storage through the CLI can be found in the XenServer 6.0 Administrator's Guide.

    1. Select the server or pool in XenCenter and click on New Storage

2. Select the Software iSCSI option under virtual disk storage, click Next


    Figure 22, Add iSCSI Disk

3. Give the new Storage Repository a name and click Next

4. Enter the Dell Compellent control port in the Target Host field, click Discover IQNs

5. Click Discover LUNs to view the available LUNs.

Figure 23, Add iSCSI SR

6. Select the LUN to add under Target LUN and click Finish

NOTE: When the Storage Center is in virtual port mode and storage is added with the wildcard option, an incomplete list of volumes mapped to the server may be returned. This is a known issue with the XenCenter GUI. To work around the problem, cycle through the Target Host IP addresses using the (*) wildcard IQN until the Target LUN appears. This is a GUI issue and will not affect multipathing.

    The SR will now be available to the server. Repeat the steps for mapping and adding storage for any

    additional SRs.

Multi-path Requirements with Dual Subnets, Legacy Port Mode
Dell Compellent Legacy Port Mode uses the concept of Fault Domains to provide redundant paths to the Storage Center. To ensure redundancy, a fault domain consists of a primary port on one controller and a failover port on the second controller. The two ports are linked in the same domain by the identical Fault Domain number. This provides redundancy, with the tradeoff that half the Front End ports will only be utilized in the event of a failover. The requirements for software iSCSI multi-pathing with the Compellent Storage Center in Legacy Port Mode are as follows:


    XenServer 6.0

iSCSI using 2 unique dedicated storage NICs/subnets

o Citrix best practices state that these 2 subnets should be different from the XenServer management network.

Multi-pathing enabled on all XenServer Pool Hosts

iSCSI Target IP addresses for the Storage Center Front End ports

o In this example the primary iSCSI Front End port IP addresses are 10.10.63.2, 10.10.62.1, 172.31.37.134, 172.31.37.131

    In this configuration the Storage Center is set to Legacy Port mode and the iSCSI Front End ports

    are on two subnets separate from each other and the management interface. Multipathing is

    controlled through MPIO.

    Figure 24, Legacy Port Mode

The first step to configure XenServer for Dell Compellent in Legacy Port mode is to identify the primary iSCSI target IP addresses on each controller of the Storage Center. This can be done by going to the controllers listed in Storage Center, expanding IO Cards, iSCSI and clicking on each iSCSI port listed.


    Figure 25, Legacy Port IP addresses

    Log in to Dell Compellent iSCSI Target Ports

    This step uses the iscsiadm command in the XenServer CLI to discover and login to all the Compellent

    iSCSI targets.

1. For each of the Target IP addresses enter the following command:

iscsiadm -m discovery --type sendtargets --portal <target_ip>:3260

Example: iscsiadm -m discovery --type sendtargets --portal 10.10.62.1:3260

    Figure 26, Discover Storage Center Ports

    2. Repeat the discovery process for each Target Port

3. Once all the ports are discovered, run the iscsiadm command with the Login parameter to connect the host to the Storage Center:

iscsiadm -m node --login

    Figure 27, log into Storage Center Ports

    Configure Server Objects in Enterprise Manager

Follow the steps below to configure the server object for access to the Storage Center:

1. In Enterprise Manager, go to Storage Center and select Storage Management
2. In the object tree, right click on Servers and select Create Server
3. Complete all options as specified in the Dell Compellent Administrator's Guide.


    Figure 28, Create Server in Enterprise Manager

    After creating the server object the volumes can be created and mapped to the server. See the Dell

    Compellent documentation for detailed instructions on creating and mapping volumes.

    NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.

    Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the

    CLI. Below are the steps for adding storage using XenCenter. Steps for adding storage through the CLI

can be found in the XenServer 6.0 Administrator's Guide.

    1. Select the server or Pool in XenCenter and click on New Storage

2. Select the Software iSCSI option under virtual disk storage, click Next

    Figure 29, Add iSCSI Disk

    3. Give the new Storage Repository a name and click Next

4. Enter the Dell Compellent control ports in the Target Host field, click Discover IQNs


    Figure 30, Discover Storage Center LUNs

    5. Click Discover LUNs

    Figure 31, Add iSCSI SR

6. Select the LUN to add under Target LUN and click Finish

NOTE: When Storage Center is in legacy port mode, adding storage may return an incomplete list of volumes mapped to the server. This is a known issue with the XenCenter GUI where only the LUNs active on the first IP address in Target Host are returned. To work around this issue, cycle through the Target Host IPs using the (*) wildcard Target IQN until the Target LUN appears. This is a GUI issue and will not affect multipathing.


    The SR will now be available to the server. Repeat the steps above for mapping and adding storage for

    any additional SRs.

    View Multipath Status

To view the status of the multipath connections, use the following command:

    mpathutil status

    Figure 32, Multipath Status

iSCSI SR Using iSCSI HBA
If using an iSCSI HBA to create the iSCSI SR, either the CLI from the control domain needs to be used, or the BIOS level management interface needs to be updated for target information. Depending on what HBA is being used, the initiator IQN for the HBA needs to be configured. Given the type of HBA used, the documentation for that HBA should be consulted to configure the IQN. Once the IQN has been configured for the HBA, use the Storage Center GUI to create a new LUN. However, instead of using the XenServer's IQN, specify the IQN of the various ports of the HBA. Do this for every XenServer host in the pool. QLogic's HBA CLI is included in the XenServer host and located at:

QLogic: /opt/QLogic_Corporation/SANsurferiCLI/iscli

If using Emulex iSCSI HBAs, consult the Emulex documentation for instructions on installing and configuring the HBA.

For the purposes of an example, this guide illustrates how the QLogic iSCSI HBA CLI iscli can be used to configure IP addresses on a dual port QLE4062C iSCSI HBA adapter, add the iSCSI server to the Compellent SAN, and configure a LUN for the server. This setup will also utilize Multi-Pathing since there are two iSCSI HBA ports.

1. From the XenServer Console launch the SANsurfer iscli.
1.1. From the XenServer command prompt type in: /opt/QLogic_Corporation/SANsurferiCLI/iscli

NOTE: This configuration can also be performed during the server boot by entering Ctrl+Q when prompted.


Figure 33, iSCLI Menu

2. Configure IP Address for the iSCSI HBA
2.1. In order to set the IP address for the HBA choose option 4 (Port Level Info & Operations), and then option 2 (Port Network Settings Menu).
2.2. Enter option 4 (Select HBA Port) to select the appropriate HBA port then select option 2 (Configure IP Settings).

Figure 34, Configure HBA IP Address

2.3. Enter the appropriate IP settings for the HBA adapter port; when finished exit and save or select another HBA port to configure.
2.3.1. In this example another HBA port will be configured.

Figure 35, Enter IP Address Information

2.4. From the Port Network Settings Menu select option 4 to select an additional HBA port to configure. Enter 2 to select the second HBA port. Once the second HBA port is selected choose option 2 (Configure IP Settings) from the Port Network Settings menu to input the appropriate IP settings for the second HBA port.

Figure 36, Enter IP Address Info

2.5. Choose option 5 (Save changes and reset HBA, if necessary). Then select Exit until back at the main menu.

The iSCSI name or IQN can also be changed using the iscli utility. This menu can be accessed by selecting option 4 (Port Level Info & Operations Menu) from the main menu, then selecting Option 3 (Edit Configured Port Settings Menu), then Option 3 (Port Firmware Settings Menu), then Option 7 (Configure Advanced Settings). Select until reaching iSCSI_Name, then enter a unique IQN name for the adapter.

3. The next step is to establish a target from XenServer so it registers with the Compellent Storage Center.
3.1. From the main interactive iscli menu select option 4 (Port Level Info & Operations)
3.2. From the Port Level Info & Operations menu select option 7 (---> Target Level Info & Operations)
3.3. On the HBA target menu screen select option 6 (Add a Target)
3.3.1. Select Enter until reaching the TGT_TargetIPAddress option. Enter the target IP address of the Compellent Controller. (Repeat for each target.)
3.3.1.1. In this example 10.10.64.1 and 10.10.65.2 are used. These are the primary iSCSI connection ports on both Dell Compellent Storage Center Controllers.

Figure 37, Enter Target IP Address

3.3.2. Once all targets are entered for HBA port 0 select option 9 to save the target information.
3.3.3. Select option 10 to select the second HBA port.
3.3.4. Repeat the steps in section 3.3 for the iSCSI targets.
3.4. Enter option 12 to exit. Enter YES to save the changes.
3.5. Exit out of the iscli utility.

4. Add server iSCSI connection HBAs to the Dell Compellent Storage Center.
4.1. Logon to the Storage Center console.
4.2. Expand Servers and select the location or folder to store the server in.
4.2.1. For ease of use the servers in this view are separated into folders based on function.
4.3. Right click the location to create the server in and select Create Server.


Note: You may have to uncheck "show only active/up connections" in the Create Server wizard.

4.4. Select the appropriate iSCSI HBA/IQNs for the new server object, then click Continue.

4.5. Depending on the Storage Center version, select the XenServer operating system, or select Other Multipath OS if XenServer is not listed.

5. Repeat the preceding four steps for each XenServer in the Pool.

6. Once all the XenServer servers are added to the Compellent Storage Center, create a new volume on the Compellent Storage Center and map it to all the XenServers in the pool with the same LUN

    Number, or create a Compellent Clustered server object, add all the XenServers to the Cluster, and

    map the volume to the XenServer Clustered server object.

    7. The final step of the process is adding the new Volume to XenServer.

7.1. Log on to XenCenter, right click on the appropriate XenServer to add the connection to, and select New Storage Repository.

    If the storage is being added to a resource pool, select the Pool instead of the server.

7.2. Select the Hardware HBA option, as the iSCSI connection is using iSCSI HBAs, then click Next.

    Figure 38, Storage Type

There is a short delay while XenServer probes for available LUNs.

7.3. Select the appropriate LUN. Give the SR an appropriate Name and click Finish.

7.4. A warning is displayed that the LUN will be formatted and any data present will be destroyed. Click Yes to format the disk.
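
The same Storage Repository can also be created from the XenServer command line. The following is a minimal sketch of the equivalent xe commands; the host UUID and the SCSIid returned by the probe are placeholders that must be replaced with values from the environment.

    # Probe the HBAs for visible LUNs; the XML output lists each LUN's SCSIid
    xe sr-probe type=lvmohba host-uuid=<host-uuid>

    # Create a shared Hardware HBA SR on the Compellent LUN using the SCSIid from the probe
    xe sr-create host-uuid=<host-uuid> content-type=user shared=true \
        name-label="Compellent iSCSI HBA SR" type=lvmohba \
        device-config:SCSIid=<SCSIid>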


    Fibre Channel

Overview

XenServer provides support for shared Storage Repositories (SRs) on Fibre Channel (FC) LUNs. FC is

    supported on the Dell Compellent SAN by utilizing QLogic or Emulex HBAs.

Fibre Channel support is implemented based on the Logical Volume Manager (LVM) and provides the same

    performance benefits provided by LVM VDIs in the local disk case. Fibre Channel SRs are capable of

    supporting VM agility using XenMotion: VMs can be started on any XenServer host in a resource pool and

    migrated between them with no noticeable downtime.

The following sections detail the steps involved in adding a new Fibre Channel connected volume to a

    XenServer pool.

Adding a FC LUN to XenServer Pool

The following section will cover the creation of the Volume on the Compellent Storage Center, the LUN

    mapping on the Dell Compellent, and adding the new SR to the XenServer pool.

This procedure assumes that the servers' Fibre Channel connections have been zoned to the Dell

    Compellent Storage Center and the server objects have been added to the Storage Center.

    1. Once all the XenServer servers are added to the Dell Compellent Storage Center, create a new

    volume and map it to all the XenServers in the pool with the same LUN Number, or create a

    Compellent Clustered server object, add all the XenServers to the Cluster, and map the volume

    to the XenServer Clustered server object.

2. When finished mapping the volume to all the XenServers in the Pool, launch the XenCenter Management console, right click on the pool name and select New Storage Repository.

    Figure 39, New Storage Repository

    3. On the Choose the type of new storage screen select Hardware HBA then click Next.

  • 7/29/2019 XenServer 6x Best Practices Compellent

    39/88

    Dell Compellent Storage Center XenServer 6.x Best Practices

    Page 39

    Figure 40, Choose Storage Type

4. On the Select the LUN to reattach or create a new SR screen, select the appropriate volume, then enter a descriptive name. Click Finish to continue.

    Figure 41, Select LUN

5. A dialog box will appear asking: Do you wish to format the disk? Click Yes to format the SR.

6. The SR should now be created and mapped to all the servers in the pool.
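
The result can be verified from the command line as well. The commands below are one possible check, assuming the SR name used above; they confirm that multipathing sees the Compellent LUN and that the SR is shared with a plugged PBD on each host in the pool.

    # Confirm the FC LUN is visible with multiple paths on this host
    multipath -ll

    # Confirm the new SR is shared and attached on every host in the pool
    xe sr-list name-label="<SR name>" params=uuid,name-label,shared
    xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached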


    Data Instant Replay to Recover Virtual Machines or Data

    Overview

    The Dell Compellent Storage Center system allows for the creation of Data Instant Replays (snapshots)

    to recover crash-consistent states of virtual machines.

When mapping Dell Compellent iSCSI or Fibre Channel volumes to XenServer, the SRs will be created as
LVM disks, stamping each SR with a unique identifier (UUID). When creating Dell Compellent
Replays of LVM volumes, the Replay cannot be mapped to the XenServer without first un-mapping
the original volume from the server, because the Replay carries the same LVM UUID as the original and the two would conflict.

    There are two different options to recover data or virtual machines using Dell Compellent Replays.

Recovery Option 1 - One VM per LUN

The first option is the easiest way to recover; however, it also requires more administration of LUNs.

    This recovery option utilizes a 1:1 ratio of virtual machines to LUNs on the Dell Compellent SAN. This

    option allows for easy recovery of volumes/virtual machines to the XenServer by creating a local

    recovery view of the Volume in Storage Center.

    Prior to mapping the Replay to the XenServer(s) remove the mapping to the original volume. Since the

    Replay has the same UUID as the original volume, XenServer will reattach to the volume just as if it

    was the original.

    The following process details how to recover a virtual machine to a previous state using the 1:1

    mapping of Virtual Machines to LUNs.

The Dell Compellent System does not limit the number of LUNs that can be created; however, the
server's HBAs usually have a limit of 256 LUNs per server.

    Recovery Scenario

    XenServer Pool containing two servers, XenServer6P1S1 and XenServer6P1S2.

All servers are connected to the Dell Compellent Storage Center using Fibre Channel and zoned accordingly.

The Dell Compellent Storage System has been set up to take hourly Replays of the volume running one virtual machine named W2k8-Xen6.

1. As shown below, a volume is created on the Dell Compellent system and named Xen6_P1_SR2. Also note the replay of this volume created at 08:30:00 PM.

Replays can be generated automatically on the Compellent system by utilizing the Replay Scheduler, or manually through the Storage Center Console.


    Figure 42, Compellent Replays

2. The figure below depicts the VM named W2k8-Xen6 running.

    Figure 43, W2k8-Xen6 Online

In this example a catastrophe strikes W2k8-Xen6, rendering it unbootable. By using Dell Compellent Replays, the server can be quickly recovered to the time of the last snapshot.


3. Verify the VM is shut down in the XenServer Console.

4. Highlight the Xen6_P1_SR2 volume hosting W2k8-Xen6 and select Forget Storage Repository to remove this volume from the XenServer Pool.

Figure 44, Forget SR
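
The Forget Storage Repository action can also be performed with the xe CLI; XenCenter performs the unplug and forget as a single action. The sketch below assumes the SR name used in this example, and the UUIDs are placeholders.

    # Look up the UUID of the SR that holds the VM being recovered
    xe sr-list name-label="Xen6_P1_SR2" params=uuid

    # Unplug the SR's PBD on each host, then forget the SR (the data on the LUN is left intact)
    xe pbd-list sr-uuid=<sr-uuid> params=uuid
    xe pbd-unplug uuid=<pbd-uuid>
    xe sr-forget uuid=<sr-uuid>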

5. Go to the Dell Compellent Storage Center Console and highlight the volume containing the VM. In this example this is the Xen6_P1_SR2 volume.

6. Select the Mapping button.

Figure 45, Volume Mapping

7. Note the LUN number for the mapping.
8. Highlight each of the mappings listed individually and select the Remove Mapping button.
9. Select Yes on the Are you sure screen.
10. Select Yes (Remove Now) on the Warnings screen.
11. Repeat until all mappings are removed from the volume.


    Figure 46, Remove Mappings

12. With the volume in question selected from the Dell Compellent Storage Center console, click the Replays button. Right click on the replay to recover to and select Create Volume from Replay. In this example it is the replay dated 09/10/2011 08:30:00 pm.

    Figure 47, Local Recovery

13. On the Create Volume from Replay screen enter an appropriate name for the Replay Volume and select the Create Now button.

14. On the Map Volume to Server screen select one of the appropriate servers in the pool to map the view volume to, and then select Continue.

15. On the Advanced options screen enter the appropriate LUN number, then select Continue. In this example LUN 2 is being used as that was the original volume number.

16. When completed select Create Now.

17. This procedure only mapped the volume to one server. If more mappings are required, select the Mappings button and add the appropriate mappings to the volume to represent all the servers in the XenServer Pool. In the example below the servers XenServer6P1S1 and XenServer6P1S2 are both added to the new View Volume.

    Figure 48, Volume Mappings

    18.Return to the XenCenter console, right click on the pool and select New Storage Repository.

    Figure 49, New SR

19. Select the appropriate type of storage for the volume, then select Next. In this example it is an FC connection, so Hardware HBA should be selected.


    Figure 50, SR Type

20. On the Select the LUN to reattach or create a new SR screen, select the appropriate volume, name it accordingly, then select Finish.

    Figure 51, Select LUN

21. A message should appear asking if the SR should be Reattached, Formatted or canceled. Select Reattach.


    Figure 52, Reattach SR

22. With the replay of the SR now attached to the Pool, the virtual disk can be mapped to the virtual machine. From XenCenter, highlight the virtual machine to be recovered, then select the Storage tab. Notice that the VM doesn't have any disks associated with it.

    23.Click the Attach button to associate a disk to the VM.

    Figure 53, Attach Disk

    24.Expand the recovered SR, select the appropriate disk and click Attach.


    Figure 54, Select Disk
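
The same attach operation can be scripted with the xe CLI. This is a sketch only; the SR, VDI, VM, and VBD UUIDs are placeholders taken from the environment.

    # List the virtual disks on the recovered SR to identify the one belonging to the VM
    xe vdi-list sr-uuid=<recovered-sr-uuid> params=uuid,name-label,virtual-size

    # Connect the VDI to the VM as its boot disk, then plug it if the VM is already running
    xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=0 bootable=true type=Disk
    xe vbd-plug uuid=<vbd-uuid>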

    25.The Virtual machine can now be started in the same state it was in at the time of the last

    Replay. In this example the last Replay was taken at 8:30 pm.

    Figure 55, Start VM

26. If satisfied with the result, the original volume can be coalesced into the new view volume by following the remaining steps.

CAUTION: Coalescing the original volume with the view volume will destroy the original volume.

    27.Highlight the original volume, right click on it and choose delete.


    Figure 56, Delete Volume

28. Confirm the action by clicking Yes to move the volume to the Recycle Bin.

29. To completely remove the volume from the system, delete the volume from the recycle bin by expanding the recycle bin, right clicking on the volume, and choosing Delete.

Figure 57, Delete Volume from Recycle Bin

30. Confirm the delete by clicking Yes.

31. The original volume is now removed, leaving the recovery volume as the primary volume. Once the associated replays of the view volume are expired they will be coalesced into the volume as shown below.


    Figure 58, Volume with Replays Associated

    Figure 59, Replay Coalescing


    Figure 60, Coalescing Complete

Recovery Option 2 - Recovery Server

The second option available for recovering virtual machines with Dell Compellent Replays is using a

    standalone recovery XenServer. This option is useful when multiple virtual machines are hosted on

    each SR as it allows recovery of one VM to a recovery server utilizing Dell Compellent Replays.

As mentioned earlier, there is a limitation that prevents mounting the replay to the same XenServer or
Pool, because the UUIDs associated with the disks will conflict. Adding a separate standalone XenServer

    recovery server allows administrators to map the recovery volume to the recovery server and attach

    the SR. A new virtual machine can then be created and mapped to the appropriate virtual disk. The

    recovered virtual machines can then be exported and imported back into the production system.

Below is a step-by-step guide on recovering virtual machines to a standalone XenServer or a Remote DR

    site XenServer.

    Recovery Scenario

    XenServer Pool containing two servers, XenServer6P1S1 and XenServer6P1S2.

    Standalone (Recovery) XenServer named XenRecovery.

All servers are connected to the Dell Compellent Storage Center using Fibre Channel and are already zoned accordingly.

    A replay is created on the volume Xen6_P1_SR2.

1. From the Dell Compellent Storage Center console, select the volume to recover and click the Replays button.


    Figure 61, Volume Replays

2. Right click on the replay to recover to and select Create Volume from Replay. In the example below the Replay used is dated 09/11/2011 08:09:54 am.

    Figure 62, Local Recovery

3. On the Create Volume from Replay screen enter an appropriate name for the Replay volume and click the Create Now button.

4. On the Select a Server to Map screen select one of the recovery servers to map the view volume to, then click Continue.

5. In the Map Volume to Server Advanced options, enter the appropriate LUN numbers for the server port. If mapping to multiple servers, set each mapping to the same LUN number. In the example LUN 12 is used. Click Create Now.

When mapping to multiple servers in a Pool use the Storage Center Cluster Server Object. This will create the mapping to all servers with the same LUN number.


    Figure 63, LUN Number

6. The next step after mapping the storage to the recovery XenServer is to add the Storage Repository to the recovery server.

    A separate copy of XenCenter must be used or the original Pool must first be removed from the

    console. XenCenter will not allow the addition of this Storage Repository to the recovery server if it

    sees that volume mapped elsewhere.


    Figure 64, XenCenter Console

7. From XenCenter right click on the recovery XenServer and select New Storage Repository.

    Figure 65, New SR


    8. Select the appropriate storage type and click Next.

    Figure 66, Select Disk Type

    9. Enter a name for the new SR and click Next.

    Figure 67, Enter SR Name

    10.Select the recovered LUN, name it, and click Finish.


    Figure 68, Select Recovery LUN

11. A warning message should appear stating that an existing SR was found on the selected LUN. Click Reattach.

    Figure 69, Reattach SR

12. Now that the SR has been added to the recovery server, the process of recovering the VMs can be started. The next step is to create a new virtual machine as a placeholder.

    13.Right click on the recovery XenServer and choose New VM.


    Figure 70, New Virtual Machine

    14.Select the appropriate template for the server then click Next.

    Figure 71, OS Template

15. Enter a name for the server, then click Next. Typically the actual server name of the VM being recovered is used.


    Figure 72, Virtual Machine Name

    16.Click Next on the Locate the operating system installation media screen.

    Figure 73, Installation Media

17. Click Next at the Select a home server screen.


    Figure 74, Select VM Home Server

18. Enter the appropriate number of vCPUs and amount of Memory, then click Next.

    Figure 75, Size CPU and Memory

19. On the Enter the information about the virtual disks for the new virtual machine screen, select a location to store a temporary virtual disk, then click Next. Typically it is best to store the temporary disk on an SR that isn't being used for recovery.


    Figure 76, Temporary SR Disk Location

20. On the Add or remove virtual network interfaces screen click Add, select the appropriate network, then click Next.

    Figure 77, Select Network

21. On the Virtual machine configuration is complete screen uncheck Start VM automatically and click Finish.


    Figure 78, Uncheck Start VM Automatically

22. From the XenCenter Console select the newly created VM, then select the Storage tab.

23. Highlight the virtual disk temporarily attached to the VM and select Delete or Detach. Since this disk contains no information it is OK to delete it.

    Figure 79, Detach Disk

    24.Click Yes at the Delete system disk message.


    Figure 80, Delete System Disk

25. Once the temporary disk is deleted, click the Attach button to select the original disk from the recovered Volume. Expand the recovered LUN and select the appropriate disk to attach.

    Figure 81, Attach Disk

    NOTE: If there are multiple disks in the Storage Repository with no name, it may take some trial

    and error to connect to the correct disk. Use the Storage Tab to detach and reattach disks until

the correct one is selected. Restoring the Metadata backup will prevent this issue. If a Virtual Machine
Metadata backup has been taken on the Volume, use the procedure outlined in the VM Metadata
Backup and Recovery section to recover the names.

From this point the VM can be started, exported, copied, etc. Typically the VM would be exported and
imported back into the production Pool.
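
The export and import can also be done from the command line. The sketch below assumes a local path on the recovery host with enough free space for the .xva file; the file name and UUIDs are placeholders.

    # Shut down the recovered VM, then export it to an .xva file on the recovery host
    xe vm-shutdown uuid=<recovered-vm-uuid>
    xe vm-export vm=<recovered-vm-uuid> filename=/mnt/backup/w2k8-xen6.xva

    # Import the .xva into the production pool and start the VM
    xe vm-import filename=/mnt/backup/w2k8-xen6.xva sr-uuid=<production-sr-uuid>
    xe vm-start uuid=<imported-vm-uuid>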


    Dynamic Capacity

    Dynamic Capacity Overview

    Dell Compellent's Thin Provisioning, called Dynamic Capacity, delivers the highest storage utilization

    possible by eliminating allocated but unused capacity. Dynamic Capacity completely separates storage

allocation from utilization, enabling users to allocate any size virtual volume upfront yet only consume actual physical capacity when data is written by the application.

    Dynamic Capacity with XenServer

When XenServer is connected to Dell Compellent storage via iSCSI or Fibre Channel connections, the
Storage Repository is created as an LVM (Logical Volume Manager) repository. When the volume is created
on the Dell Compellent System, by default the newly created volume consumes zero space. Only when
data is written to the volume will space be acquired, and only the written space is consumed.
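
One way to observe this behavior from XenServer is to compare the SR's virtual allocation with its physical utilization, as in the sketch below (the SR name is a placeholder). Note that the figures reported by XenServer reflect LVM allocation on the SR, so the authoritative consumed-space figure for a thin-provisioned volume is the one reported by Storage Center.

    # Compare the space allocated to VDIs (virtual-allocation) with the space
    # actually used on the SR (physical-utilisation)
    xe sr-list name-label="<SR name>" \
        params=uuid,name-label,physical-size,physical-utilisation,virtual-allocation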


    Data Progression

    Data Progression on XenServer

The foundation of Dell Compellent's Automated Tiered Storage patent is our unique Dynamic Block

    Architecture. Storage Center records and tracks specific information about blocks of data, including

time written, time accessed, frequency of access, associated volume, RAID level, and more. Data Progression utilizes all of this metadata, or data about the data, to automatically migrate blocks of

    data to the optimum storage tier based on usage and performance, unlike traditional systems that

    move entire files.

    Figure 82, Data Progression

Data Progression automatically classifies and migrates data to the optimum tier of storage, retaining frequently accessed data on high performance storage and storing infrequently accessed data on lower

    cost storage.

XenServer, like other virtualization hypervisors, will host virtual machines running Windows, Linux,
or other operating systems. These virtual machines contain a mix of stagnant data, data that is read frequently, and heavy read/write
data such as transaction logs and pagefiles.

    Take a Virtual Machine running a file server for example. A user copies a new file to the file server.

The Dell Compellent system writes the data instantly to Tier 1, RAID 10. The longer the file sits without

    any reads/writes, the further the blocks of data that make up the file will transition in the tiering

structure until it reaches Tier 3, RAID 5. Typically less than 20% of data on the file server is accessed

frequently. The Dell Compellent system is optimized to automatically move this data between tiers without any assistance. In a typical storage solution, an Administrator would have to manually move

files from one Tier to another. This equates to cost savings by storing static data on low-cost, high-

    capacity disks and by eliminating the need to manage data manually. Only data that is required to be

    on Tier 1 Storage will remain on that Tier.


Boot from SAN

In some cases, such as with blade servers that do not have internal disk drives, booting from SAN is the
only option, but many XenServer hosts have internal mirrored drives, giving administrators the flexibility to

    choose whether to boot from SAN or local disks.

Booting from SAN allows administrators to take Replays of the boot volume, replicate it to a DR site, and provides for fast recovery to other identical hardware if that XenServer fails. However, there are

    also benefits to booting from local disks and having the virtual machines located on SAN resources.

Since it only takes about 30 minutes to install and patch a XenServer, booting from local disks ensures

    the server will stay online if there is a need to do maintenance to fibre channel switches, Ethernet

    switches, or the SAN itself. The other advantage of booting from local disks is that this configuration

    does not require iSCSI or FC HBAs. The XenServer can boot from local disk and use the iSCSI software

    initiator to connect to shared storage on the SAN.
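
As an illustration of this configuration, the sketch below creates a shared software iSCSI SR from the command line. The host UUID, target address, target IQN, and SCSIid are placeholders and should come from the environment and the Compellent Storage Center configuration.

    # Confirm the software initiator IQN that will be added to the Compellent server object
    cat /etc/iscsi/initiatorname.iscsi

    # Create a shared software iSCSI SR on a Compellent volume
    xe sr-create host-uuid=<host-uuid> content-type=user shared=true \
        name-label="Compellent software iSCSI SR" type=lvmoiscsi \
        device-config:target=<controller-ip> device-config:targetIQN=<target-iqn> \
        device-config:SCSIid=<SCSIid>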


VM Metadata Backup and Recovery

The metadata for a VM contains information about the VM (such as the name, description, and

    Universally Unique Identifier (UUID)), VM configuration (such as the amount of virtual memory and the

    number of virtual CPUs), and information about the use of resources on the host or Resource Pool (such

    as Virtual Networks, Storage Repository, ISO Library, and so on).

    Most metadata configuration data is written when the VM is created and is updated when changes to

    the VM configuration are made. Adding a metadata export command to the change-control checklist

    will ensure that this information is available if needed.

    NOTE: Without the Metadata Backup the names and descriptions of files on the SR may not be available

    for a recovery. This will make recovery a difficult process.

    Figure 83, Conceptual Overview of XenServer Disaster Recovery

Backing Up VM MetaData

In XenServer, exporting or importing metadata can be done from the text-based console menu. On the

    physical console the menu is loaded by default. To start the console menu through the host console

screen in XenCenter, type xsconsole from the command line.


    Figure 84 Backup, Restore and Update Screen

    To export the VM metadata:

1. Select Backup, Restore and Update from the menu.
2. Select Backup Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository where the desired VMs are stored.
5. After the metadata backup is done, verify the successful completion on the summary screen.
6. In XenCenter, on the Storage tab of the SR selected in step 4, a new VDI should be created named Pool Metadata Backup.

    Figure 85 Backup Summary Screen


    Another option available from the console menu is Schedule Virtual Machine Metadata. This option

    allows for automated exports of metadata on a daily, weekly, or monthly basis. By default this option

    is disabled.
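
As a complementary safeguard, the pool database itself can be dumped to a file from the command line. This is not a substitute for the per-SR metadata backup described above, but it captures the host, network, and VM records for the pool; the file path below is only an example.

    # Dump the pool database (hosts, networks, VM records) to a file
    xe pool-dump-database file-name=/root/pool-backup.db

    # Preview a restore on replacement hardware before committing to it
    xe pool-restore-database file-name=/root/pool-backup.db dry-run=true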

Importing VM MetaData

A prerequisite for running the import command in a DR environment is that the Storage Repository(s)
(where the replicated virtual disk images are located) need to be set up and re-attached to a

    XenServer. Also make sure that the Virtual Networks are set up correctly by using the same names in

    the production and DR environment.

    After the SR is attached, the metadata backup can be restored.

    From the console menu:

    1. Select Backup, Restore and Update from the menu.

    2. Select Restore Virtual Machine Metadata.

    3. If prompted, log on with root credentials.

4. Select the Storage Repository to restore from.

5. Select the Metadata Backup you want to restore.

6. Select restore only VMs on this SR or all VMs in the pool.

    7. After the metadata restore is done, verify the summary screen and check for errors.

    8. The VMs are now available in XenCenter and can be started at the new site.

    Figure 86 Metadata Restore Summary


Disaster Recovery

XenServer 6 provides the enterprise with functionality designed to recover data from a catastrophic
failure of hardware which disables or destroys a whole pool or site. The XenServer 6 Disaster Recovery
feature provides the mechanism to back up services and applications while Dell Compellent replication
technology provides a means to make this data available at a remote site. Together they provide a
high availability solution for mission critical services and applications.

    This functionality is extended with XenServer Virtual Appliance (vApp) technology. A vApp is a logical

    group of one or more related VMs which can be started as a single entity in the event of a disaster.

    When a vApp is started, the VMs contained within the vApp are started based on a predefined order,

relieving the administrator from manually starting servers. The vApp functionality is useful in a DR
situation where all VMs in a vApp reside on the same Storage Repository.
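
vApps can also be listed and started from the xe CLI, which can be useful when scripting or testing a DR startup; the appliance UUID below is a placeholder.

    # List the vApps (appliances) defined in the pool
    xe appliance-list params=uuid,name-label

    # Start every VM in a vApp in its predefined order
    xe appliance-start uuid=<appliance-uuid>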

    NOTE: XenServer Disaster Recovery can only be enabled when using LVM over FC/iSCSI HBA, or software

    iSCSI. A small amount of space will be required on the storage for a new LUN which will contain the

    pool recovery information.

Replication Overview

XenServer Disaster Recovery takes advantage of Dell Compellent's replication technology to provide

    high availability. Dell Compellent replicates volumes in one direction. In a DR scenario, data is

    replicated from the primary site to the secondary site. By default, Dell Compellent replication is not

    bidirectional; therefore it is not possible to XenMotion between source Storage Center (the primary

    site) and destination Storage Center (the secondary site) unless using Dell Compellent Live Volumes for

    Replication. The following best practices recommendations for replication and remote recovery should

    be considered.

Compatible XenServer server hardware and OS is required at the DR site, to which the replicated
volumes can be mapped in the event the main XenServer Pool becomes inoperable.

Since replicated volumes can contain more than one virtual machine, it is recommended to sort virtual machines into specific replicated and non-replicated Storage Repositories. For example,

    if there are 30 virtual machines in the XenServer Pool, and only eight of them need to be

    replicated to the DR site, a special "Replicated" volume should be created to place those eight

virtual machines on, or utilize a 1:1 mapping of VMs to Volumes and only replicate the

    required VMs.

    Take advantage of the Storage Center QOS settings to prioritize the replication bandwidth of

    certain "mission critical" volumes. For example, two QOS definitions could be created so that

    the "mission critical" volume would get 80 Mb of the bandwidth, and the lower priority volume

    would get 20 Mb of the bandwidth.

    The following steps should be taken in preparation for a disaster:

    Configure the VMs and vApps.

    Note how the VMs and vApps are mapped to the SRs and the SRs to Volumes. Verify that the

name_label and name_description are meaningful and will allow an administrator to
recognize the SR after a disaster. A CLI example of recording this information follows this list.

Configure replication of the SR volume.
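
One simple way to record these mappings before a disaster is with the xe CLI, as sketched below; the output can be kept with the DR runbook so that replicated volumes can be matched to SRs and VMs at the secondary site.

    # Record how shared SRs are named and described so they can be recognized at the DR site
    xe sr-list shared=true params=uuid,name-label,name-description,type

    # Record which VM uses which virtual disk (and therefore which SR and volume)
    xe vbd-list params=vm-name-label,vdi-name-label,vdi-uuid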
