

XenServer Administrator's Guide
Release 5.5.0 Update 2

Table of Contents

1. Document Overview
1.1. How this Guide relates to other documentation
2. XenServer hosts and resource pools
2.1. Hosts and resource pools overview
2.2. Requirements for creating resource pools
2.3. Creating a resource pool
2.4. Adding shared storage
2.5. Installing and managing VMs on shared storage
2.6. Removing a XenServer host from a resource pool
2.7. High Availability
2.7.1. HA Overview
2.7.2. Configuration Requirements
2.7.3. Restart priorities
2.8. Enabling HA on a XenServer pool
2.8.1. Enabling HA using the CLI
2.8.2. Removing HA protection from a VM using the CLI
2.8.3. Recovering an unreachable host
2.8.4. Shutting down a host when HA is enabled
2.8.5. Shutting down a VM when it is protected by HA
2.9. Authenticating users using Active Directory (AD)
2.9.1. Configuring Active Directory authentication
2.9.2. User authentication
2.9.3. Removing access for a user
2.9.4. Leaving an AD domain
3. Storage
3.1. Storage Overview
3.1.1. Storage Repositories (SRs)
3.1.2. Virtual Disk Images (VDIs)
3.1.3. Physical Block Devices (PBDs)
3.1.4. Virtual Block Devices (VBDs)
3.1.5. Summary of Storage objects
3.1.6. Virtual Disk Data Formats
3.2. Storage configuration
3.2.1. Creating Storage Repositories
3.2.2. Upgrading LVM storage from XenServer 5.0 or earlier
3.2.3. LVM performance considerations
3.2.4. Converting between VDI formats
3.2.5. Probing an SR
3.2.6. Storage Multipathing
3.3. Storage Repository Types
3.3.1. Local LVM
3.3.2. Local EXT3 VHD
3.3.3. udev
3.3.4. ISO
3.3.5. EqualLogic
3.3.6. NetApp
3.3.7. Software iSCSI Support
3.3.8. Managing Hardware Host Bus Adapters (HBAs)
3.3.9. LVM over iSCSI
3.3.10. NFS VHD
3.3.11. LVM over hardware HBA
3.3.12. Citrix StorageLink Gateway (CSLG) SRs
3.4. Managing Storage Repositories
3.4.1. Destroying or forgetting an SR
3.4.2. Introducing an SR
3.4.3. Resizing an SR
3.4.4. Converting local Fibre Channel SRs to shared SRs
3.4.5. Moving Virtual Disk Images (VDIs) between SRs
3.4.6. Adjusting the disk IO scheduler
3.5. Virtual disk QoS settings
4. Networking
4.1. XenServer networking overview
4.1.1. Network objects
4.1.2. Networks
4.1.3. VLANs
4.1.4. NIC bonds
4.1.5. Initial networking configuration
4.2. Managing networking configuration
4.2.1. Creating networks in a standalone server
4.2.2. Creating networks in resource pools
4.2.3. Creating VLANs
4.2.4. Creating NIC bonds on a standalone host
4.2.5. Creating NIC bonds in resource pools
4.2.6. Configuring a dedicated storage NIC
4.2.7. Controlling Quality of Service (QoS)
4.2.8. Changing networking configuration options
4.2.9. NIC/PIF ordering in resource pools
4.3. Networking Troubleshooting
4.3.1. Diagnosing network corruption
4.3.2. Recovering from a bad network configuration
5. Workload Balancing
5.1. Workload Balancing Overview
5.1.1. Workload Balancing Basic Concepts
5.2. Designing Your Workload Balancing Deployment
5.2.1. Deploying One Server
5.2.2. Planning for Future Growth
5.2.3. Increasing Availability
5.2.4. Multiple Server Deployments
5.2.5. Workload Balancing Security
5.3. Workload Balancing Installation Overview
5.3.1. Workload Balancing System Requirements
5.3.2. Workload Balancing Data Store Requirements
5.3.3. Operating System Language Support
5.3.4. Preinstallation Considerations
5.3.5. Installing Workload Balancing
5.4. Windows Installer Commands for Workload Balancing
5.4.1. ADDLOCAL
5.4.2. CERT_CHOICE
5.4.3. CERTNAMEPICKED
5.4.4. DATABASESERVER
5.4.5. DBNAME
5.4.6. DBUSERNAME
5.4.7. DBPASSWORD
5.4.8. EXPORTCERT
5.4.9. EXPORTCERT_FQFN
5.4.10. HTTPS_PORT
5.4.11. INSTALLDIR
5.4.12. PREREQUISITES_PASSED
5.4.13. RECOVERYMODEL
5.4.14. USERORGROUPACCOUNT
5.4.15. WEBSERVICE_USER_CB
5.4.16. WINDOWS_AUTH
5.5. Initializing and Configuring Workload Balancing
5.5.1. Initialization Overview
5.5.2. To initialize Workload Balancing
5.5.3. To edit the Workload Balancing configuration for a pool
5.5.4. Authorization for Workload Balancing
5.5.5. Configuring Antivirus Software
5.5.6. Changing the Placement Strategy
5.5.7. Changing the Performance Thresholds and Metric Weighting
5.6. Accepting Optimization Recommendations
5.6.1. To accept an optimization recommendation
5.7. Choosing an Optimal Server for VM Initial Placement, Migrate, and Resume
5.7.1. To start a virtual machine on the optimal server
5.8. Entering Maintenance Mode with Workload Balancing Enabled
5.8.1. To enter maintenance mode with Workload Balancing enabled
5.9. Working with Workload Balancing Reports
5.9.1. Introduction
5.9.2. Types of Workload Balancing Reports
5.9.3. Using Workload Balancing Reports for Tasks
5.9.4. Creating Workload Balancing Reports
5.9.5. Generating Workload Balancing Reports
5.9.6. Workload Balancing Report Glossary
5.10. Administering Workload Balancing
5.10.1. Disabling Workload Balancing on a Resource Pool
5.10.2. Reconfiguring a Resource Pool to Use Another WLB Server
5.10.3. Uninstalling Workload Balancing
5.11. Troubleshooting Workload Balancing
5.11.1. General Troubleshooting Tips
5.11.2. Error Messages
5.11.3. Issues Installing Workload Balancing
5.11.4. Issues Initializing Workload Balancing
5.11.5. Issues Starting Workload Balancing
5.11.6. Workload Balancing Connection Errors
5.11.7. Issues Changing Workload Balancing Servers
6. Backup and recovery
6.1. Backups
6.2. Full metadata backup and disaster recovery (DR)
6.2.1. DR and metadata backup overview
6.2.2. Backup and restore using xsconsole
6.2.3. Moving SRs between hosts and Pools
6.2.4. Using Portable SRs for Manual Multi-Site Disaster Recovery
6.3. VM Snapshots
6.3.1. Regular Snapshots
6.3.2. Quiesced Snapshots
6.3.3. Taking a VM snapshot
6.3.4. VM Rollback
6.4. Coping with machine failures
6.4.1. Member failures
6.4.2. Master failures
6.4.3. Pool failures
6.4.4. Coping with Failure due to Configuration Errors
6.4.5. Physical Machine failure
7. Monitoring and managing XenServer
7.1. Alerts
7.1.1. Customizing Alerts
7.1.2. Configuring Email Alerts
7.2. Custom Fields and Tags
7.3. Custom Searches
7.4. Determining throughput of physical bus adapters
8. Command line interface
8.1. Basic xe syntax
8.2. Special characters and syntax
8.3. Command types
8.3.1. Parameter types
8.3.2. Low-level param commands
8.3.3. Low-level list commands
8.4. xe command reference
8.4.1. Bonding commands
8.4.2. CD commands
8.4.3. Console commands
8.4.4. Event commands
8.4.5. Host (XenServer host) commands
8.4.6. Log commands
8.4.7. Message commands
8.4.8. Network commands
8.4.9. Patch (update) commands
8.4.10. PBD commands
8.4.11. PIF commands
8.4.12. Pool commands
8.4.13. Storage Manager commands
8.4.14. SR commands
8.4.15. Task commands
8.4.16. Template commands
8.4.17. Update commands
8.4.18. User commands
8.4.19. VBD commands
8.4.20. VDI commands
8.4.21. VIF commands
8.4.22. VLAN commands
8.4.23. VM commands
8.4.24. Workload Balancing commands
9. Troubleshooting
9.1. XenServer host logs
9.1.1. Sending host log messages to a central server
9.2. XenCenter logs
9.3. Troubleshooting connections between XenCenter and the XenServer host

Index

List of Tables

5.1. Report Toolbar Buttons
5.2. Report Toolbar Buttons

Chapter 1. Document Overview

Table of Contents

1.1. How this Guide relates to other documentation

This document is a system administrator's guide to XenServer™, the platform virtualization solution from Citrix®. It describes the tasks involved in configuring a XenServer deployment -- in particular, how to set up storage, networking and resource pools, and how to administer XenServer hosts using the xe command line interface (CLI).

This section summarizes the rest of the guide so that you can find the information you need. The following topics are covered:

XenServer hosts and resource pools
XenServer storage configuration
XenServer network configuration
XenServer workload balancing
XenServer backup and recovery
Monitoring and managing XenServer
XenServer command line interface
XenServer troubleshooting
XenServer resource allocation guidelines

1.1. How this Guide relates to other documentation

This document is primarily aimed at system administrators, who need to configure and administer XenServer deployments. Other documentation shipped with this release includes:

XenServer Installation Guide provides a high level overview of XenServer, along with step-by-step instructions on installing XenServer hosts and the XenCenter management console.

XenServer Virtual Machine Installation Guide describes how to install Linux and Windows VMs on top of a XenServer deployment. As well as installing new VMs from install media (or using the VM templates provided with the XenServer release), this guide also explains how to create VMs from existing physical machines, using a process called P2V.

XenServer Software Development Kit Guide presents an overview of the XenServer SDK -- a selection of code samples that demonstrate how to write applications that interface with XenServer hosts.

XenAPI Specification provides a programmer's reference guide to the XenServer API.

XenServer User Security considers the issues involved in keeping your XenServer installation secure.

Release Notes provides a list of known issues that affect this release.

Chapter 2. XenServer hosts and resource pools

Table of Contents

2.1. Hosts and resource pools overview
2.2. Requirements for creating resource pools
2.3. Creating a resource pool
2.4. Adding shared storage
2.5. Installing and managing VMs on shared storage
2.6. Removing a XenServer host from a resource pool
2.7. High Availability
2.7.1. HA Overview
2.7.2. Configuration Requirements
2.7.3. Restart priorities
2.8. Enabling HA on a XenServer pool
2.8.1. Enabling HA using the CLI
2.8.2. Removing HA protection from a VM using the CLI
2.8.3. Recovering an unreachable host
2.8.4. Shutting down a host when HA is enabled
2.8.5. Shutting down a VM when it is protected by HA
2.9. Authenticating users using Active Directory (AD)
2.9.1. Configuring Active Directory authentication
2.9.2. User authentication
2.9.3. Removing access for a user
2.9.4. Leaving an AD domain

This chapter describes how resource pools can be created through a series of examples using the xe command line interface (CLI). A simple NFS-based shared storage configuration is presented, and a number of simple VM management examples are discussed. Procedures for dealing with physical node failures are also described.

2.1. Hosts and resource pools overview

A resource pool comprises multiple XenServer host installations, bound together into a single managed entity which can host Virtual Machines. When combined with shared storage, a resource pool enables VMs to be started on any XenServer host which has sufficient memory and then dynamically moved between XenServer hosts while running with minimal downtime (XenMotion). If an individual XenServer host suffers a hardware failure, then the administrator can restart the failed VMs on another XenServer host in the same resource pool. If high availability (HA) is enabled on the resource pool, VMs will automatically be moved if their host fails. Up to 16 hosts are supported per resource pool, although this restriction is not enforced.

A pool always has at least one physical node, known as the master. Only the master node exposes an administration interface (used by XenCenter and the CLI); the master forwards commands to individual members as necessary.

2.2. Requirements for creating resource pools

A resource pool is an aggregate of one or more homogeneous XenServer hosts, up to a maximum of 16. The definition of homogeneous is:

the CPUs on the server joining the pool are the same (in terms of vendor, model, and features) as the CPUs on servers already in the pool.

the server joining the pool is running the same version of XenServer software, at the same patch level, as servers already in the pool.

The software will enforce additional constraints when joining a server to a pool – in particular:

it is not a member of an existing resource pool
it has no shared storage configured
there are no running or suspended VMs on the XenServer host which is joining
there are no active operations in progress on the VMs, such as a VM shutting down

You must also check that the clock of the host joining the pool is synchronized to the same time as the pool master (for example, by using NTP), that its management interface is not bonded (you can configure this once the host has successfully joined the pool), and that its management IP address is static (either configured on the host itself or by using an appropriate configuration on your DHCP server).

XenServer hosts in resource pools may contain different numbers of physical network interfaces and have local storage repositories of varying size. In practice, it is often difficult to obtain multiple servers with exactly the same CPUs, and so minor variations are permitted. If you are sure that it is acceptable in your environment for hosts with varying CPUs to be part of the same resource pool, then the pool joining operation can be forced by passing the --force parameter.
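
For illustration, a minimal sketch of a forced join issued from the console of the joining host (the master hostname and credentials are placeholders):

xe pool-join master-address=<host1> master-username=<root> \
master-password=<password> --force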

Note

The requirement for a XenServer host to have a static IP address to be part of a resource pool also applies to servers providing shared NFS or iSCSI storage for the pool.

Although not a strict technical requirement for creating a resource pool, the advantages of pools (for example, the ability to dynamically choose on which XenServer host to run a VM and to dynamically move a VM between XenServer hosts) are only available if the pool has one or more shared storage repositories. If possible, postpone creating a pool of XenServer hosts until shared storage is available. Once shared storage has been added, Citrix recommends that you move existing VMs whose disks are in local storage into shared storage. This can be done using the xe vm-copy command or XenCenter.

2.3. Creating a resource pool

Resource pools can be created using either the XenCenter management console or the CLI. When you join a new host to a resource pool, the joining host synchronizes its local database with the pool-wide one, and inherits some settings from the pool:

VM, local, and remote storage configuration is added to the pool-wide database. All of these will still be tied to the joining host in the pool unless you explicitly take action to make the resources shared after the join has completed.

The joining host inherits existing shared storage repositories in the pool, and appropriate PBD records are created so that the new host can access existing shared storage automatically.

Networking information is partially inherited by the joining host: the structural details of NICs, VLANs and bonded interfaces are all inherited, but policy information is not. This policy information, which must be re-configured, includes:

o the IP addresses of management NICs, which are preserved from the original configuration
o the location of the management interface, which remains the same as the original configuration. For example, if the other pool hosts have their management interface on a bonded interface, then the joining host must be explicitly migrated to the bond once it has joined. See To add NIC bonds to the pool master and other hosts for details on how to migrate the management interface to a bond.
o Dedicated storage NICs, which must be re-assigned to the joining host from XenCenter or the CLI, and the PBDs re-plugged to route the traffic accordingly. This is because IP addresses are not assigned as part of the pool join operation, and the storage NIC is not useful without this configured correctly. See Section 4.2.6, “Configuring a dedicated storage NIC” for details on how to dedicate a storage NIC from the CLI.

To join XenServer hosts host1 and host2 into a resource pool using the CLI

1. Open a console on XenServer host host2.
2. Command XenServer host host2 to join the pool on XenServer host host1 by issuing the command:

   xe pool-join master-address=<host1> master-username=<root> \
   master-password=<password>

The master-address must be set to the fully-qualified domain name of XenServer host host1 and the password must be the administrator password set when XenServer host host1 was installed.

Naming a resource pool

XenServer hosts belong to an unnamed pool by default. To create your first resource pool, rename the existing nameless pool. You can use tab-complete to get the <pool_uuid>:

xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>

2.4. Adding shared storage

For a complete list of supported shared storage types, see the Storage chapter. This section demonstrates how shared storage (represented as a storage repository) can be created on an existing NFS server.

Adding NFS shared storage to a resource pool using the CLI

1. Open a console on any XenServer host in the pool.
2. Create the storage repository on <server:/path> by issuing the command:

   xe sr-create content-type=user type=nfs name-label=<"Example SR"> shared=true \
   device-config:server=<server> \
   device-config:serverpath=<path>

The device-config:server refers to the hostname of the NFS server and device-config:serverpath refers to the path on the NFS server. Since shared is set to true, the shared storage will be automatically connected to every XenServer host in the pool and any XenServer hosts that subsequently join will also be connected to the storage. The UUID of the created storage repository will be printed on the screen.

3. Find the UUID of the pool by issuing the command:

   xe pool-list

4. Set the shared storage as the pool-wide default with the command:

   xe pool-param-set uuid=<pool-uuid> default-SR=<sr-uuid>

Since the shared storage has been set as the pool-wide default, all future VMs will have their disks created on shared storage by default. See Chapter 3, Storage for information about creating other types of shared storage.

2.5. Installing and managing VMs on shared storage

The following example shows how to install a Debian Linux VM using the Debian Etch 4.0 template provided with XenServer.

Installing a Debian Etch (4.0) VM

1. Open a console on any host in the pool.
2. Use the sr-list command to find the UUID of your shared storage:

   xe sr-list

3. Create the Debian VM by issuing the command:

   xe vm-install template="Debian Etch 4.0" new-name-label=<etch> \
   sr_uuid=<shared_storage_uuid>

When the command completes, the Debian VM will be ready to start.

4. Start the Debian VM with the command:

   xe vm-start vm=<etch>

The master will choose a XenServer host from the pool to start the VM. If the on parameter is provided, the VM will start on the specified XenServer host. If the requested XenServer host is unable to start the VM, the command will fail. To request that a VM is always started on a particular XenServer host, set the affinity parameter of the VM to the UUID of the desired XenServer host using the xe vm-param-set command. Once set, the system will start the VM there if it can; if it cannot, it will default to choosing from the set of possible XenServer hosts.
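
For example, a minimal sketch of pinning a VM to a particular host by setting its affinity (the UUIDs are placeholders):

xe vm-param-set uuid=<vm_uuid> affinity=<host_uuid>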

5. You can use XenMotion to move the Debian VM to another XenServer host with the command:

   xe vm-migrate vm=<etch> host=<host_name> --live

XenMotion keeps the VM running during this process to minimize downtime.

Note

When a VM is migrated, the domain on the original hosting server is destroyed and the memory that VM used is zeroed out before Xen makes it available to new VMs. This ensures that there is no information leak from old VMs to new ones. As a consequence, if you send multiple near-simultaneous commands to migrate a number of VMs when near the memory limit of a server (for example, a set of VMs consuming 3GB migrated to a server with 4GB of physical memory), the memory of an old domain might not be scrubbed before a migration is attempted, causing the migration to fail with a HOST_NOT_ENOUGH_FREE_MEMORY error. Inserting a delay between migrations should allow Xen the opportunity to successfully scrub the memory and return it to general use.

2.6. Removing a XenServer host from a resource pool

When a XenServer host is removed (ejected) from a pool, the machine is rebooted, reinitialized, and left in a state equivalent to that after a fresh installation. It is important not to eject a XenServer host from a pool if there is important data on the local disks.

To remove a host from a resource pool using the CLI

1. Open a console on any host in the pool.
2. Find the UUID of the host by using the command:

   xe host-list

3. Eject the host from the pool:

   xe pool-eject host-uuid=<uuid>

The XenServer host will be ejected and left in a freshly-installed state.

Warning

Do not eject a host from a resource pool if it contains important data stored on its local disks. All of the data will be erased upon ejection from the pool. If you wish to preserve this data, copy the VM to shared storage on the pool first using XenCenter, or the xe vm-copy CLI command.

When a XenServer host containing locally stored VMs is ejected from a pool, those VMs will still be present in the pool database and visible to the other XenServer hosts. They will not start until the virtual disks associated with them have been changed to point at shared storage which can be seen by other XenServer hosts in the pool, or simply removed. It is for this reason that you are strongly advised to move any local storage to shared storage upon joining a pool, so that individual XenServer hosts can be ejected (or physically fail) without loss of data.
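
As a minimal sketch, copying a locally stored VM onto the pool's shared storage before ejecting its host might look like the following (the VM name and SR UUID are placeholders):

xe vm-copy vm=<vm_name> sr-uuid=<shared_sr_uuid> new-name-label=<"vm_name (shared)">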

2.7. High Availability

This section explains the XenServer implementation of virtual machine high availability (HA), and how to configure it using the xe CLI.

Note

XenServer HA is only available with a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.

2.7.1. HA Overview

When HA is enabled, XenServer continually monitors the health of the hosts in a pool. The HA mechanism automatically moves protected VMs to a healthy host if the current VM host fails. Additionally, if the host that fails is the master, HA selects another host to take over the master role automatically, meaning that you can continue to manage the XenServer pool.

To absolutely guarantee that a host is unreachable, a resource pool configured for high availability uses several heartbeat mechanisms to regularly check up on hosts. These heartbeats go through both the storage interfaces (to the Heartbeat SR) and the networking interfaces (over the management interfaces). Both of these heartbeat routes can be multi-homed for additional resilience to prevent false positives.

XenServer dynamically maintains a failover plan for what to do if a set of hosts in a pool fail at any given time. An important concept to understand is the host failures to tolerate value, which is defined as part of HA configuration. This determines the number of failures that is allowed without any loss of service. For example, if a resource pool consisted of 16 hosts, and the tolerated failures is set to 3, the pool calculates a failover plan that allows for any 3 hosts to fail and still be able to restart VMs on other hosts. If a plan cannot be found, then the pool is considered to be overcommitted. The plan is dynamically recalculated based on VM lifecycle operations and movement. Alerts are sent (either through XenCenter or e-mail) if changes (for example the addition of new VMs to the pool) cause your pool to become overcommitted.

2.7.1.1. Overcommitting

A pool is overcommitted if the VMs that are currently running could not be restarted elsewhere following a user-defined number of host failures.

This would happen if there was not enough free memory across the pool to run those VMs following failure. However, there are also more subtle changes which can make HA guarantees unsustainable: changes to VBDs and networks can affect which VMs may be restarted on which hosts. Currently it is not possible for XenServer to check all actions before they occur and determine if they will cause violation of HA demands. However, an asynchronous notification is sent if HA becomes unsustainable.

2.7.1.2. Overcommitment Warning

If you attempt to start or resume a VM and that action causes the pool to be overcommitted, a warning alert is raised. This warning is displayed in XenCenter and is also available as a message instance through the Xen API. The message may also be sent to an email address if configured. You will then be allowed to cancel the operation, or proceed anyway. Proceeding will cause the pool to become overcommitted. The amount of memory used by VMs of different priorities is displayed at the pool and host levels.

2.7.1.3. Host Fencing

If a server failure occurs, such as loss of network connectivity or a problem with the control stack, the XenServer host self-fences to ensure that the VMs are not running on two servers simultaneously. When a fence action is taken, the server immediately and abruptly restarts, causing all VMs running on it to be stopped. The other servers will detect that the VMs are no longer running and the VMs will be restarted according to the restart priorities assigned to them. The fenced server will enter a reboot sequence, and when it has restarted it will try to re-join the resource pool.

2.7.2. Configuration Requirements

To use the HA feature, you need:

Shared storage, including at least one iSCSI or Fibre Channel LUN of size 356MiB or greater -- the heartbeat SR. The HA mechanism creates two volumes on the heartbeat SR:

4MiB heartbeat volume

Used for heartbeating.

256MiB metadata volume

Stores pool master metadata to be used in the case of master failover.

If you are using a NetApp or EqualLogic SR, manually provision an iSCSI LUN on the array to use as the heartbeat SR.

A XenServer pool (this feature provides high availability at the server level within a single resource pool).
Enterprise licenses on all hosts.
Static IP addresses for all hosts.

Warning

Should the IP address of a server change while HA is enabled, HA will assume that the host's network has failed, and will probably fence the host and leave it in an unbootable state. To remedy this situation, disable HA using the host-emergency-ha-disable command, reset the pool master using pool-emergency-reset-master, and then re-enable HA.

For a VM to be protected by the HA feature, it must be agile. This means that:

it must have its virtual disks on shared storage (any type of shared storage may be used; the iSCSI or Fibre Channel LUN is only required for the storage heartbeat and can be used for virtual disk storage if you prefer, but this is not necessary)
it must not have a connection to a local DVD drive configured
it should have its virtual network interfaces on pool-wide networks.

Citrix strongly recommends the use of a bonded management interface on the servers in the pool if HA is enabled, and multipathed storage for the heartbeat SR.

If you create VLANs and bonded interfaces from the CLI, then they may not be plugged in and active despite being created. In this situation, a VM can appear to be not agile, and cannot be protected by HA. If this occurs, use the CLI pif-plug command to bring the VLAN and bond PIFs up so that the VM can become agile. You can also determine precisely why a VM is not agile by using the xe diagnostic-vm-status CLI command to analyze its placement constraints, and take remedial action if required.
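
For illustration, a minimal sketch of bringing up a PIF and then querying a VM's placement constraints; the UUID parameters shown are placeholders, and the exact arguments accepted by these commands should be checked against the CLI reference:

xe pif-plug uuid=<pif_uuid>
xe diagnostic-vm-status uuid=<vm_uuid>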

2.7.3. Restart priorities

Virtual machines are assigned a restart priority and a flag that indicates whether they should be protected by HA or not. When HA is enabled, every effort is made to keep protected virtual machines live. If a restart priority is specified, any protected VM that is halted will be started automatically. If a server fails then the VMs on it will be started on another server.

The possible restart priorities are:

1 | 2 | 3

when a pool is overcommitted the HA mechanism will attempt to restart protected VMs with the lowest restart priority first

best-effort

VMs with this priority setting will be restarted only when the system has attempted to restart protected VMs

ha-always-run=false

VMs with this parameter set will not be restarted

The restart priorities determine the order in which VMs are restarted when a failure occurs. In a given configuration where a number of server failures greater than zero can be tolerated (as indicated in the HA panel in the GUI, or by the ha-plan-exists-for field on the pool object on the CLI), the VMs that have restart priorities 1, 2 or 3 are guaranteed to be restarted given the stated number of server failures. VMs with a best-effort priority setting are not part of the failover plan and are not guaranteed to be kept running, since capacity is not reserved for them. If the pool experiences server failures and enters a state where the number of tolerable failures drops to zero, the protected VMs will no longer be guaranteed to be restarted. If this condition is reached, a system alert will be generated. In this case, should an additional failure occur, all VMs that have a restart priority set will behave according to the best-effort behavior.
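
As an illustration, a minimal sketch of marking one VM as protected with the highest restart priority and another as best-effort, assuming the priority values listed above are passed directly as the ha-restart-priority parameter (the UUIDs are placeholders):

xe vm-param-set uuid=<vm1_uuid> ha-restart-priority=1 ha-always-run=true
xe vm-param-set uuid=<vm2_uuid> ha-restart-priority=best-effort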

If a protected VM cannot be restarted at the time of a server failure (for example, if the pool was overcommitted when the failure occurred), further attempts to start this VM will be made as the state of the pool changes. This means that if extra capacity becomes available in a pool (if you shut down a non-essential VM, or add an additional server, for example), a fresh attempt to restart the protected VMs will be made, which may now succeed.

Note

No running VM will ever be stopped or migrated in order to free resources for a VM with always-run=true to be restarted.

2.8. Enabling HA on a XenServer pool

HA can be enabled on a pool using either XenCenter or the command-line interface. In either case, you will specify a set of priorities that determine which VMs should be given highest restart priority when a pool is overcommitted.

Warning

When HA is enabled, some operations that would compromise the plan for restarting VMs may be disabled, such as removing a server from a pool. To perform these operations, HA can be temporarily disabled, or alternately, VMs protected by HA made unprotected.

2.8.1. Enabling HA using the CLI

1. Verify that you have a compatible Storage Repository (SR) attached to your pool. iSCSI or Fibre Channel are compatible SR types. Please refer to the reference guide for details on how to configure such a storage repository using the CLI.

2. For each VM you wish to protect, set a restart priority. You can do this as follows:

   xe vm-param-set uuid=<vm_uuid> ha-restart-priority=<1> ha-always-run=true

3. Enable HA on the pool:

   xe pool-ha-enable heartbeat-sr-uuid=<sr_uuid>

4. Run the pool-ha-compute-max-host-failures-to-tolerate command. This command returns the maximum number of hosts that can fail before there are insufficient resources to run all the protected VMs in the pool.

   xe pool-ha-compute-max-host-failures-to-tolerate

The number of failures to tolerate determines when an alert is sent: the system will recompute a failover plan as the state of the pool changes and with this computation the system identifies the capacity of the pool and how many more failures are possible without loss of the liveness guarantee for protected VMs. A system alert is generated when this computed value falls below the specified value for ha-host-failures-to-tolerate.

5. Specify the number of failures to tolerate parameter. This should be less than or equal to the computed value:

   xe pool-param-set ha-host-failures-to-tolerate=<2>

2.8.2. Removing HA protection from a VM using the CLI

To disable HA features for a VM, use the xe vm-param-set command to set the ha-always-run parameter to false. This does not clear the VM restart priority settings. You can enable HA for a VM again by setting the ha-always-run parameter to true.
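
For example, a minimal sketch of removing and then restoring HA protection for a VM (the UUID is a placeholder):

xe vm-param-set uuid=<vm_uuid> ha-always-run=false
xe vm-param-set uuid=<vm_uuid> ha-always-run=true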

2.8.3. Recovering an unreachable host

If for some reason a host cannot access the HA statefile, it is possible that a host may become unreachable. To recover your XenServer installation it may be necessary to disable HA using the host-emergency-ha-disable command:

xe host-emergency-ha-disable --force

If the host was the pool master, then it should start up as normal with HA disabled. Slaves should reconnect and automatically disable HA. If the host was a Pool slave and cannot contact the master, then it may be necessary to force the host to reboot as a pool master (xe pool-emergency-transition-to-master) or to tell it where the new master is (xe pool-emergency-reset-master):

xe pool-emergency-transition-to-master uuid=<host_uuid>
xe pool-emergency-reset-master master-address=<new_master_hostname>

When all hosts have successfully restarted, re-enable HA:

xe pool-ha-enable heartbeat-sr-uuid=<sr_uuid>

2.8.4. Shutting down a host when HA is enabled

When HA is enabled, special care needs to be taken when shutting down or rebooting a host to prevent the HA mechanism from assuming that the host has failed. To shut down a host cleanly in an HA-enabled environment, first disable the host, then evacuate the host, and finally shut down the host using either XenCenter or the CLI.

To shut down a host in an HA-enabled environment on the command line:

xe host-disable host=<host_name>
xe host-evacuate uuid=<host_uuid>
xe host-shutdown host=<host_name>

2.8.5. Shutting down a VM when it is protected by HA

When a VM is protected under a HA plan and set to restart automatically, it cannot be shut down while this protection is active. To shut down a VM, first disable its HA protection and then execute the CLI command. XenCenter offers you a dialog box to automate disabling the protection if you click on the Shutdown button of a protected VM.
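
As a minimal sketch, the equivalent sequence from the CLI might look like this (the VM identifiers are placeholders):

xe vm-param-set uuid=<vm_uuid> ha-always-run=false
xe vm-shutdown vm=<vm_name>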

Note

If you shut down a VM from within the guest, and the VM is protected, it is automatically restarted under the HA failure conditions. This helps ensure that operator error (or an errant program that mistakenly shuts down the VM) does not result in a protected VM being left shut down accidentally. If you want to shut this VM down, disable its HA protection first.

2.9. Authenticating users using Active Directory (AD)

XenServer supports the authentication of users through AD. This makes it easier to control access to XenServer hosts. Active Directory users can use the xe CLI (passing appropriate -u and -pw arguments) and also connect to the host using XenCenter. Authentication is done on a per-resource pool basis.

Access is controlled by the use of subjects. A subject in XenServer maps to an entity on your directory server (either a user or a group). When external authentication is enabled, the credentials used to create a session are first checked against the local root credentials (in case your directory server is unavailable) and then against the subject list. To permit access, you must create a subject entry for the person or group you wish to grant access to. This can be done using XenCenter or the xe CLI.

2.9.1. Configuring Active Directory authentication

XenServer supports use of Active Directory servers using Windows 2003 or later.

For external authentication using Active Directory to be successful, it is important that the clocks on your XenServer hosts are synchronized with those on your Active Directory server. When XenServer joins the Active Directory domain, this will be checked and authentication will fail if there is too much skew between the servers.

Note

The servers can be in different time zones, and it is the UTC time that is compared. To ensure synchronization is correct, you may choose to use the same NTP servers for your XenServer pool and the Active Directory server.

When configuring Active Directory authentication for a XenServer host, the same DNS servers should be used for both the Active Directory server (and have appropriate configuration to allow correct interoperability) and the XenServer host (note that in some configurations, the Active Directory server may provide the DNS itself). This can be achieved either by using DHCP to provide the IP address and a list of DNS servers to the XenServer host, or by setting values in the PIF objects or using the installer if a manual static configuration is used.

Citrix recommends enabling DHCP to broadcast host names. In particular, the host names localhost or linux should not be assigned to hosts. Host names must consist solely of no more than 156 alphanumeric characters, and may not be purely numeric.

Enabling external authentication on a pool

External authentication using Active Directory can be configured using either XenCenter or the CLI using the command below.

xe pool-enable-external-auth auth-type=AD \
   service-name=<full-qualified-domain> \
   config:user=<username> \
   config:pass=<password>

The user specified needs to have Add/remove computer objects or workstations privileges, which is the default for domain administrators.

Note

If you are not using DHCP on the network that Active Directory and your XenServer hosts use, you can use these two approaches to set up your DNS:

1. Configure the DNS server to use on your XenServer hosts:

   xe pif-reconfigure-ip mode=static dns=<dnshost>

2. Manually set the management interface to use a PIF that is on the same network as your DNS server:

   xe host-management-reconfigure pif-uuid=<pif_in_the_dns_subnetwork>

Note

External authentication is a per-host property. However, Citrix advises that you enable and disable this on a per-pool basis – in this case XenServer will deal with any failures that occur when enabling authentication on a particular host and perform any roll-back of changes that may be required, ensuring that a consistent configuration is used across the pool. Use the host-param-list command to inspect properties of a host and to determine the status of external authentication by checking the values of the relevant fields.
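
For example, a minimal sketch of inspecting a host's external authentication status (the UUID is a placeholder; the external-auth-type and external-auth-service-name field names are taken from the XenAPI host object and should be verified against your CLI reference):

xe host-param-list uuid=<host_uuid>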

Disabling external authentication

Use XenCenter to disable Active Directory authentication, or the following xe command:

xe pool-disable-external-auth

2.9.2. User authentication

To allow a user access to your XenServer host, you must add a subject for that user or a group that they are in. (Transitive group memberships are also checked in the normal way, for example: adding a subject for group A, where group A contains group B and user 1 is a member of group B, would permit access to user 1.) If you wish to manage user permissions in Active Directory, you could create a single group that you then add and remove users to/from; alternatively, you can add and remove individual users from XenServer, or a combination of users and groups as would be appropriate for your authentication requirements. The subject list can be managed from XenCenter or using the CLI as described below.

When authenticating a user, the credentials are first checked against the local root account, allowing you to recover a system whose AD server has failed. If the credentials (i.e. username then password) do not match/authenticate, then an authentication request is made to the AD server – if this is successful the user's information will be retrieved and validated against the local subject list, otherwise access will be denied. Validation against the subject list will succeed if the user or a group in the transitive group membership of the user is in the subject list.

Allowing a user access to XenServer using the CLI

To add an AD subject to XenServer:

xe subject-add subject-name=<entity name>

The entity name should be the name of the user or group to which you want to grant access. You may optionally include the domain of the entity (e.g. '<xendt\user1>' as opposed to '<user1>') although the behavior will be the same unless disambiguation is required.

Removing access for a user using the CLI

1. Identify the subject identifier for the subject you wish to revoke access for. This would be the user or the group containing the user (removing a group would remove access for all users in that group, provided they are not also specified in the subject list). You can do this using the subject-list command:

   xe subject-list

You may wish to apply a filter to the list, for example to get the subject identifier for a user named user1 in the testad domain, you could use the following command:

   xe subject-list other-config:subject-name='<domain\user>'

2. Remove the user using the subject-remove command, passing in the subject identifier you learned in the previous step:

   xe subject-remove subject-identifier=<subject identifier>

3. You may wish to terminate any current sessions this user has already authenticated. See Terminating all authenticated sessions using xe and Terminating individual user sessions using xe for more information about terminating sessions. If you do not terminate sessions, the users whose permissions have been revoked may be able to continue to access the system until they log out.

Listing subjects with access

To identify the list of users and groups with permission to access your XenServer host or pool, use the following command:

xe subject-list

2.9.3. Removing access for a user

Once a user is authenticated, they will have access to the server until they end their session, or another user terminates their session. Removing a user from the subject list, or removing them from a group that is in the subject list, will not automatically revoke any already-authenticated sessions that the user has; this means that they may be able to continue to access the pool using XenCenter or other API sessions that they have already created. In order to terminate these sessions forcefully, XenCenter and the CLI provide facilities to terminate individual sessions, or all currently active sessions. See the XenCenter help for more information on procedures using XenCenter, or below for procedures using the CLI.

Terminating all authenticated sessions using xe

Execute the following CLI command:

xe session-subject-identifier-logout-all

Terminating individual user sessions using xe

1. Determine the subject identifier whose session you wish to log out. Use either the session-subject-identifier-list or subject-list xe commands to find this (the first shows users who have sessions, the second shows all users but can be filtered, for example, using a command like xe subject-list other-config:subject-name=xendt\\user1 – depending on your shell you may need a double-backslash as shown).

2. Use the session-subject-identifier-logout command, passing the subject identifier you have determined in the previous step as a parameter, for example:

   xe session-subject-identifier-logout subject-identifier=<subject-id>

2.9.4. Leaving an AD domain

Use XenCenter to leave an AD domain. See the XenCenter help for more information. Alternately run the pool-disable-external-auth command, specifying the pool uuid if required.

Note

Leaving the domain will not cause the host objects to be removed from the AD database. See this knowledge base article for more information about this and how to remove the disabled host entries.

Chapter 3. Storage

Table of Contents

3.1. Storage Overview
3.1.1. Storage Repositories (SRs)
3.1.2. Virtual Disk Images (VDIs)
3.1.3. Physical Block Devices (PBDs)
3.1.4. Virtual Block Devices (VBDs)
3.1.5. Summary of Storage objects
3.1.6. Virtual Disk Data Formats
3.2. Storage configuration
3.2.1. Creating Storage Repositories
3.2.2. Upgrading LVM storage from XenServer 5.0 or earlier
3.2.3. LVM performance considerations
3.2.4. Converting between VDI formats
3.2.5. Probing an SR
3.2.6. Storage Multipathing
3.3. Storage Repository Types
3.3.1. Local LVM
3.3.2. Local EXT3 VHD
3.3.3. udev
3.3.4. ISO
3.3.5. EqualLogic
3.3.6. NetApp
3.3.7. Software iSCSI Support
3.3.8. Managing Hardware Host Bus Adapters (HBAs)
3.3.9. LVM over iSCSI
3.3.10. NFS VHD
3.3.11. LVM over hardware HBA
3.3.12. Citrix StorageLink Gateway (CSLG) SRs
3.4. Managing Storage Repositories
3.4.1. Destroying or forgetting an SR
3.4.2. Introducing an SR
3.4.3. Resizing an SR
3.4.4. Converting local Fibre Channel SRs to shared SRs
3.4.5. Moving Virtual Disk Images (VDIs) between SRs
3.4.6. Adjusting the disk IO scheduler
3.5. Virtual disk QoS settings

This chapter discusses the framework for storage abstractions. It describes the way physical storage hardware of various kinds is mapped to VMs, and the software objects used by the XenServer host API to perform storage-related tasks. Detailed sections on each of the supported storage types include procedures for creating storage for VMs using the CLI, with type-specific device configuration options, generating snapshots for backup purposes and some best practices for managing storage in XenServer host environments. Finally, the virtual disk QoS (quality of service) settings are described.

3.1. Storage Overview

This section explains what the XenServer storage objects are and how they are related to each other.

3.1.1. Storage Repositories (SRs)

XenServer defines a container called a storage repository (SR) to describe a particular storage target, in which Virtual Disk Images (VDIs) are stored. A VDI is a disk abstraction which contains the contents of a virtual disk.

The interface to storage hardware allows VDIs to be supported on a large number of SR types. The XenServer SR is very flexible, with built-in support for IDE, SATA, SCSI and SAS drives locally connected, and iSCSI, NFS, SAS and Fibre Channel remotely connected. The SR and VDI abstractions allow advanced storage features such as sparse provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them. For storage subsystems that do not inherently support advanced operations directly, a software stack is provided based on Microsoft's Virtual Hard Disk (VHD) specification which implements these features.

Each XenServer host can use multiple SRs and different SR types simultaneously. These SRs can be shared between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a defined resource pool. A shared SR must be network accessible to each host. All hosts in a single resource pool must have at least one shared SR in common.

SRs are storage targets containing virtual disk images (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.

A storage repository is a persistent, on-disk data structure. For SR types that use an underlying block device, the process of creating a new SR involves erasing any existing data on the specified storage target. Other storage types such as NFS, NetApp, EqualLogic and StorageLink SRs create a new container on the storage array in parallel to existing SRs.

CLI operations to manage storage repositories are described in Section 8.4.14, “SR commands”.

3.1.2. Virtual Disk Images (VDIs)

Virtual Disk Images are a storage abstraction that is presented to a VM. VDIs are the fundamental unit of virtualized storage in XenServer. Similar to SRs, VDIs are persistent, on-disk objects that exist independently of XenServer hosts. CLI operations to manage VDIs are described in Section 8.4.20, “VDI commands”. The actual on-disk representation of the data differs by the SR type and is managed by a separate storage plugin interface for each SR, called the SM API.

3.1.3. Physical Block Devices (PBDs)

Physical Block Devices represent the interface between a physical server and an attached SR. PBDs are connector objects that allow a given SR to be mapped to a XenServer host. PBDs store the device configuration fields that are used to connect to and interact with a given storage target. For example, NFS device configuration includes the IP address of the NFS server and the associated path that the XenServer host mounts. PBD objects manage the run-time attachment of a given SR to a given XenServer host. CLI operations relating to PBDs are described in Section 8.4.10, “PBD commands”.
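
For example, a minimal sketch of listing the PBDs that attach a particular SR to the hosts in a pool (the SR UUID is a placeholder):

xe pbd-list sr-uuid=<sr_uuid> params=all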

3.1.4. Virtual Block Devices (VBDs)

Virtual Block Devices are connector objects (similar to the PBD described above) that allow mappings between VDIs and VMs. In addition to providing a mechanism for attaching (also called plugging) a VDI into a VM, VBDs allow for the fine-tuning of parameters regarding QoS (quality of service), statistics, and the bootability of a given VDI. CLI operations relating to VBDs are described in Section 8.4.19, “VBD commands”.

3.1.5. Summary of Storage objects

The following image is a summary of how the storage objects presented so far are related:

Graphical overview of storage repositories and related objects

3.1.6. Virtual Disk Data Formats

In general, there are three types of mapping of physical storage to a VDI:

File-based VHD on a Filesystem; VM images are stored as thin-provisioned VHD format files on either a local non-shared Filesystem (EXT type SR) or a shared NFS target (NFS type SR)

Logical Volume-based VHD on a LUN; The default XenServer blockdevice-based storage inserts a Logical Volume manager on a disk, either a locally attached device (LVM type SR) or a SAN attached LUN over either Fibre Channel (LVMoHBA type SR), iSCSI (LVMoISCSI type SR) or SAS (LVMoHBA type SR). VDIs are represented as volumes within the Volume manager and stored in VHD format to allow thin provisioning of reference nodes on snapshot and clone.

LUN per VDI; LUNs are directly mapped to VMs as VDIs by SR types that provide an array-specific plugin (NetApp, EqualLogic or StorageLink type SRs). The array storage abstraction therefore matches the VDI storage abstraction for environments that manage storage provisioning at an array level.

3.1.6.1. VHD-based VDIs

VHD files may be chained, allowing two VDIs to share common data. In cases where a VHD-backed VM is cloned, the resulting VMs share the common on-disk data at the time of cloning. Each proceeds to make its own changes in an isolated copy-on-write (CoW) version of the VDI. This feature allows VHD-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.

The VHD format used by LVM-based and File-based SR types in XenServer uses sparse provisioning. The image file is automatically extended in 2MB chunks as the VM writes data into the disk. For File-based VHD, this has the considerable benefit that VM image files take up only as much space on the physical storage as required. With LVM-based VHD the underlying logical volume container must be sized to the virtual size of the VDI, however unused space on the underlying CoW instance disk is reclaimed when a snapshot or clone occurs. The difference between the two behaviours can be characterised in the following way:

For LVM-based VHDs, the difference disk nodes within the chain consume only as much data as has been written to disk but the leaf nodes (VDI clones) remain fully inflated to the virtual size of the disk. Snapshot leaf nodes (VDI snapshots) remain deflated when not in use and can be attached Read-only to preserve the deflated allocation. Snapshot nodes that are attached Read-Write will be fully inflated on attach, and deflated on detach.

For file-based VHDs, all nodes consume only as much data as has been written, and the leaf node files grow to accommodate data as it is actively written. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will physically be only the size of the OS data that has been written to the disk, plus some minor metadata overhead.

When cloning VMs based off a single VHD template, each child VM forms a chain where new changes are written to the new VM, and old blocks are directly read from the parent template. If the new VM was converted into a further template and more VMs cloned, then the resulting chain will result in degraded performance. XenServer supports a maximum chain length of 30, but it is generally not recommended that you approach this limit without good reason. If in doubt, you can always "copy" the VM using XenServer or the vm-copy command, which resets the chain length back to 0.

3.1.6.1.1. VHD Chain Coalescing

VHD images support chaining, which is the process whereby information shared between one or more VDIs is not duplicated. This leads to a situation where trees of chained VDIs are created over time as VMs and their associated VDIs get cloned. When one of the VDIs in a chain is deleted, XenServer rationalizes the other VDIs in the chain to remove unnecessary VDIs.

This coalescing process runs asynchronously. The amount of disk space reclaimed and the time taken to perform the process depends on the size of the VDI and the amount of shared data. Only one coalescing process will ever be active for an SR. This process thread runs on the SR master host.

If you have critical VMs running on the master server of the pool and experience occasional slow IO due to this process, you can take steps to mitigate against this:

Migrate the VM to a host other than the SR master
Set the disk IO priority to a higher level, and adjust the scheduler. See Section 3.5, “Virtual disk QoS settings” for more information.

3.1.6.1.2. Space Utilisation

Space utilisation is always reported based on the current allocation of the SR, and may not reflect the amount of virtual disk space allocated. The reporting of space for LVM-based SRs versus File-based SRs will also differ given that File-based VHD supports full thin provisioning, while the underlying volume of an LVM-based VHD will be fully inflated to support potential growth for writeable leaf nodes. Space utilisation reported for the SR will depend on the number of snapshots, and the amount of difference data written to a disk between each snapshot.

LVM-based space utilisation differs depending on whether an LVM SR is upgraded vs created as a new SR in XenServer. Upgraded LVM SRs will retain a base node that is fully inflated to the size of the virtual disk, and any subsequent snapshot or clone operations will provision at least one additional node that is fully inflated. For new SRs, in contrast, the base node will be deflated to only the data allocated in the VHD overlay.

When VHD-based VDIs are deleted, the space is marked for deletion on disk. Actual removal of allocated data may take some time to occur as it is handled by the coalesce process that runs asynchronously and independently for each VHD-based SR.

3.1.6.2. LUN-based VDIs

Mapping a raw LUN as a Virtual Disk Image is typically the most high-performance storage method. For administrators that want to leverage existing storage SAN infrastructure such as NetApp, EqualLogic or StorageLink accessible arrays, the array snapshot, clone and thin provisioning capabilities can be exploited directly using one of the array-specific adapter SR types (NetApp, EqualLogic or StorageLink). The virtual machine storage operations are mapped directly onto the array APIs using a LUN per VDI representation. This includes activating the data path on demand such as when a VM is started or migrated to another host.

Managed NetApp LUNs are accessible using the NetApp SR driver type, and are hosted on a Network Appliance device running a version of Ontap 7.0 or greater. LUNs are allocated and mapped dynamically to the host using the XenServer host management framework.

EqualLogic storage is accessible using the EqualLogic SR driver type, and is hosted on an EqualLogic storage array running a firmware version of 4.0 or greater. LUNs are allocated and mapped dynamically to the host using the XenServer host management framework.

For further information on StorageLink supported array systems and the various capabilities in each case, please refer to the StorageLink documentation directly.

3.2. Storage configuration

This section covers creating storage repository types and making them available to a XenServer host. The examples provided pertain to storage configuration using the CLI, which provides the greatest flexibility. See the XenCenter Help for details on using the New Storage Repository wizard.

3.2.1. Creating Storage Repositories

This section explains how to create Storage Repositories (SRs) of different types and make them available to a XenServer host. The examples provided cover creating SRs using the xe CLI. See the XenCenter help for details on using the New Storage Repository wizard to add SRs using XenCenter.

Note

Local SRs of type lvm and ext can only be created using the xe CLI. After creation, all SR types can be managed by either XenCenter or the xe CLI.

There are two basic steps involved in creating a new storage repository for use on a XenServer host using the CLI:

1. Probe the SR type to determine values for any required parameters.

2. Create the SR to initialize the SR object and associated PBD objects, plug the PBDs, and activate the SR.

These steps differ in detail depending on the type of SR being created. In all examples the sr-create command returns the UUID of the created SR if successful.

SRs can also be destroyed when no longer in use to free up the physical device, or forgotten to detach the SR from one XenServer host and attach it to another. See Section 3.4.1, “Destroying or forgetting a SR” for details.

3.2.2. Upgrading LVM storage from XenServer 5.0 or earlier

See the XenServer Installation Guide for information on upgrading LVM storage to enable the latest features. Local, LVM on iSCSI, and LVM on HBA storage types from older (XenServer 5.0 and before) product versions will need to be upgraded before they will support snapshot and fast clone.

Note

Upgrade is a one-way operation, so Citrix recommends only performing the upgrade when you are certain the storage will no longer need to be attached to a pool running an older software version.

3.2.3. LVM performance considerations

The snapshot and fast clone functionality provided in XenServer 5.5 and later for LVM-based SRs comes with an inherent performance overhead. In cases where optimal performance is desired, XenServer supports creation of VDIs in the raw format in addition to the default VHD format. The XenServer snapshot functionality is not supported on raw VDIs.

Note

Non-transportable snapshots using the default Windows VSS provider will work on any type of VDI.

Warning

Do not try to snapshot a VM that has type=raw disks attached. This could result in a partial snapshot being created. In this situation, you can identify the orphan snapshot VDIs by checking the snapshot-of field and then deleting them.

3.2.3.1. VDI types

In general, VHD format VDIs will be created. You can opt to use raw at the time you create the VDI; this can only be done using the xe CLI. After a software upgrade from a previous XenServer version, existing data will be preserved as backwards-compatible raw VDIs, but these are special-cased so that snapshots can be taken of them once you have allowed this by upgrading the SR. Once the SR has been upgraded and the first snapshot has been taken, you will be accessing the data through a VHD format VDI.

To check if an SR has been upgraded, verify that its sm-config:use_vhd key is true. To check if a VDI was created with type=raw, check its sm-config map. The sr-param-list and vdi-param-list xe commands can be used respectively for this purpose.
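
For example, a sketch of both checks (placeholder UUIDs; the keys appear inside the sm-config map in the output):

xe sr-param-list uuid=<sr_uuid> | grep use_vhd
xe vdi-param-list uuid=<vdi_uuid> | grep sm-config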

3.2.3.2. Creating a raw virtual disk using the xe CLI

1. Run the following command to create a VDI given the UUID of the SR you want to place the virtual disk in:

xe vdi-create sr-uuid=<sr-uuid> type=user virtual-size=<virtual-size> \
    name-label=<VDI name> sm-config:type=raw

2. Attach the new virtual disk to a VM and use your normal disk tools within the VM to partition and format, or otherwise make use of the new disk. You can use the vbd-create command to create a new VBD to map the virtual disk into your VM.
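
As a sketch of step 2, a VBD can be created and plugged as follows (the UUIDs and device position are placeholders):

xe vbd-create vm-uuid=<vm_uuid> vdi-uuid=<vdi_uuid> device=<device_position> \
    bootable=false mode=RW type=Disk
xe vbd-plug uuid=<vbd_uuid>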

3.2.4. Converting between VDI formats

It is not possible to do a direct conversion between the raw and VHD formats. Instead, you can create a new VDI (either raw, as described above, or VHD if the SR has been upgraded or was created on XenServer 5.5 or later) and then copy data into it from an existing volume. Citrix recommends that you use the xe CLI to ensure that the new VDI has a virtual size at least as big as the VDI you are copying from (by checking its virtual-size field, for example by using the vdi-param-list command). You can then attach this new VDI to a VM and use your preferred tool within the VM (standard disk management tools in Windows, or the dd command in Linux) to do a direct block-copy of the data. If the new volume is a VHD volume, it is important to use a tool that can avoid writing empty sectors to the disk so that space is used optimally in the underlying storage repository; in this case a file-based copy approach may be more suitable.
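
As a sketch, the size check runs on the host and the block copy runs inside the Linux VM; the device names /dev/xvdb and /dev/xvdc are assumptions and will vary with how the VBDs were attached:

xe vdi-param-list uuid=<source_vdi_uuid> | grep virtual-size

dd if=/dev/xvdb of=/dev/xvdc bs=1M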

3.2.5. Probing an SR

The sr-probe command can be used in two ways:

1. To identify unknown parameters for use in creating an SR.

2. To return a list of existing SRs.

In both cases sr-probe works by specifying an SR type and one or more device-config parameters for that SR type. When an incomplete set of parameters is supplied, the sr-probe command returns an error message indicating parameters are missing and the possible options for the missing parameters. When a complete set of parameters is supplied, a list of existing SRs is returned. All sr-probe output is returned as XML.

For example, a known iSCSI target can be probed by specifying its name or IP address, and the set of IQNs available on the target will be returned:

xe sr-probe type=lvmoiscsi device-config:target=<192.168.1.10>

Error code: SR_BACKEND_FAILURE_96
Error parameters: , The request is missing or has an incorrect target IQN parameter, \
<?xml version="1.0" ?>
<iscsi-target-iqns>
  <TGT>
    <Index>0</Index>
    <IPAddress>192.168.1.10</IPAddress>
    <TargetIQN>iqn.192.168.1.10:filer1</TargetIQN>
  </TGT>
</iscsi-target-iqns>

Probing the same target again and specifying both the name/IP address and desired IQN returns the set of SCSIids (LUNs) available on the target/IQN.

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
    device-config:targetIQN=iqn.192.168.1.10:filer1

Error code: SR_BACKEND_FAILURE_107
Error parameters: , The SCSIid parameter is missing or incorrect, \
<?xml version="1.0" ?>
<iscsi-target>
  <LUN>
    <vendor>IET</vendor>
    <LUNid>0</LUNid>
    <size>42949672960</size>
    <SCSIid>149455400000000000000000002000000b70200000f000000</SCSIid>
  </LUN>
</iscsi-target>

Probing the same target and supplying all three parameters will return a list of SRs that exist on the LUN, if any.

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
    device-config:targetIQN=192.168.1.10:filer1 \
    device-config:SCSIid=149455400000000000000000002000000b70200000f000000

<?xml version="1.0" ?>
<SRlist>
  <SR>
    <UUID>3f6e1ebd-8687-0315-f9d3-b02ab3adc4a6</UUID>
    <Devlist>/dev/disk/by-id/scsi-149455400000000000000000002000000b70200000f000000</Devlist>
  </SR>
</SRlist>

The following parameters can be probed for each SR type:

SR type      device-config parameter (in order of dependency)    Can be probed?    Required for sr-create?

lvmoiscsi    target                                              No                Yes
             chapuser                                            No                No
             chappassword                                        No                No
             targetIQN                                           Yes               Yes
             SCSIid                                              Yes               Yes

lvmohba      SCSIid                                              Yes               Yes

netapp       target                                              No                Yes
             username                                            No                Yes
             password                                            No                Yes
             chapuser                                            No                No
             chappassword                                        No                No
             aggregate                                           No [a]            Yes
             FlexVols                                            No                No
             allocation                                          No                No
             asis                                                No                No

nfs          server                                              No                Yes
             serverpath                                          Yes               Yes

lvm          device                                              No                Yes

ext          device                                              No                Yes

equallogic   target                                              No                Yes
             username                                            No                Yes
             password                                            No                Yes
             chapuser                                            No                No
             chappassword                                        No                No
             storagepool                                         No [b]            Yes

cslg         target                                              No                Yes
             storageSystemId                                     Yes               Yes
             storagePoolId                                       Yes               Yes
             username                                            No                No [c]
             password                                            No                No [c]
             cslport                                             No                No [c]
             chapuser                                            No                No [c]
             chappassword                                        No                No [c]
             provision-type                                      Yes               No
             protocol                                            Yes               No
             provision-options                                   Yes               No
             raid-type                                           Yes               No

[a] Aggregate probing is only possible at sr-create time. It needs to be done there so that the aggregate can be specified at the point that the SR is created.

[b] Storage pool probing is only possible at sr-create time. It needs to be done there so that the storage pool can be specified at the point that the SR is created.

[c] If the username, password, or port configuration of the StorageLink service are changed from the default value then the appropriate parameter and value must be specified.

3.2.6. Storage Multipathing

Dynamic multipathing support is available for Fibre Channel and iSCSI storage backends. By default, it uses round-robin mode load balancing, so both routes have active traffic on them during normal operation. You can enable multipathing in XenCenter or on the xe CLI.

Caution

Before attempting to enable multipathing, verify that multiple targets are available on your storage server. For example, an iSCSI storage backend queried for sendtargets on a given portal should return multiple targets, as in the following example:

iscsiadm -m discovery --type sendtargets --portal 192.168.0.161
192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie

To enable storage multipathing using the xe CLI

1. Unplug all PBDs on the host:

xe pbd-unplug uuid=<pbd_uuid>

2. Set the host's other-config:multipathing parameter:

xe host-param-set other-config:multipathing=true uuid=<host_uuid>

3. Set the host's other-config:multipathhandle parameter to dmp:

xe host-param-set other-config:multipathhandle=dmp uuid=<host_uuid>

4. If there are existing SRs on the host running in single path mode but that have multiple paths:

Migrate or suspend any running guests with virtual disks in the affected SRs.

Unplug and re-plug the PBD of any affected SRs to reconnect them using multipathing:

xe pbd-plug uuid=<pbd_uuid>

To disable multipathing, first unplug your PBDs, set the host other-config:multipathing parameter to false, and then replug your PBDs as described above. Do not modify the other-config:multipathhandle parameter as this will be done automatically.
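
As a sketch, the disable sequence on a host is the same commands in reverse (placeholder UUIDs; repeat the unplug and plug for every PBD on the host):

xe pbd-unplug uuid=<pbd_uuid>
xe host-param-set other-config:multipathing=false uuid=<host_uuid>
xe pbd-plug uuid=<pbd_uuid>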

Multipath support in XenServer is based on the device-mapper multipathd components. Activation and deactivation of multipath nodes is handled automatically by the Storage Manager API. Unlike the standard dm-multipath tools in Linux, device mapper nodes are not automatically created for all LUNs on the system; new device mapper nodes are provisioned only when LUNs are actively used by the storage management layer. It is therefore unnecessary to use any of the dm-multipath CLI tools to query or refresh DM table nodes in XenServer.

Should it be necessary to query the status of device-mapper tables manually, or list active device mapper multipath nodes on the system, use the mpathutil utility:

mpathutil list

mpathutil status

Note

Due to incompatibilities with the integrated multipath management architecture, the standard dm-multipath CLI utility should not be used with XenServer. Please use the mpathutil CLI tool for querying the status of nodes on the host.

Note

Multipath support in EqualLogic arrays does not encompass Storage IO multipathing in the traditional sense of the term. Multipathing must be handled at the network/NIC bond level. Refer to the EqualLogic documentation for information about configuring network failover for EqualLogic SRs/LVMoISCSI SRs.

3.3. Storage Repository Types

The storage repository types supported in XenServer are provided by plug-ins in the control domain; these can be examined, and plug-ins supported by third parties can be added to the /opt/xensource/sm directory. Modification of these files is unsupported, but visibility of these files may be valuable to developers and power users. New storage manager plug-ins placed in this directory are automatically detected by XenServer. Use the sm-list command (see Section 8.4.13, “Storage Manager commands”) to list the available SR types.
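
For example, a sketch of listing the plug-ins and their types (output varies by installation):

xe sm-list params=name-label,type,vendor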

New storage repositories are created using the New Storage wizard in XenCenter. The wizard guides you through the various probing and configuration steps. Alternatively, use the sr-create command. This command creates a new SR on the storage substrate (potentially destroying any existing data), and creates the SR API object and a corresponding PBD record, enabling VMs to use the storage. On successful creation of the SR, the PBD is automatically plugged. If the SR shared=true flag is set, a PBD record is created and plugged for every XenServer host in the resource pool.

All XenServer SR types support VDI resize, fast cloning and snapshot. SRs based on the LVM SR type (local, iSCSI, or HBA) provide thin provisioning for snapshot and hidden parent nodes. The other SR types support full thin provisioning, including for virtual disks that are active.

Note

Automatic LVM metadata archiving is disabled by default. This does not prevent metadata recovery for LVM groups.

Warning

When VHD VDIs are not attached, for example in the case of a VDI snapshot, they are stored by default thinly-provisioned. Because of this it is imperative to ensure that there is sufficient disk space available for the VDI to become thickly provisioned when attempting to attach it. VDI clones, however, are thickly-provisioned.

The maximum supported VDI sizes are:

Storage type       Maximum VDI size
EXT3               2TB
LVM                2TB
NetApp             2TB
EqualLogic         15TB
ONTAP (NetApp)     12TB

3.3.1. Local LVM

The Local LVM type presents disks within a locally-attached Volume Group.

By default, XenServer uses the local disk on the physical host on which it is installed. The Linux Logical Volume Manager (LVM) is used to manage VM storage. A VDI is implemented in VHD format in an LVM logical volume of the specified size.

XenServer versions prior to 5.5.0 did not use the VHD format and will remain in legacy mode. See Section 3.2.2, “Upgrading LVM storage from XenServer 5.0 or earlier” for information about upgrading a storage repository to the new format.

3.3.1.1. Creating a local LVM SR (lvm)

Device-config parameters for lvm SRs are:

Parameter Name    Description                                        Required?
device            device name on the local host to use for the SR   Yes

To create a local lvm SR on /dev/sdb use the following command.

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example Local LVM SR"> shared=false \
    device-config:device=/dev/sdb type=lvm
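
If creation succeeds, sr-create prints the UUID of the new SR. As a sketch, attachment of the SR can then be confirmed with:

xe pbd-list sr-uuid=<sr_uuid> params=currently-attached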

3.3.2. Local EXT3 VHD

The Local EXT3 VHD type represents disks as VHD files stored on a local path.

Local disks can also be configured with a local EXT SR to serve VDIs stored in the VHD format. Local disk EXT SRs must be configured using the XenServer CLI.

By definition, local disks are not shared across pools of XenServer hosts. As a consequence, VMs whose VDIs are stored in SRs on local disks are not agile -- they cannot be migrated between XenServer hosts in a resource pool.

3.3.2.1. Creating a local EXT3 SR (ext)

Device-config parameters for ext SRs:

Parameter Name    Description                                        Required?
device            device name on the local host to use for the SR   Yes

To create a local ext SR on /dev/sdb use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example Local EXT3 SR"> shared=false \
    device-config:device=/dev/sdb type=ext

3.3.3. udev

The udev type represents devices plugged in using the udev device manager as VDIs.

XenServer has two SRs of type udev that represent removable storage. One is for the CD or DVD disk in the physical CD or DVD-ROM drive of the XenServer host. The other is for a USB device plugged into a USB port of the XenServer host. VDIs that represent the media come and go as disks or USB sticks are inserted and removed.

3.3.4. ISO

The ISO type handles CD images stored as files in ISO format. This SR type is useful for creating shared ISO libraries.

3.3.5. EqualLogic

The EqualLogic SR type maps LUNs to VDIs on an EqualLogic array group, allowing for the use of fast snapshot and clone features on the array.

If you have access to an EqualLogic filer, you can configure a custom EqualLogic storage repository for VM storage on your XenServer deployment. This allows the use of the advanced features of this filer type. Virtual disks are stored on the filer using one LUN per virtual disk. Using this storage type will enable the thin provisioning, snapshot, and fast clone features of this filer.

Consider your storage requirements when deciding whether to use the specialized SR plugin, or to use the generic LVM/iSCSI storage backend. By using the specialized plugin, XenServer will communicate with the filer to provision storage. Some arrays have a limitation of seven concurrent connections, which may limit the throughput of control operations. Using the plugin will allow you to make use of the advanced array features, however, and so will make backup and snapshot operations easier.

Warning

There are two types of administration accounts that can successfully access the EqualLogic SM plugin:

A group administration account which has access to and can manage the entire group and all storage pools.

A pool administrator account that can manage only the objects (SR and VDI snapshots) that are in the pool or pools assigned to the account.

3.3.5.1. Creating a shared EqualLogic SR

Device-config parameters for EqualLogic SRs:

Parameter Name             Description                                                                                                                                                                                      Optional?
target                     the IP address or hostname of the EqualLogic array that hosts the SR                                                                                                                            no
username                   the login username used to manage the LUNs on the array                                                                                                                                          no
password                   the login password used to manage the LUNs on the array                                                                                                                                          no
storagepool                the storage pool name                                                                                                                                                                            no
chapuser                   the username to be used for CHAP authentication                                                                                                                                                  yes
chappassword               the password to be used for CHAP authentication                                                                                                                                                  yes
allocation                 specifies whether to use thick or thin provisioning. Default is thick. Thin provisioning reserves a minimum of 10% of volume space.                                                              yes
snap-reserve-percentage    sets the amount of space, as percentage of volume reserve, to allocate to snapshots. Default is 100%.                                                                                            yes
snap-depletion             sets the action to take when snapshot reserve space is exceeded. volume-offline sets the volume and all its snapshots offline. This is the default action. The delete-oldest action deletes the oldest snapshot until enough space is available for creating the new snapshot.    yes

Use the sr-create command to create an EqualLogic SR. For example:

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example shared EqualLogic SR"> \
    shared=true device-config:target=<target_ip> \
    device-config:username=<admin_username> \
    device-config:password=<admin_password> \
    device-config:storagepool=<my_storagepool> \
    device-config:chapuser=<chapusername> \
    device-config:chappassword=<chapuserpassword> \
    device-config:allocation=<thick> \
    type=equal

3.3.6. NetApp

The NetApp type maps LUNs to VDIs on a NetApp server, enabling the use of fast snapshot and clone features on the filer.

Note

NetApp and EqualLogic SRs require a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website.

If you have access to Network Appliance™ (NetApp) storage with sufficient disk space, running a version of Data ONTAP 7G (version 7.0 or greater), you can configure a custom NetApp storage repository for VM storage on your XenServer deployment. The XenServer driver uses the ZAPI interface to the storage to create a group of FlexVols that correspond to an SR. VDIs are created as virtual LUNs on the storage, and attached to XenServer hosts using an iSCSI data path. There is a direct mapping between a VDI and a raw LUN that does not require any additional volume metadata. The NetApp SR is a managed volume and the VDIs are the LUNs within the volume. VM cloning uses the snapshotting and cloning capabilities of the storage for data efficiency and performance and to ensure compatibility with existing ONTAP management tools.

As with the iSCSI-based SR type, the NetApp driver also uses the built-in software initiator and its assigned host IQN, which can be modified by changing the value shown on the General tab when the storage repository is selected in XenCenter.

The easiest way to create NetApp SRs is to use XenCenter. See the XenCenter help for details. See Section 3.3.6.1, “Creating a shared NetApp SR over iSCSI” for an example of how to create them using the xe CLI.

FlexVols

NetApp uses FlexVols as the basic unit of manageable data. There are limitations that constrain the design of NetApp-based SRs. These are:

maximum number of FlexVols per filer
maximum number of LUNs per network port
maximum number of snapshots per FlexVol

Precise system limits vary per filer type; however, as a general guide, a FlexVol may contain up to 200 LUNs, and provides up to 255 snapshots. Because there is a one-to-one mapping of LUNs to VDIs, and because often a VM will have more than one VDI, the resource limitations of a single FlexVol can easily be reached. Also, the act of taking a snapshot includes snapshotting all the LUNs within a FlexVol, and the VM clone operation indirectly relies on snapshots in the background as well as the VDI snapshot operation for backup purposes.

There are two constraints to consider when mapping the virtual storage objects of the XenServer host to the physical storage. To maintain space efficiency it makes sense to limit the number of LUNs per FlexVol, yet at the other extreme, to avoid resource limitations a single LUN per FlexVol provides the most flexibility. However, because there is a vendor-imposed limit of 200 or 500 FlexVols per filer (depending on the NetApp model), this creates a limit of 200 or 500 VDIs per filer, and it is therefore important to select a suitable number of FlexVols taking these parameters into account.

Given these resource constraints, the mapping of virtual storage objects to the Ontap storage system has been designed in the following manner. LUNs are distributed evenly across FlexVols, with the expectation of using VM UUIDs to opportunistically group LUNs attached to the same VM into the same FlexVol. This is a reasonable usage model that allows a snapshot of all the VDIs in a VM at one time, maximizing the efficiency of the snapshot operation.

An optional parameter you can set is the number of FlexVols assigned to the SR. You can use between 1 and 32 FlexVols; the default is 8. The trade-off in the number of FlexVols to the SR is that, for a greater number of FlexVols, the snapshot and clone operations become more efficient, because there are fewer VMs backed off the same FlexVol. The disadvantage is that more FlexVol resources are used for a single SR, where there is a typical system-wide limitation of 200 for some smaller filers.

Aggregates

When creating a NetApp driver-based SR, you select an appropriate aggregate. The driver can be probed for non-traditional type aggregates, that is, newer-style aggregates that support FlexVols, and lists all aggregates available and the unused disk space on each.

Note

Aggregate probing is only possible at sr-create time, so that the aggregate can be specified at the point that the SR is created; it is not probed by the sr-probe command.

Citrix strongly recommends that you configure an aggregate exclusively for use by XenServer storage, because space guarantees and allocation cannot be correctly managed if other applications are sharing the resource.

Thick or thin provisioning

When creating NetApp storage, you can also choose the type of space management used. By default, allocated space is thickly provisioned to ensure that VMs never run out of disk space and that all virtual allocation guarantees are fully enforced on the filer. Selecting thick provisioning ensures that whenever a VDI (LUN) is allocated on the filer, sufficient space is reserved to guarantee that it will never run out of space and consequently experience failed writes to disk. Due to the nature of the Ontap FlexVol space provisioning algorithms, the best practice guidelines for the filer require that at least twice the LUN space is reserved to account for background snapshot data collection and to ensure that writes to disk are never blocked. In addition to the double disk space guarantee, Ontap also requires some additional space reservation for management of unique blocks across snapshots. The guideline on this amount is 20% above the reserved space. The space guarantees afforded by thick provisioning will reserve up to 2.4 times the requested virtual disk space.

The alternative allocation strategy is thin provisioning, which allows the administrator to present more storage space to the VMs connecting to the SR than is actually available on the SR. There are no space guarantees, and allocation of a LUN does not claim any data blocks in the FlexVol until the VM writes data. This might be appropriate for development and test environments where you might find it convenient to over-provision virtual disk space on the SR in the anticipation that VMs might be created and destroyed frequently without ever utilizing the full virtual allocated disk.

Warning

If you are using thin provisioning in production environments, take appropriate measures to ensure that you never run out of storage space. VMs attached to storage that is full will fail to write to disk, and in some cases may fail to read from disk, possibly rendering the VM unusable.

FAS Deduplication

FAS Deduplication is a NetApp technology for reclaiming redundant disk space. Newly-stored data objects are divided into small blocks, each block containing a digital signature, which is compared to all other signatures in the data volume. If an exact block match exists, the duplicate block is discarded and the disk space reclaimed. FAS Deduplication can be enabled on thin provisioned NetApp-based SRs and operates according to the default filer FAS Deduplication parameters, typically every 24 hours. It must be enabled at the point the SR is created, and any custom FAS Deduplication configuration must be managed directly on the filer.

Access Control

Because FlexVol operations such as volume creation and volume snapshotting require administrator privileges on the filer itself, Citrix recommends that the XenServer host is provided with suitable administrator username and password credentials at configuration time. In situations where the XenServer host does not have full administrator rights to the filer, the filer administrator could perform an out-of-band preparation and provisioning of the filer and then introduce the SR to the XenServer host using XenCenter or the sr-introduce xe CLI command. Note, however, that operations such as VM cloning or snapshot generation will fail in this situation due to insufficient access privileges.

Licenses

You need to have an iSCSI license on the NetApp filer to use this storage repository type; for the generic plugins you need either an iSCSI or NFS license depending on the SR type being used.

Further information

For more information about NetApp technology, see the following links:

General information on NetApp products
Data ONTAP
FlexVol
FlexClone
RAID-DP
Snapshot
FilerView

3.3.6.1. Creating a shared NetApp SR over iSCSI

Device-config parameters for netapp SRs:

Parameter Name    Description                                                                                                            Optional?
target            the IP address or hostname of the NetApp server that hosts the SR                                                     no
port              the port to use for connecting to the NetApp server that hosts the SR. Default is port 80.                            yes
usehttps          specifies whether to use a secure TLS-based connection to the NetApp server that hosts the SR [true|false]. Default is false.    yes
username          the login username used to manage the LUNs on the filer                                                               no
password          the login password used to manage the LUNs on the filer                                                               no
aggregate         the aggregate name on which the FlexVol is created                                                                    Required for sr_create
FlexVols          the number of FlexVols to allocate to each SR                                                                         yes
chapuser          the username for CHAP authentication                                                                                  yes
chappassword      the password for CHAP authentication                                                                                  yes
allocation        specifies whether to provision LUNs using thick or thin provisioning. Default is thick.                               yes
asis              specifies whether to use FAS Deduplication if available. Default is false.                                            yes

Setting the SR other-config:multiplier parameter to a valid value adjusts the default multiplier attribute. By default XenServer allocates 2.4 times the requested space to account for snapshot and metadata overhead associated with each LUN. To save disk space, you can set the multiplier to a value >= 1. Setting the multiplier should only be done with extreme care by system administrators who understand the space allocation constraints of the NetApp filer. If you try to set the amount to less than 1, for example, in an attempt to pre-allocate very little space for the LUN, the attempt will most likely fail.

Setting the SR other-config:enforce_allocation parameter to true resizes the FlexVols to precisely the amount specified by either the multiplier value above, or the default 2.4 value.
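
As a sketch, both parameters can be adjusted on an existing NetApp SR with (placeholder UUID and values):

xe sr-param-set uuid=<sr_uuid> other-config:multiplier=2.0
xe sr-param-set uuid=<sr_uuid> other-config:enforce_allocation=true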

Note

This works on new VDI creation in the selected FlexVol, or on all FlexVols during an SR scan, and overrides any manual size adjustments made by the administrator to the SR FlexVols.

To create a NetApp SR, use the following command.

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example shared NetApp SR"> shared=true \
    device-config:target=<192.168.1.10> device-config:username=<admin_username> \
    device-config:password=<admin_password> \
    type=netapp

3.3.6.2. Managing VDIs in a NetApp SR

Due to the complex nature of mapping VM storage objects onto NetApp storage objects such as LUNs, FlexVols and disk Aggregates, the plugin driver makes some general assumptions about how storage objects should be organized. The default number of FlexVols that are managed by an SR instance is 8, named XenStorage_<SR_UUID>_FV<#> where # is a value between 0 and the total number of FlexVols assigned. This means that VDIs (LUNs) are evenly distributed across any one of the FlexVols at the point that the VDI is instantiated. The only exception to this rule is for groups of VM disks which are opportunistically assigned to the same FlexVol to assist with VM cloning, and when VDIs are created manually but passed a vmhint flag that informs the backend of the FlexVol to which the VDI should be assigned. The vmhint may be a random string, such as a uuid that is re-issued for all subsequent VDI creation operations to ensure grouping in the same FlexVol, or it can be a simple FlexVol number to correspond to the FlexVol naming convention applied on the filer. Using either of the following two commands, a VDI created manually using the CLI can be assigned to a specific FlexVol:

xe vdi-create uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid> \
    sm-config:vmhint=<valid_vm_uuid>

xe vdi-create uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid> \
    sm-config:vmhint=<valid_flexvol_number>

3.3.6.3. Taking VDI snapshots with a NetApp SR

Cloning a VDI entails generating a snapshot of the FlexVol and then creating a LUN clone backed off the snapshot. When generating a VM snapshot you must snapshot each of the VM's disks in sequence. Because all the disks are expected to be located in the same FlexVol, and the FlexVol snapshot operates on all LUNs in the same FlexVol, it makes sense to re-use an existing snapshot for all subsequent LUN clones. By default, if no snapshot hint is passed into the backend driver, it will generate a random ID with which to name the FlexVol snapshot. There is a CLI override for this value, passed in as an epochhint. The first time the epochhint value is received, the backend generates a new snapshot based on the cookie name. Any subsequent snapshot requests with the same epochhint value will be backed off the existing snapshot:

xe vdi-snapshot uuid=<valid_vdi_uuid> driver-params:epochhint=<cookie>

During NetApp SR provisioning, additional disk space is reserved for snapshots. If you plan to not use the snapshotting functionality, you might want to free up this reserved space. To do so, you can reduce the value of the other-config:multiplier parameter. By default the value of the multiplier is 2.4, so the amount of space reserved is 2.4 times the amount of space that would be needed for the FlexVols themselves.

3.3.7. Software iSCSI Support

XenServer provides support for shared SRs on iSCSI LUNs. iSCSI is supported using the open-iSCSI software iSCSI initiator or by using a supported iSCSI Host Bus Adapter (HBA). The steps for using iSCSI HBAs are identical to those for Fibre Channel HBAs, both of which are described in Section 3.3.9.2, “Creating a shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)”.

Shared iSCSI support using the software iSCSI initiator is implemented based on the Linux Volume Manager (LVM) and provides the same performance benefits provided by LVM VDIs in the local disk case. Shared iSCSI SRs using the software-based host initiator are capable of supporting VM agility using XenMotion: VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable downtime.

iSCSI SRs use the entire LUN specified at creation time and may not span more than one LUN. CHAP support is provided for client authentication, during both the data path initialization and the LUN discovery phases.

3.3.7.1. XenServer Host iSCSI configuration

All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these are called iSCSI Qualified Names, or IQNs.

XenServer hosts support a single iSCSI initiator which is automatically created and configured with a random IQN during host installation. The single initiator can be used to connect to multiple iSCSI targets concurrently.

iSCSI targets commonly provide access control using iSCSI initiator IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow access by the host's initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.

Note

iSCSI targets that do not provide access control will typically default to restricting LUN access to a single initiator to ensure data integrity. If an iSCSI LUN is intended for use as a shared SR across multiple XenServer hosts in a resource pool, ensure that multi-initiator access is enabled for the specified LUN.

The XenServer host IQN value can be adjusted using XenCenter, or using the CLI with the following command when using the iSCSI software initiator:

xe host-param-set uuid=<valid_host_id> other-config:iscsi_iqn=<new_initiator_iqn>
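
As a sketch, the value currently in use can be read back with (placeholder UUID):

xe host-param-get uuid=<host_uuid> param-name=other-config param-key=iscsi_iqn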

Warning

It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN identifier is used, data corruption and/or denial of LUN access can occur.

Warning

Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures connecting to new targets or existing SRs.

3.3.8. Managing Hardware Host Bus Adapters (HBAs)

This section covers various operations required to manage SAS, Fibre Channel and iSCSI HBAs.

3.3.8.1. Sample QLogic iSCSI HBA setup

For full details on configuring QLogic Fibre Channel and iSCSI HBAs please refer to the QLogic website.

Once the HBA is physically installed into the XenServer host, use the following steps to configure the HBA:

1. Set the IP networking configuration for the HBA. This example assumes DHCP and HBA port 0. Specify the appropriate values if using static IP addressing or a multi-port HBA.

/opt/QLogic_Corporation/SANsurferiCLI/iscli -ipdhcp 0

2. Add a persistent iSCSI target to port 0 of the HBA.

/opt/QLogic_Corporation/SANsurferiCLI/iscli -pa 0 <iscsi_target_ip_address>

3. Use the xe sr-probe command to force a rescan of the HBA controller and display available LUNs. See Section 3.2.5, “Probing an SR” and Section 3.3.9.2, “Creating a shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)” for more details.

3.3.8.2. Removing HBA-based SAS, FC or iSCSI device entries

Note

This step is not required. Citrix recommends that only power users perform this process if it is necessary.

Each HBA-based LUN has a corresponding global device path entry under /dev/disk/by-scsibus in the format <SCSIid>-<adapter>:<bus>:<target>:<lun> and a standard device path under /dev. To remove the device entries for LUNs no longer in use as SRs use the following steps:

1. Use sr-forget or sr-destroy as appropriate to remove the SR from the XenServer host database. See Section 3.4.1, “Destroying or forgetting a SR” for details.

2. Remove the zoning configuration within the SAN for the desired LUN to the desired host.

3. Use the sr-probe command to determine the ADAPTER, BUS, TARGET, and LUN values corresponding to the LUN to be removed. See Section 3.2.5, “Probing an SR” for details.

4. Remove the device entries with the following command:

echo "1" > /sys/class/scsi_device/<adapter>:<bus>:<target>:<lun>/device/delete

Warning

Make absolutely certain which LUN you are removing. Accidentally removing a LUN required for host operation, such as the boot or root device, will render the host unusable.

3.3.9. LVM over iSCSI

The LVM over iSCSI type represents disks as Logical Volumes within a Volume Group created on an iSCSI LUN.

3.3.9.1. Creating a shared LVM over iSCSI SR using the software iSCSI initiator (lvmoiscsi)

Device-config parameters for lvmoiscsi SRs:

Parameter Name        Description                                                        Optional?
target                the IP address or hostname of the iSCSI filer that hosts the SR   yes
targetIQN             the IQN target address of the iSCSI filer that hosts the SR       yes
SCSIid                the SCSI bus ID of the destination LUN                             yes
chapuser              the username to be used for CHAP authentication                    no
chappassword          the password to be used for CHAP authentication                    no
port                  the network port number on which to query the target               no
usediscoverynumber    the specific iscsi record index to use                             no

To create a shared lvmoiscsi SR on a specific LUN of an iSCSI target use the following command.

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example shared LVM over iSCSI SR"> shared=true \
    device-config:target=<target_ip> device-config:targetIQN=<target_iqn> \
    device-config:SCSIid=<scsi_id> \
    type=lvmoiscsi

3.3.9.2. Creating a shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)

SRs of type lvmohba can be created and managed using the xe CLI or XenCenter.

Device-config parameters for lvmohba SRs:

Parameter name    Description       Required?
SCSIid            Device SCSI ID    Yes

To create a shared lvmohba SR, perform the following steps on each host in the pool:

1. Zone in one or more LUNs to each XenServer host in the pool. This process is highly specific to the SAN equipment in use. Please refer to your SAN documentation for details.

2. If necessary, use the HBA CLI included in the XenServer host to configure the HBA:

Emulex: /usr/sbin/hbanyware
QLogic FC: /opt/QLogic_Corporation/SANsurferCLI
QLogic iSCSI: /opt/QLogic_Corporation/SANsurferiCLI

See Section 3.3.8, “Managing Hardware Host Bus Adapters (HBAs)” for an example of QLogic iSCSI HBA configuration. For more information on Fibre Channel and iSCSI HBAs please refer to the Emulex and QLogic websites.

3. Use the sr-probe command to determine the global device path of the HBA LUN. sr-probe forces a re-scan of HBAs installed in the system to detect any new LUNs that have been zoned to the host and returns a list of properties for each LUN found. Specify the host-uuid parameter to ensure the probe occurs on the desired host. The global device path returned as the <path> property will be common across all hosts in the pool and therefore must be used as the value for the device-config:device parameter when creating the SR. If multiple LUNs are present use the vendor, LUN size, LUN serial number, or the SCSI ID as included in the <path> property to identify the desired LUN.

xe sr-probe type=lvmohba \
    host-uuid=1212c7b3-f333-4a8d-a6fb-80c5b79b5b31

Error code: SR_BACKEND_FAILURE_90
Error parameters: , The request is missing the device parameter, \
<?xml version="1.0" ?>
<Devlist>
  <BlockDevice>
    <path>/dev/disk/by-id/scsi-360a9800068666949673446387665336f</path>
    <vendor>HITACHI</vendor>
    <serial>730157980002</serial>
    <size>80530636800</size>
    <adapter>4</adapter>
    <channel>0</channel>
    <id>4</id>
    <lun>2</lun>
    <hba>qla2xxx</hba>
  </BlockDevice>
  <Adapter>
    <host>Host4</host>
    <name>qla2xxx</name>
    <manufacturer>QLogic HBA Driver</manufacturer>
    <id>4</id>
  </Adapter>
</Devlist>

4. On the master host of the pool create the SR, specifying the global device path returned in the <path> property from sr-probe. PBDs will be created and plugged for each host in the pool automatically.

xe sr-create host-uuid=<valid_uuid> \
    content-type=user \
    name-label=<"Example shared LVM over HBA SR"> shared=true \
    device-config:SCSIid=<device_scsi_id> type=lvmohba

Note

You can use the XenCenter Repair Storage Repository function to retry the PBD creation and plugging portions of the sr-create operation. This can be valuable in cases where the LUN zoning was incorrect for one or more hosts in a pool when the SR was created. Correct the zoning for the affected hosts and use the Repair Storage Repository function instead of removing and re-creating the SR.

3.3.10. NFS VHD

The NFS VHD type stores disks as VHD files on a remote NFS filesystem.

NFS is a ubiquitous form of storage infrastructure that is available in many environments. XenServer allows existing NFS servers that support NFS V3 over TCP/IP to be used immediately as a storage repository for virtual disks (VDIs). VDIs are stored in the Microsoft VHD format only. Moreover, as NFS SRs can be shared, VDIs stored in a shared SR allow VMs to be started on any XenServer host in a resource pool and be migrated between them using XenMotion with no noticeable downtime.

Creating an NFS SR requires the hostname or IP address of the NFS server. The sr-probe command provides a list of valid destination paths exported by the server on which the SR can be created. The NFS server must be configured to export the specified path to all XenServer hosts in the pool, or the creation of the SR and the plugging of the PBD record will fail.

As mentioned at the beginning of this chapter, VDIs stored on NFS are sparse. The image file is allocated as the VM writes data into the disk. This has the considerable benefit that VM image files take up only as much space on the NFS storage as is required. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will only reflect the size of the OS data that has been written to the disk rather than the entire 100GB.

VHD files may also be chained, allowing two VDIs to share common data. In cases where an NFS-based VM is cloned, the resulting VMs will share the common on-disk data at the time of cloning. Each will proceed to make its own changes in an isolated copy-on-write version of the VDI. This feature allows NFS-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.

Note

The maximum supported length of VHD chains is 30.

As VHD-based images require extra metadata to support sparseness and chaining, the format is not as high-performance as LVM-based storage. In cases where performance really matters, it is well worth forcibly allocating the sparse regions of an image file. This will improve performance at the cost of consuming additional disk space.

XenServer's NFS and VHD implementations assume that they have full control over the SR directory on the NFS server. Administrators should not modify the contents of the SR directory, as this can risk corrupting the contents of VDIs.

XenServer has been tuned for enterprise-class storage that uses non-volatile RAM to provide fast acknowledgments of write requests while maintaining a high degree of data protection from failure. XenServer has been tested extensively against Network Appliance FAS270c and FAS3020c storage, using Data OnTap 7.2.2.

In situations where XenServer is used with lower-end storage, it will cautiously wait for all writes to be acknowledged before passing acknowledgments on to guest VMs. This will incur a noticeable performance cost, and might be remedied by setting the storage to present the SR mount point as an asynchronous mode export. Asynchronous exports acknowledge writes that are not actually on disk, and so administrators should consider the risks of failure carefully in these situations.

The XenServer NFS implementation uses TCP by default. If your situation allows, you can configure the implementation to use UDP in situations where there may be a performance benefit. To do this, specify the device-config parameter useUDP=true at SR creation time.
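
As a sketch, the flag is passed like any other device-config parameter at creation time (the server address and export path are placeholders):

xe sr-create host-uuid=<host_uuid> content-type=user \
    name-label=<"Example NFS SR over UDP"> shared=true \
    device-config:server=<nfs_server> device-config:serverpath=<export_path> \
    device-config:useUDP=true type=nfs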

Warning

Since VDIs on NFS SRs are created as sparse, administrators must ensure that there is enough disk space on the NFS SRs for all required VDIs. XenServer hosts do not enforce that the space required for VDIs on NFS SRs is actually present.

3.3.10.1. Creating a shared NFS SR (nfs)

Device-config parameters for nfs SRs:

Parameter Name    Description                                                                 Required?
server            IP address or hostname of the NFS server                                    Yes
serverpath        path, including the NFS mount point, to the NFS server that hosts the SR    Yes

To create a shared NFS SR on 192.168.1.10:/export1 use the following command.

xe sr-create host-uuid=<host_uuid> content-type=user \
    name-label=<"Example shared NFS SR"> shared=true \
    device-config:server=<192.168.1.10> device-config:serverpath=</export1> type=nfs

3.3.11. LVM over hardware HBA

The LVM over hardware HBA type represents disks as VHDs on Logical Volumes within a Volume Group created on an HBA LUN providing, for example, hardware-based iSCSI or FC support.

XenServer hosts support Fibre Channel (FC) storage area networks (SANs) through Emulex or QLogic host bus adapters (HBAs). All FC configuration required to expose a FC LUN to the host must be completed manually, including storage devices, network devices, and the HBA within the XenServer host. Once all FC configuration is complete the HBA will expose a SCSI device backed by the FC LUN to the host. The SCSI device can then be used to access the FC LUN as if it were a locally attached SCSI device.

Use the sr-probe command to list the LUN-backed SCSI devices present on the host. This command forces a scan for new LUN-backed SCSI devices. The path value returned by sr-probe for a LUN-backed SCSI device is consistent across all hosts with access to the LUN, and therefore must be used when creating shared SRs accessible by all hosts in a resource pool.

The same features apply to QLogic iSCSI HBAs.

See Section 3.2.1, “Creating Storage Repositories” for details on creating shared HBA-based FC and iSCSI SRs.

Note

XenServer support for Fibre Channel does not support direct mapping of a LUN to a VM. HBA-based LUNs must be mapped to the host and specified for use in an SR. VDIs within the SR are exposed to VMs as standard block devices.

3.3.12. Citrix StorageLink Gateway (CSLG) SRs

The CSLG storage repository allows use of the Citrix StorageLink service for native access to a range of iSCSI and Fibre Channel arrays and automated fabric/initiator and array configuration features. Installation and configuration of the StorageLink service is required; for more information please see the StorageLink documentation.

Note

Running the StorageLink service in a VM within a resource pool to which the StorageLink service is providing storage is not supported in combination with the XenServer High Availability (HA) features. To use CSLG SRs in combination with HA, ensure the StorageLink service is running outside the HA-enabled pool.

CSLG SRs can be created using the xe CLI only. After creation, CSLG SRs can be viewed and managed using both the xe CLI and XenCenter.

Because the CSLG SR can be used to access different storage arrays, the exact features available for a given CSLG SR depend on the capabilities of the array. All CSLG SRs use a LUN-per-VDI model where a new LUN is provisioned for each virtual disk (VDI).

CSLG SRs can co-exist with other SR types on the same storage array hardware, and multiple CSLG SRs can be defined within the same resource pool.

The StorageLink service can be configured using the StorageLink Manager or from within the XenServer control domain using the StorageLink Command Line Interface (CLI). To run the StorageLink CLI use the following command, where <hostname> is the name or IP address of the machine running the StorageLink service:

/opt/Citrix/StorageLink/bin/csl \
    server=<hostname>[:<port>][,<username>,<password>]

For more information about the StorageLink CLI please see the StorageLink documentation or use the /opt/Citrix/StorageLink/bin/csl help command.

3.3.12.1. Creating a shared StorageLink SR

SRs of type CSLG can only be created by using the xe Command Line Interface (CLI). Once created, CSLG SRs can be managed using either XenCenter or the xe CLI.

The device-config parameters for CSLG SRs are:

Parameter name       Description                                                                                                                     Optional?
target               The server name or IP address of the machine running the StorageLink service                                                   No
storageSystemId      The storage system ID to use for allocating storage                                                                            No
storagePoolId        The storage pool ID within the specified storage system to use for allocating storage                                          No
username             The username to use for connection to the StorageLink service                                                                  Yes [a]
password             The password to use for connecting to the StorageLink service                                                                  Yes [a]
cslport              The port to use for connecting to the StorageLink service                                                                      Yes [a]
chapuser             The username to use for CHAP authentication                                                                                    Yes
chappassword         The password to use for CHAP authentication                                                                                    Yes
protocol             Specifies the storage protocol to use (fc or iscsi) for multi-protocol storage systems. If not specified, fc is used if available, otherwise iscsi.    Yes
provision-type       Specifies whether to use thick or thin provisioning (thick or thin); default is thick                                          Yes
provision-options    Additional provisioning options: set to dedup to use the de-duplication features supported by the storage system               Yes
raid-type            The level of RAID to use for the SR, as supported by the storage array                                                         Yes

[a] If the username, password, or port configuration of the StorageLink service are changed from the default then the appropriate parameter and value must be specified.

SRs of type cslg support two additional parameters that can be used with storage arrays that support LUN grouping features, such as NetApp FlexVols.

sm-config parameters for CSLG SRs:

Parameter name    Description                                                                                                  Optional?
pool-count        Creates the specified number of groups on the array, in which LUNs provisioned within the SR will be created    Yes
physical-size     The total size of the SR in MB. Each pool will be created with a size equal to physical-size divided by pool-count.    Yes [a]

[a] Required when specifying the sm-config:pool-count parameter

Note

When a new NetApp SR is created using StorageLink, by default a single FlexVol is created for the SR that contains all LUNs created for the SR. To change this behaviour and specify the number of FlexVols to create and the size of each FlexVol, use the sm-config:pool-size and sm-config:physical-size parameters. sm-config:pool-size specifies the number of FlexVols. sm-config:physical-size specifies the total size of all FlexVols to be created, so that each FlexVol will be of size sm-config:physical-size divided by sm-config:pool-size.
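
As a sketch using the sm-config parameter names from the table above (all IDs and sizes are placeholders), the grouping parameters are supplied at creation time alongside the usual device-config settings:

xe sr-create type=cslg name-label=<"Example CSLG SR"> shared=true \
    device-config:target=<storagelink_host> \
    device-config:storageSystemId=<storage_system_id> \
    device-config:storagePoolId=<storage_pool_id> \
    sm-config:pool-count=4 sm-config:physical-size=204800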

To create a CJLG JR

1. Injtall the JtorageLink jervice onto a Windowj hojt or virtual machine2. Configure the JtorageLink jervice with the appropriate jtorage adapterj and credentialj3. Uje the jr-probe command with the device-config:target parameter to identify the available jtorage

jyjtem IDj4. xe jr-probe type=cjlg device-config:target=192.168.128.105.6. <cjl__jtorageJyjtemInfoLijt>7. <cjl__jtorageJyjtemInfo>8. <friendlyName>5001-4380-013C-0240</friendlyName>9. <dijplayName>HP EVA (5001-4380-013C-0240)</dijplayName>10. <vendor>HP</vendor>11. <model>EVA</model>12. <jerialNum>50014380013C0240</jerialNum>13. <jtorageJyjtemId>HP__EVA__50014380013C0240</jtorageJyjtemId>14. <jyjtemCapabilitiej>15. <capabilitiej>PROVIJIONING</capabilitiej>16. <capabilitiej>MAPPING</capabilitiej>17. <capabilitiej>MULTIPLE_JTORAGE_POOLJ</capabilitiej>18. <capabilitiej>DIFF_JNAPJHOT</capabilitiej>19. <capabilitiej>CLONE</capabilitiej>20. </jyjtemCapabilitiej>21. <protocolJupport>22. <capabilitiej>FC</capabilitiej>

23. </protocolJupport>24. <cjl__jnapjhotMethodInfoLijt>25. <cjl__jnapjhotMethodInfo>26. <name>5001-4380-013C-0240</name>27. <dijplayName></dijplayName>28. <maxJnapjhotj>16</maxJnapjhotj>29. <jupportedNodeTypej>30. <nodeType>JTORAGE_VOLUME</nodeType>31. </jupportedNodeTypej>32. <jnapjhotTypeLijt>33. </jnapjhotTypeLijt>34. <jnapjhotCapabilitiej>35. </jnapjhotCapabilitiej>36. </cjl__jnapjhotMethodInfo>37. <cjl__jnapjhotMethodInfo>38. <name>5001-4380-013C-0240</name>39. <dijplayName></dijplayName>40. <maxJnapjhotj>16</maxJnapjhotj>41. <jupportedNodeTypej>42. <nodeType>JTORAGE_VOLUME</nodeType>43. </jupportedNodeTypej>44. <jnapjhotTypeLijt>45. <jnapjhotType>DIFF_JNAPJHOT</jnapjhotType>46. </jnapjhotTypeLijt>47. <jnapjhotCapabilitiej>48. </jnapjhotCapabilitiej>49. </cjl__jnapjhotMethodInfo>50. <cjl__jnapjhotMethodInfo>51. <name>5001-4380-013C-0240</name>52. <dijplayName></dijplayName>53. <maxJnapjhotj>16</maxJnapjhotj>54. <jupportedNodeTypej>55. <nodeType>JTORAGE_VOLUME</nodeType>56. </jupportedNodeTypej>57. <jnapjhotTypeLijt>58. <jnapjhotType>CLONE</jnapjhotType>59. </jnapjhotTypeLijt>60. <jnapjhotCapabilitiej>61. </jnapjhotCapabilitiej>62. </cjl__jnapjhotMethodInfo>63. </cjl__jnapjhotMethodInfoLijt>64. </cjl__jtorageJyjtemInfo>65. </cjl__jtorageJyjtemInfoLijt>

You can uje grep to filter the jr-probe output to jujt the jtorage pool IDj

xe jr-probe type=cjlg device-config:target=192.168.128.10 | grep jtorageJyjtemId <jtorageJyjtemId>EMC__CLARIION__APM00074902515</jtorageJyjtemId> <jtorageJyjtemId>HP__EVA__50014380013C0240</jtorageJyjtemId> <jtorageJyjtemId>NETAPP__LUN__0AD4F00A</jtorageJyjtemId>

66. Add the dejired jtorage jyjtem ID to the jr-probe command to identify the jtorage poolj available within the jpecified jtorage jyjtem

67. xe jr-probe type=cjlg \68. device-config:target=192.168.128.10 \ device-

config:jtorageJyjtemId=HP__EVA__50014380013C024069. <?xml verjion="1.0" encoding="ijo-8859-1"?>70. <cjl__jtoragePoolInfoLijt>71. <cjl__jtoragePoolInfo>72. <dijplayName>Default Dijk Group</dijplayName>73. <friendlyName>Default Dijk Group</friendlyName>74.

<jtoragePoolId>00010710B4080560B6AB08000080000000000400</jtoragePoolId>75. <parentJtoragePoolId></parentJtoragePoolId>76. <jtorageJyjtemId>HP__EVA__50014380013C0240</jtorageJyjtemId>77. <jizeInMB>1957099</jizeInMB>78. <freeJpaceInMB>1273067</freeJpaceInMB>79. <ijDefault>No</ijDefault>80. <jtatuj>0</jtatuj>81. <provijioningOptionj>82. <jupportedRaidTypej>83. <raidType>RAID0</raidType>84. <raidType>RAID1</raidType>85. <raidType>RAID5</raidType>86. </jupportedRaidTypej>87. <jupportedNodeTypej>88. <nodeType>JTORAGE_VOLUME</nodeType>89. </jupportedNodeTypej>90. <jupportedProvijioningTypej>91. </jupportedProvijioningTypej>92. </provijioningOptionj>93. </cjl__jtoragePoolInfo>94. </cjl__jtoragePoolInfoLijt>

You can uje grep to filter the jr-probe output to jujt the jtorage pool IDj

xe jr-probe type=cjlg \device-config:target=192.168.128.10 \device-config:jtorageJyjtemId=HP__EVA__50014380013C0240 \| grep jtoragePoolId<jtoragePoolId>00010710B4080560B6AB08000080000000000400</jtoragePoolId>

Create the SR, specifying the desired storage system and storage pool IDs:

xe sr-create type=cslg name-label=CSLG_EVA_1 shared=true \
device-config:target=192.168.128.10 \
device-config:storageSystemId=HP__EVA__50014380013C0240 \
device-config:storagePoolId=00010710B4080560B6AB08000080000000000400

3.4. Managing Storage Repositories

This section covers various operations required in the ongoing management of Storage Repositories (SRs).

3.4.1. Destroying or forgetting an SR

You can destroy an SR, which actually deletes the contents of the SR from the physical media. Alternatively you can forget an SR, which allows you to re-attach the SR, for example, to another XenServer host, without removing any of the SR contents. In both cases, the PBD of the SR must first be unplugged. Forgetting an SR is the equivalent of the SR Detach operation within XenCenter.

1. Unplug the PBD to detach the SR from the corresponding XenServer host:

xe pbd-unplug uuid=<pbd_uuid>

2. To destroy the SR, which deletes both the SR and corresponding PBD from the XenServer host database and deletes the SR contents from the physical media:

xe sr-destroy uuid=<sr_uuid>

3. Or, to forget the SR, which removes the SR and corresponding PBD from the XenServer host database but leaves the actual SR contents intact on the physical media:

xe sr-forget uuid=<sr_uuid>

Note

It might take some time for the software object corresponding to the SR to be garbage collected.

3.4.2. Introducing an SR

Reintroducing an SR that has previously been forgotten requires introducing the SR, creating a PBD, and manually plugging the PBD into the appropriate XenServer hosts to activate the SR.

The following example introduces an SR of type lvmoiscsi.

1. Probe the existing SR to determine its UUID:

xe sr-probe type=lvmoiscsi device-config:target=<192.168.1.10> \
device-config:targetIQN=<192.168.1.10:filer1> \
device-config:SCSIid=<149455400000000000000000002000000b70200000f000000>

2. Introduce the existing SR UUID returned from the sr-probe command. The UUID of the new SR is returned:

xe sr-introduce content-type=user name-label=<"Example Shared LVM over iSCSI SR"> \
shared=true uuid=<valid_sr_uuid> type=lvmoiscsi

3. Create a PBD to accompany the SR. The UUID of the new PBD is returned:

xe pbd-create type=lvmoiscsi host-uuid=<valid_uuid> sr-uuid=<valid_sr_uuid> \
device-config:target=<192.168.0.1> \
device-config:targetIQN=<192.168.1.10:filer1> \
device-config:SCSIid=<149455400000000000000000002000000b70200000f000000>

4. Plug the PBD to attach the SR:

xe pbd-plug uuid=<pbd_uuid>

5. Verify the status of the PBD plug. If successful the currently-attached property will be true:

xe pbd-list sr-uuid=<sr_uuid>

Note

Steps 3 through 5 must be performed for each host in the resource pool, and can also be performed using the Repair Storage Repository function in XenCenter.
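
Where the pool contains many hosts, the per-host PBD creation and plugging can be scripted from the CLI. The following is a minimal sketch run on the pool master, reusing the device-config values from the example above; the shell variable names are illustrative only and the placeholder UUID must be substituted first:

# Substitute the UUID returned by sr-introduce
SR_UUID=valid_sr_uuid

# Create a PBD for the SR on every host in the pool
for HOST in $(xe host-list --minimal | tr ',' ' '); do
    xe pbd-create type=lvmoiscsi host-uuid=$HOST sr-uuid=$SR_UUID \
        device-config:target=192.168.1.10 \
        device-config:targetIQN=192.168.1.10:filer1 \
        device-config:SCSIid=149455400000000000000000002000000b70200000f000000
done

# Plug every PBD of the SR
for PBD in $(xe pbd-list sr-uuid=$SR_UUID params=uuid --minimal | tr ',' ' '); do
    xe pbd-plug uuid=$PBD
done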

3.4.3. Resizing an SR

If you have resized the LUN on which an iSCSI or HBA SR is based, use the following procedures to reflect the size change in XenServer:

1. iSCSI SRs - unplug all PBDs on the host that reference LUNs on the same target. This is required to reset the iSCSI connection to the target, which in turn will allow the change in LUN size to be recognized when the PBDs are replugged.

2. HBA SRs - reboot the host.

Note

In previous versions of XenServer explicit commands were required to resize the physical volume group of iSCSI and HBA SRs. These commands are now issued as part of the PBD plug operation and are no longer required.

3.4.4. Converting local Fibre Channel JRj to jhared JRj

Uje the xe CLI and the XenCenter Repair Jtorage Repojitory feature to convert a local FC JR to a jhared FC JR:

1. Upgrade all hojtj in the rejource pool to XenJerver 5.5.0.2. Enjure all hojtj in the pool have the JR'j LUN zoned appropriately. Jee Jection   3.2.5, “Probing an JR”  for detailj

on ujing the jr-probecommand to verify the LUN ij prejent on each hojt.

3. Convert the JR to jhared:

xe jr-param-jet jhared=true uuid=<local_fc_jr>

4. Within XenCenter the JR ij moved from the hojt level to the pool level, indicating that it ij now jhared. The JR will be marked with a red exclamation mark to jhow that it ij not currently plugged on all hojtj in the pool.

5. Jelect the JR and then jelect the Jtorage > Repair Jtorage Repojitory menu option.6. Click Repair to create and plug a PBD for each hojt in the pool.

3.4.5. Moving Virtual Disk Images (VDIs) between SRs

The set of VDIs associated with a VM can be copied from one SR to another to accommodate maintenance requirements or tiered storage configurations. XenCenter provides the ability to copy a VM and all of its VDIs to the same or a different SR, and a combination of XenCenter and the xe CLI can be used to copy individual VDIs.

3.4.5.1. Copying all of a VM's VDIs to a different SR

The XenCenter Copy VM function creates copies of all VDIs for a selected VM on the same or a different SR. The source VM and VDIs are not affected by default. To move the VM to the selected SR rather than creating a copy, select the Remove original VM option in the Copy Virtual Machine dialog box.

1. Shut down the VM.

2. Within XenCenter select the VM and then select the VM > Copy VM menu option.

3. Select the desired target SR.

3.4.5.2. Copying individual VDIs to a different SR

A combination of the xe CLI and XenCenter can be used to copy individual VDIs between SRs.

1. Shut down the VM.

2. Use the xe CLI to identify the UUIDs of the VDIs to be moved. If the VM has a DVD drive its vdi-uuid will be listed as <not in database> and can be ignored.

xe vbd-list vm-uuid=<valid_vm_uuid>

Note

The vbd-list command displays both the VBD and VDI UUIDs. Be sure to record the VDI UUIDs rather than the VBD UUIDs.
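
To make the VDI UUIDs easier to pick out, the listing can be narrowed with the params argument; a minimal sketch using standard vbd-list fields:

xe vbd-list vm-uuid=<valid_vm_uuid> params=uuid,vdi-uuid,device,type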

3. In XenCenter select the VM's Storage tab. For each VDI to be moved, select the VDI and click the Detach button. This step can also be done using the vbd-destroy command.

Note

If you use the vbd-destroy command to detach the VDI UUIDs, be sure to first check if the VBD has the parameter other-config:owner set to true. If so, set it to false. Issuing the vbd-destroy command with other-config:owner=true will also destroy the associated VDI.
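
A minimal sketch of that check and cleanup from the CLI (inspect the map first; the exact key layout can be confirmed in the vbd-param-list output):

# Inspect the VBD's other-config map for the owner key
xe vbd-param-list uuid=<vbd_uuid>

# If other-config:owner is true, set it to false before destroying the VBD
xe vbd-param-set uuid=<vbd_uuid> other-config:owner=false
xe vbd-destroy uuid=<vbd_uuid>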

4. Use the vdi-copy command to copy each of the VM's VDIs to be moved to the desired SR.

xe vdi-copy uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid>

5. Within XenCenter select the VM's Storage tab. Click the Attach button and select the VDIs from the new SR. This step can also be done using the vbd-create command.

6. To delete the original VDIs, within XenCenter select the Storage tab of the original SR. The original VDIs will be listed with an empty value for the VM field and can be deleted with the Delete button.
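
If you prefer to finish this last step in the CLI as well, the same cleanup can be done with vdi-destroy; a sketch, assuming you have confirmed the UUIDs refer to the original (pre-copy) VDIs:

xe vdi-destroy uuid=<original_vdi_uuid>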

3.4.6. Adjusting the disk IO scheduler

For general performance, the default disk scheduler noop is applied on all new SR types. The noop scheduler provides the fairest performance for competing VMs accessing the same device. To apply disk QoS (see Section 3.5, “Virtual disk QoS settings”) it is necessary to override the default setting and assign the cfq disk scheduler to the SR. The corresponding PBD must be unplugged and re-plugged for the scheduler parameter to take effect. The disk scheduler can be adjusted using the following command:

xe sr-param-set other-config:scheduler=noop|cfq|anticipatory|deadline \
uuid=<valid_sr_uuid>

Note

This will not affect EqualLogic, NetApp or NFS storage.

3.5. Virtual disk QoS settings

Virtual disks have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual disks using the xe CLI as described in this section.

In the shared SR case, where multiple hosts are accessing the same LUN, the QoS setting is applied to VBDs accessing the LUN from the same host. QoS is not applied across hosts in the pool.

Before configuring any QoS parameters for a VBD, ensure that the disk scheduler for the SR has been set appropriately. See Section 3.4.6, “Adjusting the disk IO scheduler” for details on how to adjust the scheduler. The scheduler parameter must be set to cfq on the SR for which the QoS is desired.

Note

Remember to set the scheduler to cfq on the SR, and to ensure that the PBD has been re-plugged in order for the scheduler change to take effect.
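
A minimal sketch of that sequence for a single SR, assuming its PBD UUIDs have been looked up first (re-plugging a PBD briefly detaches the SR on that host, so do this at a quiet time):

# Set the scheduler, then re-plug each PBD of the SR so the change takes effect
xe sr-param-set uuid=<valid_sr_uuid> other-config:scheduler=cfq
xe pbd-list sr-uuid=<valid_sr_uuid> params=uuid
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>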

The first parameter is qos_algorithm_type. This parameter needs to be set to the value ionice, which is the only type of QoS algorithm supported for virtual disks in this release.

The QoS parameters themselves are set with key/value pairs assigned to the qos_algorithm_params parameter. For virtual disks, qos_algorithm_params takes a sched key, and depending on the value, also requires a class key.

Possible values of qos_algorithm_params:sched are:

sched=rt or sched=real-time sets the QoS scheduling parameter to real time priority, which requires a class parameter to set a value

sched=idle sets the QoS scheduling parameter to idle priority, which requires no class parameter to set any value

sched=<anything> sets the QoS scheduling parameter to best effort priority, which requires a class parameter to set a value

The possible values for class are:

One of the following keywords: highest, high, normal, low, lowest

An integer between 0 and 7, where 7 is the highest priority and 0 is the lowest, so that, for example, I/O requests with a priority of 5 will be given priority over I/O requests with a priority of 2.

To enable the disk QoS settings, you also need to set the other-config:scheduler to cfq and replug PBDs for the storage in question.

For example, the following CLI commands set the virtual disk's VBD to use real time priority 5:

xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_type=ionice
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:sched=rt
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:class=5
xe sr-param-set uuid=<sr_uuid> other-config:scheduler=cfq
xe pbd-plug uuid=<pbd_uuid>

Chapter 4. Networking

Table of Contents

4.1. XenServer networking overview
4.1.1. Network objects
4.1.2. Networks
4.1.3. VLANs
4.1.4. NIC bonds
4.1.5. Initial networking configuration

4.2. Managing networking configuration
4.2.1. Creating networks in a standalone server
4.2.2. Creating networks in resource pools
4.2.3. Creating VLANs
4.2.4. Creating NIC bonds on a standalone host
4.2.5. Creating NIC bonds in resource pools
4.2.6. Configuring a dedicated storage NIC
4.2.7. Controlling Quality of Service (QoS)
4.2.8. Changing networking configuration options
4.2.9. NIC/PIF ordering in resource pools

4.3. Networking Troubleshooting
4.3.1. Diagnosing network corruption
4.3.2. Recovering from a bad network configuration

This chapter discusses how physical network interface cards (NICs) in XenServer hosts are used to enable networking within Virtual Machines (VMs). XenServer supports up to 6 physical network interfaces (or up to 6 pairs of bonded network interfaces) per XenServer host and up to 7 virtual network interfaces per VM.

Note

XenServer provides automated configuration and management of NICs using the xe command line interface (CLI). Unlike previous XenServer versions, the host networking configuration files should not be edited directly in most cases; where a CLI command is available, do not edit the underlying files.

If you are already familiar with XenServer networking concepts, you may want to skip ahead to one of the following sections:

For procedures on how to create networks for standalone XenServer hosts, see Section 4.2.1, “Creating networks in a standalone server”.

For procedures on how to create networks for XenServer hosts that are configured in a resource pool, see Section 4.2.2, “Creating networks in resource pools”.

For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool, see Section 4.2.3, “Creating VLANs”.

For procedures on how to create bonds for standalone XenServer hosts, see Section 4.2.4, “Creating NIC bonds on a standalone host”.

For procedures on how to create bonds for XenServer hosts that are configured in a resource pool, see Section 4.2.5, “Creating NIC bonds in resource pools”.

4.1. XenServer networking overview

This section describes the general concepts of networking in the XenServer environment.

Note

Some networking options have different behaviors when used with standalone XenServer hosts compared to resource pools. This chapter contains sections on general information that applies to both standalone hosts and pools, followed by specific information and procedures for each.

4.1.1. Network objects

There are three types of server-side software objects which represent networking entities. These objects are:

A PIF, which represents a physical network interface on a XenServer host. PIF objects have a name and description, a globally unique UUID, the parameters of the NIC that they represent, and the network and server they are connected to.

A VIF, which represents a virtual interface on a Virtual Machine. VIF objects have a name and description, a globally unique UUID, and the network and VM they are connected to.

A network, which is a virtual Ethernet switch on a XenServer host. Network objects have a name and description, a globally unique UUID, and the collection of VIFs and PIFs connected to them.

Both XenCenter and the xe CLI allow configuration of networking options, control over which NIC is used for management operations, and creation of advanced networking features such as virtual local area networks (VLANs) and NIC bonds.

From XenCenter much of the complexity of XenServer networking is hidden. There is no mention of PIFs for XenServer hosts nor VIFs for VMs.

4.1.2. Networks

Each XenServer host has one or more networks, which are virtual Ethernet switches. Networks without an association to a PIF are considered internal, and can be used to provide connectivity only between VMs on a given XenServer host, with no connection to the outside world. Networks with a PIF association are considered external, and provide a bridge between VIFs and the PIF connected to the network, enabling connectivity to resources available through the PIF's NIC.
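
From the CLI, one way to see whether a network is internal or external is to check whether any PIFs are attached to it; a minimal sketch (the name-label is illustrative):

# Create an internal network and confirm that no PIFs are attached to it
xe network-create name-label=<private_net>
xe network-list params=uuid,name-label,bridge
xe pif-list network-uuid=<network_uuid>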

4.1.3. VLANs

Virtual Local Area Networks (VLANs), as defined by the IEEE 802.1Q standard, allow a single physical network to support multiple logical networks. XenServer hosts can work with VLANs in multiple ways.

Note

All supported VLAN configurations are equally applicable to pools and standalone hosts, and bonded and non-bonded configurations.

4.1.3.1. Using VLANs with host management interfaces

Switch ports configured to perform 802.1Q VLAN tagging/untagging, commonly referred to as ports with a native VLAN or as access mode ports, can be used with XenServer management interfaces to place management traffic on a desired VLAN. In this case the XenServer host is unaware of any VLAN configuration.

XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port.

4.1.3.2. Using VLANs with virtual machines

Switch ports configured as 802.1Q VLAN trunk ports can be used in combination with the XenServer VLAN features to connect guest virtual network interfaces (VIFs) to specific VLANs. In this case the XenServer host performs the VLAN tagging/untagging functions for the guest, which is unaware of any VLAN configuration.

XenServer VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified VLAN tag. XenServer networks can then be connected to the PIF representing the physical NIC to see all traffic on the NIC, or to a PIF representing a VLAN to see only the traffic with the specified VLAN tag.

For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool, see Section 4.2.3, “Creating VLANs”.

4.1.3.3. Using VLANs with dedicated storage NICs

Dedicated storage NICs can be configured to use native VLAN / access mode ports as described above for management interfaces, or with trunk ports and XenServer VLANs as described above for virtual machines. To configure dedicated storage NICs, see Section 4.2.6, “Configuring a dedicated storage NIC”.

4.1.3.4. Combining management interfaces and guest VLANs on a single host NIC

A single switch port can be configured with both trunk and native VLANs, allowing one host NIC to be used for a management interface (on the native VLAN) and for connecting guest VIFs to specific VLAN IDs.

4.1.4. NIC bonds

NIC bonds can improve XenServer host resiliency by using two physical NICs as if they were one. If one NIC within the bond fails, the host's network traffic will automatically be routed over the second NIC. NIC bonds work in an active/active mode, with traffic balanced between the bonded NICs.

XenServer NIC bonds completely subsume the underlying physical devices (PIFs). In order to activate a bond the underlying PIFs must not be in use, either as the management interface for the host or by running VMs with VIFs attached to the networks associated with the PIFs.

XenServer NIC bonds are represented by additional PIFs. The bond PIF can then be connected to a XenServer network to allow VM traffic and host management functions to occur over the bonded NIC. The exact steps to use to create a NIC bond depend on the number of NICs in your host, and whether the management interface of the host is assigned to a PIF to be used in the bond.

XenServer supports Source Level Balancing (SLB) NIC bonding. SLB bonding:

is an active/active mode, but only supports load-balancing of VM traffic across the physical NICs

provides fail-over support for all other traffic types

does not require switch support for Etherchannel or 802.3ad (LACP)

load balances traffic between multiple interfaces at VM granularity by sending traffic through different interfaces based on the source MAC address of the packet

is derived from the open source ALB mode and reuses the ALB capability to dynamically re-balance load across interfaces

Any given VIF will only use one of the links in the bond at a time. At startup no guarantees are made about the affinity of a given VIF to a link in the bond. However, for VIFs with high throughput, periodic rebalancing ensures that the load on the links is approximately equal.

API Management traffic can be assigned to a XenServer bond interface and will be automatically load-balanced across the physical NICs.

XenServer bonded PIFs do not require IP configuration for the bond when used for guest traffic. This is because the bond operates at Layer 2 of the OSI model, the data link layer, and no IP addressing is used at this layer. When used for non-guest traffic (to connect to it with XenCenter for management, or to connect to shared network storage), one IP configuration is required per bond. (Incidentally, this is true of unbonded PIFs as well, and is unchanged from XenServer 4.1.0.)

Gratuitous ARP packets are sent when assignment of traffic changes from one interface to another as a result of fail-over.

Re-balancing is provided by the existing ALB re-balance capabilities: the number of bytes going over each slave (interface) is tracked over a given period. When a packet is to be sent that contains a new source MAC address it is assigned to the slave interface with the lowest utilization. Traffic is re-balanced every 10 seconds.

Note

Bonding is set up with an Up Delay of 31000ms and a Down Delay of 200ms. The seemingly long Up Delay is purposeful because of the time taken by some switches to actually start routing traffic. Without it, when a link comes back after failing, the bond might rebalance traffic onto it before the switch is ready to pass traffic. If you want to move both connections to a different switch, move one, then wait 31 seconds for it to be used again before moving the other.

4.1.5. Initial networking configuration

The XenServer host networking configuration is specified during initial host installation. Options such as IP address configuration (DHCP/static), the NIC used as the management interface, and hostname are set based on the values provided during installation.

When a XenServer host has a single NIC, the following configuration is present after installation:

a single PIF is created corresponding to the host's single NIC

the PIF is configured with the IP addressing options specified during installation and to enable management of the host

the PIF is set for use in host management operations

a single network, network 0, is created

network 0 is connected to the PIF to enable external connectivity to VMs

When a host has multiple NICs, the configuration present after installation depends on which NIC is selected for management operations during installation:

PIFs are created for each NIC in the host

the PIF of the NIC selected for use as the management interface is configured with the IP addressing options specified during installation

a network is created for each PIF ("network 0", "network 1", etc.)

each network is connected to one PIF

the IP addressing options of all other PIFs are left unconfigured

In both cases the resulting networking configuration allows connection to the XenServer host by XenCenter, the xe CLI, and any other management software running on separate machines via the IP address of the management interface. The configuration also provides external networking for VMs created on the host.

The PIF used for management operations is the only PIF ever configured with an IP address. External networking for VMs is achieved by bridging PIFs to VIFs using the network object which acts as a virtual Ethernet switch.

The steps required for networking features such as VLANs, NIC bonds, and dedicating a NIC to storage traffic are covered in the following sections.

4.2. Managing networking configuration

Some of the network configuration procedures in this section differ depending on whether you are configuring a stand-alone server or a server that is part of a resource pool.

4.2.1. Creating networks in a standalone server

Because external networks are created for each PIF during host installation, creating additional networks is typically only required to:

use an internal network

support advanced operations such as VLANs or NIC bonding

To add or remove networks using XenCenter, refer to the XenCenter online Help.

To add a new network using the CLI

1. Open the XenServer host text console.

2. Create the network with the network-create command, which returns the UUID of the newly created network:

xe network-create name-label=<mynetwork>

At this point the network is not connected to a PIF and therefore is internal.

4.2.2. Creating networks in resource pools

All XenServer hosts in a resource pool should have the same number of physical network interface cards (NICs), although this requirement is not strictly enforced when a XenServer host is joined to a pool.

Having the same physical networking configuration for XenServer hosts within a pool is important because all hosts in a pool share a common set of XenServer networks. PIFs on the individual hosts are connected to pool-wide networks based on device name. For example, all XenServer hosts in a pool with an eth0 NIC will have a corresponding PIF plugged into the pool-wide Network 0 network. The same will be true for hosts with eth1 NICs and Network 1, as well as other NICs present in at least one XenServer host in the pool.

If one XenServer host has a different number of NICs than other hosts in the pool, complications can arise because not all pool networks will be valid for all pool hosts. For example, if hosts host1 and host2 are in the same pool and host1 has four NICs while host2 only has two, only the networks connected to PIFs corresponding to eth0 and eth1 will be valid on host2. VMs on host1 with VIFs connected to networks corresponding to eth2 and eth3 will not be able to migrate to host host2.

All NICs of all XenServer hosts within a resource pool must be configured with the same MTU size.

4.2.3. Creating VLANs

For servers in a resource pool, you can use the pool-vlan-create command. This command creates the VLAN and automatically creates and plugs in the required PIFs on the hosts in the pool. See Section 8.4.22.2, “pool-vlan-create” for more information.

To connect a network to an external VLAN using the CLI

1. Open the XenServer host text console.

2. Create a new network for use with the VLAN. The UUID of the new network is returned:

xe network-create name-label=network5

3. Use the pif-list command to find the UUID of the PIF corresponding to the physical NIC supporting the desired VLAN tag. The UUIDs and device names of all PIFs are returned, including any existing VLANs:

xe pif-list

4. Create a VLAN object specifying the desired physical PIF and VLAN tag on all VMs to be connected to the new VLAN. A new PIF will be created and plugged into the specified network. The UUID of the new PIF object is returned.

xe vlan-create network-uuid=<network_uuid> pif-uuid=<pif_uuid> vlan=5

5. Attach VM VIFs to the new network. See Section 4.2.1, “Creating networks in a standalone server” for more details.

4.2.4. Creating NIC bonds on a standalone host

Citrix recommends using XenCenter to create NIC bonds. For details, refer to the XenCenter help.

This section describes how to use the xe CLI to create bonded NIC interfaces on a standalone XenServer host. See Section 4.2.5, “Creating NIC bonds in resource pools” for details on using the xe CLI to create NIC bonds on XenServer hosts that comprise a resource pool.

4.2.4.1. Creating a NIC bond on a dual-NIC host

Creating a bond on a dual-NIC host implies that the PIF/NIC currently in use as the management interface for the host will be subsumed by the bond. The additional steps required to move the management interface to the bond PIF are included.

Bonding two NICs together

1. Use XenCenter or the vm-shutdown command to shut down all VMs on the host, thereby forcing all VIFs to be unplugged from their current networks. The existing VIFs will be invalid after the bond is enabled.

xe vm-shutdown uuid=<vm_uuid>

2. Use the network-create command to create a new network for use with the bonded NIC. The UUID of the new network is returned:

xe network-create name-label=<bond0>

3. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:

xe pif-list

4. Use the bond-create command to create the bond by specifying the newly created network UUID and the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:

xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

Note

See Section 4.2.4.2, “Controlling the MAC address of the bond” for details on controlling the MAC address used for the bond PIF.

5. Use the pif-list command to determine the UUID of the new bond PIF:

xe pif-list device=<bond0>

6. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings for the bond PIF. See Chapter 8, Command line interface for more detail on the options available for the pif-reconfigure-ip command.

xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

7. Use the host-management-reconfigure command to move the management interface from the existing physical PIF to the bond PIF. This step will activate the bond:

xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

8. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary but might help reduce confusion when reviewing the host networking configuration.

xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None

9. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step can also be completed using XenCenter by editing the VM configuration and connecting the existing VIFs of a VM to the bond network.

10. Restart the VMs shut down in step 1.

4.2.4.2. Controlling the MAC address of the bond

Creating a bond on a dual-NIC host implies that the PIF/NIC currently in use as the management interface for the host will be subsumed by the bond. If DHCP is used to supply IP addresses to the host, in most cases the MAC address of the bond should be the same as the PIF/NIC currently in use, allowing the IP address of the host received from DHCP to remain unchanged.

The MAC address of the bond can be changed from that of the PIF/NIC currently in use for the management interface, but doing so will cause existing network sessions to the host to be dropped when the bond is enabled and the MAC/IP address in use changes.

The MAC address to be used for a bond can be controlled in two ways:

an optional mac parameter can be specified in the bond-create command. Using this parameter, the bond MAC address can be set to any arbitrary address.

If the mac parameter is not specified, the MAC address of the first PIF listed in the pif-uuids parameter is used for the bond.
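
For example, a sketch of the first option; the mac value is a placeholder for the address you want the bond to use:

xe bond-create network-uuid=<network_uuid> \
pif-uuids=<pif_uuid_1>,<pif_uuid_2> mac=<desired_mac_address>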

4.2.4.3. Reverting NIC bonds

If reverting a XenServer host to a non-bonded configuration, be aware of the following requirements:

As when creating a bond, all VMs with VIFs on the bond must be shut down prior to destroying the bond. After reverting to a non-bonded configuration, reconnect the VIFs to an appropriate network.

Move the management interface to another PIF using the pif-reconfigure-ip and host-management-reconfigure commands prior to issuing the bond-destroy command, otherwise connections to the host (including XenCenter) will be dropped.
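
Putting those requirements together, a minimal sketch of reverting a management bond from the host console (the UUIDs are placeholders, and the physical PIF chosen must have a working IP configuration before the management interface is moved to it):

# Move management off the bond PIF, then destroy the bond
xe pif-reconfigure-ip uuid=<physical_pif_uuid> mode=DHCP
xe host-management-reconfigure pif-uuid=<physical_pif_uuid>
xe bond-destroy uuid=<bond_uuid>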

4.2.5. Creating NIC bonds in resource pools

Whenever possible, create NIC bonds as part of initial resource pool creation prior to joining additional hosts to the pool or creating VMs. Doing so allows the bond configuration to be automatically replicated to hosts as they are joined to the pool and reduces the number of steps required. Adding a NIC bond to an existing pool requires creating the bond configuration manually on the master and each of the members of the pool. Adding a NIC bond to an existing pool after VMs have been installed is also a disruptive operation, as all VMs in the pool must be shut down.

Citrix recommends using XenCenter to create NIC bonds. For details, refer to the XenCenter help.

This section describes using the xe CLI to create bonded NIC interfaces on XenServer hosts that comprise a resource pool. See Section 4.2.4.1, “Creating a NIC bond on a dual-NIC host” for details on using the xe CLI to create NIC bonds on a standalone XenServer host.

Warning

Do not attempt to create network bonds while HA is enabled. The process of bond creation will disturb the in-progress HA heartbeating and cause hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and will need the host-emergency-ha-disable command to recover.

4.2.5.1. Adding NIC bonds to new resource pools

1. Select the host you want to be the master. The master host belongs to an unnamed pool by default. To create a resource pool with the CLI, rename the existing nameless pool:

xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>

2. Create the NIC bond on the master as follows:

a. Use the network-create command to create a new pool-wide network for use with the bonded NICs. The UUID of the new network is returned.

xe network-create name-label=<network_name>

b. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:

xe pif-list

c. Use the bond-create command to create the bond, specifying the network UUID created in step a and the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:

xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

Note

See Section 4.2.4.2, “Controlling the MAC address of the bond” for details on controlling the MAC address used for the bond PIF.

d. Use the pif-list command to determine the UUID of the new bond PIF:

xe pif-list network-uuid=<network_uuid>

e. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings for the bond PIF. See Chapter 8, Command line interface, for more detail on the options available for the pif-reconfigure-ip command.

xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

f. Use the host-management-reconfigure command to move the management interface from the existing physical PIF to the bond PIF. This step will activate the bond:

xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

g. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary but might help reduce confusion when reviewing the host networking configuration.

xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None

3. Open a console on a host that you want to join to the pool and run the command:

xe pool-join master-address=<host1> master-username=root master-password=<password>

The network and bond information is automatically replicated to the new host. However, the management interface is not automatically moved from the host NIC to the bonded NIC. Move the management interface on the host to enable the bond as follows:

a. Use the host-list command to find the UUID of the host being configured:

xe host-list

b. Use the pif-list command to determine the UUID of the bond PIF on the new host. Include the host-uuid parameter to list only the PIFs on the host being configured:

xe pif-list network-name-label=<network_name> host-uuid=<host_uuid>

c. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings for the bond PIF. See Chapter 8, Command line interface, for more detail on the options available for the pif-reconfigure-ip command. This command must be run directly on the host:

xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

d. Use the host-management-reconfigure command to move the management interface from the existing physical PIF to the bond PIF. This step activates the bond. This command must be run directly on the host:

xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

e. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary but may help reduce confusion when reviewing the host networking configuration. This command must be run directly on the host server:

xe pif-reconfigure-ip uuid=<old_mgmt_pif_uuid> mode=None

4. For each additional host you want to join to the pool, repeat step 3 (joining the host and then moving its management interface) to enable the bond on that host.

4.2.5.2. Adding NIC bonds to an existing pool

Warning

Do not attempt to create network bonds while HA is enabled. The process of bond creation disturbs the in-progress HA heartbeating and causes hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and you will need to run the host-emergency-ha-disable command to recover them.

Note

If you are not using XenCenter for NIC bonding, the quickest way to create pool-wide NIC bonds is to create the bond on the master, and then restart the other pool members. Alternately you can use the service xapi restart command. This causes the bond and VLAN settings on the master to be inherited by each host. The management interface of each host must, however, be manually reconfigured.

When adding a NIC bond to an existing pool, the bond must be manually created on each host in the pool. The steps below can be used to add NIC bonds on both the pool master and other hosts with the following requirements:

1. All VMs in the pool must be shut down.

2. Add the bond to the pool master first, and then to other hosts.

3. The bond-create, host-management-reconfigure and host-management-disable commands affect the host on which they are run and so are not suitable for use on one host in a pool to change the configuration of another. Run these commands directly on the console of the host to be affected.

To add NIC bonds to the pool master and other hosts

1. Use the network-create command to create a new pool-wide network for use with the bonded NICs. This step should only be performed once per pool. The UUID of the new network is returned.

xe network-create name-label=<bond0>

2. Use XenCenter or the vm-shutdown command to shut down all VMs in the host pool to force all existing VIFs to be unplugged from their current networks. The existing VIFs will be invalid after the bond is enabled.

xe vm-shutdown uuid=<vm_uuid>

3. Use the host-list command to find the UUID of the host being configured:

xe host-list

4. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond. Include the host-uuid parameter to list only the PIFs on the host being configured:

xe pif-list host-uuid=<host_uuid>

5. Use the bond-create command to create the bond, specifying the network UUID created in step 1 and the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned.

xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

Note

See Section 4.2.4.2, “Controlling the MAC address of the bond” for details on controlling the MAC address used for the bond PIF.

6. Use the pif-list command to determine the UUID of the new bond PIF. Include the host-uuid parameter to list only the PIFs on the host being configured:

xe pif-list device=bond0 host-uuid=<host_uuid>

7. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings for the bond PIF. See Chapter 8, Command line interface for more detail on the options available for the pif-reconfigure-ip command. This command must be run directly on the host:

xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

8. Use the host-management-reconfigure command to move the management interface from the existing physical PIF to the bond PIF. This step will activate the bond. This command must be run directly on the host:

xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

9. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary, but might help reduce confusion when reviewing the host networking configuration. This command must be run directly on the host:

xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None

10. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step can also be completed using XenCenter by editing the VM configuration and connecting the existing VIFs of the VM to the bond network.

11. Repeat steps 3 - 10 for other hosts.

12. Restart the VMs previously shut down.

4.2.6. Configuring a dedicated storage NIC

XenServer allows use of either XenCenter or the xe CLI to configure and dedicate a NIC to specific functions, such as storage traffic.

Assigning a NIC to a specific function will prevent the use of the NIC for other functions such as host management, but requires that the appropriate network configuration be in place in order to ensure the NIC is used for the desired traffic. For example, to dedicate a NIC to storage traffic, the NIC, storage target, switch, and/or VLAN must be configured such that the target is only accessible over the assigned NIC. This allows use of standard IP routing to control how traffic is routed between multiple NICs within a XenServer host.

Note

Before dedicating a network interface as a storage interface for use with iSCSI or NFS SRs, ensure that the dedicated interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced, then storage traffic may be directed over the main management interface after a host reboot, due to the order in which network interfaces are initialized.

To assign NIC functions using the xe CLI

1. Ensure that the PIF is on a separate subnet, or routing is configured to suit your network topology in order to force the desired traffic over the selected PIF.

2. Set up an IP configuration for the PIF, adding appropriate values for the mode parameter and, if using static IP addressing, the IP, netmask, gateway, and DNS parameters:

xe pif-reconfigure-ip mode=<DHCP | Static> uuid=<pif-uuid>

3. Set the PIF's disallow-unplug parameter to true:

xe pif-param-set disallow-unplug=true uuid=<pif-uuid>
xe pif-param-set other-config:management_purpose="Storage" uuid=<pif-uuid>

If you want to use a storage interface that can be routed from the management interface also (bearing in mind that this configuration is not recommended), then you have two options:

After a host reboot, ensure that the storage interface is correctly configured, and use the xe pbd-unplug and xe pbd-plug commands to reinitialize the storage connections on the host, as sketched below. This will restart the storage connection and route it over the correct interface.

Alternatively, you can use xe pif-forget to remove the interface from the XenServer database, and manually configure it in the control domain. This is an advanced option and requires you to be familiar with how to manually configure Linux networking.
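
A minimal sketch of the first option, assuming the relevant PBD UUIDs have already been identified with pbd-list:

# Re-initialize the storage connections so traffic follows the storage PIF
xe pbd-list host-uuid=<host_uuid> sr-uuid=<sr_uuid> params=uuid
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>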

4.2.7. Controlling Quality of Service (QoS)

Citrix Essentials for XenServer allows an optional Quality of Service (QoS) value to be set on VM virtual network interfaces (VIFs) using the CLI. The supported QoS algorithm type is rate limiting, specified as a maximum transfer rate for the VIF in Kb per second.

For example, to limit a VIF to a maximum transfer rate of 100kb/s, use the vif-param-set command:

xe vif-param-set uuid=<vif_uuid> qos_algorithm_type=ratelimit
xe vif-param-set uuid=<vif_uuid> qos_algorithm_params:kbps=100

4.2.8. Changing networking configuration options

This section discusses how to change the networking configuration of a XenServer host. This includes:

changing the hostname

adding or removing DNS servers

changing IP addresses

changing which NIC is used as the management interface

adding a new physical NIC to the server

4.2.8.1. Hostname

The system hostname is defined in the pool-wide database and modified using the xe host-set-hostname-live CLI command as follows:

xe host-set-hostname-live uuid=<host_uuid> host-name=example

The underlying control domain hostname changes dynamically to reflect the new hostname.

4.2.8.2. DNS servers

To add or remove DNS servers in the IP addressing configuration of a XenServer host, use the pif-reconfigure-ip command. For example, for a PIF with a static IP:

xe pif-reconfigure-ip uuid=<pif_uuid> mode=static DNS=<new_dns_ip>

4.2.8.3. Changing IP address configuration for a standalone host

Network interface configuration can be changed using the xe CLI. The underlying network configuration scripts should not be modified directly.

To modify the IP address configuration of a PIF, use the pif-reconfigure-ip CLI command. See Section 8.4.11.4, “pif-reconfigure-ip” for details on the parameters of the pif-reconfigure-ip command.

Note

See Section 4.2.8.4, “Changing IP address configuration in resource pools” for details on changing host IP addresses in resource pools.

4.2.8.4. Changing IP address configuration in resource pools

XenServer hosts in resource pools have a single management IP address used for management and communication to and from other hosts in the pool. The steps required to change the IP address of a host's management interface are different for the master and for other hosts.

Note

Caution should be used when changing the IP address of a server and other networking parameters. Depending upon the network topology and the change being made, connections to network storage may be lost. If this happens the storage must be replugged using the Repair Storage function in XenCenter, or the pbd-plug command using the CLI. For this reason, it may be advisable to migrate VMs away from the server before changing its IP configuration.

Changing the IP address of a pool member host

1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 8, Command line interface for details on the parameters of the pif-reconfigure-ip command:

xe pif-reconfigure-ip uuid=<pif_uuid> mode=DHCP

2. Use the host-list CLI command to confirm that the member host has successfully reconnected to the master host by checking that all the other XenServer hosts in the pool are visible:

xe host-list

Changing the IP address of the master XenServer host requires additional steps because each of the member hosts uses the advertised IP address of the pool master for communication and will not know how to contact the master when its IP address changes.

Whenever possible, use a dedicated IP address for the pool master that is not likely to change for the lifetime of the pool.

To change the IP address of a pool master host

1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 8, Command line interface for details on the parameters of the pif-reconfigure-ip command:

xe pif-reconfigure-ip uuid=<pif_uuid> mode=DHCP

2. When the IP address of the pool master host is changed, all member hosts will enter emergency mode when they fail to contact the master host.

3. On the master XenServer host, use the pool-recover-slaves command to force the master to contact each of the member hosts and inform them of the new master IP address:

xe pool-recover-slaves

Refer to Section 6.4.2, “Master failures” for more information on emergency mode.

4.2.8.5. Management interface

When XenServer is installed on a host with multiple NICs, one NIC is selected for use as the management interface. The management interface is used for XenCenter connections to the host and for host-to-host communication.

To change the NIC used for the management interface

1. Use the pif-list command to determine which PIF corresponds to the NIC to be used as the management interface. The UUID of each PIF is returned.

xe pif-list

2. Use the pif-param-list command to verify the IP addressing configuration for the PIF that will be used for the management interface. If necessary, use the pif-reconfigure-ip command to configure IP addressing for the PIF to be used. See Chapter 8, Command line interface for more detail on the options available for the pif-reconfigure-ip command.

xe pif-param-list uuid=<pif_uuid>

3. Use the host-management-reconfigure CLI command to change the PIF used for the management interface. If this host is part of a resource pool, this command must be issued on the member host console:

xe host-management-reconfigure pif-uuid=<pif_uuid>

Warning

Putting the management interface on a VLAN network is not supported.

4.2.8.6. Disabling management access

To disable remote access to the management console entirely, use the host-management-disable CLI command.

Warning

Once the management interface is disabled, you will have to log in on the physical host console to perform management tasks, and external interfaces such as XenCenter will no longer work.

4.2.8.7. Adding a new physical NIC

Install a new physical NIC on a XenServer host in the usual manner. Then, after restarting the server, run the xe CLI command pif-scan to cause a new PIF object to be created for the new NIC.
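
For example (a sketch; the host UUID can be found with xe host-list):

xe pif-scan host-uuid=<host_uuid>
xe pif-list host-uuid=<host_uuid> params=uuid,device,MAC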

4.2.9. NIC/PIF ordering in resource pools

It is possible for physical NIC devices to be discovered in different orders on different servers even though the servers contain the same hardware. Verifying NIC ordering is recommended before using the pooling features of XenServer.

4.2.9.1. Verifying NIC ordering

Use the pif-list command to verify that NIC ordering is consistent across your XenServer hosts. Review the MAC address and carrier (link state) parameters associated with each PIF to verify that the devices discovered (eth0, eth1, etc.) correspond to the appropriate physical port on the server.

xe pif-list params=uuid,device,MAC,currently-attached,carrier,management, \
IP-configuration-mode

uuid ( RO)                  : 1ef8209d-5db5-cf69-3fe6-0e8d24f8f518
device ( RO)                : eth0
MAC ( RO)                   : 00:19:bb:2d:7e:8a
currently-attached ( RO)    : true
management ( RO)            : true
IP-configuration-mode ( RO) : DHCP
carrier ( RO)               : true

uuid ( RO)                  : 829fd476-2bbb-67bb-139f-d607c09e9110
device ( RO)                : eth1
MAC ( RO)                   : 00:19:bb:2d:7e:7a
currently-attached ( RO)    : false
management ( RO)            : false
IP-configuration-mode ( RO) : None
carrier ( RO)               : true

If the hosts have already been joined in a pool, add the host-uuid parameter to the pif-list command to scope the results to the PIFs on a given host.
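
For example, a sketch of the host-scoped form of the same query:

xe pif-list host-uuid=<host_uuid> \
params=uuid,device,MAC,currently-attached,carrier,management,IP-configuration-mode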

4.2.9.2. Re-ordering NICs

It is not possible to directly rename a PIF, although you can use the pif-forget and pif-introduce commands to achieve the same effect with the following restrictions:

The XenServer host must be standalone and not joined to a resource pool.

Re-ordering a PIF configured as the management interface of the host requires additional steps which are included in the example below. Because the management interface must first be disabled, the commands must be entered directly on the host console.

For the example configuration shown above use the following steps to change the NIC ordering so that eth0 corresponds to the device with a MAC address of 00:19:bb:2d:7e:7a:

1. Use XenCenter or the vm-shutdown command to shut down all VMs on the host to force existing VIFs to be unplugged from their networks.

xe vm-shutdown uuid=<vm_uuid>

2. Use the host-management-disable command to disable the management interface:

xe host-management-disable

3. Use the pif-forget command to remove the two incorrect PIF records:

xe pif-forget uuid=1ef8209d-5db5-cf69-3fe6-0e8d24f8f518
xe pif-forget uuid=829fd476-2bbb-67bb-139f-d607c09e9110

4. Use the pif-introduce command to re-introduce the devices with the desired naming:

xe pif-introduce device=eth0 host-uuid=<host_uuid> mac=00:19:bb:2d:7e:7a
xe pif-introduce device=eth1 host-uuid=<host_uuid> mac=00:19:bb:2d:7e:8a

5. Use the pif-list command again to verify the new configuration:

xe pif-list params=uuid,device,MAC

6. Use the pif-reconfigure-ip command to reset the management interface IP addressing configuration. See Chapter 8, Command line interface for details on the parameters of the pif-reconfigure-ip command.

xe pif-reconfigure-ip uuid=<728d9e7f-62ed-a477-2c71-3974d75972eb> mode=dhcp

7. Use the host-management-reconfigure command to set the management interface to the desired PIF and re-enable external management connectivity to the host:

xe host-management-reconfigure pif-uuid=<728d9e7f-62ed-a477-2c71-3974d75972eb>

4.3. Networking Troubleshooting

If you are having problems with configuring networking, first ensure that you have not directly modified any of the control domain ifcfg-* files. These files are managed by the control domain host agent, and any changes will be overwritten.

4.3.1. Diagnosing network corruption

Some models of network cards require firmware upgrades from the vendor to work reliably under load, or when certain optimizations are turned on. If you are seeing corrupted traffic to VMs, then you should first try to obtain the latest recommended firmware from your vendor and apply a BIOS update.

If the problem still persists, then you can use the CLI to disable receive / transmit offload optimizations on the physical interface.

Warning

Disabling receive / transmit offload optimizations can result in a performance loss and / or increased CPU usage.

First, determine the UUID of the physical interface. You can filter on the device field as follows:

xe pif-list device=eth0

Next, set the following parameter on the PIF to disable TX offload:

xe pif-param-set uuid=<pif_uuid> other-config:ethtool-tx=off

Finally, re-plug the PIF or reboot the host for the change to take effect.
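
A sketch of the re-plug using the PIF UUID found above (avoid unplugging the management PIF from a remote session; use the host console or a reboot in that case):

xe pif-unplug uuid=<pif_uuid>
xe pif-plug uuid=<pif_uuid>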

4.3.2. Recovering from a bad network configuration

In some cases it is possible to render networking unusable by creating an incorrect configuration. This is particularly true when attempting to make network configuration changes on a member XenServer host.

If a loss of networking occurs, the following notes may be useful in recovering and regaining network connectivity:

Citrix recommends that you ensure networking configuration is set up correctly before creating a resource pool, as it is usually easier to recover from a bad configuration in a non-pooled state.

The host-management-reconfigure and host-management-disable commands affect the XenServer host on which they are run and so are not suitable for use on one host in a pool to change the configuration of another. Run these commands directly on the console of the XenServer host to be affected, or use the xe -s, -u, and -pw remote connection options.

When the xapi service starts, it will apply configuration to the management interface first. The name of the management interface is saved in the /etc/xensource-inventory file. In extreme cases, you can stop the xapi service by running service xapi stop at the console, edit the inventory file to set the management interface to a safe default, and then ensure that the ifcfg files in /etc/sysconfig/network-scripts have correct configurations for a minimal network configuration (including one interface and one bridge; for example, eth0 on the xenbr0 bridge).
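
As an illustration only, a minimal ifcfg pair for that last case might look like the following; treat this as a sketch to adapt to the host's actual device and bridge names rather than as the exact files XenServer writes:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=xenbr0

# /etc/sysconfig/network-scripts/ifcfg-xenbr0
DEVICE=xenbr0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes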

Chapter 5. Workload Balancing

Table of Contents

5.1. Workload Balancing Overview
5.1.1. Workload Balancing Basic Concepts

5.2. Designing Your Workload Balancing Deployment
5.2.1. Deploying One Server
5.2.2. Planning for Future Growth
5.2.3. Increasing Availability
5.2.4. Multiple Server Deployments
5.2.5. Workload Balancing Security

5.3. Workload Balancing Installation Overview
5.3.1. Workload Balancing System Requirements
5.3.2. Workload Balancing Data Store Requirements
5.3.3. Operating System Language Support
5.3.4. Preinstallation Considerations
5.3.5. Installing Workload Balancing

5.4. Windows Installer Commands for Workload Balancing
5.4.1. ADDLOCAL
5.4.2. CERT_CHOICE
5.4.3. CERTNAMEPICKED
5.4.4. DATABASESERVER
5.4.5. DBNAME
5.4.6. DBUSERNAME
5.4.7. DBPASSWORD
5.4.8. EXPORTCERT
5.4.9. EXPORTCERT_FQFN
5.4.10. HTTPS_PORT
5.4.11. INSTALLDIR
5.4.12. PREREQUISITES_PASSED
5.4.13. RECOVERYMODEL
5.4.14. USERORGROUPACCOUNT
5.4.15. WEBSERVICE_USER_CB
5.4.16. WINDOWS_AUTH

5.5. Initializing and Configuring Workload Balancing
5.5.1. Initialization Overview
5.5.2. To initialize Workload Balancing
5.5.3. To edit the Workload Balancing configuration for a pool
5.5.4. Authorization for Workload Balancing
5.5.5. Configuring Antivirus Software
5.5.6. Changing the Placement Strategy
5.5.7. Changing the Performance Thresholds and Metric Weighting

5.6. Accepting Optimization Recommendations
5.6.1. To accept an optimization recommendation

5.7. Choosing an Optimal Server for VM Initial Placement, Migrate, and Resume
5.7.1. To start a virtual machine on the optimal server

5.8. Entering Maintenance Mode with Workload Balancing Enabled
5.8.1. To enter maintenance mode with Workload Balancing enabled

5.9. Working with Workload Balancing Reports
5.9.1. Introduction
5.9.2. Types of Workload Balancing Reports
5.9.3. Using Workload Balancing Reports for Tasks
5.9.4. Creating Workload Balancing Reports
5.9.5. Generating Workload Balancing Reports
5.9.6. Workload Balancing Report Glossary

5.10. Administering Workload Balancing
5.10.1. Disabling Workload Balancing on a Resource Pool
5.10.2. Reconfiguring a Resource Pool to Use Another WLB Server
5.10.3. Uninstalling Workload Balancing

5.11. Troubleshooting Workload Balancing
5.11.1. General Troubleshooting Tips
5.11.2. Error Messages
5.11.3. Issues Installing Workload Balancing
5.11.4. Issues Initializing Workload Balancing
5.11.5. Issues Starting Workload Balancing
5.11.6. Workload Balancing Connection Errors
5.11.7. Issues Changing Workload Balancing Servers

5.1. Workload Balancing Overview

Workload Balancing is a XenServer feature that helps you balance virtual machine workloads across hosts and locate VMs on the best possible servers for their workload in a resource pool. When Workload Balancing places a virtual machine, it determines the best host on which to start a virtual machine, or it rebalances the workload across hosts in a pool. For example, Workload Balancing lets you determine where to:

Start a virtual machine
Resume a virtual machine that you powered off
Move virtual machines when a host fails

When Workload Balancing is enabled, if you put a host into Maintenance Mode, Workload Balancing selects the optimal server for each of the host's virtual machines. For virtual machines taken offline, Workload Balancing provides recommendations to help you restart virtual machines on the optimal server in the pool.

Workload Balancing also lets you balance virtual-machine workloads across hosts in a XenServer resource pool. When the workload on a host exceeds the level you set as acceptable (the threshold), Workload Balancing will make recommendations to move part of its workload (for example, one or two virtual machines) to a less-taxed host in the same pool. It does this by evaluating the existing workloads on hosts against resource performance on other hosts.

You can also use Workload Balancing to help determine if you can power off hosts at certain times of day.

Workload Balancing performs these tasks by analyzing XenServer resource-pool metrics and recommending optimizations. You decide if you want these recommendations geared towards resource performance or hardware density. You can fine-tune the weighting of individual resource metrics (CPU, network, memory, and disk) so that the placement recommendations and critical thresholds align with your environment's needs.

To help you perform capacity planning, Workload Balancing provides historical reports about host and pool health, optimization and virtual-machine performance, and virtual-machine motion history.

5.1.1. Workload Balancing Basic Concepts

Workload Balancing captures data for resource performance on virtual machines and physical hosts. It uses this data, combined with the preferences you set, to provide optimization and placement recommendations. Workload Balancing stores performance data in a SQL Server database: the longer Workload Balancing runs, the more precise its recommendations become.

Workload Balancing recommends moving virtual-machine workloads across a pool to get the maximum efficiency, which means either performance or density depending on your goals. Within a Workload Balancing context:

 Performance refers to the usage of physical resources on a host (for example, the CPU, memory, network, and disk utilization on a host). When you set Workload Balancing to maximize performance, it recommends placing virtual machines to ensure the maximum amount of resources are available for each virtual machine.

 Density refers to the number of virtual machines on a host. When you set Workload Balancing to maximize density, it recommends placing virtual machines to ensure they have adequate computing power so you can reduce the number of hosts powered on in a pool.

Workload Balancing configuration preferences include settings for placement (performance or density), virtual CPUs, and performance thresholds.

Workload Balancing does not conflict with settings you already specified for High Availability. Citrix designed the features to work in conjunction with each other.

5.1.1.1. Workload Balancing Component Overview

The Workload Balancing software is a collection of services and components that let you manage all of Workload Balancing's basic functions, such as managing workloads and displaying reports. You can install the Workload Balancing services on one computer (physical or virtual) or multiple computers. A Workload Balancing server can manage more than one resource pool.

Workload Balancing consists of the following components:

 Workload Balancing server. Collects data from the virtual machines and their hosts and writes the data to the data store. This service is also referred to as the "data collector."

 Data Store. A Microsoft SQL Server or SQL Server Express database that stores performance and configuration data.

For more information about Workload Balancing components for large deployments with multiple servers, see Multiple Server Deployments.

5.2. Designing Your Workload Balancing Deployment

You can install Workload Balancing on one computer (physical or virtual) or distribute the components across multiple computers. The three most common deployment configurations are the following:

All components are installed on a single server
The data collector is installed on a dedicated server
All components are installed on a single server, but the data store is installed on a central database server

Because one data collector can monitor multiple resource pools, you do not need multiple data collectors to monitor multiple pools.

5.2.1. Deploying One Server

Depending on your environment and goals, you can install Workload Balancing and the data store on one server. In this configuration, one data collector monitors all the resource pools.

The following table shows the advantages and disadvantages of a single-server deployment:

Advantages: Simple installation and configuration. No Windows domain requirement.
Disadvantages: Single point of failure.

5.2.2. Planning for Future Growth

If you anticipate that you will want to add more resource pools in the future, consider designing your Workload Balancing deployment so that it supports growth and scalability. Consider:

 Using SQL Server for the data store. In large environments, consider using SQL Server for the data store instead of SQL Server Express. Because SQL Server Express has a 4GB disk-space limit, Workload Balancing limits the data store to 3.5GB when installed on this database. SQL Server has no preset disk-space limitation.

 Deploying the data store on a dedicated server. If you deploy SQL Server on a dedicated server (instead of collocating it on the same computer as the other Workload Balancing services), you can let it use more memory.

5.2.3. Increasing Availability

If Workload Balancing's recommendations or reports are critical in your environment, consider implementing strategies to ensure high availability, such as one of the following:

 Installing multiple data collectors, so there is not a single point of failure.
 Configuring Microsoft clustering. This is the only true failover configuration for single-server deployments. However, Workload Balancing services are not "cluster aware," so if the primary server in the cluster fails, any pending requests are lost when the secondary server in the cluster takes over.
 Making Workload Balancing part of a XenServer resource pool with High Availability enabled.

5.2.4. Multiple Server Deployments

In some situations, you might need to deploy Workload Balancing on multiple servers. When you deploy Workload Balancing on multiple servers, you place its key services on one or more servers:

 Data Collection Manager service. Collects data from the virtual machines and their hosts and writes the data to the data store. This service is also referred to as the "data collector."

 Web Service Host. Facilitates communications between XenServer and the Analysis Engine. Requires a security certificate, which you can create or provide during Setup.

 Analysis Engine service. Monitors resource pools and determines if a resource pool needs optimizations.

The size of your XenServer environment affects your Workload Balancing design. Since every environment is different, the size definitions that follow are examples of environments of that size:

Small: One resource pool with 2 hosts and 8 virtual machines
Medium: Two resource pools with 6 hosts and 8 virtual machines per pool
Large: Five resource pools with 16 hosts and 64 virtual machines per pool

5.2.4.1. Deploying Multiple Servers

Having multiple servers for Workload Balancing's services may be necessary in large environments. For example, having multiple servers may reduce "bottlenecks." If you decide to deploy Workload Balancing's services on multiple computers, all servers must be members of mutually trusted Active Directory domains.

Advantages: Provides better scalability. Can monitor more resource pools.
Disadvantages: More equipment to manage and, consequently, more expense.

5.2.4.2. Deploying Multiple Data Collectors

Workload Balancing supports multiple data collectors, which might be beneficial in environments with many resource pools. When you deploy multiple data collectors, the data collectors work together to ensure all XenServer pools are being monitored at all times.

All data collectors collect data from their own resource pools. One data collector, referred to as the master, also does the following:

 Checks for configuration changes and determines the relationships between resource pools and data collectors
 Checks for new XenServer resource pools to monitor and assigns these pools to a data collector
 Monitors the health of the other data collectors

If a data collector goes offline or you add a new resource pool, the master data collector rebalances the workload across the data collectors. If the master data collector goes offline, another data collector assumes the role of the master.

5.2.4.3. Considering Large Environments

In large environments, consider the following:

 When you install Workload Balancing on SQL Server Express, Workload Balancing limits the size of the metrics data to 3.5GB. If the data grows beyond this size, Workload Balancing automatically starts grooming the data, deleting older data.

 Citrix recommends putting the data store on one computer and the Workload Balancing services on another computer.

 For Workload Balancing data-store operations, memory utilization is the largest consideration.

5.2.5. Workload Balancing Security

Citrix designed Workload Balancing to operate in a variety of environments, and Citrix recommends properly securing the installation. The steps required vary according to your planned deployment and your organization's security policies. This topic provides information about the available options and makes recommendations.

Important

Citrix does not recommend changing the privileges or accounts under which the Workload Balancing services run.

5.2.5.1. Encryption Requirements

XenServer communicates with Workload Balancing using HTTPS. Consequently, you must create or install an SSL/TLS certificate when you install Workload Balancing (or the Web Service Host, if it is on a separate server). You can either use a certificate from a Trusted Authority or create a self-signed certificate using Workload Balancing Setup.

The self-signed certificate Workload Balancing Setup creates is not from a Trusted Authority. If you do not want to use this self-signed certificate, prepare a certificate before you begin Setup and specify that certificate when prompted.

If desired, during Workload Balancing Setup, you can export the certificate so that you can import it into XenServer after Setup.

Note

If you create a self-signed certificate during Workload Balancing Setup, Citrix recommends that you eventually replace this certificate with one from a Trusted Authority.

5.2.5.2. Domain Considerations

When deploying Workload Balancing, your environment determines your domain and security requirements.

If your Workload Balancing services are on multiple computers, the computers must be part of a domain.

If your Workload Balancing components are in separate domains, you must configure trust relationships between those domains.

5.2.5.3. SQL Server Authentication Requirements

When you install SQL Server or SQL Server Express, you must configure Windows authentication (also known as Integrated Windows Authentication). Workload Balancing does not support SQL Server Authentication.

5.3. Workload Balancing Installation Overview

Workload Balancing is a XenServer feature that helps manage virtual-machine workloads within a XenServer environment. Workload Balancing requires that you:

1. Install SQL Server or SQL Server Express.
2. Install Workload Balancing on one or more computers (physical or virtual). See Section 5.2, “Designing Your Workload Balancing Deployment”.

Typically, you install and configure Workload Balancing after you have created one or more XenServer resource pools in your environment.

You install all Workload Balancing functions, such as the Workload Balancing data store, the Analysis Engine, and the Web Service Host, from Setup.

You can install Workload Balancing in one of two ways:

 Installation Wizard. Start the installation wizard from Setup.exe. Citrix suggests installing Workload Balancing from the installation wizard because this method checks that your system meets the installation requirements.

 Command Line. If you install Workload Balancing from the command line, the prerequisites are not checked. For Msiexec properties, see Section 5.4, “Windows Installer Commands for Workload Balancing”.

When you install the Workload Balancing data store, Setup creates the database. You do not need to run Workload Balancing Setup locally on the database server: Setup supports installing the data store across a network.

If you are installing Workload Balancing services as components on separate computers, you must install the database component before the Workload Balancing services.

After installation, you must configure Workload Balancing before you can use it to optimize workloads. For information, see Section 5.5, “Initializing and Configuring Workload Balancing”.

For information about system requirements, see Section 5.3.1, “Workload Balancing System Requirements”. For installation instructions, see Section 5.3.5, “Installing Workload Balancing”.

5.3.1. Workload Balancing System Requirements

This topic provides system requirements for:

 Section 5.3.1.1, “Supported XenServer Versions”
 Section 5.3.1.2, “Supported Operating Systems”
 Section 5.3.1.3, “Recommended Hardware”
 Section 5.3.1.4, “Data Collection Manager”
 Section 5.3.1.5, “Analysis Engine”
 Section 5.3.1.6, “Web Service Host”

For information about data store requirements, see Section 5.3.2, “Workload Balancing Data Store Requirements”.

5.3.1.1. Supported XenServer Versions

XenServer 5.5

5.3.1.2. Supported Operating Systems

Unless otherwise noted, Workload Balancing components run on the following operating systems (32-bit and 64-bit):

 Windows Server 2008
 Windows Server 2003, Service Pack 2
 Windows Vista
 Windows XP Professional, Service Pack 2 or Service Pack 3

If you are installing with User Account Control (UAC) enabled, see Microsoft's documentation.

5.3.1.3. Recommended Hardware

Unless otherwise noted, Workload Balancing components require the following hardware (32-bit and 64-bit):

 CPU: 2GHz or faster
 Memory: 2GB recommended (1GB of RAM required)
 Disk Space: 20GB (minimum)

When all Workload Balancing services are installed on the same server, Citrix recommends that the server have a minimum of a dual-core processor.

5.3.1.4. Data Collection Manager

Operating System Components: Microsoft .NET Framework 3.5, Service Pack 1 or higher
Hard Drive: 1GB

5.3.1.5. Analysis Engine

Operating System Components: Microsoft .NET Framework 3.5, Service Pack 1 or higher

5.3.1.6. Web Service Host

Operating System Components: Microsoft .NET Framework 3.5, Service Pack 1 or higher

5.3.2. Workload Balancing Data Store Requirements

This topic provides information about the SQL Server versions and configurations that Workload Balancing supports. It also provides information about additional compatibility and authentication requirements.

5.3.2.1. Installation Requirements for SQL Server

In addition to the prerequisites SQL Server and SQL Server Express require, the data store also requires the following:

Note

In this topic, the term SQL Server refers to both SQL Server and SQL Server Express unless the version is mentioned explicitly.

Operating System. One of the following, as required by your SQL Server edition:

 Windows Server 2008
 Windows Server 2003, Service Pack 1 or higher
 Windows Vista and Windows XP Professional (for SQL Server Express)

Database. The 32-bit or 64-bit edition of:

 SQL Server 2008 Express. The 32-bit edition is available on the Workload Balancing installation media in the sql folder.
 SQL Server 2008 (Standard edition or better)
 SQL Server 2005, Service Pack 1 or higher (Standard edition or better)

Note

Windows Server 2008 servers require SQL Server 2005, Service Pack 2 or higher.

Required Configurations.

 Configure SQL Server for case-insensitive collation. Workload Balancing does not currently support case-sensitive collation.
 Microsoft SQL Server 2005 Backward Compatibility Components. See Section 5.3.2.3, “Backwards Compatibility Requirement for SQL Server 2008” for more information.

Hard Drive.

 SQL Server Express: 5GB
 SQL Server: 20GB

5.3.2.2. SQL Server Database Authentication Requirements

During installation, Setup must connect and authenticate to the database server to create the data store. Configure the SQL Server database instance to use either:

 Windows Authentication mode, or
 SQL Server and Windows Authentication mode (Mixed Mode authentication)

If you create an account on the database for use during Setup, the account must have sysadmin privileges for the database instance where you want to create the Workload Balancing data store.

5.3.2.3. Backwards Compatibility Requirement for SQL Server 2008

After installing SQL Server Express 2008 or SQL Server 2008, you must install the SQL Server 2005 Backward Compatibility Components on all Workload Balancing computers before running Workload Balancing Setup. The Backward Compatibility components let Workload Balancing Setup configure the database.

The Workload Balancing installation media includes the 32-bit editions of SQL Server Express 2008 and the SQL Server 2005 Backward Compatibility Components.

While some SQL Server editions may include the Backward Compatibility components with their installation programs, their Setup program might not install them by default.

You can also obtain the Backward Compatibility components from the download page for the latest Microsoft SQL Server 2008 Feature Pack.

Install the files in the sql folder in the following order (a sample unattended command line follows this list):

1. en_sql_server_2008_express_with_tools_x86.exe. Installs SQL Server Express, 32-bit edition. Requires installing Microsoft .NET Framework 3.5, Service Pack 1 first.

2. SQLServer2005_BC.msi. Installs the SQL Server 2005 Backward Compatibility Components for 32-bit computers.

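If you want to script the second step, the Backward Compatibility package is a standard Windows Installer package, so it can be installed unattended with Msiexec. The command below is a minimal sketch; the path to the sql folder on the installation media and the log file location are placeholders you would adjust for your environment.

msiexec.exe /i D:\sql\SQLServer2005_BC.msi /quiet /l*v C:\Temp\bc_install.log

The /quiet switch suppresses the user interface, and /l*v writes a verbose log you can review if the installation fails.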
5.3.3. Operating System Language Support

Workload Balancing is supported on the following operating system languages:

 US English
 Japanese (Native JP)

Note

In configurations where the database and Web server are installed on separate servers, the operating system languages must match on both computers.

5.3.4. Preinstallation Considerations

You may need to configure software in your environment so that Workload Balancing can function correctly. Review the following considerations and determine if they apply to your environment. Also, check the XenServer readme for additional, late-breaking release-specific requirements.

 Account for Workload Balancing. Before Setup, you must create a user account for XenServer to use to connect to Workload Balancing (specifically the Web Service Host service). This user account can be either a domain account or an account local to the computer running Workload Balancing (or the Web Service Host service).

Important

When you create this account in Windows, Citrix suggests enabling the Password never expires option.

During Setup, you must specify the authorization type (a single user or group) and the user or group with permissions to make requests of the Web Service Host service. For additional information, see Section 5.5.4, “Authorization for Workload Balancing”.

 SSL/TLS Certificate. XenServer and Workload Balancing communicate over HTTPS. Consequently, during Workload Balancing Setup, you must either provide an SSL/TLS certificate from a Trusted Authority or create a self-signed certificate.

 Group Policy. If the server on which you are installing Workload Balancing is a member of a Group Policy Organizational Unit, ensure that current or scheduled, future policies do not prohibit Workload Balancing or its services from running.

Note

In addition, review the applicable release notes for release-specific configuration information.

5.3.5. Installing Workload Balancing

Before installing Workload Balancing, you must:

1. Install a SQL Server or SQL Server Express database as described in Workload Balancing Data Store Requirements.
2. Have a login on the SQL Server database instance that has SQL Login creation privileges. For SQL Server Authentication, the account needs sysadmin privileges.
3. Create an account for Workload Balancing, as described in Preinstallation Considerations, and have its name on hand.
4. Configure all Workload Balancing servers to meet the system requirements described in Workload Balancing System Requirements.

After Setup finishes installing Workload Balancing, you must configure Workload Balancing before it begins gathering data and making recommendations.

5.3.5.1. To install Workload Balancing on a single server

The following procedure installs Workload Balancing and all of its services on one computer:

1. Launch the Workload Balancing Setup wizard from Autorun.exe, and select the Workload Balancing installation option.

2. After the initial Welcome page appears, click Next.

3. In the Setup Type page, select Workload Balancing Services and Data Store, and click Next. This option lets you install Workload Balancing, including the Web Services Host, Analysis Engine, and Data Collection Manager services. After you click Next, Workload Balancing Setup verifies that your system has the correct prerequisites.

4. Accept the End-User License Agreement.

5. In the Component Selection page, select all of the following components:

 Database. Creates and configures a database for the Workload Balancing data store.
 Services.
  Data Collection Manager. Installs the Data Collection Manager service, which collects data from the virtual machines and their hosts and writes this data to the data store.
  Analysis Engine. Installs the Analysis Engine service, which monitors resource pools and recommends optimizations by evaluating the performance metrics the data collector gathered.
  Web Service Host. Installs the service for the Web Service Host, which facilitates communications between XenServer and the Analysis Engine. If you enable the Web Service Host component, Setup prompts you for a security certificate. You can either use the self-signed certificate Workload Balancing Setup provides or specify a certificate from a Trusted Authority.

6. In the Database Server page, in the SQL Server Selection section, select one of the following:

 Enter the name of a database server. Lets you type the name of the database server that will host the data store. Use this option to specify an instance name.

Note

If you installed SQL Express and specified an instance name, append the server name with \yourinstancename. If you installed SQL Express without specifying an instance name, append the server name with \sqlexpress.

 Choose an existing database server. Lets you select the database server from a list of servers Workload Balancing Setup detected on your network. Use the first option (Enter the name of a database server) if you specified an instance name.

7. In the Install Using section, select one of the following methods of authentication:

 Windows Authentication. This option uses your current credentials (that is, the Windows credentials you used to log on to the computer on which you are installing Workload Balancing). To select this option, your current Windows credentials must have been added as a login to the SQL Server database server (instance).

 SQL Server Authentication. To select this option, you must have configured SQL Server to support Mixed Mode authentication.

Note

Citrix recommends clicking Test Connect to ensure Setup can use the credentials you provided to contact the database server.

8. In the Database Information page, select Install a new Workload Balancing data store and type the name you want to assign to the Workload Balancing database in SQL Server. The default database name is WorkloadBalancing.

9. In the Web Service Host Account Information page, select HTTPS end point (selected by default). Edit the port number, if necessary; the port is set to 8012 by default.

Note

If you are using Workload Balancing with XenServer, you must select HTTPS end points. XenServer can only communicate with the Workload Balancing feature over SSL/TLS. If you change the port here, you must also change it on XenServer using either the Configure Workload Balancing wizard or the XE commands.

10. For the account (on the Workload Balancing server) that XenServer will use to connect to Workload Balancing, select the authorization type, User or Group, and type one of the following:

 User name. Enter the name of the account you created for XenServer (for example, workloadbalancing_user).

 Group name. Enter the group name for the account you created. Specifying a group name lets you specify a group of users that have been granted permission to connect to the Web Service Host on the Workload Balancing server. Specifying a group name lets more than one person in your organization log on to Workload Balancing with their own credentials. (Otherwise, you will need to provide all users with the same set of credentials to use for Workload Balancing.)

Specifying the authorization type lets Workload Balancing recognize the XenServer connection. For more information, see Section 5.5.4, “Authorization for Workload Balancing”. You do not specify the password until you configure Workload Balancing.

11. In the SSL/TLS Certificate page, select one of the following certificate options:

 Select existing certificate from a Trusted Authority. Specifies a certificate you generated from a Trusted Authority before Setup. Click Browse to navigate to the certificate.

 Create a self-signed certificate with subject name. Setup creates a self-signed certificate for the Workload Balancing server. Delete the certificate-chain text and enter a subject name.

 Export this certificate for import into the certificate store on XenServer. If you want to import the certificate into the Trusted Root Certification Authorities store on the computer running XenServer, select this check box. Enter the full path and file name where you want the certificate saved.

12. Click Install.

5.3.5.2. To install the data store separately

The following procedure installs the Workload Balancing data store only:

1. From any server with network access to the database, launch the Workload Balancing Setup wizard from Autorun.exe, and select the Workload Balancing installation option.

2. After the initial Welcome page appears, click Next.

3. In the Setup Type page, select Workload Balancing Database Only, and click Next. This option lets you install the Workload Balancing data store only. After you click Next, Workload Balancing Setup verifies that your system has the correct prerequisites.

4. Accept the End-User License Agreement, and click Next.

5. In the Component Selection page, accept the default installation and click Next. This option creates and configures a database for the Workload Balancing data store.

6. In the Database Server page, in the SQL Server Selection section, select one of the following:

 Enter the name of a database server. Lets you type the name of the database server that will host the data store. Use this option to specify an instance name.

Note

If you installed SQL Express and specified an instance name, append the server name with \yourinstancename. If you installed SQL Express without specifying an instance name, append the server name with \sqlexpress.

 Choose an existing database server. Lets you select the database server from a list of servers Workload Balancing Setup detected on your network.

7. In the Install Using section, select one of the following methods of authentication:

 Windows Authentication. This option uses your current credentials (that is, the Windows credentials you used to log on to the computer on which you are installing Workload Balancing). To select this option, your current Windows credentials must have been added as a login to the SQL Server database server (instance).

 SQL Server Authentication. To select this option, you must have configured SQL Server to support Mixed Mode authentication.

Note

Citrix recommends clicking Test Connect to ensure Setup can use the credentials you provided to contact the database server.

8. In the Database Information page, select Install a new Workload Balancing data store and type the name you want to assign to the Workload Balancing database in SQL Server. The default database name is WorkloadBalancing.

9. Click Install to install the data store.

5.3.5.3. To install Workload Balancing components separately

The following procedure installs Workload Balancing services on separate computers:

1. Launch the Workload Balancing Setup wizard from Autorun.exe, and select the Workload Balancing installation option.

2. After the initial Welcome page appears, click Next.

3. In the Setup Type page, select Workload Balancing Server Services and Database. This option lets you install Workload Balancing, including the Web Services Host, Analysis Engine, and Data Collection Manager services. Workload Balancing Setup verifies that your system has the correct prerequisites.

4. Accept the End-User License Agreement, and click Next.

5. In the Component Selection page, select the services you want to install:

 Services.
  Data Collection Manager. Installs the Data Collection Manager service, which collects data from the virtual machines and their hosts and writes this data to the data store.
  Analysis Engine. Installs the Analysis Engine service, which monitors resource pools and recommends optimizations by evaluating the performance metrics the data collector gathered.
  Web Service Host. Installs the service for the Web Service Host, which facilitates communications between XenServer and the Analysis Engine. If you enable the Web Service Host component, Setup prompts you for a security certificate. You can either use the self-signed certificate Workload Balancing Setup provides or specify a certificate from a Trusted Authority.

6. In the Database Server page, in the SQL Server Selection section, select one of the following:

 Enter the name of a database server. Lets you type the name of the database server that is hosting the data store.

Note

If you installed SQL Express and specified an instance name, append the server name with \yourinstancename. If you installed SQL Express without specifying an instance name, append the server name with \sqlexpress.

 Choose an existing database server. Lets you select the database server from a list of servers Workload Balancing Setup detected on your network.

Note

Citrix recommends clicking Test Connect to ensure Setup can use the credentials you provided to contact the database server successfully.

7. In the Web Service Information page, select HTTPS end point (selected by default) and edit the port number, if necessary. The port is set to 8012 by default.

Note

If you are using Workload Balancing with XenServer, you must select HTTPS end points. XenServer can only communicate with the Workload Balancing feature over SSL/TLS. If you change the port here, you must also change it on XenServer using either the Configure Workload Balancing wizard or the XE commands.

8. For the account (on the Workload Balancing server) that XenServer will use to connect to Workload Balancing, select the authorization type, User or Group, and type one of the following:

 User name. Enter the name of the account you created for XenServer (for example, workloadbalancing_user).

 Group name. Enter the group name for the account you created. Specifying a group name lets more than one person in your organization log on to Workload Balancing with their own credentials. (Otherwise, you will need to provide all users with the same set of credentials to use for Workload Balancing.)

Specifying the authorization type lets Workload Balancing recognize the XenServer connection. For more information, see Section 5.5.4, “Authorization for Workload Balancing”. You do not specify the password until you configure Workload Balancing.

9. In the SSL/TLS Certificate page, select one of the following certificate options:

 Select existing certificate from a Trusted Authority. Specifies a certificate you generated from a Trusted Authority before Setup. Click Browse to navigate to the certificate.

 Create a self-signed certificate with subject name. Setup creates a self-signed certificate for the Workload Balancing server. To change the name of the certificate Setup creates, type a different name.

 Export this certificate for import into the certificate store on XenServer. If you want to import the certificate into the Trusted Root Certification Authorities store on the computer running XenServer, select this check box. Enter the full path and file name where you want the certificate saved.

10. Click Install.

5.3.5.3.1. To verify your Workload Balancing installation

Workload Balancing Setup does not install an icon in the Windows Start menu. Use this procedure to verify that Workload Balancing installed correctly before trying to connect to Workload Balancing with the Workload Balancing Configuration wizard.

1. Verify Windows Add or Remove Programs (Windows XP) lists Citrix Workload Balancing in its list of currently installed programs.

2. Check for the following services in the Windows Services panel (a command-line check is sketched after this procedure):

 Citrix WLB Analysis Engine
 Citrix WLB Data Collection Manager
 Citrix WLB Web Service Host

All of these services must be started and running before you start configuring Workload Balancing.

3. If Workload Balancing appears to be missing, check the installation log to see if it installed successfully:

 If you used the Setup wizard, the log is at %Documents and Settings%\username\Local Settings\Temp\msibootstrapper2CSM_MSI_Install.log (by default). On Windows Vista and Windows Server 2008, this log is at %Users%\username\AppData\Local\Temp\msibootstrapper2CSM_MSI_Install.log. (Username is the name of the user logged on during installation.)

 If you used the Setup properties (Msiexec), the log is at C:\log.txt (by default) or wherever you specified for Setup to create it.

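If you prefer to check the services from a command prompt rather than the Services panel, the following line lists any running services whose display names contain "Citrix WLB". This is a convenience sketch; it assumes the default display names listed above and only shows services that are already started.

net start | findstr /C:"Citrix WLB"

If the three Workload Balancing services do not appear in the output, start them from the Services panel before continuing with configuration.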
5.4. Windows Installer Commands for Workload Balancing

The Workload Balancing installation supports using the Msiexec command for Setup. The Msiexec command lets you install, modify, and perform operations on Windows Installer (.msi) packages from the command line.

Set properties by adding Property="value" on the command line after other switches and parameters.

The following sample command line performs a full installation of the Workload Balancing Windows Installer package and creates a log file to capture information about this operation.

msiexec.exe /I C:\path-to-msi\workloadbalancingx64.msi /quiet PREREQUISITES_PASSED="1" DBNAME="WorkloadBalancing1" DATABASESERVER="WLB-DB-SERVER\INSTANCENAME" HTTPS_PORT="8012" WEBSERVICE_USER_CB="0" USERORGROUPACCOUNT="domain\WLBgroup" CERT_CHOICE="0" CERTNAMEPICKED="cn=wlb-cert1" EXPORTCERT=1 EXPORTCERT_FQFN="C:\Certificates\WLBCert.cer" INSTALLDIR="C:\Program Files\Citrix\WLB" ADDLOCAL="Database,Complete,Services,DataCollection,Analysis_Engine,DWM_Web_Service" /l*v log.txt

There are two Workload Balancing Windows Installer packages: workloadbalancing.msi and workloadbalancingx64.msi. If you are installing Workload Balancing on a 64-bit operating system, specify workloadbalancingx64.msi.

To see if Workload Balancing Setup succeeded, see Section 5.3.5.3.1, “To verify your Workload Balancing installation”.

Important

If the system is missing prerequisites, Workload Balancing Setup does not provide error messages when you install Workload Balancing using Windows Installer commands. Instead, the installation fails.

5.4.1. ADDLOCAL

5.4.1.1. Definition

Specifies one or more Workload Balancing features to install. The values of ADDLOCAL are Workload Balancing components and services.

5.4.1.2. Possible values

 Database. Installs the Workload Balancing data store.
 Complete. Installs all Workload Balancing features and components.
 Services. Installs all Workload Balancing services, including the Data Collection Manager, the Analysis Engine, and the Web Service Host service.
 DataCollection. Installs the Data Collection Manager service.
 Analysis_Engine. Installs the Analysis Engine service.
 DWM_Web_Service. Installs the Web Service Host service.

5.4.1.3. Default value

Blank

5.4.1.4. Remarks

Separate entries by commas. The values must be installed locally.

You must install the data store on a shared or dedicated server before installing other services. You can only install services standalone, without installing the database simultaneously, if you have a Workload Balancing data store installed and specify it in the installation script. See Section 5.4.5, “DBNAME” and Section 5.4.4, “DATABASESERVER” for more information.

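For example, if you already created the data store on a central database server and now want to add only the services to another computer, you can express that with ADDLOCAL. The command below is a hedged sketch: the server, database, and account names are placeholders, and it uses only properties documented in this section.

msiexec.exe /I C:\path-to-msi\workloadbalancingx64.msi /quiet PREREQUISITES_PASSED="1" ADDLOCAL="Services,DataCollection,Analysis_Engine,DWM_Web_Service" DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS" DBNAME="WorkloadBalancing" WEBSERVICE_USER_CB="1" USERORGROUPACCOUNT="workloadbalancing_user" CERT_CHOICE="0" CERTNAMEPICKED="cn=wlb-cert1" /l*v services_install.log

Because ADDLOCAL omits Database, Setup connects to the existing data store named by DATABASESERVER and DBNAME instead of creating a new one.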
5.4.2. CERT_CHOICE

5.4.2.1. Definition

Specifies for Setup to either create a certificate or use an existing certificate.

5.4.2.2. Possible values

 0. Specifies for Setup to create a new certificate.
 1. Specifies an existing certificate.

5.4.2.3. Default value

1

5.4.2.4. Remarks

You must also specify CERTNAMEPICKED. See Section 5.4.3, “CERTNAMEPICKED” for more information.

5.4.3. CERTNAMEPICKED

5.4.3.1. Definition

Specifies the subject name when you use Setup to create a self-signed SSL/TLS certificate. Alternatively, this specifies an existing certificate.

5.4.3.2. Possible values

 cn. Use to specify the subject name of the certificate to use or create.

5.4.3.3. Example

cn=wlb-kirkwood, where wlb-kirkwood is the name you are specifying as the name of the certificate to create or the certificate you want to select.

5.4.3.4. Default value

Blank.

5.4.3.5. Remarks

You must specify this parameter with the CERT_CHOICE parameter. See Section 5.4.2, “CERT_CHOICE” for more information.

5.4.4. DATABASESERVER

5.4.4.1. Definition

Specifies the database server, and its instance name, where you want to install the data store. You can also use this property to specify an existing database that you want to use or upgrade.

5.4.4.2. Possible values

User defined.

Note

If you specified an instance name when you installed SQL Server or SQL Express, append the server name with \yourinstancename. If you installed SQL Express without specifying an instance name, append the server name with \sqlexpress.

5.4.4.3. Default value

Local

5.4.4.4. Example

DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS", where WLB-DB-SERVER is the name of your database server and SQLEXPRESS is the name of the database instance.

5.4.4.5. Remarks

 Required property for all installations.
 Whether installing a database or connecting to an existing data store, you must specify this property with DBNAME.
 Even if you are specifying a database on the same computer as the one on which you are performing Setup, you still must define the name of the database.
 When you specify DATABASESERVER, in some circumstances, you must also specify Section 5.4.16, “WINDOWS_AUTH” and its accompanying properties.

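As an illustration of the minimal property set, the sketch below installs only the data store on a remote SQL Server Express instance; the .msi path, server name, and database name are placeholders you would replace.

msiexec.exe /I C:\path-to-msi\workloadbalancingx64.msi /quiet PREREQUISITES_PASSED="1" ADDLOCAL="Database" DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS" DBNAME="WorkloadBalancing" /l*v datastore_install.log

Because Windows authentication is the default (WINDOWS_AUTH="1"), this form assumes the Windows account running Setup is already a login with sysadmin privileges on that instance.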
5.4.5. DBNAME

5.4.5.1. Definition

The name of the Workload Balancing database that Setup will create or upgrade during installation.

5.4.5.2. Possible values

User defined.

5.4.5.3. Default value

WorkloadBalancing

5.4.5.4. Remarks

 Required property for all installations. You must set a value for this property.
 Whether connecting to or installing a data store, you must specify this property with DATABASESERVER.
 Even if you are specifying a database on the same computer as the one on which you are performing Setup, you still must define the name of the database.
 Localhost is not a valid value.

5.4.6. DBUSERNAME

5.4.6.1. Definition

Specifies the user name for the Windows or SQL Server account you are using for database authentication during Setup.

5.4.6.2. Possible values

User defined.

5.4.6.3. Default value

Blank

5.4.6.4. Remarks

This property is used with WINDOWS_AUTH (see Section 5.4.16, “WINDOWS_AUTH”) and DBPASSWORD (see Section 5.4.7, “DBPASSWORD”).

Because you specify the server name and instance using Section 5.4.4, “DATABASESERVER”, do not qualify the user name.

5.4.7. DBPASSWORD

5.4.7.1. Definition

Specifies the password for the Windows or SQL Server account you are using for database authentication during Setup.

5.4.7.2. Possible values

User defined.

5.4.7.3. Default value

Blank.

5.4.7.4. Remarks

Use this property with the parameters documented in Section 5.4.16, “WINDOWS_AUTH” and Section 5.4.6, “DBUSERNAME”.

5.4.8. EXPORTCERT

5.4.8.1. Definition

Set this value to export an SSL/TLS certificate from the server on which you are installing Workload Balancing. Exporting the certificate lets you import it into the certificate stores of computers running XenServer.

5.4.8.2. Possible values

 0. Does not export the certificate.
 1. Exports the certificate and saves it to the location of your choice with the file name you specify using EXPORTCERT_FQFN.

5.4.8.3. Default value

0

5.4.8.4. Remarks

 Use with Section 5.4.9, “EXPORTCERT_FQFN”, which specifies the file name and path.
 Setup does not require this property to run successfully. (That is, you do not have to export the certificate.)
 This property lets you export self-signed certificates that you create during Setup as well as certificates that you created using a Trusted Authority.

5.4.9. EXPORTCERT_FQFN

5.4.9.1. Definition

Set to specify the path (location) and the file name you want Setup to use when exporting the certificate.

5.4.9.2. Possible values

The fully qualified path and file name to which to export the certificate. For example, C:\Certificates\WLBCert.cer.

5.4.9.3. Default value

Blank.

5.4.9.4. Remarks

Use this property with the parameter documented in Section 5.4.8, “EXPORTCERT”.

5.4.10. HTTPS_PORT

5.4.10.1. Definition

Use this property to change the default port over which Workload Balancing (the Web Service Host service) communicates with XenServer.

Specify this property when you are running Setup on the computer that will host the Web Service Host service. This may be either the Workload Balancing computer, in a one-server deployment, or the computer hosting the services.

5.4.10.2. Possible values

User defined.

5.4.10.3. Default value

8012

5.4.10.4. Remarks

If you set a value other than the default for this property, you must also change the value of this port in XenServer, which you can do with the Configure Workload Balancing wizard. The port number value specified during Setup and in the Configure Workload Balancing wizard must match.

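For example, if port 8012 is already in use on the Workload Balancing server, you might pick another port at install time. The command below is illustrative only; the port value 8013 and all names are placeholders.

msiexec.exe /I C:\path-to-msi\workloadbalancingx64.msi /quiet PREREQUISITES_PASSED="1" HTTPS_PORT="8013" ADDLOCAL="Complete" DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS" DBNAME="WorkloadBalancing" WEBSERVICE_USER_CB="1" USERORGROUPACCOUNT="workloadbalancing_user" CERT_CHOICE="0" CERTNAMEPICKED="cn=wlb-cert1" /l*v wlb_install.log

Remember to enter the same port number (8013 in this sketch) when you later run the Configure Workload Balancing wizard so that XenServer connects on the port the Web Service Host is actually listening on.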
5.4.11. INSTALLDIR

5.4.11.1. Definition

Installation directory; that is, the location where the Workload Balancing software is installed.

5.4.11.2. Possible values

User configurable

5.4.11.3. Default value

C:\Program Files\Citrix

5.4.12. PREREQUISITES_PASSED

5.4.12.1. Definition

You must set this property for Setup to continue. When enabled (PREREQUISITES_PASSED=1), Setup skips checking preinstallation requirements, such as memory or operating system configurations, and lets you perform a command-line installation of the server.

5.4.12.2. Possible values

 1. Directs Setup not to check for preinstallation requirements on the computer on which you are running Setup. You must set this property to 1 or Setup fails.

5.4.12.3. Default value

0

5.4.12.4. Remarks

This is a required value.

5.4.13. RECOVERYMODEL

5.4.13.1. Definition

Specifies the SQL Server database recovery model.

5.4.13.2. Possible values

 SIMPLE. Specifies the SQL Server Simple Recovery model. Lets you recover the database from the end of any backup. Requires the least administration and consumes the lowest amount of disk space.

 FULL. Specifies the Full Recovery model. Lets you recover the database from any point in time. However, this model consumes the largest amount of disk space for its logs.

 BULK_LOGGED. Specifies the Bulk-Logged Recovery model. Lets you recover the database from the end of any backup. This model consumes less logging space than the Full Recovery model, but provides more protection for data than the Simple Recovery model.

5.4.13.3. Default value

SIMPLE

5.4.13.4. Remarks

For more information about SQL Server recovery models, see Microsoft's MSDN Web site and search for "Selecting a Recovery Model."

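If your backup policy requires point-in-time recovery of the Workload Balancing database, you can request the Full Recovery model at install time by adding RECOVERYMODEL to a data-store installation command. The line below is a sketch; everything other than the documented property names is a placeholder.

msiexec.exe /I C:\path-to-msi\workloadbalancingx64.msi /quiet PREREQUISITES_PASSED="1" ADDLOCAL="Database" DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS" DBNAME="WorkloadBalancing" RECOVERYMODEL="FULL" /l*v datastore_install.log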
5.4.14. USERORGROUPACCOUNT

5.4.14.1. Definition

Specifies the account or group name that corresponds with the account XenServer will use when it connects to Workload Balancing. Specifying the name lets Workload Balancing recognize the connection.

5.4.14.2. Possible values

 User name. Specify the name of the account you created for XenServer (for example, workloadbalancing_user).

 Group name. Specify the group name for the account you created. Specifying a group name lets more than one person in your organization log on to Workload Balancing with their own credentials. (Otherwise, you will have to provide all users with the same set of credentials to use for Workload Balancing.)

5.4.14.3. Default value

Blank.

5.4.14.4. Remarks

 This is a required parameter. You must use this parameter with Section 5.4.15, “WEBSERVICE_USER_CB”.
 To specify this parameter, you must create an account on the Workload Balancing server before running Setup. For more information, see Section 5.5.4, “Authorization for Workload Balancing”.
 This property does not require specifying another property for the password. You do not specify the password until you configure Workload Balancing.

5.4.15. WEBSERVICE_USER_CB

5.4.15.1. Definition

Specifies the authorization type, user account or group name, for the account you created for XenServer before Setup. For more information, see Section 5.5.4, “Authorization for Workload Balancing”.

5.4.15.2. Possible values

 0. Specifies that the value you supply with USERORGROUPACCOUNT corresponds with a group.
 1. Specifies that the value you supply with USERORGROUPACCOUNT corresponds with a user account.

5.4.15.3. Default value

0

5.4.15.4. Remarks

This is a required property. You must use this parameter with Section 5.4.14, “USERORGROUPACCOUNT”.

5.4.16. WINDOWS_AUTH

5.4.16.1. Definition

Lets you select the authentication mode, either Windows or SQL Server, when connecting to the database server during Setup. For more information about database authentication during Setup, see SQL Server Database Authentication Requirements.

5.4.16.2. Possible values

 0. SQL Server authentication
 1. Windows authentication

5.4.16.3. Default value

1

5.4.16.4. Remarks

If you are logged on to the server on which you are installing Workload Balancing with Windows credentials that have an account on the database server, you do not need to set this property.

If you specify WINDOWS_AUTH, you must also specify DBPASSWORD if you want to specify an account other than the one with which you are logged on to the server on which you are running Setup.

The account you specify must be a login on the SQL Server database with sysadmin privileges.

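If the account running Setup is not a SQL Server login, you can authenticate with a SQL Server account instead by setting WINDOWS_AUTH to 0 and supplying DBUSERNAME and DBPASSWORD. The following sketch shows only that pattern; the account name, password, and server values are placeholders.

msiexec.exe /I C:\path-to-msi\workloadbalancingx64.msi /quiet PREREQUISITES_PASSED="1" ADDLOCAL="Database" DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS" DBNAME="WorkloadBalancing" WINDOWS_AUTH="0" DBUSERNAME="wlb_setup_login" DBPASSWORD="placeholder-password" /l*v datastore_install.log

This form requires that the SQL Server instance is configured for Mixed Mode authentication and that the login you supply has sysadmin privileges, as described in Section 5.3.2.2.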
5.5. Initializing and Configuring Workload Balancing

Following Workload Balancing Setup, you must configure and enable (that is, initialize) Workload Balancing on each resource pool you want to monitor before Workload Balancing can gather data for that pool.

Before initializing Workload Balancing, configure your antivirus software to exclude Workload Balancing folders, as described in Section 5.5.5, “Configuring Antivirus Software”.

After the initial configuration, the Initialize button on the WLB tab changes to a Disable button. This is because after initialization you cannot modify the Workload Balancing server a resource pool uses without disabling Workload Balancing on that pool and then reconfiguring it. For information, see Section 5.10.2, “Reconfiguring a Resource Pool to Use Another WLB Server”.

Important

Following initial configuration, Citrix strongly recommends you evaluate your performance thresholds as described in Section 5.9.3.1, “Evaluating the Effectiveness of Your Optimization Thresholds”. It is critical to set Workload Balancing to the correct thresholds for your environment or its recommendations might not be appropriate.

You can use the Configure Workload Balancing wizard in XenCenter or the XE commands to initialize Workload Balancing or modify the configuration settings.

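For readers who prefer the CLI route mentioned above, the command below is a hedged sketch of initializing a pool with xe rather than the wizard; it assumes the pool-initialize-wlb command and parameter names available in this release, and every value shown is a placeholder.

xe pool-initialize-wlb wlb_url=wlbserver.example.com:8012 wlb_username=workloadbalancing_user wlb_password=<wlb-account-password> xenserver_username=root xenserver_password=<pool-master-password>

The wlb_url host and port must match the Web Service Host address and HTTPS port chosen during Setup, and the two credential pairs correspond to the two accounts described in Section 5.5.4, “Authorization for Workload Balancing”.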
5.5.1. Initialization Overview

Initial configuration requires that you:

1. Specify the Workload Balancing server you want the resource pool to use and its port number.

2. Specify the credentials for communications, including the credentials:

 XenServer will use to connect to the Workload Balancing server
 Workload Balancing will use to connect to XenServer

For more information, see Section 5.5.4, “Authorization for Workload Balancing”.

3. Change the optimization mode, if desired, from Maximize Performance, the default setting, to Maximize Density. For information about the placement strategies, see Section 5.5.6, “Changing the Placement Strategy”.

4. Modify performance thresholds, if desired. You can modify the default utilization values and the critical thresholds for resources. For information about the performance thresholds, see Section 5.5.7, “Changing the Performance Thresholds and Metric Weighting”.

5. Modify metric weighting, if desired. You can modify the importance Workload Balancing assigns to metrics when it evaluates resource usage. For information about metric weighting, see Section 5.5.7.2, “Metric Weighting Factors”.

5.5.2. To initialize Workload Balancing

Use this procedure to enable and perform the initial configuration of Workload Balancing for a resource pool.

Before the Workload Balancing feature can begin collecting performance data, the XenServer hosts you want to balance must be part of a resource pool. To complete this wizard, you need the:

 IP address (or NetBIOS name) and (optionally) port of the Workload Balancing server
 Credentials for the resource pool you want Workload Balancing to monitor
 Credentials for the account you created on the Workload Balancing server

1. In the Resources pane of XenCenter, select XenCenter > <your-resource-pool>.

2. In the Properties pane, click the WLB tab.

3. In the WLB tab, click Initialize WLB.

4. In the Configure Workload Balancing wizard, click Next.

5. In the Server Credentials page, enter the following:

a. In the WLB server name box, type the IP address or NetBIOS name of the Workload Balancing server. You can also enter a fully qualified domain name (FQDN).

b. (Optional.) Edit the port number if you want XenServer to connect to Workload Balancing using a different port. Entering a new port number here sets a different communications port on the Workload Balancing server. By default, XenServer connects to Workload Balancing (specifically the Web Service Host service) on port 8012.

Note

Do not edit this port number unless you have changed it during Workload Balancing Setup. The port number value specified during Setup and in the Configure Workload Balancing wizard must match.

c. Enter the user name (for example, workloadbalancing_user) and password the computers running XenServer will use to connect to the Workload Balancing server. This must be the account or group that was configured during the installation of the Workload Balancing server. For information, see Section 5.5.4, “Authorization for Workload Balancing”.

d. Enter the user name and password for the pool you are configuring (typically the password for the pool master). Workload Balancing will use these credentials to connect to the computers running XenServer in that pool. To use the credentials with which you are currently logged into XenServer, select the Use the current XenCenter credentials check box.

6. In the Basic Configuration page, do the following:

Select one of these optimization modes:

o Maximize Performance. (Default.) Attempts to spread workload evenly across all physical hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for all hosts.

o Maximize Density. Attempts to fit as many virtual machines as possible onto a physical host. The goal is to minimize the number of physical hosts that must be online.

For information, see Section 5.5.6, “Changing the Placement Strategy”.

If you want to allow placement recommendations that allow more virtual CPUs than a host's physical CPUs, select the Overcommit CPU check box. For example, by default, if your resource pool has eight physical CPUs and you have eight virtual machines, XenServer only lets you have one virtual CPU for each physical CPU. Unless you select Overcommit CPU, XenServer will not let you add a ninth virtual machine. In general, Citrix does not recommend enabling this option since it can degrade performance.

If you want to change the number of weeks historical data should be stored for this resource pool, type a new value in the Weeks box. This option is not available if the data store is on SQL Server Express.

7. Do one of the following:

 If you want to modify advanced settings for thresholds and change the priority given to specific resources, click Next and continue with this procedure.
 If you do not want to configure additional settings, click Finish.

8. In the Critical Thresholds page, accept or enter a new value in the Critical Thresholds boxes. Workload Balancing uses these thresholds when making virtual-machine placement and pool-optimization recommendations. Workload Balancing strives to keep resource utilization on a host below the critical values set. For information about adjusting these thresholds, see Critical Thresholds.

9. In the Metric Weighting page, if desired, adjust the sliders beside the individual resources. Moving the slider towards Less Important indicates that ensuring virtual machines always have the highest amount of this resource available is not as vital on this resource pool. For information about adjusting metric weighting, see Metric Weighting Factors.

10. Click Finish.

5.5.3. To edit the Workload Balancing configuration for a pool

After initialization, you can use this procedure to edit the Workload Balancing performance thresholds and placement strategies for a specific resource pool.

1. In the Resources pane of XenCenter, select XenCenter > <your-resource-pool>.

2. In the Properties pane, click the WLB tab.

3. In the WLB tab, click Configure WLB.

4. In the Configure Workload Balancing wizard, click Next.

5. In the Basic Configuration page, do the following:

Select one of these optimization modes:

o Maximize Performance. (Default.) Attempts to spread workload evenly across all physical hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for all hosts.

o Maximize Density. Attempts to fit as many virtual machines as possible onto a physical host. The goal is to minimize the number of physical hosts that must be online.

For information, see Section 5.5.6, “Changing the Placement Strategy”.

If you want to allow placement recommendations that allow more virtual CPUs than a host's physical CPUs, select the Overcommit CPU check box. For example, by default, if your resource pool has eight physical CPUs and you have eight virtual machines, XenServer only lets you have one virtual CPU for each physical CPU. Unless you select Overcommit CPU, XenServer will not let you add a ninth virtual machine. In general, Citrix does not recommend enabling this option since it can degrade performance.

If you want to change the number of weeks historical data should be stored for this resource pool, type a new value in the Weeks box. This option is not available if the data store is on SQL Server Express.

6. Do one of the following:

 If you want to modify advanced settings for thresholds and change the priority given to specific resources, click Next and continue with this procedure.
 If you do not want to configure additional settings, click Finish.

7. In the Critical Thresholds page, accept or enter a new value in the Critical Thresholds boxes. Workload Balancing uses these thresholds when making virtual-machine placement and pool-optimization recommendations. Workload Balancing strives to keep resource utilization on a host below the critical values set. For information about adjusting these thresholds, see Section 5.5.7.1, “Critical Thresholds”.

8. In the Metric Weighting page, if desired, adjust the sliders beside the individual resources. Moving the slider towards Less Important indicates that ensuring virtual machines always have the highest amount of this resource available is not as vital on this resource pool. For information about adjusting metric weighting, see Section 5.5.7.2, “Metric Weighting Factors”.

9. Click Finish.

5.5.4. Authorization for Workload Balancing

When you are configuring a XenServer resource pool to use Workload Balancing, you must specify credentials for two accounts:

 User Account for Workload Balancing to Connect to XenServer. Workload Balancing uses a XenServer user account to connect to XenServer. You provide Workload Balancing with this account's credentials when you run the Configure Workload Balancing wizard. Typically, you specify the credentials for the pool (that is, the pool master's credentials).

 User Account for XenServer to Connect to Workload Balancing. XenServer communicates with the Web Service Host using the user account you created before Setup. During Workload Balancing Setup, you specified the authorization type (a single user or group) and the user or group with permissions to make requests of the Web Service Host service. During configuration, you must provide XenServer with this account's credentials when you run the Configure Workload Balancing wizard.

5.5.5. Configuring Antivirus Software

By default, most antivirus programs are configured to scan all files on the hard disk. If an antivirus program scans the frequently active Workload Balancing database, it impedes or slows down the normal operation of Workload Balancing. Consequently, you must configure antivirus software running on your Workload Balancing servers to exclude specific processes and files. Citrix recommends configuring your antivirus software to exclude these folders before you initialize Workload Balancing and begin collecting data.

To configure antivirus software on the servers running Workload Balancing components:

•  Exclude the following folder, which contains the Workload Balancing log:

On Windows XP and Windows Server 2003: %Documents and Settings%\All Users\Application Data\Citrix\Workload Balancing\Data\Logfile.log

On Windows Vista and Windows Server 2008: %Program Data%\Citrix\Workload Balancing\Data\Logfile.log

•  Exclude the SQL Server database folder. For example:

On SQL Server: %Program Files%\Microsoft SQL Server\MSSQL\Data\

On SQL Server Express: %Program Files%\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Data\

These paths may vary according to your operating system and SQL Server version.

Note

These paths and file names are for 32-bit default installations. Use the values that apply to your installation. For example, paths for 64-bit edition files might be in the %Program Files (x86)% folder.

5.5.6. Changing the Placement Strategy

The Workload Balancing feature bases its optimization recommendations on whether you choose Maximize Performance or Maximize Density as your optimization mode.

5.5.6.1. Maximize Performance

(Default.) Workload Balancing attempts to spread workload evenly across all physical hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for all hosts. When Maximize Performance is your placement strategy, Workload Balancing recommends optimization when a virtual machine reaches the High threshold.

5.5.6.2. Maximize Density

Workload Balancing attempts to fit as many virtual machines as possible onto a physical host. The goal is to minimize the number of physical hosts that must be online.

When you select Maximize Density as your placement strategy, you can specify rules similar to the ones in Maximize Performance. However, Workload Balancing uses these rules to determine how it can pack virtual machines onto a host. When Maximize Density is your placement strategy, Workload Balancing recommends optimization when a virtual machine reaches the Critical threshold.

5.5.7. Changing the Performance Thresholds and Metric Weighting

Workload Balancing evaluates CPU, Memory, Network Read, Network Write, Disk Read, and Disk Write utilization for physical hosts in a resource pool.

Workload Balancing determines whether to recommend relocating a workload and whether a physical host is suitable for a virtual-machine workload by evaluating:

Whether a resource's critical threshold is met on the physical host

(If the critical threshold is met) the importance assigned to a resource

Note

To prevent data from appearing artificially high, Workload Balancing evaluates the daily averages for a resource and smooths utilization spikes.

5.5.7.1. Critical Thresholds

When evaluating utilization, Workload Balancing compares its daily average to four thresholds: low, medium, high, and critical. After you specify (or accept the default) critical threshold, Workload Balancing sets the other thresholds relative to the critical threshold on a pool.

5.5.7.2. Metric Weighting Factors

Workload Balancing lets you indicate if a resource's utilization is significant enough to warrant or prevent relocating a workload. For example, if you set memory as a Less Important factor in placement recommendations, Workload Balancing may still recommend placing virtual machines you are relocating on a server with high memory utilization.

The effect of the weighting varies according to the placement strategy you selected. For example, if you selected Maximum Performance and you set Network Writes towards Less Important, if the Network Writes on that server exceed the critical threshold you set, Workload Balancing still makes a recommendation to place a virtual machine's workload on a server but does so with the goal of ensuring performance for the other resources.

If you selected Maximum Density as your placement recommendation and you specify Network Writes as Less Important, Workload Balancing will still recommend placing workloads on that host if the Network Writes exceed the critical threshold you set. However, the workloads are placed in the densest possible way.

5.5.7.3. Editing Resource Settings

For each resource pool, you can edit a resource's critical performance threshold and modify the importance or "weight" that Workload Balancing gives to a resource.

Citrix recommends using most of the defaults in the Configure Workload Balancing wizard initially. However, you might need to change the network and disk thresholds to align them with the hardware in your environment.

After Workload Balancing has been enabled for a while, Citrix recommends evaluating your performance thresholds and determining if you need to edit them. For example, consider if you are:

Getting optimization recommendations when they are not yet required. If this is the case, try adjusting the thresholds until Workload Balancing begins providing suitable optimization recommendations.

Not getting recommendations when you think your network has insufficient bandwidth. If this is the case, try lowering the network critical thresholds until Workload Balancing begins providing optimization recommendations.

Before you edit your thresholds, you might find it useful to generate a host health history report for each physical host in the pool. See Section 5.9.6.1, “Host Health History” for more information.

5.6. Accepting Optimization Recommendations

Workload Balancing provides recommendations about ways you can move virtual machines to optimize your environment. Optimization recommendations appear in the WLB tab in XenCenter. Optimization recommendations are based on the:

Placement strategy you select (that is, the placement optimization mode), as described in Section 5.5.6, “Changing the Placement Strategy”

Performance metrics for resources such as a physical host's CPU, memory, network, and disk utilization

The optimization recommendations display the name of the virtual machine that Workload Balancing recommends relocating, the host it currently resides on, and the host Workload Balancing recommends as the machine's new location. The optimization recommendations also display the reason Workload Balancing recommends moving the virtual machine (for example, "CPU" to improve CPU utilization).

After you accept an optimization recommendation, XenServer relocates all virtual machines listed as recommended for optimization.

Tip

You can find out the optimization mode for a resource pool by selecting the pool in XenCenter and checking the Configuration section of the WLB tab.

5.6.1. To accept an optimization recommendation

1. In the Resources pane of XenCenter, select the resource pool for which you want to display recommendations.
2. In the Properties pane, click the WLB tab. If there are any recommended optimizations for any virtual machines on the selected resource pool, they display on the WLB tab.
3. To accept the recommendations, click Apply Recommendations. XenServer begins moving all virtual machines listed in the Optimization Recommendations section to their recommended servers. After you click Apply Recommendations, XenCenter automatically displays the Logs tab so you can see the progress of the virtual machine migration.

5.7. Choosing an Optimal Server for VM Initial Placement, Migrate, and Resume

When Workload Balancing is enabled and you restart a virtual machine that is offline, XenCenter provides recommendations to help you determine the optimal physical host in the resource pool on which to start the virtual machine. Workload Balancing makes these placement recommendations by using performance metrics it previously gathered for that virtual machine and the physical hosts in the resource pool. Likewise, when Workload Balancing is enabled, if you migrate a virtual machine to another host, XenCenter recommends servers to which you can move that virtual machine. This Workload Balancing enhancement is also available for the Initial (Start On) Placement and Resume features.

When you use these features with Workload Balancing enabled, host recommendations appear as star ratings beside the name of the physical host. Five empty stars indicates the lowest-rated (least optimal) server. When it is not possible to start or move a virtual machine to a host, an (X) appears beside the host name with the reason.

5.7.1. To start a virtual machine on the optimal server

1. In the Resources pane of XenCenter, select the virtual machine you want to start.
2. From the VM menu, select Start on Server and then select one of the following:

•  Optimal Server. The optimal server is the physical host that is best suited to the resource demands of the virtual machine you are starting. Workload Balancing determines the optimal server based on its historical records of performance metrics and your placement strategy. The optimal server is the server with the most stars.

•  One of the servers with star ratings listed under the Optimal Server command. Five stars indicates the most-recommended (optimal) server and five empty stars indicates the least-recommended server.

5.7.1.1. To resume a virtual machine on the optimal server

1. In the Resources pane of XenCenter, select the suspended virtual machine you want to resume.
2. From the VM menu, select Resume on Server and then select one of the following:

•  Optimal Server. The optimal server is the physical host that is best suited to the resource demands of the virtual machine you are resuming. Workload Balancing determines the optimal server based on its historical records of performance metrics and your placement strategy. The optimal server is the server with the most stars.

•  One of the servers with star ratings listed under the Optimal Server command. Five stars indicates the most-recommended (optimal) server and five empty stars indicates the least-recommended server.

5.8. Entering Maintenance Mode with Workload Balancing Enabled

When Workload Balancing is enabled, if you take a physical host offline for maintenance (that is, suspend a server by entering Maintenance Mode), XenServer automatically migrates the virtual machines running on that host to their optimal servers when available. XenServer migrates them based on Workload Balancing recommendations (performance data, your placement strategy, and performance thresholds).

If an optimal server is not available, the words Click here to suspend the VM appear in the Enter Maintenance Mode dialog box. In this case, Workload Balancing does not recommend a placement because no host has sufficient resources to run this virtual machine. You can either suspend this virtual machine or exit Maintenance Mode and suspend a virtual machine on another host in the same pool. Then, if you reenter the Enter Maintenance Mode dialog box, Workload Balancing might be able to list a host that is a suitable candidate for migration.

Note

When you take a server offline for maintenance and Workload Balancing is enabled, the words "Workload Balancing" appear in the upper-right corner of the Enter Maintenance Mode dialog box.

5.8.1. To enter maintenance mode with Workload Balancing enabled

1. In the Resources pane of XenCenter, select the physical host that you want to take offline. From the Server menu, select Enter Maintenance Mode.
2. In the Enter Maintenance Mode dialog box, click Enter maintenance mode. The virtual machines running on the server are automatically migrated to the optimal host based on Workload Balancing's performance data, your placement strategy, and performance thresholds.

To take the server out of maintenance mode, right-click the server and select Exit Maintenance Mode. When you remove a server from maintenance mode, XenServer automatically restores that server's original virtual machines to that server.

5.9. Working with Workload Balancing Reports

This topic provides general information about Workload Balancing historical reports and an overview of where to find additional information about these reports.

To generate a Workload Balancing report, you must have installed the Workload Balancing component, registered at least one resource pool with Workload Balancing, and configured Workload Balancing on at least one resource pool.

5.9.1. Introduction

Workload Balancing provides reporting on three types of objects: physical hosts, resource pools, and virtual machines. At a high level, Workload Balancing provides two types of reports:

Historical reports that display information by date

"Roll up" style reports

Workload Balancing provides some reports for auditing purposes, so you can determine, for example, the number of times a virtual machine moved.

5.9.2. Types of Workload Balancing Reports

Workload Balancing includes the following reports:

•  Section 5.9.6.1, “Host Health History”. Similar to Pool Health History but filtered by a specific host.

•  Section 5.9.6.2, “Optimization Performance History”. Shows resource usage before and after executing optimization recommendations.

•  Section 5.9.6.3, “Pool Health”. Shows aggregated resource usage for a pool. Helps you evaluate the effectiveness of your optimization thresholds.

•  Section 5.9.6.4, “Pool Health History”. Displays resource usage for a pool over time. Helps you evaluate the effectiveness of your optimization thresholds.

•  Section 5.9.6.5, “Virtual Machine Motion History”. Provides information about how many times virtual machines moved on a resource pool, including the name of the virtual machine that moved, number of times it moved, and physical hosts affected.

•  Section 5.9.6.6, “Virtual Machine Performance History”. Displays key performance metrics for all virtual machines that operated on a host during the specified timeframe.

5.9.3. Using Workload Balancing Reports for Tasks

The Workload Balancing reports can help you perform capacity planning, determine virtual server health, and evaluate the effectiveness of your configured threshold levels.

5.9.3.1. Evaluating the Effectiveness of Your Optimization Thresholds

You can use the Pool Health report to evaluate the effectiveness of your optimization thresholds. Workload Balancing provides default threshold settings. However, you might need to adjust these defaults for them to provide value in your environment. If you do not have the optimization thresholds adjusted to the correct level for your environment, Workload Balancing recommendations might not be appropriate for your environment.

5.9.4. Creating Workload Balancing Reports

This topic explains how to generate, navigate, print, and export Workload Balancing reports.

5.9.4.1. To generate a Workload Balancing report

1. In XenCenter, from the Pool menu, select View Workload Reports.
2. From the Workload Reports screen, select a report from the Select a Report list box.
3. Select the Start Date and the End Date for the reporting period. Depending on the report you select, you might need to specify a host in the Host list box.
4. Click Run Report. The report displays in the report window.

5.9.4.2. To navigate in a Workload Balancing Report

After generating a report, you can use the toolbar buttons in the report to navigate and perform certain tasks. To display the name of a toolbar button, hold your mouse over the toolbar icon.

Table 5.1. Report Toolbar Buttons

Document Map. Lets you display a document map that helps you navigate through long reports.

Page Forward/Back. Lets you move one page ahead or back in the report.

Back to Parent Report. Lets you return to the parent report when working with drill-through reports.

Stop Rendering. Cancels the report generation.

Refresh. Lets you refresh the report display.

Print. Lets you print a report and specify general printing options, such as the printer, the number of pages, and the number of copies.

Print Layout. Lets you display a preview of the report before you print it.

Page Setup. Lets you specify printing options such as the paper size, page orientation, and margins.

Export. Lets you export the report as an Acrobat (.PDF) file or as an Excel file with a .XLS extension.

Find. Lets you search for a word in a report, such as the name of a virtual machine.

5.9.4.3. To print a Workload Balancing report

Citrix recommends printing Workload Balancing reports in Landscape orientation.

1. After generating the report, click Page Setup. Page Setup also lets you control the margins and paper size.
2. In the Page Setup dialog, select Landscape and click OK.
3. (Optional.) If you want to preview the print job, click Print Layout.
4. Click Print.

5.9.4.3.1. To export a Workload Balancing report

You can export a report in Microsoft Excel and Adobe Acrobat (PDF) formats.

After generating the report, click Export and select one of the following:

•  Excel
•  Acrobat (PDF) file

5.9.5. Generating Workload Balancing Reports

The Workload Reports window lets you generate reports for physical hosts, resource pools, and virtual machines.

5.9.5.1. Report Generation Features

To generate a report, select a report type, the date range, the host (if applicable), and click Run Report. For more detail, see Section 5.9.4, “Creating Workload Balancing Reports”.

5.9.5.2. Types of Workload Balancing Reports

Workload Balancing includes the following reports:

•  Section 5.9.6.1, “Host Health History”. Similar to Pool Health History but filtered by a specific host.

•  Section 5.9.6.2, “Optimization Performance History”. Shows resource usage before and after executing optimization recommendations.

•  Section 5.9.6.3, “Pool Health”. Shows aggregated resource usage for a pool. Helps you evaluate the effectiveness of your optimization thresholds.

•  Section 5.9.6.4, “Pool Health History”. Displays resource usage for a pool over time. Helps you evaluate the effectiveness of your optimization thresholds.

•  Section 5.9.6.5, “Virtual Machine Motion History”. Provides information about how many times virtual machines moved on a resource pool, including the name of the virtual machine that moved, number of times it moved, and physical hosts affected.

•  Section 5.9.6.6, “Virtual Machine Performance History”. Displays key performance metrics for all virtual machines that operated on a host during the specified timeframe.

5.9.5.3. Toolbar Buttons

The following toolbar buttons in the Workload Reports window become available after you generate a report. To display the name of a toolbar button, hold your mouse over the toolbar icon.

Table 5.2. Report Toolbar Buttons

Document Map. Lets you display a document map that helps you navigate through long reports.

Page Forward/Back. Lets you move one page ahead or back in the report.

Back to Parent Report. Lets you return to the parent report when working with drill-through reports.

Stop Rendering. Cancels the report generation.

Refresh. Lets you refresh the report display.

Print. Lets you print a report and specify general printing options, such as the printer, the number of pages, and the number of copies.

Print Layout. Lets you display a preview of the report before you print it.

Page Setup. Lets you specify printing options such as the paper size, page orientation, and margins.

Export. Lets you export the report as an Acrobat (.PDF) file or as an Excel file with a .XLS extension.

Find. Lets you search for a word in a report, such as the name of a virtual machine.

5.9.6. Workload Balancing Report Glossary

This topic provides information about the following Workload Balancing reports.

5.9.6.1. Host Health History

This report displays the performance of resources (CPU, memory, network reads, and network writes) on a specific host in relation to threshold values.

The colored lines (red, green, yellow) represent your threshold values. You can use this report with the Pool Health report for a host to determine how a particular host's performance might be affecting overall pool health. When you are editing the performance thresholds, you can use this report for insight into host performance.

You can display resource utilization as a daily or hourly average. The hourly average lets you see the busiest hours of the day, averaged, for the time period.

To view report data grouped by hour, expand + Click to view report data grouped by hour for the time period under the Host Health History title bar.

Workload Balancing displays the average for each hour for the time period you set. The data point is based on a utilization average for that hour for all days in the time period. For example, in a report for May 1, 2009 to May 15, 2009, the Average CPU Usage data point represents the resource utilization of all fifteen days at 12:00 hours combined together as an average. That is, if CPU utilization was 82% at 12PM on May 1st, 88% at 12PM on May 2nd, and 75% on all other days, the average displayed for 12PM is 76.3%.

Note

Workload Balancing smooths spikes and peaks so data does not appear artificially high.

5.9.6.2. Optimization Performance History

The optimization performance report displays optimization events (that is, when you optimized a resource pool) against that pool's average resource usage. Specifically, it displays resource usage for CPU, memory, network reads, and network writes.

The dotted line represents the average usage across the pool over the period of days you select. A blue bar indicates the day on which you optimized the pool.

This report can help you determine if Workload Balancing is working successfully in your environment. You can use this report to see what led up to optimization events (that is, the resource usage before Workload Balancing recommended optimizing).

This report displays average resource usage for the day; it does not display the peak utilization, such as when the system is stressed. You can also use this report to see how a resource pool is performing if Workload Balancing is not making optimization recommendations.

In general, resource usage should decline or be steady after an optimization event. If you do not see improved resource usage after optimization, consider readjusting threshold values. Also, consider whether or not the resource pool has too many virtual machines and whether or not new virtual machines were added or removed during the timeframe you specified.

5.9.6.3. Pool Health

The pool health report displays the percentage of time a resource pool and its hosts spent in four different threshold ranges: Critical, High, Medium, and Low. You can use the Pool Health report to evaluate the effectiveness of your performance thresholds.

A few points about interpreting this report:

Resource utilization in the Average Medium Threshold (blue) is the optimum resource utilization regardless of the placement strategy you selected. Likewise, the blue section on the pie chart indicates the amount of time that host used resources optimally.

Resource utilization in the Average Low Threshold Percent (green) is not necessarily positive. Whether Low resource utilization is positive depends on your placement strategy. For example, if your placement strategy is Maximum Density and most of the time your resource usage was green, Workload Balancing might not be fitting the maximum number of virtual machines possible on that host or pool. If this is the case, you should adjust your performance threshold values until the majority of your resource utilization falls into the Average Medium (blue) threshold range.

Resource utilization in the Average Critical Threshold Percent (red) indicates the amount of time average resource utilization met or exceeded the Critical threshold value.

If you double-click on a pie chart for a host's resource usage, XenCenter displays the Host Health History report for that resource (for example, CPU) on that host. Clicking the Back to Parent Report toolbar button returns you to the Pool Health history report.

If you find the majority of your report results are not in the Average Medium Threshold range, you probably need to adjust the Critical threshold for this pool. While Workload Balancing provides default threshold settings, these defaults are not effective in all environments. If you do not have the thresholds adjusted to the correct level for your environment, Workload Balancing's optimization and placement recommendations might not be appropriate. For more information, see Section 5.5.7, “Changing the Performance Thresholds and Metric Weighting”.

Note

The High, Medium, and Low threshold ranges are based on the Critical threshold value you set when you initialized Workload Balancing.

5.9.6.4. Pool Health History

This report provides a line graph of resource utilization on all physical hosts in a pool over time. It lets you see the trend of resource utilization - if it tends to be increasing in relation to your thresholds (Critical, High, Medium, and Low). You can evaluate the effectiveness of your performance thresholds by monitoring trends of the data points in this report.

Workload Balancing extrapolates the threshold ranges from the values you set for the Critical thresholds when you initialized Workload Balancing. Although similar to the Pool Health report, the Pool Health History report displays the average utilization for a resource on a specific date rather than the amount of time overall the resource spent in a threshold.

With the exception of the Average Free Memory graph, the data points should never average above the Critical threshold line (red). For the Average Free Memory graph, the data points should never average below the Critical threshold line (which is at the bottom of the graph). Because this graph displays free memory, the Critical threshold is a low value, unlike the other resources.

A few points about interpreting this report:

When the Average Usage line in the chart approaches the Average Medium Threshold (blue) line, it indicates the pool's resource utilization is optimum regardless of the placement strategy configured.

Resource utilization approaching the Average Low Threshold (green) is not necessarily positive. Whether Low resource utilization is positive depends on your placement strategy. For example, if your placement strategy is Maximum Density and most days the Average Usage line is at or below the green line, Workload Balancing might not be placing virtual machines as densely as possible on that pool. If this is the case, you should adjust the pool's Critical threshold values until the majority of its resource utilization falls into the Average Medium (blue) threshold range.

When the Average Usage line intersects with the Average Critical Threshold Percent (red), this indicates the days when the average resource utilization met or exceeded the Critical threshold value for that resource.

If you find the data points in the majority of your graphs are not in the Average Medium Threshold range, but you are satisfied with the performance of this pool, you might need to adjust the Critical threshold for this pool. For more information, see Section 5.5.7, “Changing the Performance Thresholds and Metric Weighting”.

5.9.6.5. Virtual Machine Motion History

This line graph displays the number of times virtual machines moved on a resource pool over a period of time. It indicates if a move resulted from an optimization recommendation and to which host the virtual machine moved. This report also indicates the reason for the optimization. You can use this report to audit the number of moves on a pool.

Some points about interpreting this report:

The numbers on the left side of the chart correspond with the number of moves possible, which is based on how many virtual machines are in a resource pool.

You can look at details of the moves on a specific date by expanding the + sign in the Date section of the report.

5.9.6.6. Virtual Machine Performance History

This report displays performance data for each virtual machine on a specific host for a time period you specify. Workload Balancing bases the performance data on the amount of virtual resources allocated for the virtual machine. For example, if the Average CPU Usage for your virtual machine is 67%, this means that your virtual machine was using, on average, 67% of its virtual CPU for the period you specified.

The initial view of the report displays an average value for resource utilization over the period you specified.

Expanding the + sign displays line graphs for individual resources. You can use these graphs to see trends in resource utilization over time.

This report displays data for CPU Usage, Free Memory, Network Reads/Writes, and Disk Reads/Writes.

5.10. Administering Workload Balancing

Some administrative tasks you may want to perform on Workload Balancing include disabling Workload Balancing on a pool, pointing a pool to use a different Workload Balancing server, and uninstalling Workload Balancing.

5.10.1. Disabling Workload Balancing on a Resource Pool

You can disable Workload Balancing for a resource pool, either temporarily or permanently:

•  Temporarily. Disabling Workload Balancing temporarily stops XenCenter from displaying recommendations for the specified resource pool. When you disable Workload Balancing temporarily, data collection stops for that resource pool.

•  Permanently. Disabling Workload Balancing permanently deletes information about the specified resource pool from the data store and stops data collection for that pool.

To disable Workload Balancing on a resource pool

1. In the Resource pane of XenCenter, select the resource pool for which you want to disable Workload Balancing.
2. In the WLB tab, click Disable WLB. A dialog box appears asking if you want to disable Workload Balancing for the pool.
3. Click Yes to disable Workload Balancing for the pool. Important: If you want to disable Workload Balancing permanently for this resource pool, select the Remove all resource pool information from the Workload Balancing Server check box.

XenServer disables Workload Balancing for the resource pool, either temporarily or permanently depending on your selections.

If you disabled Workload Balancing temporarily on a resource pool, to reenable Workload Balancing, click Enable WLB in the WLB tab.

If you disabled Workload Balancing permanently on a resource pool, to reenable it, you must reinitialize it. For information, see To initialize Workload Balancing.

5.10.2. Reconfiguring a Resource Pool to Use Another WLB Server

You can reconfigure a resource pool to use a different Workload Balancing server. However, to prevent old data collectors from remaining inadvertently configured and running against a pool, you must disable Workload Balancing permanently for that resource pool before pointing the pool to another data collector. After disabling Workload Balancing, you can re-initialize the pool and specify the name of the new Workload Balancing server.

To use a different Workload Balancing server

1. On the resource pool you want to point to a different Workload Balancing server, disable Workload Balancing permanently. This deletes the pool's information from the data store and stops data collection. For instructions, see Section 5.10.1, “Disabling Workload Balancing on a Resource Pool”.
2. In the Resource pane of XenCenter, select the resource pool for which you want to reenable Workload Balancing.
3. In the WLB tab, click Initialize WLB. The Configure Workload Balancing wizard appears.
4. Reinitialize the resource pool and specify the new server's credentials in the Configure Workload Balancing wizard. You must provide the same information as you do when you initially configure a resource pool for use with Workload Balancing. For information, see Section 5.5.2, “To initialize Workload Balancing”.

5.10.3. Uninstalling Workload Balancing

Citrix recommends uninstalling Workload Balancing from the Control Panel in Windows.

When you uninstall Workload Balancing, only the Workload Balancing software is removed from the Workload Balancing server. The data store remains on the system running SQL Server. To remove a Workload Balancing data store, you must use SQL Server Management Studio (SQL Server 2005 and SQL Server 2008).

If you want to uninstall both Workload Balancing and SQL Server from your computer, uninstall Workload Balancing first and then delete the database using SQL Server Management Studio.

The data directory, usually located at %Documents and Settings%\All Users\Application Data\Citrix\Workload Balancing\Data, is not removed when you uninstall Workload Balancing. You can remove the contents of the data directory manually.

5.11. Troubleshooting Workload Balancing

While Workload Balancing usually runs smoothly, this series of topics provides guidance in case you encounter issues.

Here are a few tips for resolving general Workload Balancing issues:

5.11.1. General Troubleshooting Tips

Start troubleshooting by reviewing the Workload Balancing log. On the server where you installed Workload Balancing, you can find the log in these locations (by default):

o Windows Server 2003 and Windows XP: %Documents and Settings%\All Users\Application Data\Citrix\Workload Balancing\Data\LogFile.log
o Windows Server 2008 and Windows Vista: %Users%\All Users\Citrix\Workload Balancing\Data\LogFile.log

Check the logs in XenCenter's Logs tab for more information.

If you receive an error message, review the XenCenter log, which is stored in these locations (by default):

o Windows Server 2003 and Windows XP: %Documents and Settings%\yourusername\Application Data\Citrix\XenCenter\logs\XenCenter.log
o Windows Server 2008 and Windows Vista: %Users%\<current_logged_on_user>\AppData\Roaming\Citrix\XenCenter\logs\XenCenter.log

5.11.2. Error Messages

Workload Balancing displays error messages in the Log tab in XenCenter, in the Windows Event log, and, in some cases, on screen as dialog boxes.

5.11.3. Issues Installing Workload Balancing

When troubleshooting installation issues, start by checking the installation log file.

The location of the installation log varies depending on whether you installed Workload Balancing using the command-line installation or the Setup wizard. If you used the Setup wizard, the log is at %Documents and Settings%\username\Local Settings\Temp\msibootstrapper2CSM_MSI_Install.log (by default).

Tip

When troubleshooting installations using installation logs, note that the log file is overwritten each time you install. You might want to manually copy the installation logs to a separate directory so that you can compare them.

For common installation and Msiexec errors, try searching the Citrix Knowledge Center and the Internet.

To verify that you installed Workload Balancing successfully, see Section 5.3.5.3.1, “To verify your Workload Balancing installation”.

5.11.4. Issues Initializing Workload Balancing

If you cannot get past the Server Credentials page in the Configure Workload Balancing wizard, try the following:

Make sure that Workload Balancing installed correctly and all of its services are running. See Section 5.3.5.3.1, “To verify your Workload Balancing installation”.

Using Section 5.11.5, “Issues Starting Workload Balancing” as a guide, check to make sure you are entering the correct credentials.

You can enter a computer name in the WLB server name box, but it must be a fully qualified domain name (FQDN). For example, yourcomputername.yourdomain.net. If you are having trouble entering a computer name, try using the Workload Balancing server's IP address instead.

5.11.5. Issues Starting Workload Balancing

If, after installing and configuring Workload Balancing, you receive an error message that XenServer and Workload Balancing cannot connect to each other, you might have entered incorrect credentials. To isolate this issue, try:

Verifying that the credentials you entered in the Configure Workload Balancing wizard match the credentials:

o You created on the Workload Balancing server
o On XenServer

Verifying that the IP address or NetBIOS name of the Workload Balancing server you entered in the Configure Workload Balancing wizard is correct.

Verifying that the user or group name you entered during Setup matches the credentials you created on the Workload Balancing server. To check what user or group name you entered, open the install log (search for log.txt) and search for userorgroupaccount.

5.11.6. Workload Balancing Connection Errors

If you receive a connection error in the Workload Balancing Status line on the WLB tab, you might need to reconfigure Workload Balancing on that resource pool.

Click the Configure button on the WLB tab and reenter the server credentials.

Typical causes for this error include changing the server credentials or inadvertently deleting the Workload Balancing user account.

5.11.7. Issues Changing Workload Balancing Servers

If you change the Workload Balancing server a resource pool references without first deconfiguring Workload Balancing on the resource pool, both the old and new Workload Balancing servers will monitor the pool.

To solve this problem, you can either uninstall the old Workload Balancing server or manually stop the Workload Balancing services (analysis, data collector, and Web service) so that they no longer monitor the pool.

Citrix does not recommend using the pool-initialize-wlb xe command to deconfigure or change Workload Balancing servers.

Chapter 6. Backup and recovery

Table of Contents

6.1. Backups
6.2. Full metadata backup and disaster recovery (DR)
6.2.1. DR and metadata backup overview
6.2.2. Backup and restore using xsconsole
6.2.3. Moving SRs between hosts and Pools
6.2.4. Using Portable SRs for Manual Multi-Site Disaster Recovery
6.3. VM Snapshots
6.3.1. Regular Snapshots
6.3.2. Quiesced Snapshots
6.3.3. Taking a VM snapshot
6.3.4. VM Rollback
6.4. Coping with machine failures
6.4.1. Member failures
6.4.2. Master failures
6.4.3. Pool failures
6.4.4. Coping with Failure due to Configuration Errors
6.4.5. Physical Machine failure

This chapter presents the functionality designed to give you the best chance to recover your XenServer from a catastrophic failure of hardware or software, from lightweight metadata backups to full VM backups and portable SRs.

6.1. Backups

Citrix recommends that you frequently perform as many of the following backup procedures as possible to recover from possible server and/or software failure.

To backup pool metadata

1. Run the command:

xe pool-dump-database file-name=<backup>

2. Run the command:

xe pool-restore-database file-name=<backup> dry-run=true

This command checks that the target machine has an appropriate number of appropriately named NICs, which is required for the backup to succeed.

To backup host configuration and software

Run the command:

xe host-backup host=<host> file-name=<hostbackup>

Note

Do not create the backup in the control domain. This procedure may create a large backup file. To complete a restore you have to reboot to the original install CD. This data can only be restored to the original machine.

To backup a VM

1. Ensure that the VM to be backed up is offline.
2. Run the command:

xe vm-export vm=<vm_uuid> filename=<backup>

Note

This backup also backs up all of the VM's data. When importing a VM, you can specify the storage mechanism to use for the backed up data.
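For example, a minimal sketch of restoring such an export onto a chosen SR (this assumes the standard xe vm-import command and its sr-uuid parameter, which are not covered in this section):

xe vm-import filename=<backup> sr-uuid=<sr_uuid>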

Warning

Because this process backs up all of the VM data, it can take some time to complete.

To backup VM metadata only

Run the command:

xe vm-export vm=<vm_uuid> filename=<backup> --metadata

6.2. Full metadata backup and disaster recovery (DR)

This section introduces the concept of Portable Storage Repositories (Portable SRs), and explains how they work and how to use them as part of a DR strategy.

6.2.1. DR and metadata backup overview

XenServer 5.5.0 introduces the concept of Portable SRs. Portable SRs contain all of the information necessary to recreate all the Virtual Machines (VMs) with Virtual Disk Images (VDIs) stored on the SR after re-attaching the SR to a different host or pool. Portable SRs can be used when regular maintenance or disaster recovery requires manually moving an SR between pools or standalone hosts.

Using portable SRs has similar constraints to XenMotion as both cases result in VMs being moved between hosts. To use portable SRs:

The source and destination hosts must have the same CPU type and networking configuration. The destination host must have a network of the same name as that of the source host.

The SR media itself, such as a LUN for iSCSI and Fibre Channel SRs, must be able to be moved, re-mapped, or replicated between the source and destination hosts.

If using tiered storage, where a VM has VDIs on multiple SRs, all required SRs must be moved to the destination host or pool.

Any configuration data required to connect the SR on the destination host or pool, such as the target IP address, target IQN, and LUN SCSI ID for iSCSI SRs, and the LUN SCSI ID for Fibre Channel SRs, must be maintained manually.

The backup metadata option must be configured for the desired SR.

Note

When moving portable SRs between pools, the source and destination pools are not required to have the same number of hosts. Moving portable SRs between pools and standalone hosts is also supported provided the above constraints are met.

Portable SRs work by creating a dedicated metadata VDI within the specified SR. The metadata VDI is used to store copies of the pool or host database as well as the metadata describing the configuration of each VM. As a result the SR becomes fully self-contained, or portable, allowing it to be detached from one host and attached to another as a new SR. Once the SR is attached, a restore process is used to recreate all of the VMs on the SR from the metadata VDI. For disaster recovery the metadata backup can be scheduled to run regularly to ensure the metadata SR is current.

The metadata backup and restore feature works at the command-line level and the same functionality is also supported in xsconsole. It is not currently available through XenCenter.

6.2.2. Backup and restore using xsconsole

When a metadata backup is first taken, a special backup VDI is created on an SR. This VDI has an ext3 filesystem that stores the following versioned backups:

A full pool-database backup.

Individual VM metadata backups, partitioned by the SRs in which the VM has disks.

SR-level metadata which can be used to recreate the SR description when the storage is reattached.

In the menu-driven text console on the XenServer host, there are some menu items under the Backup, Update and Restore menu which provide more user-friendly interfaces to these scripts. The operations should only be performed on the pool master. You can use these menu items to perform three operations:

Schedule a regular metadata backup to the default pool SR, either daily, weekly or monthly. This will regularly rotate metadata backups and ensure that the latest metadata is present for that SR without any user intervention being required.

Trigger an immediate metadata backup to the SR of your choice. This will create a backup VDI if necessary, attach it to the host, and back up all the metadata to that SR. Use this option if you have made some changes which you want to see reflected in the backup immediately.

Perform a metadata restoration operation. This will prompt you to choose an SR to restore from, and then the option of restoring only VM records associated with that SR, or all the VM records found (potentially from other SRs which were present at the time of the backup). There is also a dry run option to see which VMs would be imported, but not actually perform the operation.

For automating this via scripting, there are some commands in the control domain which provide an interface to metadata backup and restore at a lower level than the menu options:

xe-backup-metadata provides an interface to create the backup VDIs (with the -c flag), and also to attach the metadata backup and examine its contents.

xe-restore-metadata can be used to probe for a backup VDI on a newly attached SR, and also selectively reimport VM metadata to recreate the associations between VMs and their disks.

Full usage information for both scripts can be obtained by running them in the control domain using the -h flag. One particularly useful invocation mode is xe-backup-metadata -d which mounts the backup VDI into dom0, and drops into a sub-shell with the backup directory so it can be examined.
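A minimal sketch of driving these scripts from the control domain, using only the flags named above (any additional arguments, such as the target SR, are omitted here; check the -h output for the exact syntax on your installation):

xe-backup-metadata -h
xe-backup-metadata -c
xe-backup-metadata -d
xe-restore-metadata -h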

6.2.3. Moving SRs between hosts and Pools

The metadata backup and restore options can be run as scripts in the control domain or through the Backup, Restore, and Update menu option in the xsconsole. All other actions, such as detaching the SR from the source host and attaching it to the destination host, can be performed using XenCenter, the menu-based xsconsole, or the xe CLI. This example uses a combination of XenCenter and xsconsole.

To create and move a portable SR using the xsconsole and XenCenter

1. On the source host or pool, in xsconsole, select the Backup, Restore, and Update menu option, select the Backup Virtual Machine Metadata option, and then select the desired SR.
2. In XenCenter, select the source host or pool and shut down all running VMs with VDIs on the SR to be moved.
3. In the tree view select the SR to be moved and select Storage > Detach Storage Repository. The Detach Storage Repository menu option will not be displayed if there are running VMs with VDIs on the selected SR. After being detached the SR will be displayed in a grayed-out state.

Warning

Do not complete this step unless you have created a backup VDI in step 1.

4. Select Storage > Forget Storage Repository to remove the SR record from the host or pool.
5. Select the destination host in the tree view and select Storage > New Storage Repository.
6. Create a new SR with the appropriate parameters required to reconnect the existing SR to the destination host. In the case of moving an SR between pools or hosts within a site the parameters may be identical to the source pool.
7. Every time a new SR is created the storage is checked to see if it contains an existing SR. If so, an option is presented allowing re-attachment of the existing SR. If this option is not displayed the parameters specified during SR creation are not correct.
8. Select Reattach.
9. Select the new SR in the tree view and then select the Storage tab to view the existing VDIs present on the SR.
10. In xsconsole on the destination host, select the Backup, Restore, and Update menu option, select the Restore Virtual Machine Metadata option, and select the newly re-attached SR.
11. The VDIs on the selected SR are inspected to find the metadata VDI. Once found, select the metadata backup you want to use.
12. Select the Only VMs on this SR option to restore the VMs.

Note

Use the All VM Metadata option when moving multiple SRs between hosts or pools, or when using tiered storage where VMs to be restored have VDIs on multiple SRs. When using this option ensure all required SRs have been reattached to the destination host prior to running the restore.

13. The VMs are restored in the destination pool in a shutdown state and are available for use.

6.2.4. Using Portable SRs for Manual Multi-Site Disaster Recovery

The Portable SR feature can be used in combination with storage layer replication in order to simplify the process of creating and enabling a disaster recovery (DR) site. Using storage layer replication to mirror or replicate LUNs that comprise portable SRs between production and DR sites allows all required data to be automatically present in the DR site. The constraints that apply when moving portable SRs between hosts or pools within the same site also apply in the multi-site case, but the production and DR sites are not required to have the same number of hosts. This allows use of either dedicated DR facilities or non-dedicated DR sites that run other production workloads.

Using portable SRs with storage layer replication between sites to enable the DR site in case of disaster

1. Any storage layer configuration required to enable the mirror or replica LUN in the DR site is performed.
2. An SR is created for each LUN in the DR site.
3. VMs are restored from metadata on one or more SRs.
4. Any adjustments to VM configuration required by differences in the DR site, such as IP addressing, are performed.
5. VMs are started and verified.
6. Traffic is routed to the VMs in the DR site.

6.3. VM Snapshots

XenServer provides a convenient snapshotting mechanism that can take a snapshot of a VM's storage and metadata at a given time. Where necessary, IO is temporarily halted while the snapshot is being taken to ensure that a self-consistent disk image can be captured.

Snapshot operations result in a snapshot VM that is similar to a template. The VM snapshot contains all the storage information and VM configuration, including attached VIFs, allowing them to be exported and restored for backup purposes.

The snapshotting operation is a 2-step process:

Capturing metadata as a template.

Creating a VDI snapshot of the disk(s).

Two types of VM snapshots are supported: regular and quiesced.

6.3.1. Regular Snapshots

Regular snapshots are crash consistent and can be performed on all VM types, including Linux VMs.

6.3.2. Quiesced Snapshots

Quiesced snapshots take advantage of the Windows Volume Shadow Copy Service (VSS) to generate application-consistent point-in-time snapshots. The VSS framework helps VSS-aware applications (for example Microsoft Exchange or Microsoft SQL Server) flush data to disk and prepare for the snapshot before it is taken.

Quiesced snapshots are therefore safer to restore, but can have a greater performance impact on a system while they are being taken. They may also fail under load, so more than one attempt to take the snapshot may be required.

XenServer supports quiesced snapshots on Windows Server 2003 and Windows Server 2008 for both 32-bit and 64-bit variants. Windows 2000, Windows XP and Windows Vista are not supported. Snapshot is supported on all storage types, though for the LVM-based storage types the storage repository must have been upgraded if it was created on a previous version of XenServer, and the volume must be in the default format (type=raw volumes cannot be snapshotted).

Note

Using EqualLogic or NetApp storage requires a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.

Note

Do not forget to install the Xen VSS provider in the Windows guest in order to support VSS. This is done using the install-XenProvider.cmd script provided with the Windows PV drivers. More details can be found in the Virtual Machine Installation Guide in the Windows section.

In general, a VM can only access VDI snapshots (not VDI clones) of itself using the VSS interface. There is a flag that can be set by the XenServer administrator whereby adding an attribute of snapmanager=true to the VM's other-config allows that VM to import snapshots of VDIs from other VMs.

Warning

This opens a security vulnerability and should be used with care. This feature allows an administrator to attach VSS snapshots, using an in-guest transportable snapshot ID as generated by the VSS layer, to another VM for the purposes of backup.

VSS quiesce timeout: the Microsoft VSS quiesce period is set to a non-configurable value of 10 seconds, and it is quite probable that a snapshot may not be able to complete in time. If, for example, the XAPI daemon has queued additional blocking tasks such as an SR scan, the VSS snapshot may time out and fail. The operation should be retried if this happens.

Note

The more VBDs attached to a VM, the more likely it is that this timeout may be reached. Citrix recommends attaching no more than 2 VBDs to a VM to avoid reaching the timeout. However, there is a workaround to this problem. The probability of taking a successful VSS-based snapshot of a VM with more than 2 VBDs can be increased manifold if all the VDIs for the VM are hosted on different SRs.

VSS snapshots all the disks attached to a VM: in order to store all data available at the time of a VSS snapshot, the XAPI manager will snapshot all disks and the VM metadata associated with a VM that can be snapshotted using the XenServer storage manager API. If the VSS layer requests a snapshot of only a subset of the disks, a full VM snapshot will not be taken.

vm-snapshot-with-quiesce produces bootable snapshot VM images: to achieve this end, the XenServer VSS hardware provider makes snapshot volumes writable, including the snapshot of the boot volume.

VSS snapshots of volumes hosted on dynamic disks in the Windows guest: the vm-snapshot-with-quiesce CLI and the XenServer VSS hardware provider do not support snapshots of volumes hosted on dynamic disks on the Windows VM.

6.3.3. Taking a VM snapshot

Before taking a snapshot, see the section called “Preparing to clone a Windows VM” in the XenServer Virtual Machine Installation Guide and the section called “Preparing to clone a Linux VM” in the XenServer Virtual Machine Installation Guide for information about any special operating system-specific configuration and considerations to take into account.

Use the vm-snapshot and vm-snapshot-with-quiesce commands to take a snapshot of a VM:

xe vm-snapshot vm=<vm_name> new-name-label=<vm_snapshot_name>
xe vm-snapshot-with-quiesce vm=<vm_name> new-name-label=<vm_snapshot_name>

6.3.4. VM Rollback

Restoring a VM to snapshot state

Note

Restoring a VM will not preserve the original VM UUID or MAC address.

1. Note the name of the snapshot.
2. Note the MAC address of the VM.
3. Destroy the VM:

a. Run the vm-list command to find the UUID of the VM to be destroyed:

xe vm-list

b. Shut down the VM:

xe vm-shutdown uuid=<vm_uuid>

c. Destroy the VM:

xe vm-destroy uuid=<vm_uuid>

4. Create a new VM from the snapshot:

xe vm-install new-name-label=<vm_name_label> template=<template_name>

5. Start the VM:

xe vm-start name-label=<vm_name>

6.4. Coping with machine failures

This section provides details of how to recover from various failure scenarios. All failure recovery scenarios require the use of one or more of the backup types listed in Section 6.1, “Backups”.

6.4.1. Member failures

In the absence of HA, master nodes detect the failures of members by receiving regular heartbeat messages. If no heartbeat has been received for 200 seconds, the master assumes the member is dead. There are two ways to recover from this problem:

Repair the dead host (e.g. by physically rebooting it). When the connection to the member is restored, the master will mark the member as alive again.

Shut down the host and instruct the master to forget about the member node using the xe host-forget CLI command. Once the member has been forgotten, all the VMs which were running there will be marked as offline and can be restarted on other XenServer hosts. Note it is very important to ensure that the XenServer host is actually offline, otherwise VM data corruption might occur. Be careful not to split your pool into multiple pools of a single host by using xe host-forget, since this could result in them all mapping the same shared storage and corrupting VM data.
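For example, a minimal sketch of forgetting a failed member from the pool master (host-forget is the command named above; the host UUID can be found with the standard xe host-list command):

xe host-list
xe host-forget uuid=<host_uuid>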

Warning

o If you are going to use the forgotten host as a XenServer host again, perform a fresh installation of the XenServer software.
o Do not use the xe host-forget command if HA is enabled on the pool. Disable HA first, then forget the host, and then reenable HA.

When a member XenServer host fails, there may be VMs still registered in the running state. If you are sure that the member XenServer host is definitely down, and that the VMs have not been brought up on another XenServer host in the pool, use the xe vm-reset-powerstate CLI command to set the power state of the VMs to halted.

See Section 8.4.23.24, “vm-reset-powerstate” for more details.

Warning

Incorrect use of this command can lead to data corruption. Only use this command if absolutely necessary.

6.4.2. Master failures

Every member of a resource pool contains all the information necessary to take over the role of master if required. When a master node fails, the following sequence of events occurs:

1. The members realize that communication has been lost and each tries to reconnect for sixty seconds.
2. Each member then puts itself into emergency mode, whereby the member XenServer hosts will now accept only the pool-emergency commands (xe pool-emergency-reset-master and xe pool-emergency-transition-to-master).

If the master comes back up at this point, it re-establishes communication with its members, the members leave emergency mode, and operation returns to normal.

If the master is really dead, choose one of the members and run the command xe pool-emergency-transition-to-master on it. Once it has become the master, run the command xe pool-recover-slaves and the members will now point to the new master.

If you repair or replace the server that was the original master, you can simply bring it up, install the XenServer host software, and add it to the pool. Since the XenServer hosts in the pool are enforced to be homogeneous, there is no real need to make the replaced server the master.

When a member XenServer host is transitioned to being a master, you should also check that the default pool storage repository is set to an appropriate value. This can be done using the xe pool-param-list command and verifying that the default-SR parameter is pointing to a valid storage repository.

6.4.3. Pool failures

In the unfortunate event that your entire resource pool fails, you will need to recreate the pool database from scratch. Be sure to regularly back up your pool metadata using the xe pool-dump-database CLI command (see Section 8.4.12.2, “pool-dump-database”).

To restore a completely failed pool

1. Install a fresh set of hosts. Do not pool them up at this stage.
2. For the host nominated as the master, restore the pool database from your backup using the xe pool-restore-database command (see Section 8.4.12.10, “pool-restore-database”).
3. Connect to the master host using XenCenter and ensure that all your shared storage and VMs are available again.
4. Perform a pool join operation on the remaining freshly installed member hosts, and start up your VMs on the appropriate hosts.
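
A minimal sketch of steps 2 and 4 from the CLI follows. The backup filename, master address and password are placeholders, and the pool-join arguments shown assume the standard xe pool-join syntax:

# On the nominated master, restore the pool database from the dump taken earlier
xe pool-restore-database file-name=<pool-database-backup>
# On each freshly installed member host, join it to the restored master
xe pool-join master-address=<master_address> master-username=root master-password=<password>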

6.4.4. Coping with Failure due to Configuration Errors

If the physical host machine is operational but the software or host configuration is corrupted:

To restore host software and configuration

1. Run the command:

xe host-restore host=<host> file-name=<hostbackup>

2. Reboot to the host installation CD and select Restore from backup.

6.4.5. Physical Machine failure

If the physical host machine has failed, use the appropriate procedure listed below to recover.

Warning

Any VMs which were running on a previous member (or the previous host) which has failed will still be marked as Running in the database. This is for safety -- simultaneously starting a VM on two different hosts would lead to severe disk corruption. If you are sure that the machines (and VMs) are offline you can reset the VM power state to Halted:

xe vm-reset-powerstate vm=<vm_uuid> --force

VMs can then be restarted using XenCenter or the CLI.

Replacing a failed master with a still running member

1. Run the commands:

2. xe pool-emergency-transition-to-master
   xe pool-recover-slaves

3. If the commands succeed, restart the VMs.

To restore a pool with all hosts failed

1. Run the command:

xe pool-restore-database file-name=<backup>

Warning

This command will only succeed if the target machine has an appropriate number of appropriately named NICs.

2. If the target machine has a different view of the storage (for example, a block-mirror with a different IP address) than the original machine, modify the storage configuration using the pbd-destroy command and then the pbd-create command to recreate storage configurations. See Section 8.4.10, “PBD commands” for documentation of these commands.

3. If you have created a new storage configuration, use pbd-plug or the Storage > Repair Storage Repository menu item in XenCenter to use the new configuration.

4. Restart all VMs.

To restore a VM when VM storage is not available

1. Run the command:

xe vm-import filename=<backup> --metadata

2. If the metadata import fails, run the command:

xe vm-import filename=<backup> --metadata --force

This command will attempt to restore the VM metadata on a 'best effort' basis.

3. Restart all VMs.

Chapter 7. Monitoring and managing XenServer

Table of Contents

7.1. Alerts
7.1.1. Customizing Alerts
7.1.2. Configuring Email Alerts
7.2. Custom Fields and Tags
7.3. Custom Searches
7.4. Determining throughput of physical bus adapters

XenServer and XenCenter provide access to alerts that are generated when noteworthy things happen. XenCenter provides various mechanisms of grouping and maintaining metadata about managed VMs, hosts, storage repositories, and so on.

Note

Full monitoring and alerting functionality is only available with a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.

7.1. Alerts

XenServer generates alerts for the following events.

Configurable Alerts:

New XenServer patches available
New XenServer version available
New XenCenter version available

Alerts generated by XenCenter:

Alert    Description
XenCenter old    the XenServer expects a newer version but can still connect to the current version
XenCenter out of date    XenCenter is too old to connect to XenServer
XenServer out of date    XenServer is an old version that the current XenCenter cannot connect to
License expired alert    your XenServer license has expired
Missing IQN alert    XenServer uses iSCSI storage but the host IQN is blank
Duplicate IQN alert    XenServer uses iSCSI storage, and there are duplicate host IQNs

Alerts generated by XenServer:

ha_host_failed
ha_host_was_fenced
ha_network_bonding_error
ha_pool_drop_in_plan_exists_for
ha_pool_overcommitted
ha_protected_vm_restart_failed
ha_statefile_lost
host_clock_skew_detected
host_sync_data_failed
license_does_not_support_pooling
pbd_plug_failed_on_server_start
pool_master_transition

The following alerts appear on the performance graphs in XenCenter. See the XenCenter online help for more information:

vm_cloned
vm_crashed
vm_rebooted
vm_resumed
vm_shutdown
vm_started
vm_suspended

7.1.1. Customizing Alerts

Note

Most alerts are only available in a pool with a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.

The performance monitoring daemon perfmon runs once every 5 minutes and requests updates from XenServer which are averages over 1 minute, but these defaults can be changed in /etc/sysconfig/perfmon.

Every 5 minutes perfmon reads updates of performance variables exported by the XAPI instance running on the same host. These variables are separated into one group relating to the host itself, and a group for each VM running on that host. For each VM and also for the host, perfmon reads in the other-config:perfmon parameter and uses this string to determine which variables it should monitor, and under which circumstances to generate a message.

vm:other-config:perfmon and host:other-config:perfmon values consist of an XML string like the one below:

<config>
    <variable>
        <name value="cpu_usage"/>
        <alarm_trigger_level value="LEVEL"/>
    </variable>
    <variable>
        <name value="network_usage"/>
        <alarm_trigger_level value="LEVEL"/>
    </variable>
</config>
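
As a minimal sketch of applying such a configuration from the CLI, the XML string can be written into the other-config:perfmon map key using the map-parameter syntax described in Section 8.3.1. The VM UUID and the 0.95 trigger level below are placeholders:

xe vm-param-set uuid=<vm_uuid> \
  other-config:perfmon='<config><variable><name value="cpu_usage"/><alarm_trigger_level value="0.95"/></variable></config>'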

Valid VM Elements

name
    what to call the variable (no default). If the name value is one of cpu_usage, network_usage, or disk_usage, the rrd_regex and alarm_trigger_sense parameters are not required as defaults for these values will be used.

alarm_priority
    the priority of the messages generated (default 5)

alarm_trigger_level
    level of value that triggers an alarm (no default)

alarm_trigger_sense
    high if alarm_trigger_level is a maximum value, otherwise low if the alarm_trigger_level is a minimum value (default high)

alarm_trigger_period
    number of seconds that values above or below the alarm threshold can be received before an alarm is sent (default 60)

alarm_auto_inhibit_period
    number of seconds this alarm is disabled after an alarm is sent (default 3600)

consolidation_fn
    how to combine variables from rrd_updates into one value (default is sum - other choice is average)

rrd_regex
    regular expression to match the names of variables returned by the xe vm-data-source-list uuid=<vmuuid> command that should be used to compute the statistical value. This parameter has defaults for the named variables cpu_usage, network_usage, and disk_usage. If specified, the values of all items returned by xe vm-data-source-list whose names match the specified regular expression will be consolidated using the method specified as the consolidation_fn.

Valid Host Elements

name
    what to call the variable (no default)

alarm_priority
    the priority of the messages generated (default 5)

alarm_trigger_level
    level of value that triggers an alarm (no default)

alarm_trigger_sense
    high if alarm_trigger_level is a maximum value, otherwise low if the alarm_trigger_level is a minimum value (default high)

alarm_trigger_period
    number of seconds that values above or below the alarm threshold can be received before an alarm is sent (default 60)

alarm_auto_inhibit_period
    number of seconds this alarm is disabled after an alarm is sent (default 3600)

consolidation_fn
    how to combine variables from rrd_updates into one value (default sum - other choice is average)

rrd_regex
    regular expression to match the names of variables returned by the xe vm-data-source-list uuid=<vmuuid> command that should be used to compute the statistical value. This parameter has defaults for the named variables cpu_usage and network_usage. If specified, the values of all items returned by xe vm-data-source-list whose names match the specified regular expression will be consolidated using the method specified as the consolidation_fn.

7.1.2. Configuring Email Alerts

Note

Email alerts are only available in a pool with a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.

Alerts generated from XenServer can also be automatically e-mailed to the resource pool administrator, in addition to being visible from the XenCenter GUI. To configure this, specify the email address and SMTP server:

pool:other-config:mail-destination=<[email protected]>
pool:other-config:ssmtp-mailhub=<smtp.domain.tld[:port]>

You can also specify the minimum value of the priority field in the message before the email will be sent:

pool:other-config:mail-min-priority=<level>

The default priority level is 5.

Note

Some SMTP servers only forward mails with addresses that use FQDNs. If you find that emails are not being forwarded it may be for this reason, in which case you can set the server hostname to the FQDN so this is used when connecting to your mail server.
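
A minimal sketch of setting these keys on the pool object from the CLI follows. The pool UUID, destination address, and mail hub are placeholders:

# Look up the pool UUID, then set the mail destination and SMTP relay
xe pool-list --minimal
xe pool-param-set uuid=<pool_uuid> other-config:mail-destination=<admin_address>
xe pool-param-set uuid=<pool_uuid> other-config:ssmtp-mailhub=<smtp.domain.tld>
# Optionally lower the minimum priority at which an email is sent
xe pool-param-set uuid=<pool_uuid> other-config:mail-min-priority=3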

7.2. Custom Fields and Tags

XenCenter supports the creation of tags and custom fields, which allows for organization and quick searching of VMs, storage and so on. See the XenCenter online help for more information.

7.3. Custom Searches

XenCenter supports the creation of customized searches. Searches can be exported and imported, and the results of a search can be displayed in the navigation pane. See the XenCenter online help for more information.

7.4. Determining throughput of physical bus adapters

For FC, SAS and iSCSI HBAs you can determine the network throughput of your PBDs using the following procedure.

To determine PBD throughput

1. List the PBDs on a host.
2. Determine which LUNs are routed over which PBDs.
3. For each PBD and SR, list the VBDs that reference VDIs on the SR.
4. For all active VBDs that are attached to VMs on the host, calculate the combined throughput, as in the sketch below.
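
A minimal sketch of this procedure with the xe CLI follows. The host, SR, VDI and VBD UUIDs are placeholders, and the io_read_kbs/io_write_kbs fields used for the throughput estimate are an assumption about the VBD metrics exposed on your release:

# 1. List the PBDs on the host
xe pbd-list host-uuid=<host_uuid>
# 2./3. For the SR behind each PBD, list its VDIs and the VBDs that reference them
xe vdi-list sr-uuid=<sr_uuid> params=uuid --minimal
xe vbd-list vdi-uuid=<vdi_uuid> params=uuid,vm-name-label,currently-attached
# 4. Sum the read/write rates of the attached VBDs (field names assumed)
xe vbd-param-get uuid=<vbd_uuid> param-name=io_read_kbs
xe vbd-param-get uuid=<vbd_uuid> param-name=io_write_kbs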

For iSCSI and NFS storage, check your network statistics to determine if there is a throughput bottleneck at the array, or whether the PBD is saturated.

Chapter 8. Command line interface

Table of Contents

8.1. Basic xe syntax
8.2. Special characters and syntax
8.3. Command types
8.3.1. Parameter types
8.3.2. Low-level param commands
8.3.3. Low-level list commands
8.4. xe command reference
8.4.1. Bonding commands
8.4.2. CD commands
8.4.3. Console commands
8.4.4. Event commands
8.4.5. Host (XenServer host) commands
8.4.6. Log commands
8.4.7. Message commands
8.4.8. Network commands
8.4.9. Patch (update) commands
8.4.10. PBD commands
8.4.11. PIF commands
8.4.12. Pool commands
8.4.13. Storage Manager commands
8.4.14. SR commands
8.4.15. Task commands
8.4.16. Template commands
8.4.17. Update commands
8.4.18. User commands
8.4.19. VBD commands
8.4.20. VDI commands
8.4.21. VIF commands
8.4.22. VLAN commands
8.4.23. VM commands
8.4.24. Workload Balancing commands

This chapter describes the XenServer command line interface (CLI). The xe CLI enables the writing of scripts for automating system administration tasks and allows integration of XenServer into an existing IT infrastructure.

The xe command line interface is installed by default on XenServer hosts and is included with XenCenter. A stand-alone remote CLI is also available for Linux.

On Windows, the xe.exe CLI executable is installed along with XenCenter.

To use it, open a Windows Command Prompt and change directories to the directory where the file resides (typically C:\Program Files\XenSource\XenCenter), or add its installation location to your system path.

On Linux, you can install the stand-alone xe CLI executable from the RPM named xe-cli-5.5.0-24648c.i386.rpm on the Linux Pack CD, as follows:

rpm -ivh xe-cli-5.5.0-24648c.i386.rpm

Basic help is available for CLI commands on-host by typing:

xe help command

A list of the most commonly-used xe commands is displayed if you type:

xe help

or a list of all xe commands is displayed if you type:

xe help --all

8.1. Basic xe syntax

The basic syntax of all XenServer xe CLI commands is:

xe <command-name> <argument=value> <argument=value> ...

Each specific command contains its own set of arguments that are of the form argument=value. Some commands have required arguments, and most have some set of optional arguments. Typically a command will assume default values for some of the optional arguments when invoked without them.

If the xe command is executed remotely, additional connection and authentication arguments are used. These arguments also take the form argument=argument_value.

The server argument is used to specify the hostname or IP address. The username and password arguments are used to specify credentials. A password-file argument can be specified instead of the password directly. In this case an attempt is made to read the password from the specified file (stripping CRs and LFs off the end of the file if necessary), and use that to connect. This is more secure than specifying the password directly at the command line.

The optional port argument can be used to specify the agent port on the remote XenServer host (defaults to 443).

Example: On the local XenServer host:

xe vm-list

Example: On the remote XenServer host:

xe vm-list -user <username> -password <password> -server <hostname>

Shorthand syntax is also available for remote connection arguments:

-u    username
-pw    password
-pwf    password file
-p    port
-s    server

Example: On a remote XenServer host:

xe vm-list -u <myuser> -pw <mypassword> -s <hostname>

Arguments are also taken from the environment variable XE_EXTRA_ARGS, in the form of comma-separated key/value pairs. For example, in order to enter commands on one XenServer host that are run on a remote XenServer host, you could do the following:

export XE_EXTRA_ARGS="server=jeffbeck,port=443,username=root,password=pass"

and thereafter you would not need to specify the remote XenServer host parameters in each xe command you execute.

Using the XE_EXTRA_ARGS environment variable also enables tab completion of xe commands when issued against a remote XenServer host, which is disabled by default.

8.2. Special characters and syntax

To specify argument/value pairs on the xe command line, write:

argument=value

without quotes, as long as the value doesn't have any spaces in it. There should be no whitespace in between the argument name, the equals sign (=), and the value. Any argument not conforming to this format will be ignored.

For values containing spaces, write:

argument="value with spaces"

If you use the CLI while logged into a XenServer host, commands have a tab completion feature similar to that in the standard Linux bash shell. If you type, for example:

xe vm-l

and then press the TAB key, the rest of the command will be displayed when it is unambiguous. If more than one command begins with vm-l, hitting TAB a second time will list the possibilities. This is particularly useful when specifying object UUIDs in commands.

Note

When executing commands on a remote XenServer host, tab completion does not normally work. However, if you put the server, username, and password in an environment variable called XE_EXTRA_ARGS on the machine from which you are entering the commands, tab completion is enabled. See Section 8.1, “Basic xe syntax” for details.

8.3. Command types

Broadly speaking, the CLI commands can be split into two halves: low-level commands concerned with listing and parameter manipulation of API objects, and higher-level commands for interacting with VMs or hosts at a more abstract level. The low-level commands are:

<class>-list
<class>-param-get
<class>-param-set
<class>-param-list
<class>-param-add
<class>-param-remove
<class>-param-clear

where <class> is one of:

bond, console, host, host-crashdump, host-cpu, network, patch, pbd, pif, pool, sm, sr, task, template, vbd, vdi, vif, vlan, vm

Note that not every value of <class> has the full set of <class>-param- commands; some have just a subset.

8.3.1. Parameter types

The objects that are addressed with the xe commands have sets of parameters that identify them and define their states.

Most parameters take a single value. For example, the name-label parameter of a VM contains a single string value. In the output from parameter list commands such as xe vm-param-list, such parameters have an indication in parentheses that defines whether they can be read and written to, or are read-only. For example, the output of xe vm-param-list on a specified VM might have the lines:

user-version ( RW): 1
is-control-domain ( RO): false

The first parameter, user-version, is writable and has the value 1. The second, is-control-domain, is read-only and has a value of false.

The two other types of parameters are multi-valued. A set parameter contains a list of values. A map parameter is a set of key/value pairs. As an example, look at the following excerpt of some sample output of xe vm-param-list on a specified VM:

platform (MRW): acpi: true; apic: true; pae: true; nx: false
allowed-operations (SRO): pause; clean_shutdown; clean_reboot; \
    hard_shutdown; hard_reboot; suspend

The platform parameter has a list of items that represent key/value pairs. The key names are followed by a colon character (:). Each key/value pair is separated from the next by a semicolon character (;). The M preceding the RW indicates that this is a map parameter and is readable and writable. The allowed-operations parameter has a list that makes up a set of items. The S preceding the RO indicates that this is a set parameter and is readable but not writable.

In xe commands where you want to filter on a map parameter, or set a map parameter, use the separator : (colon) between the map parameter name and the key/value pair. For example, to set the value of the foo key of the other-config parameter of a VM to baa, the command would be:

xe vm-param-set uuid=<VM uuid> other-config:foo=baa

Note

In previous releases the separator - (dash) was used in specifying map parameters. This syntax still works but is deprecated.
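
To read the value back, the map key can be passed to the param-get command described in the next section. This is a minimal sketch; the VM UUID is a placeholder:

xe vm-param-get uuid=<VM uuid> param-name=other-config param-key=foo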

8.3.2. Low-level param commands

There are several commands for operating on parameters of objects: <class>-param-get, <class>-param-set, <class>-param-add, <class>-param-remove, <class>-param-clear, and <class>-param-list. Each of these takes a uuid parameter to specify the particular object. Since these are considered low-level commands, they must be addressed by UUID and not by the VM name label.

<class>-param-list uuid=<uuid>

Lists all of the parameters and their associated values. Unlike the class-list command, this will list the values of "expensive" fields.

<class>-param-get uuid=<uuid> param-name=<parameter> [param-key=<key>]

Returns the value of a particular parameter. If the parameter is a map, specifying the param-key will get the value associated with that key in the map. If param-key is not specified, or if the parameter is a set, it will return a string representation of the set or map.

<class>-param-set uuid=<uuid> param=<value>...

Sets the value of one or more parameters.

<class>-param-add uuid=<uuid> param-name=<parameter> [<key>=<value>...] [param-key=<key>]

Adds to either a map or a set parameter. If the parameter is a map, add key/value pairs using the <key>=<value> syntax. If the parameter is a set, add keys with the <param-key>=<key> syntax.

<class>-param-remove uuid=<uuid> param-name=<parameter> param-key=<key>

Removes either a key/value pair from a map, or a key from a set.

<class>-param-clear uuid=<uuid> param-name=<parameter>

Completely clears a set or a map.
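
For example, a short sketch of maintaining the other-config map of a VM with these commands (the VM UUID is a placeholder, and the example assumes the other-config map accepts arbitrary keys):

# Add a key/value pair to the other-config map
xe vm-param-add uuid=<vm_uuid> param-name=other-config my_note=testing
# Remove the same key again
xe vm-param-remove uuid=<vm_uuid> param-name=other-config param-key=my_note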

8.3.3. Low-level list commands

The <class>-list command lists the objects of type <class>. By default it will list all objects, printing a subset of the parameters. This behavior can be modified in two ways: it can filter the objects so that it only outputs a subset, and the parameters that are printed can be modified.

To change the parameters that are printed, the argument params should be specified as a comma-separated list of the required parameters, for example:

xe vm-list params=name-label,other-config

Alternatively, to list all of the parameters, use the syntax:

xe vm-list params=all

Note that some parameters that are expensive to calculate will not be shown by the list command. These parameters will be shown as, for example:

allowed-VBD-devices (SRO): <expensive field>

To obtain these fields, use either the command <class>-param-list or <class>-param-get.

To filter the list, the CLI will match parameter values with those specified on the command line, only printing objects that match all of the specified constraints. For example:

xe vm-list HVM-boot-policy="BIOS order" power-state=halted

will only list those VMs for which both the field power-state has the value halted, and the field HVM-boot-policy has the value BIOS order.

It is also possible to filter the list based on the value of keys in maps, or on the existence of values in a set. The syntax for the first of these is map-name:key=value, and the second is set-name:contains=value.

For scripting, a useful technique is passing --minimal on the command line, causing xe to print only the first field in a comma-separated list. For example, the command xe vm-list --minimal on a XenServer host with three VMs installed gives the three UUIDs of the VMs, for example:

a85d6717-7264-d00e-069b-3b1d19d56ad9,aaa3eec5-9499-bcf3-4c03-af10baea96b7, \
42c044de-df69-4b30-89d9-2c199564581d
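
A minimal scripting sketch built on --minimal output follows; the power-state parameter is chosen only as an illustration:

# Print the power state of every VM, iterating over the comma-separated UUID list
for uuid in $(xe vm-list --minimal | tr ',' ' '); do
  echo "$uuid: $(xe vm-param-get uuid=$uuid param-name=power-state)"
done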

8.4. xe command reference

This section provides a reference to the xe commands. They are grouped by the objects that the commands address, and listed alphabetically.

8.4.1. Bonding commands

Commands for working with network bonds, for resilience with physical interface failover. See Section 4.2.4, “Creating NIC bonds on a standalone host” for details.

The bond object is a reference object which glues together master and member PIFs. The master PIF is the bonding interface which must be used as the overall PIF to refer to the bond. The member PIFs are a set of 2 or more physical interfaces which have been combined into the high-level bonded interface.

Bond parameters

Bonds have the following parameters:

Parameter Name    Description    Type
uuid    unique identifier/object reference for the bond    read only
master    UUID for the master bond PIF    read only
members    set of UUIDs for the underlying bonded PIFs    read only set parameter

8.4.1.1. bond-create

bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1,pif_uuid_2,...>

Create a bonded network interface on the network specified from a list of existing PIF objects. The command will fail if PIFs are in another bond already, if any member has a VLAN tag set, if the referenced PIFs are not on the same XenServer host, or if fewer than 2 PIFs are supplied.
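
For example, a minimal sketch of bonding two physical interfaces onto a new network; all UUIDs are placeholders taken from xe network-list and xe pif-list output, and the network name is only an illustration:

# Create a network to carry the bond, then bond two PIFs onto it
xe network-create name-label=bond0-network
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>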

8.4.1.2. bond-destroy

bond-destroy uuid=<bond_uuid>

Delete a bonded interface specified by its UUID from the XenServer host.

8.4.2. CD commands

Commands for working with physical CD/DVD drives on XenServer hosts.

CD parameters

CDs have the following parameters:

Parameter Name    Description    Type
uuid    unique identifier/object reference for the CD    read only
name-label    Name for the CD    read/write
name-description    Description text for the CD    read/write
allowed-operations    A list of the operations that can be performed on this CD    read only set parameter
current-operations    A list of the operations that are currently in progress on this CD    read only set parameter
sr-uuid    The unique identifier/object reference for the SR this CD is part of    read only
sr-name-label    The name for the SR this CD is part of    read only
vbd-uuids    A list of the unique identifiers for the VBDs on VMs that connect to this CD    read only set parameter
crashdump-uuids    Not used on CDs since crashdumps cannot be written to them    read only set parameter
virtual-size    Size of the CD as it appears to VMs (in bytes)    read only
physical-utilisation    amount of physical space that the CD image is currently taking up on the SR (in bytes)    read only
type    Set to User for CDs    read only
sharable    Whether or not the CD drive is sharable. Default is false.    read only
read-only    Whether the CD is read-only; if false, the device is writable. Always true for CDs.    read only
storage-lock    true if this disk is locked at the storage level    read only
parent    Reference to the parent disk, if this CD is part of a chain    read only
missing    true if the SR scan operation reported this CD as not present on disk    read only
other-config    A list of key/value pairs that specify additional configuration parameters for the CD    read/write map parameter
location    The path on which the device is mounted    read only
managed    true if the device is managed    read only
xenstore-data    Data to be inserted into the xenstore tree    read only map parameter
sm-config    names and descriptions of storage manager device config keys    read only map parameter
is-a-snapshot    True if this template is a CD snapshot    read only
snapshot_of    The UUID of the CD that this template is a snapshot of    read only
snapshots    The UUID(s) of any snapshots that have been taken of this CD    read only
snapshot_time    The timestamp of the snapshot operation    read only

8.4.2.1. cd-list

cd-list [params=<param1,param2,...>] [parameter=<parameter_value>...]

List the CDs and ISOs (CD image files) on the XenServer host or pool, filtering on the optional argument params.

If the optional argument params is used, the value of params is a string containing a list of parameters of this object that you want to display. Alternatively, you can use the keyword all to show all parameters. If params is not used, the returned list shows a default subset of all available parameters.

Optional arguments can be any number of the CD parameters listed at the beginning of this section.

8.4.3. Console commands

Commands for working with consoles.

The console objects can be listed with the standard object listing command (xe console-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

Console parameters

Consoles have the following parameters:

Parameter Name    Description    Type
uuid    The unique identifier/object reference for the console    read only
vm-uuid    The unique identifier/object reference of the VM this console is open on    read only
vm-name-label    The name of the VM this console is open on    read only
protocol    Protocol this console uses. Possible values are vt100: VT100 terminal, rfb: Remote FrameBuffer protocol (as used in VNC), or rdp: Remote Desktop Protocol    read only
location    URI for the console service    read only
other-config    A list of key/value pairs that specify additional configuration parameters for the console.    read/write map parameter

8.4.4. Event commands

Commands for working with events.

Event classes

Event classes are listed in the following table:

Class name    Description
pool    A pool of physical hosts
vm    A Virtual Machine
host    A physical host
network    A virtual network
vif    A virtual network interface
pif    A physical network interface (separate VLANs are represented as several PIFs)
sr    A storage repository
vdi    A virtual disk image
vbd    A virtual block device
pbd    The physical block devices through which hosts access SRs

8.4.4.1. event-wait

event-wait class=<class_name> [<param-name>=<param_value>] [<param-name>=/=<param_value>]

Blocks other commands from executing until an object exists that satisfies the conditions given on the command line. x=y means "wait for field x to take value y", and x=/=y means "wait for field x to take any value other than y".

Example: wait for a specific VM to be running

xe event-wait class=vm name-label=myvm power-state=running

blocks until a VM called myvm is in the power-state "running".

Example: wait for a specific VM to reboot:

xe event-wait class=vm uuid=$VM start-time=/=$(xe vm-list uuid=$VM params=start-time --minimal)

blocks until a VM with UUID $VM reboots (i.e. has a different start-time value).

The class name can be any of the Event classes listed at the beginning of this section, and the parameters can be any of those listed in the CLI command class-param-list.

8.4.5. Host (XenServer host) commands

Commands for interacting with XenServer hosts.

XenServer hosts are the physical servers running XenServer software. They have VMs running on them under the control of a special privileged Virtual Machine, known as the control domain or domain 0.

The XenServer host objects can be listed with the standard object listing commands (xe host-list, xe host-cpu-list, and xe host-crashdump-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

Host selectors

Several of the commands listed here have a common mechanism for selecting one or more XenServer hosts on which to perform the operation. The simplest is by supplying the argument host=<uuid_or_name_label>. XenServer hosts can also be specified by filtering the full list of hosts on the values of fields. For example, specifying enabled=true will select all XenServer hosts whose enabled field is equal to true. Where multiple XenServer hosts match, and the operation can be performed on multiple XenServer hosts, the option --multiple must be specified to perform the operation. The full list of parameters that can be matched is described at the beginning of this section, and can be obtained by running the command xe host-list params=all. If no parameters to select XenServer hosts are given, the operation will be performed on all XenServer hosts.
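
As a minimal sketch of the selection mechanism, the field filter below is only an illustration:

# Run host-dmesg against every enabled host in the pool
xe host-dmesg enabled=true --multiple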

Host parameters

XenServer hosts have the following parameters:

Parameter Name    Description    Type
uuid    The unique identifier/object reference for the XenServer host    read only
name-label    The name of the XenServer host    read/write
name-description    The description string of the XenServer host    read only
enabled    false if disabled, which prevents any new VMs from starting on the host and prepares it to be shut down or rebooted; true if the host is currently enabled    read only
API-version-major    major version number    read only
API-version-minor    minor version number    read only
API-version-vendor    identification of API vendor    read only
API-version-vendor-implementation    details of vendor implementation    read only map parameter
logging    logging configuration    read/write map parameter
suspend-image-sr-uuid    the unique identifier/object reference for the SR where suspended images are put    read/write
crash-dump-sr-uuid    the unique identifier/object reference for the SR where crash dumps are put    read/write
software-version    list of versioning parameters and their values    read only map parameter
capabilities    list of Xen versions that the XenServer host can run    read only set parameter
other-config    A list of key/value pairs that specify additional configuration parameters for the XenServer host    read/write map parameter
hostname    XenServer host hostname    read only
address    XenServer host IP address    read only
supported-bootloaders    list of bootloaders that the XenServer host supports, for example, pygrub, eliloader    read only set parameter
memory-total    total amount of physical RAM on the XenServer host, in bytes    read only
memory-free    total amount of physical RAM remaining that can be allocated to VMs, in bytes    read only
host-metrics-live    true if the host is operational    read only
logging    The syslog_destination key can be set to the hostname of a remote listening syslog service.    read/write map parameter
allowed-operations    lists the operations allowed in this state. This list is advisory only and the server state may have changed by the time this field is read by a client.    read only set parameter
current-operations    lists the operations currently in process. This list is advisory only and the server state may have changed by the time this field is read by a client.    read only set parameter
patches    Set of host patches    read only set parameter
blobs    Binary data store    read only
memory-free-computed    A conservative estimate of the maximum amount of memory free on a host    read only
ha-statefiles    The UUID(s) of all HA statefiles    read only
ha-network-peers    The UUIDs of all hosts that could host the VMs on this host in case of failure    read only
external-auth-type    Type of external authentication, for example, Active Directory.    read only
external-auth-service-name    The name of the external authentication service    read only
external-auth-configuration    Configuration information for the external authentication service.    read only map parameter

XenServer hosts contain some other objects that also have parameter lists.

CPUs on XenServer hosts have the following parameters:

Parameter Name    Description    Type
uuid    The unique identifier/object reference for the CPU    read only
number    the number of the physical CPU core within the XenServer host    read only
vendor    the vendor string for the CPU name, for example, "GenuineIntel"    read only
speed    The CPU clock speed, in Hz    read only
modelname    the vendor string for the CPU model, for example, "Intel(R) Xeon(TM) CPU 3.00GHz"    read only
stepping    the CPU revision number    read only
flags    the flags of the physical CPU (a decoded version of the features field)    read only
utilisation    the current CPU utilization    read only
host-uuid    the UUID of the host the CPU is in    read only
model    the model number of the physical CPU    read only
family    the physical CPU family number    read only

Crash dumps on XenServer hosts have the following parameters:

Parameter Name    Description    Type
uuid    The unique identifier/object reference for the crashdump    read only
host    XenServer host the crashdump corresponds to    read only
timestamp    Timestamp of the date and time that the crashdump occurred, in the form yyyymmdd-hhmmss-ABC, where ABC is the timezone indicator, for example, GMT    read only
size    size of the crashdump, in bytes    read only

8.4.5.1. host-backup

host-backup file-name=<backup_filename> host=<host_name>

Download a backup of the control domain of the specified XenServer host to the machine that the command is invoked from, and save it there as a file with the name file-name.

Caution

While the xe host-backup command will work if executed on the local host (that is, without a specific hostname specified), do not use it this way. Doing so would fill up the control domain partition with the backup file. The command should only be used from a remote off-host machine where you have space to hold the backup file.

8.4.5.2. host-bugreport-upload

host-bugreport-upload [<host-selector>=<host_selector_value>...] [url=<destination_url>] [http-proxy=<http_proxy_name>]

Generate a fresh bug report (using xen-bugtool, with all optional files included) and upload to the Citrix Support ftp site or some other location.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

Optional parameters are http-proxy: use the specified http proxy, and url: upload to this destination URL. If optional parameters are not used, no proxy server is identified and the destination will be the default Citrix Support ftp site.

8.4.5.3. host-crashdump-destroy

host-crashdump-destroy uuid=<crashdump_uuid>

Delete a host crashdump specified by its UUID from the XenServer host.

8.4.5.4. host-crashdump-upload

host-crashdump-upload uuid=<crashdump_uuid> [url=<destination_url>] [http-proxy=<http_proxy_name>]

Upload a crashdump to the Citrix Support ftp site or other location. If optional parameters are not used, no proxy server is identified and the destination will be the default Citrix Support ftp site. Optional parameters are http-proxy: use the specified http proxy, and url: upload to this destination URL.

8.4.5.5. host-disable

host-disable [<host-selector>=<host_selector_value>...]

Disables the specified XenServer hosts, which prevents any new VMs from starting on them. This prepares the XenServer hosts to be shut down or rebooted.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

8.4.5.6. host-dmesg

host-dmesg [<host-selector>=<host_selector_value>...]

Get a Xen dmesg (the output of the kernel ring buffer) from the specified XenServer hosts.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

8.4.5.7. host-emergency-management-reconfigure

host-emergency-management-reconfigure interface=<uuid_of_management_interface_pif>

Reconfigure the management interface of this XenServer host. Use this command only if the XenServer host is in emergency mode, meaning that it is a member in a resource pool whose master has disappeared from the network and could not be contacted for some number of retries.

8.4.5.8. host-enable

host-enable [<host-selector>=<host_selector_value>...]

Enables the specified XenServer hosts, which allows new VMs to be started on them.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

8.4.5.9. host-evacuate

host-evacuate [<host-selector>=<host_selector_value>...]

Live migrates all running VMs to other suitable hosts in a pool. The host must first be disabled using the host-disable command.

If the evacuated host is the pool master, then another host must be selected to be the pool master. To change the pool master with HA disabled, you need to use the pool-designate-new-master command. See Section 8.4.12.1, “pool-designate-new-master” for details. With HA enabled, your only option is to shut down the server, which will cause HA to elect a new master at random. See Section 8.4.5.22, “host-shutdown”.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

8.4.5.10. host-forget

host-forget uuid=<XenServer_host_UUID>

The xapi agent forgets about the specified XenServer host without contacting it explicitly.

Use the --force parameter to avoid being prompted to confirm that you really want to perform this operation.

Warning

Don't use this command if HA is enabled on the pool. Disable HA first, then enable it again after you've forgotten the host.

Tip

This command is useful if the XenServer host to "forget" is dead; however, if the XenServer host is live and part of the pool, you should use xe pool-eject instead.

8.4.5.11. host-get-system-status

host-get-system-status filename=<name_for_status_file> [entries=<comma_separated_list>] [output=<tar.bz2 | zip>] [<host-selector>=<host_selector_value>...]

Download system status information into the specified file. The optional parameter entries is a comma-separated list of system status entries, taken from the capabilities XML fragment returned by the host-get-system-status-capabilities command. See Section 8.4.5.12, “host-get-system-status-capabilities” for details. If not specified, all system status information is saved in the file. The parameter output may be tar.bz2 (the default) or zip; if this parameter is not specified, the file is saved in tar.bz2 form.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above).

8.4.5.12. host-get-system-status-capabilities

host-get-system-status-capabilities [<host-selector>=<host_selector_value>...]

Get system status capabilities for the specified host(s). The capabilities are returned as an XML fragment that looks something like this:

<?xml version="1.0" ?>
<system-status-capabilities>
  <capability content-type="text/plain" default-checked="yes" key="xenserver-logs" \
    max-size="150425200" max-time="-1" min-size="150425200" min-time="-1" \
    pii="maybe"/>
  <capability content-type="text/plain" default-checked="yes" \
    key="xenserver-install" max-size="51200" max-time="-1" min-size="10240" \
    min-time="-1" pii="maybe"/>
  ...
</system-status-capabilities>

Each capability entity has a number of attributes.

Attribute    Description
key    A unique identifier for the capability.
content-type    Can be either text/plain or application/data. Indicates whether a UI can render the entries for human consumption.
default-checked    Can be either yes or no. Indicates whether a UI should select this entry by default.
min-size, max-size    Indicates an approximate range for the size, in bytes, of this entry. -1 indicates that the size is unimportant.
min-time, max-time    Indicate an approximate range for the time, in seconds, taken to collect this entry. -1 indicates the time is unimportant.
pii    Personally identifiable information. Indicates whether the entry would have information that would identify the system owner, or details of their network topology. This is one of:
    no: no PII will be in these entries
    yes: PII will likely or certainly be in these entries
    maybe: you might wish to audit these entries for PII
    if_customized: if the files are unmodified, then they will contain no PII, but since we encourage editing of these files, PII may have been introduced by such customization. This is used in particular for the networking scripts in the control domain.
    Passwords are never to be included in any bug report, regardless of any PII declaration.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above).

8.4.5.13. host-is-in-emergency-mode

host-is-in-emergency-mode

Returns true if the host the CLI is talking to is currently in emergency mode, false otherwise. This CLI command works directly on slave hosts even with no master host present.

8.4.5.14. host-license-add

host-license-add license-file=<path/license_filename> [host-uuid=<XenServer_host_UUID>]

Parses a local license file and adds it to the specified XenServer host.

For details on licensing a host, see Chapter 5, XenServer Licensing in the XenServer Installation Guide.

8.4.5.15. host-license-view

host-license-view [host-uuid=<XenServer_host_UUID>]

Displays the contents of the XenServer host license.

8.4.5.16. host-logs-download

host-logs-download [file-name=<logfile_name>] [<host-selector>=<host_selector_value>...]

Download a copy of the logs of the specified XenServer hosts. The copy is saved by default in a timestamped file named hostname-yyyy-mm-dd T hh:mm:ssZ.tar.gz. You can specify a different filename using the optional parameter file-name.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

Caution

While the xe host-logs-download command will work if executed on the local host (that is, without a specific hostname specified), do not use it this way. Doing so will clutter the control domain partition with the copy of the logs. The command should only be used from a remote off-host machine where you have space to hold the copy of the logs.

8.4.5.17. host-management-disable

host-management-disable

Disables the host agent listening on an external management network interface and disconnects all connected API clients (such as XenCenter). Operates directly on the XenServer host the CLI is connected to, and is not forwarded to the pool master if applied to a member XenServer host.

Warning

Be extremely careful when using this CLI command off-host, since once it is run it will not be possible to connect to the control domain remotely over the network to re-enable it.

8.4.5.18. host-management-reconfigure

host-management-reconfigure [interface=<device>] | [pif-uuid=<uuid>]

Reconfigures the XenServer host to use the specified network interface as its management interface, which is the interface that is used to connect to XenCenter. The command rewrites the MANAGEMENT_INTERFACE key in /etc/xensource-inventory.

If the device name of an interface (which must have an IP address) is specified, the XenServer host will immediately rebind. This works both in normal and emergency mode.

If the UUID of a PIF object is specified, the XenServer host determines which IP address to rebind to itself. It must not be in emergency mode when this command is executed.

Warning

Be careful when using this CLI command off-host and ensure you have network connectivity on the new interface (by using xe pif-reconfigure to set one up first). Otherwise, subsequent CLI commands will not be able to reach the XenServer host.

8.4.5.19. host-reboot

host-reboot [<host-selector>=<host_selector_value>...]

Reboot the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe host-disable command, otherwise a HOST_IN_USE error message is displayed.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled and the pool will recover when the XenServer hosts return. If you shut down a pool member, other members and the master will continue to function. If you shut down the master, the pool will be out of action until the master is rebooted and back on line (at which point the members will reconnect and synchronize with the master) or until you make one of the members into the master.

8.4.5.20. host-restore

host-restore [file-name=<backup_filename>] [<host-selector>=<host_selector_value>...]

Restore a backup named file-name of the XenServer host control software. Note that the use of the word "restore" here does not mean a full restore in the usual sense; it merely means that the compressed backup file has been uncompressed and unpacked onto the secondary partition. After you've done an xe host-restore, you have to boot the Install CD and use its Restore from Backup option.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

8.4.5.21. host-set-hostname-live

host-set-hostname host-uuid=<uuid_of_host> hostname=<new_hostname>

Change the hostname of the XenServer host specified by host-uuid. This command persistently sets both the hostname in the control domain database and the actual Linux hostname of the XenServer host. Note that hostname is not the same as the value of the name_label field.

8.4.5.22. host-shutdown

host-shutdown [<host-selector>=<host_selector_value>...]

Shut down the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe host-disable command, otherwise a HOST_IN_USE error message is displayed.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled and the pool will recover when the XenServer hosts return. If you shut down a pool member, other members and the master will continue to function. If you shut down the master, the pool will be out of action until the master is rebooted and back on line, at which point the members will reconnect and synchronize with the master, or until one of the members is made into the master. If HA is enabled for the pool, one of the members will be made into a master automatically. If HA is disabled, you must manually designate the desired server as master with the pool-designate-new-master command. See Section 8.4.12.1, “pool-designate-new-master”.

8.4.5.23. host-syslog-reconfigure

host-syslog-reconfigure [<host-selector>=<host_selector_value>...]

Reconfigure the syslog daemon on the specified XenServer hosts. This command applies the configuration information defined in the host logging parameter.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

8.4.6. Log commands

Commands for working with logs.

8.4.6.1. log-get-keys

log-get-keys

List the keys of all of the logging subsystems.

8.4.6.2. log-reopen

log-reopen

Reopen all loggers. Use this command for rotating log files.

8.4.6.3. log-set-output

log-set-output output=nil | stderr | file:<filename> | syslog:<sysloglocation> [key=<key>] [level= debug | info | warning | error]

Set the output of the specified logger. Log messages are filtered by the subsystem in which they originated and the log level of the message. For example, send debug logging messages from the storage manager to a file by running the following command:

xe log-set-output key=sm level=debug output=<file:/tmp/sm.log>

The optional parameter key specifies the particular logging subsystem. If this parameter is not set, it will default to all logging subsystems.

The optional parameter level specifies the logging level. Valid values are:

debug
info
warning
error

8.4.7. Message commands

Commands for working with messages. Messages are created to notify users of significant events, and are displayed in XenCenter as system alerts.

Message parameters

Parameter Name    Description    Type
uuid    The unique identifier/object reference for the message    read only
name    The unique name of the message    read only
priority    The message priority. Higher numbers indicate greater priority    read only
class    The message class, for example VM.    read only
obj-uuid    The uuid of the affected object.    read only
timestamp    The time that the message was generated.    read only
body    The message content.    read only

8.4.7.1. message-create

message-create name=<message_name> body=<message_text> [[host-uuid=<uuid_of_host>] | [sr-uuid=<uuid_of_sr>] | [vm-uuid=<uuid_of_vm>] | [pool-uuid=<uuid_of_pool>]]

Creates a new message.

8.4.7.2. message-list

message-list

Lists all messages, or messages that match the specified standard selectable parameters.

8.4.8. Network commands

Commands for working with networks.

The network objects can be listed with the standard object listing command (xe network-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

Network parameters

Networks have the following parameters:

Parameter Name    Description    Type
uuid    The unique identifier/object reference for the network    read only
name-label    The name of the network    read write
name-description    The description text of the network    read write
VIF-uuids    A list of unique identifiers of the VIFs (virtual network interfaces) that are attached from VMs to this network    read only set parameter
PIF-uuids    A list of unique identifiers of the PIFs (physical network interfaces) that are attached from XenServer hosts to this network    read only set parameter
bridge    name of the bridge corresponding to this network on the local XenServer host    read only
other-config:static-routes    comma-separated list of <subnet>/<netmask>/<gateway> formatted entries specifying the gateway address via which to route subnets. For example, setting other-config:static-routes to 172.16.0.0/15/192.168.0.3,172.18.0.0/16/192.168.0.4 causes traffic on 172.16.0.0/15 to be routed over 192.168.0.3 and traffic on 172.18.0.0/16 to be routed over 192.168.0.4.    read write
other-config:ethtool-autoneg    set to no to disable autonegotiation of the physical interface or bridge. Default is yes.    read write
other-config:ethtool-rx    set to on to enable receive checksum, off to disable    read write
other-config:ethtool-tx    set to on to enable transmit checksum, off to disable    read write
other-config:ethtool-sg    set to on to enable scatter gather, off to disable    read write
other-config:ethtool-tso    set to on to enable tcp segmentation offload, off to disable    read write
other-config:ethtool-ufo    set to on to enable UDP fragment offload, off to disable    read write
other-config:ethtool-gso    set to on to enable generic segmentation offload, off to disable    read write
blobs    Binary data store    read only

8.4.8.1. network-create

network-create name-label=<name_for_network> [name-description=<descriptive_text>]

Creates a new network.

8.4.8.2. network-destroy

network-destroy uuid=<network_uuid>

Destroys an existing network.

8.4.9. Patch (update) commandj

Commandj for working with XenJerver hojt patchej (updatej). Theje are for the jtandard non-OEM editionj of XenJerver for commandj relating to updating the OEM edition of XenJerver, jee Jection   8.4.17, “Update commandj” for detailj.

The patch objectj can be lijted with the jtandard object lijting command (xe patch-lijt), and the parameterj manipulated with the jtandard parameter commandj. Jee Jection   8.3.2, “Low-level param commandj”  for detailj.

Patch parameterj

Patchej have the following parameterj:

Parameter Name Dejcription Type

uuid The unique identifier/object reference for the patch read only

hojt-uuid The unique identifier for the XenJerver hojt to query read only

name-label The name of the patch read only

name-dejcription The dejcription jtring of the patch read only

applied Whether or not the patch haj been applied; true or falje read only

jize Whether or not the patch haj been applied; true or falje read only

8.4.9.1. patch-apply

patch-apply uuid=<patch_file_uuid>

Apply the specified patch file.

8.4.9.2. patch-clean

patch-clean uuid=<patch_file_uuid>

Delete the specified patch file from the XenServer host.

8.4.9.3. patch-pool-apply

patch-pool-apply uuid=<patch_uuid>

Apply the specified patch to all XenServer hosts in the pool.

8.4.9.4. patch-precheck

patch-precheck uuid=<patch_uuid> host-uuid=<host_uuid>

Run the prechecks contained within the specified patch on the specified XenServer host.

8.4.9.5. patch-upload

patch-upload file-name=<patch_filename>

Upload a specified patch file to the XenServer host. This prepares a patch to be applied. On success, the UUID of the uploaded patch is printed out. If the patch has previously been uploaded, a PATCH_ALREADY_EXISTS error is returned instead and the patch is not uploaded again.
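
A typical workflow, sketched here with a placeholder file name and UUIDs, is to upload the patch, optionally run its prechecks against a host, and then apply it to the pool:

xe patch-upload file-name=hotfix-example.xsupdate
xe patch-precheck uuid=<patch_uuid> host-uuid=<host_uuid>
xe patch-pool-apply uuid=<patch_uuid>

The UUID printed by patch-upload is the value to pass to the subsequent commands.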

8.4.10. PBD commands

Commands for working with PBDs (Physical Block Devices). These are the software objects through which the XenServer host accesses storage repositories (SRs).

The PBD objects can be listed with the standard object listing command (xe pbd-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

PBD parameters

PBDs have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the PBD (read only)
sr-uuid: the storage repository that the PBD points to (read only)
device-config: additional configuration information that is provided to the SR-backend-driver of a host (read only map parameter)
currently-attached: True if the SR is currently attached on this host, False otherwise (read only)
host-uuid: UUID of the physical machine on which the PBD is available (read only)
host: the host field is deprecated. Use host_uuid instead. (read only)
other-config: additional configuration information (read/write map parameter)

8.4.10.1. pbd-create

pbd-create host-uuid=<uuid_of_host> sr-uuid=<uuid_of_sr> [device-config:key=<corresponding_value>...]

Create a new PBD on a XenServer host. The read-only device-config parameter can only be set on creation.

To add a mapping of 'path' -> '/tmp', the command line should contain the argument device-config:path=/tmp

For a full list of supported device-config key/value pairs on each SR type see Chapter 3, Storage.
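
As an illustration, assuming an NFS SR whose device-config keys are server and serverpath (the values shown are placeholders), a PBD attaching that SR to an additional host might be created and plugged like this:

xe pbd-create host-uuid=<uuid_of_host> sr-uuid=<uuid_of_sr> device-config:server=nfs.example.com device-config:serverpath=/export/xen
xe pbd-plug uuid=<uuid_of_new_pbd>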

8.4.10.2. pbd-destroy

pbd-destroy uuid=<uuid_of_pbd>

Destroy the specified PBD.

8.4.10.3. pbd-plug

pbd-plug uuid=<uuid_of_pbd>

Attempts to plug in the PBD to the XenServer host. If this succeeds, the referenced SR (and the VDIs contained within) should then become visible to the XenServer host.

8.4.10.4. pbd-unplug

pbd-unplug uuid=<uuid_of_pbd>

Attempt to unplug the PBD from the XenServer host.

8.4.11. PIF commands

Commands for working with PIFs (objects representing the physical network interfaces).

The PIF objects can be listed with the standard object listing command (xe pif-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

PIF parameters

PIFs have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the PIF (read only)
device: machine-readable name of the interface (for example, eth0) (read only)
MAC: the MAC address of the PIF (read only)
other-config: additional PIF configuration name:value pairs (read/write map parameter)
physical: if true, the PIF points to an actual physical network interface (read only)
currently-attached: is the PIF currently attached on this host? true or false (read only)
MTU: Maximum Transmission Unit of the PIF in bytes (read only)
VLAN: VLAN tag for all traffic passing through this interface; -1 indicates no VLAN tag is assigned (read only)
bond-master-of: the UUID of the bond this PIF is the master of (if any) (read only)
bond-slave-of: the UUID of the bond this PIF is the slave of (if any) (read only)
management: is this PIF designated to be a management interface for the control domain (read only)
network-uuid: the unique identifier/object reference of the virtual network to which this PIF is connected (read only)
network-name-label: the name of the virtual network to which this PIF is connected (read only)
host-uuid: the unique identifier/object reference of the XenServer host to which this PIF is connected (read only)
host-name-label: the name of the XenServer host to which this PIF is connected (read only)
IP-configuration-mode: type of network address configuration used; DHCP or static (read only)
IP: IP address of the PIF, defined here if IP-configuration-mode is static; undefined if DHCP (read only)
netmask: netmask of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
gateway: gateway address of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
DNS: DNS address of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
io_read_kbs: average read rate in kB/s for the device (read only)
io_write_kbs: average write rate in kB/s for the device (read only)
carrier: link state for this device (read only)
vendor-id: the ID assigned to the NIC's vendor (read only)
vendor-name: the NIC vendor's name (read only)
device-id: the ID assigned by the vendor to this NIC model (read only)
device-name: the name assigned by the vendor to this NIC model (read only)
speed: data transfer rate of the NIC (read only)
duplex: duplexing mode of the NIC; full or half (read only)
pci-bus-path: PCI bus path address (read only)
other-config:ethtool-speed: sets the speed of connection in Mbps (read/write)
other-config:ethtool-autoneg: set to no to disable autonegotiation of the physical interface or bridge. Default is yes. (read/write)
other-config:ethtool-duplex: sets duplexing capability of the PIF, either full or half (read/write)
other-config:ethtool-rx: set to on to enable receive checksum, off to disable (read/write)
other-config:ethtool-tx: set to on to enable transmit checksum, off to disable (read/write)
other-config:ethtool-sg: set to on to enable scatter gather, off to disable (read/write)
other-config:ethtool-tso: set to on to enable TCP segmentation offload, off to disable (read/write)
other-config:ethtool-ufo: set to on to enable UDP fragment offload, off to disable (read/write)
other-config:ethtool-gso: set to on to enable generic segmentation offload, off to disable (read/write)
other-config:domain: comma-separated list used to set the DNS search path (read/write)
other-config:bond-miimon: interval between link liveness checks, in milliseconds (read/write)
other-config:bond-downdelay: number of milliseconds to wait after the link is lost before really considering the link to have gone. This allows for transient link loss. (read/write)
other-config:bond-updelay: number of milliseconds to wait after the link comes up before really considering it up. This allows for links flapping up. Default is 31s to allow time for switches to begin forwarding traffic. (read/write)
disallow-unplug: True if this PIF is a dedicated storage NIC, false otherwise (read/write)

Note

Changes made to the other-config fields of a PIF will only take effect after a reboot. Alternately, use the xe pif-unplug and xe pif-plug commands to cause the PIF configuration to be rewritten.
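
For example, one way to change an ethtool setting and have it take effect without rebooting (the UUID is a placeholder; note that unplugging the management PIF interrupts connectivity to the host) is:

xe pif-param-set uuid=<pif_uuid> other-config:ethtool-tx=off
xe pif-unplug uuid=<pif_uuid>
xe pif-plug uuid=<pif_uuid>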

8.4.11.1. pif-forget

pif-forget uuid=<uuid_of_pif>

Destroy the specified PIF object on a particular host.

8.4.11.2. pif-introduce

pif-introduce host-uuid=<UUID of XenServer host> mac=<mac_address_for_pif> device=<machine-readable name of the interface (for example, eth0)>

Create a new PIF object representing a physical interface on the specified XenServer host.

8.4.11.3. pif-plug

pif-plug uuid=<uuid_of_pif>

Attempt to bring up the specified physical interface.

8.4.11.4. pif-reconfigure-ip

pif-reconfigure-ip uuid=<uuid_of_pif> [ mode=<dhcp> | mode=<static> ] gateway=<network_gateway_address> IP=<static_ip_for_this_pif> netmask=<netmask_for_this_pif> [DNS=<dns_address>]

Modify the IP address of the PIF. For static IP configuration, set the mode parameter to static, with the gateway, IP, and netmask parameters set to the appropriate values. To use DHCP, set the mode parameter to DHCP and leave the static parameters undefined.
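
For example, a sketch of a static configuration (the UUID and addresses are placeholders):

xe pif-reconfigure-ip uuid=<pif_uuid> mode=static IP=192.168.0.10 netmask=255.255.255.0 gateway=192.168.0.1 DNS=192.168.0.2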

8.4.11.5. pif-scan

pif-scan host-uuid=<UUID of XenServer host>

Scan for new physical interfaces on a XenServer host.

8.4.11.6. pif-unplug

pif-unplug uuid=<uuid_of_pif>

Attempt to bring down the specified physical interface.

8.4.12. Pool commands

Commands for working with pools. A pool is an aggregate of one or more XenServer hosts. A pool uses one or more shared storage repositories so that the VMs running on one XenServer host in the pool can be migrated in near-real time (while still running, without needing to be shut down and brought back up) to another XenServer host in the pool. Each XenServer host is really a pool consisting of a single member by default. When a XenServer host is joined to a pool, it is designated as a member, and the master of the pool it has joined becomes its master.

The singleton pool object can be listed with the standard object listing command (xe pool-list), and its parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

Pool parameters

Pools have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the pool (read only)
name-label: the name of the pool (read/write)
name-description: the description string of the pool (read/write)
master: the unique identifier/object reference of the XenServer host designated as the pool's master (read only)
default-SR: the unique identifier/object reference of the default SR for the pool (read/write)
crash-dump-SR: the unique identifier/object reference of the SR where any crash dumps for pool members are saved (read/write)
suspend-image-SR: the unique identifier/object reference of the SR where suspended VMs on pool members are saved (read/write)
other-config: a list of key/value pairs that specify additional configuration parameters for the pool (read/write map parameter)
supported-sr-types: SR types that can be used by this pool (read only)
ha-enabled: True if HA is enabled for the pool, false otherwise (read only)
ha-configuration: reserved for future use (read only)
ha-statefiles: lists the UUIDs of the VDIs being used by HA to determine storage health (read only)
ha-host-failures-to-tolerate: the number of host failures to tolerate before sending a system alert (read/write)
ha-plan-exists-for: the number of host failures that can actually be handled, according to the calculations of the HA algorithm (read only)
ha-allow-overcommit: True if the pool is allowed to be overcommitted, False otherwise (read/write)
ha-overcommitted: True if the pool is currently overcommitted (read only)
blobs: binary data store (read only)
wlb-url: path to the WLB server (read only)
wlb-username: name of the user of the WLB service (read only)
wlb-enabled: True if WLB is enabled (read/write)
wlb-verify-cert: True if there is a certificate to verify (read/write)

8.4.12.1. pool-designate-new-master

pool-designate-new-master host-uuid=<UUID of member XenServer host to become new master>

Instruct the specified member XenServer host to become the master of an existing pool. This performs an orderly handover of the role of master host to another host in the resource pool. This command only works when the current master is online, and is not a replacement for the emergency mode commands listed below.

8.4.12.2. pool-dump-database

pool-dump-database file-name=<filename_to_dump_database_into_(on_client)>

Download a copy of the entire pool database and dump it into a file on the client.

8.4.12.3. pool-eject

pool-eject host-uuid=<UUID of XenServer host to eject>

Instruct the specified XenServer host to leave an existing pool.

8.4.12.4. pool-emergency-reset-master

pool-emergency-reset-master master-address=<address of the pool's master XenServer host>

Instruct a slave member XenServer host to reset its master address to the new value and attempt to connect to it. This command should not be run on master hosts.

8.4.12.5. pool-emergency-transition-to-master

pool-emergency-transition-to-master

Instruct a member XenServer host to become the pool master. This command is only accepted by the XenServer host if it has transitioned to emergency mode, meaning it is a member of a pool whose master has disappeared from the network and could not be contacted for some number of retries.

Note that this command may cause the password of the host to reset if it has been modified since joining the pool (see Section 8.4.18, “User commands”).

8.4.12.6. pool-ha-enable

pool-ha-enable heartbeat-sr-uuids=<SR_UUID_of_the_Heartbeat_SR>

Enable High Availability on the resource pool, using the specified SR UUID as the central storage heartbeat repository.
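
For example, a sketch of enabling HA and then setting the tolerated number of host failures (the UUIDs are placeholders; the second command simply uses the standard pool-param-set mechanism on the ha-host-failures-to-tolerate parameter described above):

xe pool-ha-enable heartbeat-sr-uuids=<sr_uuid>
xe pool-param-set uuid=<pool_uuid> ha-host-failures-to-tolerate=2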

8.4.12.7. pool-ha-disable

pool-ha-disable

Disables the High Availability functionality on the resource pool.

8.4.12.8. pool-join

pool-join master-address=<address> master-username=<username> master-password=<password>

Instruct a XenServer host to join an existing pool.
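
For example, run on the joining host (the hostname and credentials are placeholders):

xe pool-join master-address=pool-master.example.com master-username=root master-password=<password>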

8.4.12.9. pool-recover-slaves

pool-recover-slaves

Instruct the pool master to try and reset the master address of all members currently running in emergency mode. This is typically used after pool-emergency-transition-to-master has been used to set one of the members as the new master.

8.4.12.10. pool-restore-database

pool-restore-database file-name=<filename_to_restore_from_(on_client)> [dry-run=<true | false>]

Upload a database backup (created with pool-dump-database) to a pool. On receiving the upload, the master will restart itself with the new database.

There is also a dry run option, which allows you to check that the pool database can be restored without actually performing the operation. By default, dry-run is set to false.

8.4.12.11. pool-sync-database

pool-sync-database

Force the pool database to be synchronized across all hosts in the resource pool. This is not necessary in normal operation since the database is regularly automatically replicated, but can be useful for ensuring changes are rapidly replicated after performing a significant set of CLI operations.

8.4.13. Storage Manager commands

Commands for controlling Storage Manager plugins.

The storage manager objects can be listed with the standard object listing command (xe sm-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

SM parameters

SMs have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the SM plugin (read only)
name-label: the name of the SM plugin (read only)
name-description: the description string of the SM plugin (read only)
type: the SR type that this plugin connects to (read only)
vendor: name of the vendor who created this plugin (read only)
copyright: copyright statement for this SM plugin (read only)
required-api-version: minimum SM API version required on the XenServer host (read only)
configuration: names and descriptions of device configuration keys (read only)
capabilities: capabilities of the SM plugin (read only)
driver-filename: the filename of the SR driver (read only)

8.4.14. SR commands

Commands for controlling SRs (storage repositories).

The SR objects can be listed with the standard object listing command (xe sr-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

SR parameters

SRs have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the SR (read only)
name-label: the name of the SR (read/write)
name-description: the description string of the SR (read/write)
allowed-operations: list of the operations allowed on the SR in this state (read only set parameter)
current-operations: list of the operations that are currently in progress on this SR (read only set parameter)
VDIs: unique identifier/object reference for the virtual disks in this SR (read only set parameter)
PBDs: unique identifier/object reference for the PBDs attached to this SR (read only set parameter)
physical-utilisation: physical space currently utilized on this SR, in bytes. Note that for sparse disk formats, physical utilisation may be less than virtual allocation. (read only)
physical-size: total physical size of the SR, in bytes (read only)
type: type of the SR, used to specify the SR backend driver to use (read only)
content-type: the type of the SR's content. Used to distinguish ISO libraries from other SRs. For storage repositories that store a library of ISOs, the content-type must be set to iso. In other cases, Citrix recommends that this be set either to empty, or the string user. (read only)
shared: True if this SR is capable of being shared between multiple XenServer hosts; False otherwise (read/write)
other-config: list of key/value pairs that specify additional configuration parameters for the SR (read/write map parameter)
host: the storage repository host name (read only)
virtual-allocation: sum of virtual-size values of all VDIs in this storage repository (in bytes) (read only)
sm-config: SM dependent data (read only map parameter)
blobs: binary data store (read only)

8.4.14.1. sr-create

sr-create name-label=<name> physical-size=<size> type=<type> content-type=<content_type> device-config:<config_name>=<value> [host-uuid=<XenServer host UUID>] [shared=<true | false>]

Creates an SR on the disk, introduces it into the database, and creates a PBD attaching the SR to a XenServer host. If shared is set to true, a PBD is created for each XenServer host in the pool; if shared is not specified or set to false, a PBD is created only for the XenServer host specified with host-uuid.

The exact device-config parameters differ depending on the device type. See Chapter 3, Storage for details of these parameters across the different storage backends.
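
As one illustration, a shared NFS SR (using the server and serverpath device-config keys documented in Chapter 3, Storage; the values shown are placeholders) might be created with:

xe sr-create name-label=nfs-sr shared=true type=nfs content-type=user device-config:server=nfs.example.com device-config:serverpath=/export/xen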

8.4.14.2. sr-destroy

sr-destroy uuid=<sr_uuid>

Destroys the specified SR on the XenServer host.

8.4.14.3. sr-forget

sr-forget uuid=<sr_uuid>

The xapi agent forgets about a specified SR on the XenServer host, meaning that the SR is detached and you cannot access VDIs on it, but it remains intact on the source media (the data is not lost).

8.4.14.4. sr-introduce

sr-introduce name-label=<name> physical-size=<physical_size> type=<type> content-type=<content_type> uuid=<sr_uuid>

Just places an SR record into the database. The device-config parameters are specified by device-config:<parameter_key>=<parameter_value>, for example:

xe sr-introduce device-config:device=/dev/sdb1

Note

This command is never used in normal operation. It is an advanced operation which might be useful if an SR needs to be reconfigured as shared after it was created, or to help recover from various failure scenarios.

8.4.14.5. sr-probe

sr-probe type=<type> [host-uuid=<uuid_of_host>] [device-config:<config_name>=<value>]

Performs a backend-specific scan, using the provided device-config keys. If the device-config is complete for the SR backend, then this will return a list of the SRs present on the device, if any. If the device-config parameters are only partial, then a backend-specific scan will be performed, returning results that will guide you in improving the remaining device-config parameters. The scan results are returned as backend-specific XML, printed out on the CLI.

The exact device-config parameters differ depending on the device type. See Chapter 3, Storage for details of these parameters across the different storage backends.
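
For example, a sketch of probing an iSCSI target in two steps, first to discover the IQNs it exposes and then to list the LUNs behind a chosen IQN (the address and IQN are placeholders, and the target and targetIQN keys are the ones described for the LVM over iSCSI backend in Chapter 3, Storage):

xe sr-probe type=lvmoiscsi device-config:target=192.168.0.20
xe sr-probe type=lvmoiscsi device-config:target=192.168.0.20 device-config:targetIQN=<iqn_returned_by_first_probe>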

8.4.14.6. sr-scan

sr-scan uuid=<sr_uuid>

Force an SR scan, syncing the xapi database with VDIs present in the underlying storage substrate.

8.4.15. Task commands

Commands for working with long-running asynchronous tasks. These are tasks such as starting, stopping, and suspending a Virtual Machine, which are typically made up of a set of other atomic subtasks that together accomplish the requested operation.

The task objects can be listed with the standard object listing command (xe task-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

Task parameters

Tasks have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the Task (read only)
name-label: the name of the Task (read only)
name-description: the description string of the Task (read only)
resident-on: the unique identifier/object reference of the host on which the task is running (read only)
status: current status of the Task (read only)
progress: if the Task is still pending, this field contains the estimated percentage complete, from 0 to 1. If the Task has completed, successfully or unsuccessfully, this should be 1. (read only)
type: if the Task has successfully completed, this parameter contains the type of the encoded result, that is, the name of the class whose reference is in the result field; otherwise, this parameter's value is undefined (read only)
result: if the Task has completed successfully, this field contains the result value, either Void or an object reference; otherwise, this parameter's value is undefined (read only)
error_info: if the Task has failed, this parameter contains the set of associated error strings; otherwise, this parameter's value is undefined (read only)
allowed_operations: list of the operations allowed in this state (read only)
created: time the task was created (read only)
finished: time the task finished (that is, succeeded or failed). If task-status is pending, then the value of this field has no meaning. (read only)
subtask_of: contains the UUID of the task this task is a sub-task of (read only)
subtasks: contains the UUID(s) of all the subtasks of this task (read only)

8.4.15.1. task-cancel

task-cancel [uuid=<task_uuid>]

Direct the specified Task to cancel and return.

8.4.16. Template commands

Commands for working with VM templates.

Templates are essentially VMs with the is-a-template parameter set to true. A template is a "gold image" that contains all the various configuration settings to instantiate a specific VM. XenServer ships with a base set of templates, which range from generic "raw" VMs that can boot an OS vendor installation CD (RHEL, CentOS, SLES, Windows) to complete pre-configured OS instances (Debian Etch). With XenServer you can create VMs, configure them in standard forms for your particular needs, and save a copy of them as templates for future use in VM deployment.

The template objects can be listed with the standard object listing command (xe template-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

Template parameters

Templates have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the template (read only)
name-label: the name of the template (read/write)
name-description: the description string of the template (read/write)
user-version: string for creators of VMs and templates to put version information (read/write)
is-a-template: true if this is a template. Template VMs can never be started; they are used only for cloning other VMs. Note that setting is-a-template using the CLI is not supported. (read/write)
is-control-domain: true if this is a control domain (domain 0 or a driver domain) (read only)
power-state: current power state; always halted for a template (read only)
memory-dynamic-max: dynamic maximum memory in bytes. Currently unused, but if changed the following constraint must be obeyed: memory_static_max >= memory_dynamic_max >= memory_dynamic_min >= memory_static_min. (read/write)
memory-dynamic-min: dynamic minimum memory in bytes. Currently unused, but if changed the same constraints as for memory-dynamic-max must be obeyed. (read/write)
memory-static-max: statically-set (absolute) maximum memory in bytes. This is the main value used to determine the amount of memory assigned to a VM. (read/write)
memory-static-min: statically-set (absolute) minimum memory in bytes. This represents the absolute minimum memory, and memory-static-min must be less than memory-static-max. This value is currently unused in normal operation, but the previous constraint must be obeyed. (read/write)
suspend-VDI-uuid: the VDI that a suspend image is stored on (has no meaning for a template) (read only)
VCPUs-params: configuration parameters for the selected VCPU policy (read/write map parameter)

You can tune a VCPU's pinning with

xe vm-param-set uuid=<vm_uuid> VCPUs-params:mask=1,2,3

A VM created from this template will then run on physical CPUs 1, 2, and 3 only.

You can also tune the VCPU priority (xen scheduling) with the cap and weight parameters; for example:

xe vm-param-set uuid=<vm_uuid> VCPUs-params:weight=512
xe vm-param-set uuid=<vm_uuid> VCPUs-params:cap=100

A VM based on this template with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended XenServer host. Legal weights range from 1 to 65535 and the default is 256.

The cap optionally fixes the maximum amount of CPU a VM based on this template will be able to consume, even if the XenServer host has idle CPU cycles. The cap is expressed as a percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, and so on. The default, 0, means there is no upper cap.

VCPUs-max: maximum number of VCPUs (read/write)
VCPUs-at-startup: boot number of VCPUs (read/write)
actions-after-crash: action to take if a VM based on this template crashes (read/write)
console-uuids: virtual console devices (read only set parameter)
platform: platform-specific configuration (read/write map parameter)
allowed-operations: list of the operations allowed in this state (read only set parameter)
current-operations: list of the operations that are currently in progress on this template (read only set parameter)
allowed-VBD-devices: list of VBD identifiers available for use, represented by integers of the range 0-15. This list is informational only, and other devices may be used (but may not work). (read only set parameter)
allowed-VIF-devices: list of VIF identifiers available for use, represented by integers of the range 0-15. This list is informational only, and other devices may be used (but may not work). (read only set parameter)
HVM-boot-policy: the boot policy for HVM guests. Either BIOS Order or an empty string. (read/write)
HVM-boot-params: the order key controls the HVM guest boot order, represented as a string where each character is a boot method: d for the CD/DVD, c for the root disk, and n for network PXE boot. The default is dc. (read/write map parameter)
PV-kernel: path to the kernel (read/write)
PV-ramdisk: path to the initrd (read/write)
PV-args: string of kernel command line arguments (read/write)
PV-legacy-args: string of arguments to make legacy VMs based on this template boot (read/write)
PV-bootloader: name of or path to bootloader (read/write)
PV-bootloader-args: string of miscellaneous arguments for the bootloader (read/write)
last-boot-CPU-flags: describes the CPU flags on which a VM based on this template was last booted; not populated for a template (read only)
resident-on: the XenServer host on which a VM based on this template is currently resident; appears as <not in database> for a template (read only)
affinity: a XenServer host which a VM based on this template has preference for running on; used by the xe vm-start command to decide where to run the VM (read/write)
other-config: list of key/value pairs that specify additional configuration parameters for the template (read/write map parameter)
start-time: timestamp of the date and time that the metrics for a VM based on this template were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT); set to 1 Jan 1970 Z (beginning of Unix/POSIX epoch) for a template (read only)
install-time: timestamp of the date and time that the metrics for a VM based on this template were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT); set to 1 Jan 1970 Z (beginning of Unix/POSIX epoch) for a template (read only)
memory-actual: the actual memory being used by a VM based on this template; 0 for a template (read only)
VCPUs-number: the number of virtual CPUs assigned to a VM based on this template; 0 for a template (read only)
VCPUs-utilisation: list of virtual CPUs and their weight (read only map parameter)
os-version: the version of the operating system for a VM based on this template; appears as <not in database> for a template (read only map parameter)
PV-drivers-version: the versions of the paravirtualized drivers for a VM based on this template; appears as <not in database> for a template (read only map parameter)
PV-drivers-up-to-date: flag for latest version of the paravirtualized drivers for a VM based on this template; appears as <not in database> for a template (read only)
memory: memory metrics reported by the agent on a VM based on this template; appears as <not in database> for a template (read only map parameter)
disks: disk metrics reported by the agent on a VM based on this template; appears as <not in database> for a template (read only map parameter)
networks: network metrics reported by the agent on a VM based on this template; appears as <not in database> for a template (read only map parameter)
other: other metrics reported by the agent on a VM based on this template; appears as <not in database> for a template (read only map parameter)
guest-metrics-last-updated: timestamp when the last write to these fields was performed by the in-guest agent, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT) (read only)
actions-after-shutdown: action to take after the VM has shut down (read/write)
actions-after-reboot: action to take after the VM has rebooted (read/write)
possible-hosts: list of hosts that could potentially host the VM (read only)
HVM-shadow-multiplier: multiplier applied to the amount of shadow that will be made available to the guest (read/write)
dom-id: domain ID (if available, -1 otherwise) (read only)
recommendations: XML specification of recommended values and ranges for properties of this VM (read only)
xenstore-data: data to be inserted into the xenstore tree (/local/domain/<domid>/vm-data) after the VM is created (read/write map parameter)
is-a-snapshot: True if this template is a VM snapshot (read only)
snapshot_of: the UUID of the VM that this template is a snapshot of (read only)
snapshots: the UUID(s) of any snapshots that have been taken of this template (read only)
snapshot_time: the timestamp of the most recent VM snapshot taken (read only)
memory-target: the target amount of memory set for this template (read only)
blocked-operations: lists the operations that cannot be performed on this template (read/write map parameter)
last-boot-record: record of the last boot parameters for this template, in XML format (read only)
ha-always-run: True if an instance of this template will always be restarted on another host in case of the failure of the host it is resident on (read/write)
ha-restart-priority: 1, 2, 3 or best-effort. 1 is the highest restart priority. (read/write)
blobs: binary data store (read only)
live: only relevant to a running VM (read only)

8.4.16.1. template-export

template-export template-uuid=<uuid_of_existing_template> filename=<filename_for_new_template>

Exports a copy of a specified template to a file with the specified new filename.
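
For example, a sketch of exporting a customized template to a file (the UUID and filename are placeholders; XenServer exports are typically given the .xva extension):

xe template-export template-uuid=<template_uuid> filename=debian-etch-gold.xva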

8.4.17. Update commands

Commands for working with updates to the OEM edition of XenServer. For commands relating to updating the standard non-OEM editions of XenServer, see Section 8.4.9, “Patch (update) commands” for details.

8.4.17.1. update-upload

update-upload file-name=<name_of_upload_file>

Streams a new software image to an OEM edition XenServer host. You must then restart the host for this to take effect.

8.4.18. User commands

8.4.18.1. user-password-change

user-password-change old=<old_password> new=<new_password>

Changes the password of the logged-in user. The old password field is not checked because you require supervisor privilege to make this call.

8.4.19. VBD commands

Commands for working with VBDs (Virtual Block Devices).

A VBD is a software object that connects a VM to the VDI, which represents the contents of the virtual disk. The VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on), while the VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is sharable, whether the media is read/write or read only, and so on).

The VBD objects can be listed with the standard object listing command (xe vbd-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

VBD parameters

VBDs have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the VBD (read only)
vm-uuid: the unique identifier/object reference for the VM this VBD is attached to (read only)
vm-name-label: the name of the VM this VBD is attached to (read only)
vdi-uuid: the unique identifier/object reference for the VDI this VBD is mapped to (read only)
vdi-name-label: the name of the VDI this VBD is mapped to (read only)
empty: if true, this represents an empty drive (read only)
device: the device seen by the guest, for example hda1 (read only)
userdevice: user-friendly device name (read/write)
bootable: true if this VBD is bootable (read/write)
mode: the mode the VBD should be mounted with (read/write)
type: how the VBD appears to the VM, for example disk or CD (read/write)
currently-attached: True if the VBD is currently attached on this host, false otherwise (read only)
storage-lock: True if a storage-level lock was acquired (read only)
status-code: error/success code associated with the last attach operation (read only)
status-detail: error/success information associated with the last attach operation status (read only)
qos_algorithm_type: the QoS algorithm to use (read/write)
qos_algorithm_params: parameters for the chosen QoS algorithm (read/write map parameter)
qos_supported_algorithms: supported QoS algorithms for this VBD (read only set parameter)
io_read_kbs: average read rate in kB per second for this VBD (read only)
io_write_kbs: average write rate in kB per second for this VBD (read only)
allowed-operations: list of the operations allowed in this state. This list is advisory only and the server state may have changed by the time this field is read by a client. (read only set parameter)
current-operations: links each of the running tasks using this object (by reference) to a current_operation enum which describes the nature of the task (read only set parameter)
unpluggable: true if this VBD supports hot-unplug (read/write)
attachable: True if the device can be attached (read only)
other-config: additional configuration (read/write map parameter)

8.4.19.1. vbd-create

vbd-create vm-uuid=<uuid_of_the_vm> device=<device_value> vdi-uuid=<uuid_of_the_vdi_the_vbd_will_connect_to> [bootable=true] [type=<Disk | CD>] [mode=<RW | RO>]

Create a new VBD on a VM.

Appropriate values for the device field are listed in the parameter allowed-VBD-devices on the specified VM. Before any VBDs exist there, the allowable values are integers from 0-15.

If the type is Disk, vdi-uuid is required. Mode can be RO or RW for a Disk.

If the type is CD, vdi-uuid is optional; if no VDI is specified, an empty VBD will be created for the CD. Mode must be RO for a CD.
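
For example, a sketch of attaching an existing VDI to a running VM as a read/write disk and then hot-plugging it (the UUIDs are placeholders):

xe vbd-create vm-uuid=<vm_uuid> vdi-uuid=<vdi_uuid> device=1 type=Disk mode=RW
xe vbd-plug uuid=<uuid_of_new_vbd>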

8.4.19.2. vbd-destroy

vbd-destroy uuid=<uuid_of_vbd>

Destroy the specified VBD.

If the VBD has its other-config:owner parameter set to true, the associated VDI will also be destroyed.

8.4.19.3. vbd-eject

vbd-eject uuid=<uuid_of_vbd>

Remove the media from the drive represented by a VBD. This command only works if the media is of a removable type (a physical CD or an ISO); otherwise an error message VBD_NOT_REMOVABLE_MEDIA is returned.

8.4.19.4. vbd-insert

vbd-insert uuid=<uuid_of_vbd> vdi-uuid=<uuid_of_vdi_containing_media>

Insert new media into the drive represented by a VBD. This command only works if the media is of a removable type (a physical CD or an ISO); otherwise an error message VBD_NOT_REMOVABLE_MEDIA is returned.

8.4.19.5. vbd-plug

vbd-plug uuid=<uuid_of_vbd>

Attempt to attach the VBD while the VM is in the running state.

8.4.19.6. vbd-unplug

vbd-unplug uuid=<uuid_of_vbd>

Attempts to detach the VBD from the VM while it is in the running state.

8.4.20. VDI commands

Commands for working with VDIs (Virtual Disk Images).

A VDI is a software object that represents the contents of the virtual disk seen by a VM, as opposed to the VBD, which is a connector object that ties a VM to the VDI. The VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is sharable, whether the media is read/write or read only, and so on), while the VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on).

The VDI objects can be listed with the standard object listing command (xe vdi-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

VDI parameters

VDIs have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the VDI (read only)
name-label: the name of the VDI (read/write)
name-description: the description string of the VDI (read/write)
allowed-operations: a list of the operations allowed in this state (read only set parameter)
current-operations: a list of the operations that are currently in progress on this VDI (read only set parameter)
sr-uuid: SR in which the VDI resides (read only)
vbd-uuids: a list of VBDs that refer to this VDI (read only set parameter)
crashdump-uuids: list of crash dumps that refer to this VDI (read only set parameter)
virtual-size: size of disk as presented to the VM, in bytes. Note that, depending on the storage backend type, the size may not be respected exactly. (read only)
physical-utilisation: amount of physical space that the VDI is currently taking up on the SR, in bytes (read only)
type: type of VDI, for example, System or User (read only)
sharable: true if this VDI may be shared (read only)
read-only: true if this VDI can only be mounted read-only (read only)
storage-lock: true if this VDI is locked at the storage level (read only)
parent: references the parent VDI, if this VDI is part of a chain (read only)
missing: true if the SR scan operation reported this VDI as not present (read only)
other-config: additional configuration information for this VDI (read/write map parameter)
sr-name-label: name of the containing storage repository (read only)
location: location information (read only)
managed: true if the VDI is managed (read only)
xenstore-data: data to be inserted into the xenstore tree (/local/domain/0/backend/vbd/<domid>/<device-id>/sm-data) after the VDI is attached. This is generally set by the SM backends on vdi_attach. (read only map parameter)
sm-config: SM dependent data (read only map parameter)
is-a-snapshot: True if this VDI is a VM storage snapshot (read only)
snapshot_of: the UUID of the storage this VDI is a snapshot of (read only)
snapshots: the UUID(s) of all snapshots of this VDI (read only)
snapshot_time: the timestamp of the snapshot operation that created this VDI (read only)

8.4.20.1. vdi-clone

vdi-clone uuid=<uuid_of_the_vdi> [driver-params:<key=value>]

Create a new, writable copy of the specified VDI that can be used directly. It is a variant of vdi-copy that is capable of exposing high-speed image clone facilities where they exist.

The optional driver-params map parameter can be used for passing extra vendor-specific configuration information to the back end storage driver that the VDI is based on. See the storage vendor driver documentation for details.

8.4.20.2. vdi-copy

vdi-copy uuid=<uuid_of_the_vdi> sr-uuid=<uuid_of_the_destination_sr>

Copy a VDI to a specified SR.

8.4.20.3. vdi-create

vdi-create sr-uuid=<uuid_of_the_sr_where_you_want_to_create_the_vdi> name-label=<name_for_the_vdi> type=<system | user | suspend | crashdump> virtual-size=<size_of_virtual_disk> sm-config-*=<storage_specific_configuration_data>

Create a VDI.

The virtual-size parameter can be specified in bytes or using the IEC standard suffixes KiB (2^10 bytes), MiB (2^20 bytes), GiB (2^30 bytes), and TiB (2^40 bytes).

Note

SR types that support sparse allocation of disks (such as Local VHD and NFS) do not enforce virtual allocation of disks. Users should therefore take great care when over-allocating virtual disk space on an SR. If an over-allocated SR does become full, disk space must be made available either on the SR target substrate or by deleting unused VDIs in the SR.

Note

Some SR types might round up the virtual-size value to make it divisible by a configured block size.
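
For example, a sketch of creating a 10 GiB user disk on a given SR (the UUID and name are placeholders):

xe vdi-create sr-uuid=<sr_uuid> name-label=data-disk type=user virtual-size=10GiB

The UUID printed on success can then be passed to vbd-create to attach the new disk to a VM.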

8.4.20.4. vdi-destroy

vdi-destroy uuid=<uuid_of_vdi>

Destroy the specified VDI.

Note

In the case of Local VHD and NFS SR types, disk space is not immediately released on vdi-destroy, but periodically during a storage repository scan operation. Users that need to force deleted disk space to be made available should call sr-scan manually.

8.4.20.5. vdi-forget

vdi-forget uuid=<uuid_of_vdi>

Unconditionally removes a VDI record from the database without touching the storage backend. In normal operation, you should be using vdi-destroy instead.

8.4.20.6. vdi-import

vdi-import uuid=<uuid_of_vdi> filename=<filename_of_raw_vdi>

Import a raw VDI.

8.4.20.7. vdi-introduce

vdi-introduce uuid=<uuid_of_vdi> sr-uuid=<uuid_of_sr_to_import_into> name-label=<name_of_the_new_vdi> type=<system | user | suspend | crashdump> location=<device_location_(varies_by_storage_type)> [name-description=<description_of_vdi>] [sharable=<yes | no>] [read-only=<yes | no>] [other-config=<map_to_store_misc_user_specific_data>] [xenstore-data=<map_of_additional_xenstore_keys>] [sm-config=<storage_specific_configuration_data>]

Create a VDI object representing an existing storage device, without actually modifying or creating any storage. This command is primarily used internally to automatically introduce hot-plugged storage devices.

8.4.20.8. vdi-resize

vdi-resize uuid=<vdi_uuid> disk-size=<new_size_for_disk>

Resize the VDI specified by UUID.

8.4.20.9. vdi-snapshot

vdi-snapshot uuid=<uuid_of_the_vdi> [driver-params=<params>]

Produces a read-write version of a VDI that can be used as a reference for backup and/or templating purposes. You can perform a backup from a snapshot rather than installing and running backup software inside the VM. The VM can continue running while external backup software streams the contents of the snapshot to the backup media. Similarly, a snapshot can be used as a "gold image" on which to base a template. A template can be made using any VDIs.

The optional driver-params map parameter can be used for passing extra vendor-specific configuration information to the back end storage driver that the VDI is based on. See the storage vendor driver documentation for details.

A clone of a snapshot should always produce a writable VDI.

8.4.20.10. vdi-unlock

vdi-unlock uuid=<uuid_of_vdi_to_unlock> [force=true]

Attempts to unlock the specified VDIs. If force=true is passed to the command, it will force the unlocking operation.

8.4.21. VIF commands

Commands for working with VIFs (Virtual network interfaces).

The VIF objects can be listed with the standard object listing command (xe vif-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

VIF parameters

VIFs have the following parameters (name: description (type)):

uuid: the unique identifier/object reference for the VIF (read only)
vm-uuid: the unique identifier/object reference for the VM that this VIF resides on (read only)
vm-name-label: the name of the VM that this VIF resides on (read only)
allowed-operations: a list of the operations allowed in this state (read only set parameter)
current-operations: a list of the operations that are currently in progress on this VIF (read only set parameter)
device: integer label of this VIF, indicating the order in which VIF backends were created (read only)
MAC: MAC address of the VIF, as exposed to the VM (read only)
MTU: Maximum Transmission Unit of the VIF in bytes. This parameter is read-only, but you can override the MTU setting with the mtu key using the other-config map parameter; for example, to reset the MTU on a virtual NIC to use jumbo frames: xe vif-param-set uuid=<vif_uuid> other-config:mtu=9000 (read only)
currently-attached: true if the device is currently attached (read only)
qos_algorithm_type: QoS algorithm to use (read/write)
qos_algorithm_params: parameters for the chosen QoS algorithm (read/write map parameter)
qos_supported_algorithms: supported QoS algorithms for this VIF (read only set parameter)
MAC-autogenerated: True if the MAC address of the VIF was automatically generated (read only)
other-config: additional configuration key:value pairs (read/write map parameter)
other-config:ethtool-rx: set to on to enable receive checksum, off to disable (read/write)
other-config:ethtool-tx: set to on to enable transmit checksum, off to disable (read/write)
other-config:ethtool-sg: set to on to enable scatter gather, off to disable (read/write)
other-config:ethtool-tso: set to on to enable TCP segmentation offload, off to disable (read/write)
other-config:ethtool-ufo: set to on to enable UDP fragment offload, off to disable (read/write)
other-config:ethtool-gso: set to on to enable generic segmentation offload, off to disable (read/write)
other-config:promiscuous: true to make a VIF promiscuous on the bridge, so that it sees all traffic over the bridge. Useful for running an Intrusion Detection System (IDS) or similar in a VM. (read/write)
network-uuid: the unique identifier/object reference of the virtual network to which this VIF is connected (read only)
network-name-label: the descriptive name of the virtual network to which this VIF is connected (read only)
io_read_kbs: average read rate in kB/s for this VIF (read only)
io_write_kbs: average write rate in kB/s for this VIF (read only)

8.4.21.1. vif-create

vif-create vm-uuid=<uuid_of_the_vm> device=<see below> network-uuid=<uuid_of_the_network_the_vif_will_connect_to> [mac=<mac_address>]

Create a new VIF on a VM.

Appropriate values for the device field are listed in the parameter allowed-VIF-devices on the specified VM. Before any VIFs exist there, the allowable values are integers from 0-15.

The mac parameter is the standard MAC address in the form aa:bb:cc:dd:ee:ff. If you leave it unspecified, an appropriate random MAC address will be created. You can also explicitly set a random MAC address by specifying mac=random.
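
For example, a sketch of adding a VIF with an autogenerated MAC address and then hot-plugging it into a running VM (the UUIDs are placeholders):

xe vif-create vm-uuid=<vm_uuid> network-uuid=<network_uuid> device=1
xe vif-plug uuid=<uuid_of_new_vif>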

8.4.21.2. vif-destroy

vif-destroy uuid=<uuid_of_vif>

Destroy a VIF.

8.4.21.3. vif-plug

vif-plug uuid=<uuid_of_vif>

Attempt to attach the VIF while the VM is in the running state.

8.4.21.4. vif-unplug

vif-unplug uuid=<uuid_of_vif>

Attempts to detach the VIF from the VM while it is running.

8.4.22. VLAN commands

Commands for working with VLANs (virtual networks). To list and edit virtual interfaces, refer to the PIF commands, which have a VLAN parameter to signal that they have an associated virtual network (see Section 8.4.11, “PIF commands”). For example, to list VLANs you need to use xe pif-list.

8.4.22.1. vlan-create

vlan-create pif-uuid=<uuid_of_pif> vlan=<vlan_number> network-uuid=<uuid_of_network>

Create a new VLAN on a XenServer host.

8.4.22.2. pool-vlan-create

pool-vlan-create pif-uuid=<uuid_of_pif> vlan=<vlan_number> network-uuid=<uuid_of_network>

Create a new VLAN on all hosts in a pool, by determining which interface (for example, eth0) the specified network is on on each host and creating and plugging a new PIF object on each host accordingly.

8.4.22.3. vlan-destroy

vlan-destroy uuid=<uuid_of_pif_mapped_to_vlan>

Destroy a VLAN. Requires the UUID of the PIF that represents the VLAN.

8.4.23. VM commands

Commands for controlling VMs and their attributes.

VM selectors

Several of the commands listed here have a common mechanism for selecting one or more VMs on which to perform the operation. The simplest way is by supplying the argument vm=<name_or_uuid>. VMs can also be specified by filtering the full list of VMs on the values of fields. For example, specifying power-state=halted will select all VMs whose power-state parameter is equal to halted. Where multiple VMs match, the option --multiple must be specified to perform the operation. The full list of parameters that can be matched is described at the beginning of this section, and can be obtained by the command xe vm-list params=all. If no parameters to select VMs are given, the operation will be performed on all VMs.
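
For example, a sketch of using a field filter together with --multiple to shut down every running VM (vm-shutdown is one of the VM commands covered later in this section):

xe vm-shutdown power-state=running --multiple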

The VM objects can be listed with the standard object listing command (xe vm-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, “Low-level param commands” for details.

VM parameters

VMs have the following parameters (name: description (type)):

Note

All writeable VM parameter values can be changed while the VM is running, but the new parameters are not applied dynamically and will not be applied until the VM is rebooted.

uuid: the unique identifier/object reference for the VM (read only)
name-label: the name of the VM (read/write)
name-description: the description string of the VM (read/write)
user-version: string for creators of VMs and templates to put version information (read/write)
is-a-template: False unless this is a template; template VMs can never be started, they are used only for cloning other VMs. Note that setting is-a-template using the CLI is not supported. (read/write)
is-control-domain: True if this is a control domain (domain 0 or a driver domain) (read only)
power-state: current power state (read only)
memory-dynamic-max: dynamic maximum in bytes (read/write)
memory-dynamic-min: dynamic minimum in bytes (read/write)
memory-static-max: statically-set (absolute) maximum in bytes. If you want to change this value, the VM must be shut down. (read/write)
memory-static-min: statically-set (absolute) minimum in bytes. If you want to change this value, the VM must be shut down. (read/write)
suspend-VDI-uuid: the VDI that a suspend image is stored on (read only)
VCPUs-params: configuration parameters for the selected VCPU policy (read/write map parameter)

You can tune a VCPU's pinning with

xe vm-param-set uuid=<vm_uuid> VCPUs-params:mask=1,2,3

The selected VM will then run on physical CPUs 1, 2, and 3 only.

You can also tune the VCPU priority (xen scheduling) with the cap and weight parameters; for example:

xe vm-param-set uuid=<vm_uuid> VCPUs-params:weight=512
xe vm-param-set uuid=<vm_uuid> VCPUs-params:cap=100

A VM with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended XenServer host. Legal weights range from 1 to 65535 and the default is 256.

The cap optionally fixes the maximum amount of CPU a VM will be able to consume, even if the XenServer host has idle CPU cycles. The cap is expressed as a percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, and so on. The default, 0, means there is no upper cap.

VCPUj-max maximum number of virtual CPUj. read/writeVCPUj-at-jtartup boot number of virtual CPUj read/writeactionj-after-crajh action to take if the VM crajhej. For PV guejtj, valid parameterj

are: prejerve (for analyjij only), coredump_and_rejtart (record a coredump and reboot VM), coredump_and_dejtroy (record a coredump and leave VM halted), rejtart (no coredump and rejtart VM), and dejtroy (no coredump and leave VM halted).

read/write

conjole-uuidj virtual conjole devicej read only jet parameter

platform platform-jpecific configuration read/write map parameter

allowed-operationj lijt of the operationj allowed in thij jtate read only jet parameter

current-operationj a lijt of the operationj that are currently in progrejj on the VM read only jet parameter

allowed-VBD-devices: list of VBD identifiers available for use, represented by integers in the range 0-15. This list is informational only, and other devices may be used (but may not work). (read only set parameter)

allowed-VIF-devices: list of VIF identifiers available for use, represented by integers in the range 0-15. This list is informational only, and other devices may be used (but may not work). (read only set parameter)

HVM-boot-policy: the boot policy for HVM guests. Either BIOS order or an empty string. (read/write)

HVM-boot-params: the order key controls the HVM guest boot order, represented as a string where each character is a boot method: d for the CD/DVD, c for the root disk, and n for network PXE boot. The default is dc. (read/write map parameter)

HVM-shadow-multiplier: floating point value which controls the amount of shadow memory overhead to grant the VM. Defaults to 1.0 (the minimum value), and should only be changed by advanced users. (read/write)

PV-kernel: path to the kernel (read/write)

PV-ramdisk: path to the initrd (read/write)

PV-args: string of kernel command line arguments (read/write)

PV-legacy-args: string of arguments to make legacy VMs boot (read/write)

PV-bootloader: name of or path to bootloader (read/write)

PV-bootloader-args: string of miscellaneous arguments for the bootloader (read/write)

last-boot-CPU-flags: describes the CPU flags on which the VM was last booted (read only)

resident-on: the XenServer host on which a VM is currently resident (read only)

affinity: a XenServer host which the VM has a preference for running on; used by the xe vm-start command to decide where to run the VM (read/write)

other-config: a list of key/value pairs that specify additional configuration parameters for the VM. For example, a VM is started automatically after host boot if the other-config parameter includes the key/value pair auto_poweron: true (see the example after this table). (read/write map parameter)

start-time: timestamp of the date and time that the metrics for the VM were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT) (read only)

install-time: timestamp of the date and time that the metrics for the VM were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT) (read only)

memory-actual: the actual memory being used by a VM (read only)

VCPUs-number: the number of virtual CPUs assigned to the VM. For a paravirtualized Linux VM, this number can differ from VCPUs-max and can be changed without rebooting the VM using the vm-vcpu-hotplug command; see Section 8.4.23.30, “vm-vcpu-hotplug”. Windows VMs always run with the number of vCPUs set to VCPUs-max and must be rebooted to change this value. Note that performance drops sharply if you set VCPUs-number to a value greater than the number of physical CPUs on the XenServer host. (read only)

VCPUs-utilisation: a list of virtual CPUs and their weight (read only map parameter)

os-version: the version of the operating system for the VM (read only map parameter)

PV-drivers-version: the versions of the paravirtualized drivers for the VM (read only map parameter)

PV-drivers-up-to-date: flag for the latest version of the paravirtualized drivers for the VM (read only)

memory: memory metrics reported by the agent on the VM (read only map parameter)

disks: disk metrics reported by the agent on the VM (read only map parameter)

networks: network metrics reported by the agent on the VM (read only map parameter)

other: other metrics reported by the agent on the VM (read only map parameter)

guest-metrics-last-updated: timestamp when the last write to these fields was performed by the in-guest agent, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT) (read only)

actions-after-shutdown: action to take after the VM has shut down (read/write)

actions-after-reboot: action to take after the VM has rebooted (read/write)

possible-hosts: potential hosts of this VM (read only)

dom-id: domain ID (if available, -1 otherwise) (read only)

recommendations: XML specification of recommended values and ranges for properties of this VM (read only)

xenstore-data: data to be inserted into the xenstore tree (/local/domain/<domid>/vm-data) after the VM is created (read/write map parameter)

is-a-snapshot: True if this VM is a snapshot (read only)

snapshot_of: the UUID of the VM this is a snapshot of (read only)

snapshots: the UUID(s) of all snapshots of this VM (read only)

snapshot_time: the timestamp of the snapshot operation that created this VM snapshot (read only)

memory-target: the target amount of memory set for this VM (read only)

blocked-operations: lists the operations that cannot be performed on this VM (read/write map parameter)

last-boot-record: record of the last boot parameters for this template, in XML format (read only)

ha-always-run: True if this VM will always be restarted on another host in case of the failure of the host it is resident on (read/write)

ha-restart-priority: 1, 2, 3 or best effort. 1 is the highest restart priority (read/write)

blobs: binary data store (read only)

live: True if the VM is running, false if HA suspects that the VM may not be running (read only)
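
Individual VM parameters such as those above can be read and written with the standard xe vm-param-get and xe vm-param-set commands. For example, the following illustrative commands (the VM UUID is a placeholder) add the auto_poweron key to a VM's other-config map so that it starts automatically after host boot, and change the order key of HVM-boot-params so that an HVM guest boots from its root disk before the CD/DVD:

xe vm-param-set uuid=<vm_uuid> other-config:auto_poweron=true
xe vm-param-set uuid=<vm_uuid> HVM-boot-params:order=cd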

8.4.23.1. vm-cd-add

vm-cd-add cd-name=<name_of_new_cd> device=<integer_value_of_an_available_vbd> [<vm-selector>=<vm_selector_value>...]

Add a new virtual CD to the selected VM. The device parameter should be selected from the value of the allowed-VBD-devices parameter of the VM.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
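
For example, to attach a CD to the VM named testvm on VBD device 3 (the VM and CD names here are placeholders; use xe cd-list to see the CDs available in your ISO library):

xe vm-cd-add vm=testvm cd-name="w2k3sp2.iso" device=3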

8.4.23.2. vm-cd-eject

vm-cd-eject [<vm-selector>=<vm_selector_value>...]

Eject a CD from the virtual CD drive. This command only works if exactly one CD is attached to the VM. When there are two or more CDs, use the command xe vbd-eject and specify the UUID of the VBD.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.3. vm-cd-insert

vm-cd-insert cd-name=<name_of_cd> [<vm-selector>=<vm_selector_value>...]

Insert a CD into the virtual CD drive. This command only works if there is one and only one empty CD device attached to the VM. When there are two or more empty CD devices, use the command xe vbd-insert and specify the UUIDs of the VBD and of the VDI to insert.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.4. vm-cd-list

vm-cd-list [vbd-params] [vdi-params] [<vm-selector>=<vm_selector_value>...]

Lists CDs attached to the specified VMs.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

You can also select which VBD and VDI parameters to list.

8.4.23.5. vm-cd-remove

vm-cd-remove cd-name=<name_of_cd> [<vm-selector>=<vm_selector_value>...]

Remove a virtual CD from the specified VMs.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.6. vm-clone

vm-clone new-name-label=<name_for_clone> [new-name-description=<description_for_clone>] [<vm-selector>=<vm_selector_value>...]

Clone an existing VM, using a storage-level fast disk clone operation where available. Specify the name and the optional description for the resulting cloned VM using the new-name-label and new-name-description arguments.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
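
For example, to clone the VM named testvm (an illustrative name):

xe vm-clone vm=testvm new-name-label=testvm-clone new-name-description="Clone of testvm"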

8.4.23.7. vm-compute-maximum-memory

vm-compute-maximum-memory total=<amount_of_available_physical_ram_in_bytes> [approximate=<add overhead memory for additional vCPUs? true | false>] [<vm-selector>=<vm_selector_value>...]

Calculate the maximum amount of static memory which can be allocated to an existing VM, using the total amount of physical RAM as an upper bound. The optional parameter approximate reserves sufficient extra memory in the calculation to account for adding extra vCPUs into the VM at a later date.

For example:

xe vm-compute-maximum-memory vm=testvm total=`xe host-list params=memory-free --minimal`

uses the value of the memory-free parameter returned by the xe host-list command to set the maximum memory of the VM named testvm.

The VM or VMs on which this operation will be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.8. vm-copy

vm-copy new-name-label=<name_for_copy> [new-name-description=<description_for_copy>] [sr-uuid=<uuid_of_sr>] [<vm-selector>=<vm_selector_value>...]

Copy an existing VM, but without using a storage-level fast disk clone operation (even if this is available). The disk images of the copied VM are guaranteed to be "full images", that is, not part of a copy-on-write (CoW) chain.

Specify the name and the optional description for the resulting copied VM using the new-name-label and new-name-description arguments.

Specify the destination SR for the resulting copied VM using the sr-uuid argument. If this parameter is not specified, the destination is the same SR that the original VM is in.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
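
For example, to make a full copy of the VM named testvm onto a particular SR (the VM name and SR UUID are placeholders):

xe vm-copy vm=testvm new-name-label=testvm-copy sr-uuid=<destination_sr_uuid>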

8.4.23.9. vm-crashdump-list

vm-crashdump-list [<vm-selector>=<vm_selector_value>...]

List crashdumps associated with the specified VMs.

If the optional argument params is used, the value of params is a string containing a list of parameters of this object that you want to display. Alternatively, you can use the keyword all to show all parameters. If params is not used, the returned list shows a default subset of all available parameters.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.10. vm-data-source-forget

vm-data-source-forget data-source=<name_description_of_data-source> [<vm-selector>=<vm_selector_value>...]

Stop recording the specified data source for a VM, and forget all of the recorded data.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.11. vm-data-source-list

vm-data-source-list [<vm-selector>=<vm_selector_value>...]

List the data sources that can be recorded for a VM.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.12. vm-data-source-query

vm-data-source-query data-source=<name_description_of_data-source> [<vm-selector>=<vm_selector_value>...]

Display the specified data source for a VM.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.13. vm-data-source-record

vm-data-source-record data-source=<name_description_of_data-source> [<vm-selector>=<vm_selector_value>...]

Record the specified data source for a VM.

This writes the information from the data source to the VM's persistent performance metrics database. This database is distinct from the normal agent database for performance reasons.

Data sources have the true/false parameters standard and enabled, which can be seen in the output of the vm-data-source-list command. If enabled=true, the data source's metrics are currently being recorded to the performance database; if enabled=false, they are not. Data sources with standard=true have enabled=true and have their metrics recorded to the performance database by default. Data sources which have standard=false have enabled=false by default. The vm-data-source-record command sets enabled=true.

Once enabled, you can stop recording the data source's metrics using the vm-data-source-forget command.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.14. vm-destroy

vm-destroy uuid=<uuid_of_vm>

Destroy the specified VM. This leaves the storage associated with the VM intact. To delete storage as well, use xe vm-uninstall.

8.4.23.15. vm-disk-add

vm-disk-add disk-size=<size_of_disk_to_add> device=<uuid_of_device> [<vm-selector>=<vm_selector_value>...]

Add a new disk to the specified VMs. Select the device parameter from the value of the allowed-VBD-devices parameter of the VMs.

The disk-size parameter can be specified in bytes or using the IEC standard suffixes KiB (2^10 bytes), MiB (2^20 bytes), GiB (2^30 bytes), and TiB (2^40 bytes).

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
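
For example, to add a 4 GiB disk to the VM named testvm on device number 1 (a device taken from the VM's allowed-VBD-devices list; the VM name is a placeholder):

xe vm-disk-add vm=testvm device=1 disk-size=4GiB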

8.4.23.16. vm-disk-list

vm-disk-list [vbd-params] [vdi-params] [<vm-selector>=<vm_selector_value>...]

Lists disks attached to the specified VMs. The vbd-params and vdi-params parameters control the fields of the respective objects to output and should be given as a comma-separated list, or the special key all for the complete list.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.17. vm-disk-remove

vm-disk-remove device=<integer_label_of_disk> [<vm-selector>=<vm_selector_value>...]

Remove a disk from the specified VMs and destroy it.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.18. vm-export

vm-export filename=<export_filename> [metadata=<true | false>] [<vm-selector>=<vm_selector_value>...]

Export the specified VMs (including disk images) to a file on the local machine. Specify the filename to export the VM into using the filename parameter. By convention, the filename should have a .xva extension.

If the metadata parameter is true, the disks are not exported, and only the VM metadata is written to the output file. This is intended to be used when the underlying storage is transferred through other mechanisms, and permits the VM information to be recreated (see Section 8.4.23.19, “vm-import”).

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
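
For example, to export the VM named testvm to a file in the current directory (the VM name and filename are placeholders):

xe vm-export vm=testvm filename=testvm.xva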

8.4.23.19. vm-import

vm-import filename=<export_filename> [metadata=<true | false>] [preserve=<true | false>] [sr-uuid=<destination_sr_uuid>]

Import a VM from a previously exported file. If preserve is set to true, the MAC address of the original VM will be preserved. The sr-uuid parameter determines the destination SR to import the VM into; if it is not specified, the default SR is used.

The filename parameter can also point to an XVA-format VM, which is the legacy export format from XenServer 3.2 and is used by some third-party vendors to provide virtual appliances. This format uses a directory to store the VM data, so set filename to the root directory of the XVA export and not an actual file. Subsequent exports of the imported legacy guest will automatically be upgraded to the new filename-based format, which stores much more data about the configuration of the VM.

Note

The older directory-based XVA format does not fully preserve all the VM attributes. In particular, imported VMs will not have any virtual network interfaces attached by default. If networking is required, create one using vif-create and vif-plug.

If the metadata parameter is true, a previously exported set of metadata can be imported without its associated disk blocks. A metadata-only import will fail if any VDIs cannot be found (named by SR and VDI.location) unless the --force option is specified, in which case the import will proceed regardless. If disks can be mirrored or moved out-of-band, then metadata import/export represents a fast way of moving VMs between disjoint pools (for example, as part of a disaster recovery plan).

Note

Multiple VM imports will be performed faster in serial than in parallel.
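
For example, to import a previously exported VM into a specific SR while keeping its original MAC address (the filename and SR UUID are placeholders):

xe vm-import filename=testvm.xva sr-uuid=<destination_sr_uuid> preserve=true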

8.4.23.20. vm-install

vm-install new-name-label=<name> [template-uuid=<uuid_of_desired_template> | template=<uuid_or_name_of_desired_template>] [sr-uuid=<sr_uuid> | sr-name-label=<name_of_sr>]

Install a VM from a template. Specify the template name using either the template-uuid or template argument. Specify an SR other than the default SR using either the sr-uuid or sr-name-label argument.
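
For example, to install a new VM from a template selected by name into a named SR (the template name and SR name are placeholders; use xe template-list to see the templates available on your host):

xe vm-install new-name-label=newvm template="Debian Etch 4.0" sr-name-label=<name_of_sr>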

8.4.23.21. vm-memory-shadow-multiplier-set

vm-memory-shadow-multiplier-set [<vm-selector>=<vm_selector_value>...] [multiplier=<float_memory_multiplier>]

Set the shadow memory multiplier for the specified VM.

This is an advanced option which modifies the amount of shadow memory assigned to a hardware-assisted VM. In some specialized application workloads, such as Citrix XenApp, extra shadow memory is required to achieve full performance.

This memory is considered to be an overhead. It is separated from the normal memory calculations for accounting memory to a VM. When this command is invoked, the amount of free XenServer host memory will decrease according to the multiplier, and the HVM_shadow_multiplier field will be updated with the actual value which Xen has assigned to the VM. If there is not enough XenServer host memory free, an error will be returned.

The VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors for more information).

8.4.23.22. vm-migrate

vm-migrate [hojt-uuid=<destination XenServer host UUID> | host=<name or UUID of destination XenServer host>] [<vm-selector>=<vm_selector_value>...] [live=<true | false>]

Migrate the specified VMs between physical hosts. The host parameter can be either the name or the UUID of the XenServer host.

By default, the VM will be suspended, migrated, and resumed on the other host. The live parameter activates XenMotion and keeps the VM running while performing the migration, thus minimizing VM downtime to less than a second. In some circumstances, such as extremely memory-heavy workloads in the VM, XenMotion automatically falls back into the default mode and suspends the VM for a brief period of time before completing the memory transfer.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
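
For example, to live-migrate the VM named testvm to a host named host2 (both names are placeholders):

xe vm-migrate vm=testvm host=host2 live=true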

8.4.23.23. vm-reboot

vm-reboot [<vm-selector>=<vm_selector_value>...] [force=<true>]

Reboot the specified VMs.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

Use the force argument to cause an ungraceful shutdown, akin to pulling the plug on a physical server.

8.4.23.24. vm-reset-powerstate

vm-reset-powerstate [<vm-selector>=<vm_selector_value>...] {force=true}

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

This is an advanced command only to be used when a member host in a pool goes down. You can use this command to force the pool master to reset the power-state of the VMs to halted. Essentially, this forces the lock on the VM and its disks so that the VM can subsequently be started on another pool host. This call requires the force flag to be specified, and fails if it is not on the command line.

8.4.23.25. vm-resume

vm-resume [<vm-selector>=<vm_selector_value>...] [force=<true | false>] [on=<XenServer host UUID>]

Resume the specified VMs.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

If the VM is on a shared SR in a pool of hosts, use the on argument to specify the host in the pool on which to start it. By default the system will determine an appropriate host, which might be any of the members of the pool.

8.4.23.26. vm-shutdown

vm-shutdown [<vm-selector>=<vm_selector_value>...] [force=<true | false>]

Shut down the specified VM.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

Use the force argument to cause an ungraceful shutdown, similar to pulling the plug on a physical server.

8.4.23.27. vm-start

vm-start [<vm-selector>=<vm_selector_value>...] [force=<true | false>] [on=<XenServer host UUID>] [--multiple]

Start the specified VMs.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

If the VMs are on a shared SR in a pool of hosts, use the on argument to specify the host in the pool on which to start the VMs. By default the system will determine an appropriate host, which might be any of the members of the pool.
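
For example, to start a halted VM named testvm on a particular pool member (the VM name and host UUID are placeholders):

xe vm-start vm=testvm on=<host_uuid>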

8.4.23.28. vm-suspend

vm-suspend [<vm-selector>=<vm_selector_value>...]

Suspend the specified VM.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.29. vm-uninstall

vm-uninstall [<vm-selector>=<vm_selector_value>...] [force=<true | false>]

Uninstall a VM, destroying its disks (those VDIs that are marked RW and connected to this VM only) as well as its metadata record. To destroy just the VM metadata, use xe vm-destroy.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.23.30. vm-vcpu-hotplug

vm-vcpu-hotplug new-vcpus=<new_vcpu_count> [<vm-selector>=<vm_selector_value>...]

Dynamically adjust the number of VCPUs available to a running paravirtualized Linux VM, up to the limit set by the VCPUs-max parameter. Windows VMs always run with the number of VCPUs set to VCPUs-max and must be rebooted to change this value.

The paravirtualized Linux VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
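
For example, to raise the number of VCPUs of a running paravirtualized Linux VM named testvm to 4 (the VM name is a placeholder, and its VCPUs-max must be at least 4):

xe vm-vcpu-hotplug vm=testvm new-vcpus=4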

8.4.23.31. vm-vif-list

vm-vif-list [<vm-selector>=<vm_selector_value>...]

Lists the VIFs of the specified VMs.

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Note that the selectors operate on the VM records when filtering, and not on the VIF values. Optional arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.24. Workload Balancing commands

Commands for controlling the Workload Balancing feature.

8.4.24.1. pool-initialize-wlb

pool-initialize-wlb wlb_url=<wlb_server_address> \
wlb_username=<wlb_server_username> \
wlb_password=<wlb_server_password> \
xenserver_username=<pool_master_username> \
xenserver_password=<pool_master_password>

Starts the workload balancing service on a pool.

8.4.24.2. pool-param-set other-config

Use the pool-param-set other-config command to specify the timeout when communicating with the WLB server. All requests are serialized, and the timeout covers the time from a request being queued to its response being completed. In other words, slow calls cause subsequent ones to be slow. Defaults to 30 seconds if unspecified or unparseable.

xe pool-param-set other-config:wlb_timeout=<0.01> \
uuid=<315688af-5741-cc4d-9046-3b9cea716f69>

8.4.24.3. host-retrieve-wlb-evacuate-recommendations

host-retrieve-wlb-evacuate-recommendations uuid=<host_uuid>

Returns the evacuation recommendations for a host, and a reference to the UUID of the recommendations object.

8.4.24.4. vm-retrieve-wlb-recommendations

Returns the workload balancing recommendations for the selected VM. The simplest way to select the VM on which the operation is to be performed is by supplying the argument vm=<name_or_uuid>. VMs can also be specified by filtering the full list of VMs on the values of fields. For example, specifying power-state=halted selects all VMs whose power-state is halted. Where multiple VMs match, specify the option --multiple to perform the operation. The full list of fields that can be matched can be obtained by the command xe vm-list params=all. If no parameters to select VMs are given, the operation is performed on all VMs.

8.4.24.5. pool-deconfigure-wlb

Permanently deletes all workload balancing configuration.

8.4.24.6. pool-retrieve-wlb-configuration

Prints all workload balancing configuration to standard out.

8.4.24.7. pool-retrieve-wlb-recommendations

Prints all workload balancing recommendations to standard out.

8.4.24.8. pool-retrieve-wlb-report

Gets a WLB report of the specified type and saves it to the specified file. The available reports are:

pool_health
host_health_history
optimization_performance_history
pool_health_history
vm_movement_history
vm_performance_history

Example usage for each report type is shown below. The utcoffset parameter specifies the number of hours ahead of or behind UTC for your time zone. The start and end parameters specify the number of hours to report about. For example, specifying start=-3 and end=0 causes WLB to report on the last 3 hours' activity.

xe pool-retrieve-wlb-report report=pool_health \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</pool_health.txt>

xe pool-retrieve-wlb-report report=host_health_history \
hostid=<e26685cd-1789-4f90-8e47-a4fd0509b4a4> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</host_health_history.txt>

xe pool-retrieve-wlb-report report=optimization_performance_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</optimization_performance_history.txt>

xe pool-retrieve-wlb-report report=pool_health_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</pool_health_history.txt>

xe pool-retrieve-wlb-report report=vm_movement_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-5> \
end=<0> \
filename=</vm_movement_history.txt>

xe pool-retrieve-wlb-report report=vm_performance_history \
hostid=<e26685cd-1789-4f90-8e47-a4fd0509b4a4> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</vm_performance_history.txt>

Chapter 9. Troubleshooting

Table of Contents

9.1. XenServer host logs
9.1.1. Sending host log messages to a central server
9.2. XenCenter logs
9.3. Troubleshooting connections between XenCenter and the XenServer host

If you experience odd behavior, application crashes, or have other issues with a XenServer host, this chapter is meant to help you solve the problem if possible and, failing that, describes where the application logs are located and other information that can help your Citrix Solution Provider and Citrix track and resolve the issue.

Troubleshooting of installation issues is covered in the XenServer Installation Guide. Troubleshooting of Virtual Machine issues is covered in the XenServer Virtual Machine Installation Guide.

Important

We recommend that you follow the troubleshooting information in this chapter solely under the guidance of your Citrix Solution Provider or Citrix Support.

Citrix provides two forms of support: you can receive free self-help support via the Support site, or you may purchase our Support Services and directly submit requests by filing an online Support Case. Our free web-based resources include product documentation, a Knowledge Base, and discussion forums.

9.1. XenServer host logs

XenCenter can be used to gather XenServer host information. Click on Get Server Status Report... in the Tools menu to open the Server Status Report wizard. You can select from a list of different types of information (various logs, crash dumps, and so on). The information is compiled and downloaded to the machine that XenCenter is running on. For details, see the XenCenter Help.

Additionally, the XenServer host has several CLI commands to make it simple to collate the output of logs and various other bits of system information using the utility xen-bugtool. Use the xe command host-bugreport-upload to collect the appropriate log files and system information and upload them to the Citrix Support ftp site. Please refer to Section 8.4.5.2, “host-bugreport-upload” for a full description of this command and its optional parameters. If you are requested to send a crashdump to Citrix Support, use the xe command host-crashdump-upload. Please refer to Section 8.4.5.4, “host-crashdump-upload” for a full description of this command and its optional parameters.

Caution

It is possible that sensitive information might be written into the XenServer host logs.

By default, the server logs report only errors and warnings. If you need to see more detailed information, you can enable more verbose logging. To do so, use the host-loglevel-set command:

host-loglevel-set log-level=level

where level can be 0, 1, 2, 3, or 4, where 0 is the most verbose and 4 is the least verbose.

Log files greater than 5 MB are rotated, keeping 4 revisions. The logrotate command is run hourly.

9.1.1. Sending host log messages to a central server

Rather than have logs written to the control domain filesystem, you can configure a XenServer host to write them to a remote server. The remote server must have the syslogd daemon running on it to receive the logs and aggregate them correctly. The syslogd daemon is a standard part of all flavors of Linux and Unix, and third-party versions are available for Windows and other operating systems.

To write logs to a remote server

1. Set the syslog_destination parameter to the hostname or IP address of the remote server where you want the logs to be written:

xe host-param-set uuid=<xenserver_host_uuid> logging:syslog_destination=<hostname>

2. Issue the command:

xe host-syslog-reconfigure uuid=<xenserver_host_uuid>

to enforce the change. (You can also execute this command remotely by specifying the host parameter.)

9.2. XenCenter logs

XenCenter also has a client-side log. This file includes a complete description of all operations and errors that occur when using XenCenter. It also contains informational logging of events that provide you with an audit trail of various actions that have occurred. The XenCenter log file is stored in your profile folder. If XenCenter is installed on Windows XP, the path is

%userprofile%\AppData\Citrix\XenCenter\logs\XenCenter.log

If XenCenter is installed on Windows Vista, the path is

%userprofile%\AppData\Roaming\Citrix\XenCenter\logs\XenCenter.log

To quickly locate the XenCenter log files, for example, when you want to open or email the log file, click on View Application Log Files in the XenCenter Help menu.

9.3. Troubleshooting connections between XenCenter and the XenServer host

If you have trouble connecting to the XenServer host with XenCenter, check the following:

Is your XenCenter an older version than the XenServer host you are attempting to connect to? The XenCenter application is backward-compatible and can communicate properly with older XenServer hosts, but an older XenCenter cannot communicate properly with newer XenServer hosts. To correct this issue, install a XenCenter version that is the same as, or newer than, the XenServer host version.

Is your license current? You can see the expiration date for your License Key in the XenServer host General tab under the Licenses section in XenCenter. Also, if you upgraded your software from version 3.2.0 to the current version, you should have received and applied a new License file. For details on licensing a host, see the chapter "XenServer Licensing" in the XenServer Installation Guide.

The XenServer host talks to XenCenter using HTTPS over port 443 (a two-way connection for commands and responses using the XenAPI), and port 5900 for graphical VNC connections with paravirtual Linux VMs. If you have a firewall enabled between the XenServer host and the machine running the client software, make sure that it allows traffic from these ports.


©1999-2008 Citrix Systems, Inc. All rights reserved.