
Nutanix on HPE®

ProLiant® Nutanix Best Practices

Version 2.1 • February 2019 • BP-2086


Copyright

Copyright 2019 Nutanix, Inc.

Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110

All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.

Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.


Contents

1. Executive Summary
   1.1. Trademark Disclaimer
2. Nutanix Enterprise Cloud Overview
   2.1. Nutanix Acropolis Architecture
3. HPE ProLiant Servers
   3.1. Network Topology
   3.2. ProLiant Management and Monitoring
   3.3. ProLiant Host Upgrades
4. General Best Practices
   4.1. Common Networking Best Practices
   4.2. Cluster Expansion
   4.3. Node Replacement
5. ProLiant Best Practices
   5.1. ProLiant Networking
6. Conclusion
Appendix
   Best Practice Checklist
   References
   About Nutanix
   Trademark Notice
List of Figures
List of Tables


1. Executive Summary

Nutanix runs on HPE® ProLiant® rack-mount servers. Intended for datacenter administrators responsible for procuring, designing, installing, and operating Nutanix on HPE ProLiant, this best practices document can help steer the decision-making process when deploying Nutanix. We didn’t write this guide to provide advice or guidance on technical issues solely related to HPE ProLiant servers, and you should contact HPE directly with any issues that do not relate to Nutanix software.

HPE ProLiant servers provide infrastructure that supports business objectives and growth. The Nutanix Enterprise Cloud streamlines enterprise datacenter infrastructure by integrating server and storage resources into a turnkey system. When running Nutanix on HPE ProLiant, you no longer need a SAN, which reduces the number of devices to purchase, deploy, and maintain and improves speed and agility.

In this document, we address system life cycle operations that require special handling in the ProLiant environment, such as network cabling, firmware upgrades, and node replacement. Where multiple options are available, we provide the information you need to decide between them. We also highlight configurations and features that may be common in the field but that we don’t recommend for Nutanix environments.

The Nutanix Field Installation Guide for HPE ProLiant Servers offers step-by-step installation instructions; this guide supplements those instructions. We do not, however, intend to supplant or supersede any guidance, instructional manuals, or directives from HPE regarding ProLiant servers, and you should direct questions regarding the same to HPE.

Table 1: Document Version History

Version Number   Published       Notes

1.0              June 2017       Original publication.

2.0              February 2018   Gen10 updates.

2.1              February 2019   Updated Nutanix overview.

1.1. Trademark Disclaimer

© 2019 Nutanix, Inc. All rights reserved. Nutanix®, the Enterprise Cloud Platform™ and the Nutanix logo are trademarks of Nutanix, Inc., registered or pending registration in the United States and other countries. HPE® and ProLiant® are the registered trademarks of Hewlett-Packard Development LP and/or its affiliates. Citrix Hypervisor® is a trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Windows® and Hyper-V™ are registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. VMware ESXi™ is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s). Nutanix is not associated with, sponsored or endorsed by Hewlett-Packard.


2. Nutanix Enterprise Cloud Overview

Nutanix delivers a web-scale, hyperconverged infrastructure solution purpose-built for virtualization and cloud environments. This solution brings the scale, resilience, and economic benefits of web-scale architecture to the enterprise through the Nutanix Enterprise Cloud Platform, which combines three product families—Nutanix Acropolis, Nutanix Prism, and Nutanix Calm.

Attributes of this Enterprise Cloud OS include:

• Optimized for storage and compute resources.

• Machine learning to plan for and adapt to changing conditions automatically.

• Self-healing to tolerate and adjust to component failures.

• API-based automation and rich analytics.

• Simplified one-click upgrade.

• Native file services for user and application data.

• Native backup and disaster recovery solutions.

• Powerful and feature-rich virtualization.

• Flexible software-defined networking for visualization, automation, and security.

• Cloud automation and life cycle management.

Nutanix Acropolis provides data services and can be broken down into three foundational components: the Distributed Storage Fabric (DSF), the App Mobility Fabric (AMF), and AHV. Prism furnishes one-click infrastructure management for virtual environments running on Acropolis. Acropolis is hypervisor agnostic, supporting three third-party hypervisors—ESXi, Hyper-V, and Citrix Hypervisor—in addition to the native Nutanix hypervisor, AHV.

Figure 1: Nutanix Enterprise Cloud


2.1. Nutanix Acropolis Architecture

Acropolis does not rely on traditional SAN or NAS storage or expensive storage network interconnects. It combines highly dense storage and server compute (CPU and RAM) into a single platform building block. Each building block delivers a unified, scale-out, shared-nothing architecture with no single points of failure.

The Nutanix solution requires no SAN constructs, such as LUNs, RAID groups, or expensive storage switches. All storage management is VM-centric, and I/O is optimized at the VM virtual disk level. The software solution runs on nodes from a variety of manufacturers that are either all-flash for optimal performance, or a hybrid combination of SSD and HDD that provides a combination of performance and additional capacity. The DSF automatically tiers data across the cluster to different classes of storage devices using intelligent data placement algorithms. For best performance, algorithms make sure the most frequently used data is available in memory or in flash on the node local to the VM.

To learn more about the Nutanix Enterprise Cloud, please visit the Nutanix Bible and Nutanix.com.


3. HPE ProLiant Servers

The HPE ProLiant server combines compute, memory, and storage in a number of form factors. Each rack-mount server chassis holds one Nutanix node containing both compute and storage. Purchase HPE ProLiant servers from the Nutanix-specified list of supported server models and components. After you have procured compatible hardware, install Nutanix software on the ProLiant servers.

This document addresses the following ProLiant server models:

• DL360 Gen10 8SFF

• DL380 Gen10 12LFF

• DL380 Gen10 24SFF

• DL360 Gen9 8SFF

• DL380 Gen9 8SFF

• DL380 Gen9 12LFF

• DL380 Gen9 24SFF

Nutanix supports ProLiant server models as they are tested. For a complete list of supported servers, see the HPE ProLiant Hardware Compatibility List (HCL). For Nutanix installation instructions, refer to the Nutanix Field Installation Guide for HPE ProLiant Servers. The following table outlines the supported hypervisors for each server model.

Table 2: Hypervisor Support

Model               AHV   ESXi   Hyper-V

DL360 Gen10 8SFF     X     X     Not supported
DL380 Gen10 12LFF    X     X     Not supported
DL380 Gen10 24SFF    X     X     Not supported
DL360 Gen9 8SFF      X     X     Not supported
DL380 Gen9 8SFF      X     X     Not supported
DL380 Gen9 12LFF     X     X     Not supported
DL380 Gen9 24SFF     X     X     Not supported


3.1. Network Topology

ProLiant servers operate much like existing Nutanix server models, such as NX, Dell XC, and Lenovo HX. Each ProLiant server connects to a top-of-rack (ToR) switch through an HPE 560FLR-SFP+ or HPE 640FLR-SFP+ adapter and 10 or 25 GbE cables. A single 1 GbE cable for out-of-band (OOB) management connects to the dedicated Integrated Lights-Out (iLO) connector port. iLO is a baseboard management controller (BMC), similar to the IPMI management interface on NX servers, and provides a web and SSH interface for monitoring and managing the physical components.

Figure 2: ProLiant Network Topology

The dedicated iLO port can connect either to the same ToR switches or to a set of dedicated management switches. Nutanix recommends separating the management network from the data network as shown in the figure above, but this configuration is not required.

3.2. ProLiant Management and Monitoring

Connecting to a server’s iLO web interface allows you to manage each server independently and view details such as fan speed, temperature, and hardware logs. You can also access the remote console through the iLO interface to view the on-screen state of the server. You can use the Nutanix Prism web interface to monitor the state of ProLiant servers, but more advanced server management operations still require an iLO connection.
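Beyond the web and SSH interfaces, recent iLO firmware also exposes the industry-standard Redfish REST API, which is convenient for scripted monitoring. A minimal sketch, assuming Redfish support on your iLO version; the hostname and credentials are placeholders, and resource paths can vary by firmware release:

```shell
# Query overall server health and model details through the iLO Redfish API.
# Replace ilo-hostname, admin, and password with real values for your environment.
curl --insecure --user admin:password \
  https://ilo-hostname/redfish/v1/Systems/1/

# Thermal readings (fan speeds, temperatures) live under the Chassis resource.
curl --insecure --user admin:password \
  https://ilo-hostname/redfish/v1/Chassis/1/Thermal/
```

Use `--insecure` only when the iLO still has its self-signed certificate; in production, trust the iLO certificate instead.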


3.3. ProLiant Host Upgrades

The HPE Smart Update Manager (SUM) manages the BIOS and various firmware versions of ProLiant servers. To update each server’s components, follow an HPE-supported method for running SUM in offline mode. Examples of supported methods include updating the local host by booting from ISO, or initiating the update remotely from Linux® and Windows® operating systems or HPE OneView. Because you install drivers as part of HCL-specified ESXi and AHV images, run SUM in offline advanced mode and choose the Downgrade Firmware installation option to upgrade or downgrade only firmware components. Each Service Pack for ProLiant (SPP) release (specified in the HCL) includes a version of SUM you can use to deploy the SPP components. You can also download the latest version of SUM from the HPE Support Center.

Note: Remote mounting the SPP ISO using iLO virtual media functionality requires an iLO Advanced license.

For more information on SPP and SUM, as well as a complete list of available firmware components, view the release notes for your specific SPP at the HPE website. Check the Nutanix HPE ProLiant HCL for the recommended and tested firmware versions and hardware components.


4. General Best Practices

When using Foundation to image a cluster with VMware ESXi, use the HPE-customized ESXi ISO image that includes HPE-specific device drivers. Nutanix nodes use a whitelist of hypervisor ISOs based on the file hash, and this whitelist includes the custom HPE ESXi ISO.

The Nutanix HPE ProLiant HCL specifies the firmware and BIOS versions recommended for ProLiant deployments using Nutanix. Be sure to use the firmware and BIOS versions specified in this guide for all components of the ProLiant server. If you need a firmware upgrade that falls outside the specified versions, please contact Nutanix Support before performing the upgrade.

When performing physical server maintenance using HPE tools such as SUM or OneView, ensure that you shut down no more than one node in a Nutanix cluster at a time. Because HPE tools, and not Nutanix tools, conduct host management operations, the process needs coordination between the hardware and software layers. As in any other Nutanix deployment, administrators perform hypervisor and AOS upgrades through the Nutanix Prism interface. In ProLiant deployments it is particularly important to align closely with the tested and supported hypervisor and AOS versions listed in the HCL.

4.1. Common Networking Best Practices

In all Nutanix deployments, the CVM and hypervisor management network adapters must share the same network broadcast domain and subnet. You must have connectivity in the same layer 3 network between all CVMs and hypervisors in the same Nutanix cluster. The out-of-band management (iLO) adapters do not have to be in this same subnet, but placing them in the same network simplifies network address design.

Storage Traffic Between CVMs in ESXi

In ESXi hosts, you can create a port group named CVM Network that prefers vmnic0 as active and vmnic1 as standby to ensure minimal latency between Nutanix CVMs. Connect vmnic0 in all physical servers to Switch-A. Connect vmnic1 in all servers to Switch-B. In this configuration, traffic between CVM nodes in the same rack moves over the same switch without needing to traverse the upstream switches. You can configure traffic for additional port groups and VMs to use the other switch (B on vmnic1) as active to separate CVM traffic from user VM traffic at the physical switch level. You can also select other port group configuration options such as load-based teaming or originating virtual port ID for guest traffic, depending on the VM requirements.

The following vSphere networking configuration diagram shows the CVM Network failover order.


Figure 3: CVM Network Active Adapter Selection

Configure the Guest Network as shown in the following diagram or use any other suitableteaming strategy for this port group.

Figure 4: Guest Network Active Adapter Selection


Use the following commands from any CVM to set the active and standby adapters of all ESXi hosts in a Nutanix cluster for the CVM Network and Guest Network port groups:

for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover set -a=vmnic0 -p="CVM Network" -s=vmnic1'; done

for i in `hostips`; do echo $i; ssh root@$i 'esxcli network vswitch standard portgroup policy failover set -a=vmnic1 -p="Guest Network" -s=vmnic0'; done
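To confirm the resulting failover order, the matching `get` subcommand can be run on each host (shown here for a single host; wrap it in the same `hostips` loop used above to check the whole cluster):

```shell
# Display the active/standby adapter order configured for the CVM Network port group.
esxcli network vswitch standard portgroup policy failover get -p "CVM Network"
```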

Storage Traffic Between CVMs in AHV

In the default AHV network configuration with two uplinks, configure one adapter from the AHV host as active and one as backup. You can’t select the active adapter in such a way that it persists between host reboots. When multiple uplinks from the AHV host connect to multiple switches, ensure that adequate bandwidth exists between these switches to support Nutanix CVM replication traffic between nodes. Nutanix recommends redundant 40 Gbps or faster connections between switches, which you can achieve with a leaf-spine configuration or direct interswitch link.
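Because AHV uplinks are bonded through Open vSwitch, you can inspect and set the bond mode with standard OVS tools on the AHV host. A minimal sketch, assuming the common default bridge and bond names (br0 and br0-up); verify the names in your environment and prefer the Nutanix-documented procedure for your AOS version:

```shell
# On the AHV host: show the current bond state, including which adapter is active.
ovs-appctl bond/show br0-up

# Set the uplink bond to active-backup mode (the recommended default).
ovs-vsctl set port br0-up bond_mode=active-backup
```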

4.2. Cluster Expansion

Expanding a Nutanix cluster on HPE ProLiant servers currently requires two steps:

1. Use the standalone Foundation VM to image the new server with the correct hypervisor and AOS version. Image the node, but do not add it to any cluster during the Foundation process.

2. Use Prism or the Nutanix command-line interface (nCLI) in the existing ProLiant cluster to discover the newly imaged node and add it to the cluster. Node discovery uses IPv6 multicast traffic between Nutanix CVMs, so configure the upstream network devices to allow this traffic between nodes. Additionally, Nutanix recommends placing the CVMs and hypervisor hosts in the native, or default, untagged VLAN. If the CVMs are in a tagged VLAN, place the node you want to add in this same VLAN before discovery.

Tip: With ProLiant servers, you must use the Foundation VM instead of CVM-based cluster expansion to support bare metal imaging. For ease of use, deploy the Foundation VM on the existing Nutanix cluster before adding nodes.

4.3. Node Replacement

Node replacement consists of node addition and node removal.

• Node addition: Node addition for ProLiant servers uses the same two-step process described in the cluster expansion section above.

• Node removal: To remove the old node from the cluster, follow the Nutanix Prism Web Console Guide.


5. ProLiant Best Practices

The following best practices apply to HPE ProLiant servers.

Perform physical server configuration directly using the iLO interface. During normal operation, be sure to perform all firmware, BIOS, and iLO upgrades on ProLiant servers using the SPP and SUM for the corresponding physical server following the Nutanix HPE HCL. Firmware updates for HPE ProLiant servers are not currently available through the Prism interface.

Each SPP release (specified in the HCL) includes a version of SUM you can use to deploy the SPP components. You can also download the latest version of SUM from the HPE Support Center.

The following table summarizes where you perform various tasks in ProLiant on Nutanix deployments. Basic hardware monitoring consists of simple values such as system performance and system alerts. Basic hardware management involves tasks such as simple server restarts. Advanced management and logging monitors values such as fan speed, temperature, or detailed hardware boot logs. You perform these tasks in the Prism web interface, in the hypervisor interface, or directly in the iLO.

Table 3: Management Responsibilities

Responsibilities                        Prism   Hypervisor   iLO

Basic hardware monitoring and alerts      X         X         X
Basic hardware management                 X         X
Advanced hardware management and logs                         X

5.1. ProLiant Networking

The following diagram shows the preferred networking configuration for ProLiant servers. The 10 or 25 GbE HPE 560FLR-SFP+ or 640FLR-SFP+ LOM NICs connect to two separate ToR switches. A separate OOB connection to the dedicated iLO connector port provides access to iLO for management. Administrators can create this connection either through a dedicated set of management switches or through the same ToR switch that the 10 or 25 GbE ports use. Nutanix recommends using a separate OOB management network for fault tolerance and high availability.


Figure 5: ProLiant ESXi Network Detail

We recommend a leaf-spine network architecture to eliminate oversubscription, providing maximum throughput and scalability for east-west traffic. Choose a line-rate, nonblocking ToR switch that provides high throughput and low latency between nodes in the Nutanix cluster.

Configure the ToR switch ports facing the ProLiant servers using the vendor-recommended configuration for server ports. Use a configuration like portfast or edge to ensure that the server-facing port transitions immediately to the spanning tree forwarding state. Additionally, configure ports to automatically detect and negotiate speed and duplex. Configure the CVM and hypervisor VLAN of the Nutanix nodes as untagged or native in the ToR switch.
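As an illustration only, a server-facing port following these recommendations might look like the following on an NX-OS-style switch; the interface name and VLAN ID are placeholders, and the exact syntax differs by vendor:

```
interface Ethernet1/1
  description nutanix-node-01 uplink
  switchport mode trunk
  ! CVM and hypervisor VLAN carried untagged as the native VLAN
  switchport trunk native vlan 10
  ! Edge port: skip spanning tree listening/learning so the port forwards immediately
  spanning-tree port type edge trunk
  no shutdown
```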

Perform Foundation node installation using a dedicated VM running on a laptop or other infrastructure device in the network. Place the Foundation VM on a network that can reach the iLO interface of all servers and the CVM and hypervisor interfaces.

AHV networking is similar to ESXi networking, as shown in the following diagram. The cabling for the ProLiant servers is identical, and all of the same management recommendations apply. Nutanix recommends using the active-backup load balancing algorithm for simplicity, but to use both uplink network adapters for additional throughput, use balance-slb load balancing or LACP instead. For more information, review the AHV Networking best practices guide.

Figure 6: ProLiant AHV Network Detail


6. Conclusion

Nutanix can turn your existing compute into web-scale storage. The Nutanix Enterprise Cloud allows a clustered pool of HPE ProLiant servers to act as a centrally managed compute and storage system that offers high performance, redundancy, and ease of management not found in traditional SAN-based infrastructures.

Nutanix provides shared storage without the SAN. We have validated and tested the Nutanix on HPE ProLiant best practices in this guide for functionality and performance, providing administrators the tools they need to generate successful, error-free designs for their environments.

Combining ProLiant rack-mount servers with the Nutanix Enterprise Cloud gives administrators the flexibility to build a compute and storage infrastructure on the hardware of their choice.


Appendix

Best Practice Checklist

General Best Practices

• Refer to the Nutanix HPE ProLiant HCL to find a complete set of compatible hardware, firmware, and software to use in a Nutanix on ProLiant deployment.

⁃ Contact Nutanix Support before using firmware versions not on the HCL.

• Upgrade firmware versions using HPE utilities.

⁃ Ensure that you shut down no more than one Nutanix node at any given time when using HPE utilities such as SUM.

• Perform hypervisor and AOS upgrades through Prism.

• Expand the Nutanix cluster using a two-step process:

⁃ Image the node using a Foundation VM.

⁃ Add the new node through Prism.

• Replace nodes using the node removal and node addition processes.

Common Networking Best Practices

• Allow IPv6 multicast traffic between nodes.

• Place CVM network adapters in the same subnet and broadcast domain as the hypervisor management network adapter.

⁃ Placing the iLO in this same network is optional but can simplify network design.

• Place the Nutanix CVM adapter and hypervisor host adapters in the native or default untagged VLAN.

• In ESXi, create a port group for all CVMs that prefers the same ToR switch.

⁃ For all other port groups, use Route based on originating virtual port ID for the standard vSwitch and Route based on physical NIC load for the distributed vSwitch.

• In AHV, use the default active-backup mode for simplicity.

⁃ Only use balance-slb to use the capacity of both uplink adapters if required.

⁃ Refer to the AHV Networking best practices guide for more advanced networking configurations.


• Use a dedicated Foundation VM on a network with access to the iLO OOB, CVM, and hypervisor networks.

⁃ Run this VM on a laptop connected to the ToR switch or on an existing virtual infrastructure. Nutanix recommends deploying this Foundation VM in the cluster for easy node expansion.

ProLiant Best Practices

• Upgrade firmware using an iLO connection to the server.

⁃ Following the Nutanix Field Installation Guide for HPE ProLiant Servers, use SUM to upgrade firmware in offline mode on the server to HCL-compatible levels.

• Perform physical server hardware configuration using iLO.

• Perform day-to-day server management using Prism and the hypervisor interface.

• Use iLO for hardware-specific management and monitoring for items such as fan speed,temperature, and detailed hardware logs.

Networking

• Use HPE 560FLR-SFP+ or 640FLR-SFP+ LOM NICs.

⁃ Connect two NIC ports to two separate ToR switches for fault tolerance.

• Use a leaf-spine network that provides a line-rate, nonblocking connection between all Nutanixnodes.

• Configure ToR switches as edge or server ports using a configuration similar to that demonstrated in KB 2455.

• Configure ToR switch ports to carry the CVM and hypervisor VLAN as untagged, or native.

• Connect the iLO interface for OOB management access using iLO.

• Use a dedicated OOB management network if possible.

⁃ This management network must connect to the primary data network for Foundation to work properly.

References

1. Nutanix Field Installation Guide for HPE ProLiant Servers
2. Nutanix HPE ProLiant Hardware Compatibility List
3. Nutanix Prism Web Console Guide
4. Nutanix vSphere Networking Best Practices
5. Nutanix AHV Best Practices Guide
6. Nutanix AHV Networking Best Practices Guide


About Nutanix

Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that power their business. The Nutanix Enterprise Cloud OS leverages web-scale engineering and consumer-grade design to natively converge compute, virtualization, and storage into a resilient, software-defined solution with rich machine intelligence. The result is predictable performance, cloud-like infrastructure consumption, robust security, and seamless application mobility for a broad range of enterprise applications. Learn more at www.nutanix.com or follow us on Twitter @nutanix.

Trademark Notice

© 2019 Nutanix, Inc. All rights reserved. Nutanix®, the Enterprise Cloud Platform™ and the Nutanix logo are trademarks of Nutanix, Inc., registered or pending registration in the United States and other countries. HPE® and ProLiant® are the registered trademarks of Hewlett-Packard Development LP and/or its affiliates. Citrix Hypervisor® is a trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Windows® and Hyper-V™ are registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. VMware ESXi™ is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s). Nutanix is not associated with, sponsored or endorsed by Hewlett-Packard.

Disclaimer: This guide may contain links to external websites that are not part of Nutanix.com. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.


List of Figures

Figure 1: Nutanix Enterprise Cloud
Figure 2: ProLiant Network Topology
Figure 3: CVM Network Active Adapter Selection
Figure 4: Guest Network Active Adapter Selection
Figure 5: ProLiant ESXi Network Detail
Figure 6: ProLiant AHV Network Detail


List of Tables

Table 1: Document Version History
Table 2: Hypervisor Support
Table 3: Management Responsibilities