Cisco UCS vs HP Virtual Connect


CISCO UCS vs HP Virtual Connect/BladeSystem

Stefano SOLIANI

November 2013

Introduction

This document provides a list of competing features from the HP BladeSystem and Cisco UCS solutions. The objective is to highlight weaknesses and strengths as a starting point for competing more effectively with UCS in the Data Center area. Weaknesses should be considered from both the customer perspective (to create the proper message and positioning) and the HP internal development and technical marketing perspective (to plan for future feature releases, partnerships, or marketing material).

Legend:
GREEN: HP advantage
RED: Cisco advantage


Each domain below lists the HP BladeSystem with Virtual Connect side (HP), the Cisco UCS with UCS Manager side (Cisco), and the suggested competitive approach (Approach).

Chassis
  HP: c3000 – 8 blades; c7000 – 16 blades (or up to 32 servers if the double-dense option is enabled). 16/32 servers is the upper limit for one BladeSystem; scalability is achieved by replicating entire enclosures.
  Cisco: 5108 – 8 half-width blades; up to 20 (potentially 40) chassis under one UCS → up to 160 (potentially 320) servers → higher scalability with a lower total number of switching components.
  Approach: Compare entire architectures according to the required number of servers (see the Scalability & Components domain below).

System Manager
  HP: Virtual Connect Manager runs on the fabric module. VCEM is a plug-in for HP Systems Insight Manager and benefits from the rich feature set of HP SIM.
  Cisco: UCS Manager runs on the fabric module. UCS Central consolidates management of a number of UCS Managers. UCS Director extends management, automation and orchestration across the entire DC, including UCS, Nexus and storage (e.g. Vblock, FlexPod). UCSM integrates with third-party management tools (from BMC, CA, EMC, IBM, Microsoft, VMware) through an open XML API (see the sketch below).
  Approach: VCEM is part of an architecture of management tools from HP. OneView simplifies some operations compared with UCSM/UCS Central: http://www.youtube.com/watch?v=VcL_lGGnkzk

Equipment discovery
  HP: Don't know.
  Cisco: UCSM automatically performs an inventory and deep discovery of any attached equipment, without manual intervention, in around 10 minutes, regardless of the number of chassis. The captured info is shown under the Equipment tab.

Profiles
  HP: Server Connection Profile content: MAC address, WWN, SAN boot, PXE boot, UUID, BIOS settings, VLAN assignment, VSAN assignment. No templates or cloning(?).
  Cisco: Service Profile content: MAC address (from pool in template, per fabric), WWPN address (from pool in template, per fabric), VLAN, VSAN, UUID (server id, from pool in template), WWNN (storage node id, from pool in template), boot policy, PXE boot settings, iSCSI boot settings, power control policy, BIOS settings, firmware version, adapter policy, QoS policy, network control policy, pin group policy, RAID settings. Templates and cloning replicate profiles; more settings than in an SCP; supports the "stateless server" model (see the sketch below).
  Approach: For every component missing from a Server Connection Profile, define whether it is needed and what its value should be (a simpler but still well-scoped profile facilitates operations).
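To make the pool-based model above concrete, here is a minimal sketch using Cisco's ucsmsdk Python library; the host, credentials, pool and profile names are hypothetical. The point is that the profile references a UUID pool instead of carrying a fixed identity, which is what enables the stateless-server behavior.

    # Sketch: create a service profile that draws its UUID from a pool,
    # using Cisco's ucsmsdk. Host, credentials, pool and profile names
    # are placeholders, not values from this document.
    from ucsmsdk.ucshandle import UcsHandle
    from ucsmsdk.mometa.ls.LsServer import LsServer

    handle = UcsHandle("ucsm.example.com", "admin", "password")
    handle.login()

    # ident_pool_name points the profile at a UUID suffix pool, so the
    # server identity is pooled rather than hard-coded ("stateless").
    sp = LsServer(
        parent_mo_or_dn="org-root",
        name="web-server-01",
        ident_pool_name="uuid-pool-prod",
    )
    handle.add_mo(sp)
    handle.commit()
    handle.logout()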

Server deployment
  Cisco: Time to deploy a new server is shorter with UCSM than with VC. The difference is significant (47% in time), mostly due to installing firmware updates on ProLiant blades.
  http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns944/ucs_vs_hp_deployment.pdf
  http://www.youtube.com/watch?v=nijWlNzSgCQ

Fabric
  HP: VC FlexFabric module (24 ports).
  Cisco: 61xx – 62xx (up to 96 ports) enables larger scalability in a single domain.
  Approach: Compare the whole architecture and multi-domain designs.

Type of fabric ports
  HP: uplink, downlink, stacking.
  Cisco: uplink, server, sync, appliance ports.
  Approach: Does BladeSystem assume a single L2 domain north of the fabric switches? If not, this is not an issue. If yes, how is an appliance connected?

Switching mode – loop avoidance
  HP: VC loop avoidance and no STP on the DC LAN side. VC modules present themselves as endpoints. Loop avoidance blocks uplinks to prevent loops.
  Cisco: End-Host mode, no switching on the DC LAN side. Elements present themselves as endpoints. Switch mode and EH mode can be selected independently for the Ethernet and FC fabrics.

Switching architecture
  HP: HP BladeSystem is flatter, as the two fabrics also transport traffic from fabric to fabric.
  Cisco: Switch fabrics do not transport traffic fabric-to-fabric; they need another L2 switch for communication between: two servers pinned to different fabric switches; two VMs on different servers pinned to different fabric switches; two VMs on the SAME host using VM-FEX to bypass the vSwitch, because vNICs are pinned directly to a fabric switch. Rule to optimize: connect VMs/servers that communicate with each other to the same fabric.
  Approach: Fewer hops for HP.

Switching mode – pinning
  HP: There is no pinning mechanism linking blade ports to uplink ports; the VC modules work in switching mode. There is no module mode to select: switching is on and loops are prevented.
  Cisco: Pinning simplifies operations in UCS, increasing switching speed. Pinning is done at server level, which means all VMs running on a server are pinned to the same uplink; this can unbalance load sharing across uplinks (see the sketch below).
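The load-sharing caveat is easy to model. The following toy Python model (purely illustrative, with made-up server names and traffic figures, not UCS code) shows how server-level pinning concentrates one busy server's VM traffic on a single uplink even when VM counts are balanced.

    # Toy model only: server-level pinning sends every VM on a server
    # out of that server's single pinned uplink.
    servers = {
        "esx1": {"uplink": "A", "vm_gbps": [1.0, 1.0, 6.0]},  # one busy VM
        "esx2": {"uplink": "B", "vm_gbps": [1.0, 1.0, 1.0]},
    }

    uplink_load = {}
    for srv in servers.values():
        # A server's whole VM load lands on its pinned uplink.
        uplink_load[srv["uplink"]] = uplink_load.get(srv["uplink"], 0.0) + sum(srv["vm_gbps"])

    print(uplink_load)  # {'A': 8.0, 'B': 3.0}: unbalanced despite equal VM counts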

Switching mode change
  HP: Not a problem.
  Cisco: Must change to Switch mode when attaching separate L2 domains, which is often the case when two pre-existing networks need to talk to the applications on UCS (to be fixed in UCSM after v1.4).

Blade features
  HP: ProLiant BL460c. Processor: 2-socket Intel Xeon E5 family, up to 12 cores. Disks: 2 x SFF hot-plug SAS, SATA, SAS SSD, and SATA SSD drives. Memory: up to 384 GB. Adapters: 2 onboard adapters plus additional cards (see below).
  Cisco: B230 M2. Processor: 2-socket Intel Xeon E7-2800. Disks: 2 SSD drives. Memory: up to 512 GB. Adapters: 2 CNAs (see below).
  Approach: The B230 has more memory and a newer processor; a deeper dive is needed to discover more, or highlight other features/future releases. HP is Gartner's #1 for blade servers in 2013: http://www.gartner.com/technology/reprints.do?id=1-1FGKRJ6&ct=130506&st=sb

Network adapter
  HP: FlexFabric adapter. The VC Flex-10 Gb Ethernet module transforms each blade port into 4 NICs, managed individually, for up to 24 connections per blade. These are physical interfaces, so this also applies to physical servers (with a hypervisor you can set bandwidth limits at VM level). Minimum and maximum bandwidth can be set (see the sketch below). HP supports NetQueue for I/O virtualization and performance optimization, and SR-IOV for direct attachment to VMs, bypassing the vSwitch.
  Cisco: Cisco VIC or third-party adapters. The Cisco VIC 1280 supports up to 256 virtual interfaces. With UCSM and the Cisco VIC you can set CoS using a percentage of the total link bandwidth as a minimum (a maximum can also be set), per vNIC or per port group (a group of vNICs). The Cisco vNIC does not support NetQueue. UCS does not need SR-IOV as it supports VN-Tag, but some VICs support SR-IOV.
  Approach: Cisco closes some doors by refusing NetQueue support. I/O performance can be in HP's favor. Better bandwidth allocation for HP.
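As a worked illustration of the Flex-10 bandwidth settings above (an illustrative model with made-up FlexNIC names and values, not HP tooling): each 10 Gb blade port is carved into four FlexNICs, where the guaranteed minimums must fit within the physical port while each maximum may reach the full port speed.

    # Illustrative check of a Flex-10 style split: four FlexNICs per
    # 10 Gb blade port, each with a guaranteed minimum and a cap.
    PORT_GBPS = 10

    flexnics = [  # names and values are made up for illustration
        {"name": "mgmt",    "min": 0.5, "max": 2},
        {"name": "vmotion", "min": 2.0, "max": 10},
        {"name": "prod",    "min": 4.0, "max": 10},
        {"name": "backup",  "min": 1.0, "max": 6},
    ]

    # Minimums are guarantees, so their sum must fit in the port;
    # maximums are caps and may individually reach the full 10 Gb.
    assert sum(n["min"] for n in flexnics) <= PORT_GBPS
    assert all(n["max"] <= PORT_GBPS for n in flexnics)
    print([(n["name"], n["min"], n["max"]) for n in flexnics])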

Port aggregation
  HP: Link Aggregation Control Protocol (IRF?). Equivalent in HP?
  Cisco: LACP and Port Channel/Virtual Port Channel for Ethernet; F_Port channeling and trunking for FC ports between UCS and N5K/MDS, for resiliency and load balancing.

NW virtualization
  HP: VC Flex-10 does not support VEPA.
  Cisco: The Cisco VIC supports VN-Tag.
  http://www.definethecloud.net/access-layer-network-virtualization-vn-tag-and-vepa/

Performance
  HP: 2 x 10 Gb/server with the embedded dual port; up to 100 Gbps/server full duplex with additional adapters; up to 5 Tbps across the enclosure midplane.
  Cisco: Minimum 1 Gbps, up to 80 Gbps/server (depends on the number of FEX-to-fabric cables and on the VIC; the VIC 1280 supports up to 8 x 10 Gbps); 8 x 10 Gb per FEX; midplane: 160 Gb/chassis (1:1 oversubscription); FEX-to-fabric is full duplex (x2 in bandwidth).
  Approach: The oversubscription ratio may grow in UCS with full server deployment per system.


Scalability & Components
  HP, 16-blade scenario: 1 VCM; 2 fabric modules; 16 blades; 16 Flex-10; 20 Gb/blade (embedded dual port, no additional adapter).
  HP, 128-blade scenario: 1 VCEM (1 domain with VCM); 16 fabric modules (8 with double-dense); 128 blades (4 c7000 enclosures is the upper limit for one VCEM domain); 128 Flex-10; 20 Gb/blade (embedded dual port, no additional adapter); 32 SAN cables (2 cables per enclosure and SAN).
  Cisco, 16-blade scenario: 1 UCSM; 2 fabric modules; 16 blades (2 chassis); 16 VICs; 4 FEX (part of the 2 chassis, automatic management); 34 cables (16/chassis, 20 Gb/blade, automatic management).
  Cisco, 160-blade scenario: 1 UCSM; 2 fabric modules; 160 blades (20 chassis are supported so far); 160 VICs; 40 FEX (part of the 20 chassis, automatic management); 82 cables (1/fabric + 4/chassis, 5 Gb/blade, automatic management); 4 SAN cables (2 cables per SAN). Maximum scalability that keeps maximum bandwidth per blade and 1:1 oversubscription: 10 chassis (8 uplinks per chassis to each fabric) → 80 blades (see the worked check below).
  Approach: Cisco seems more scalable on the switch fabric side, but with more components on UCS, and with a scalability issue on I/O performance for Cisco, as UCS can only scale to 40 blades with adapters used at the full 10 Gb. VCEM can scale up to 1000 servers in 250 domains.
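The per-blade bandwidth figures quoted in these scenarios follow directly from the cable counts; a quick Python check using only numbers from the table above:

    # Recomputing the table's per-blade bandwidth from its own cable counts.
    BLADES_PER_CHASSIS = 8
    LINK_GBPS = 10

    # 160-blade scenario: 4 FEX-to-fabric cables per chassis.
    print(4 * LINK_GBPS / BLADES_PER_CHASSIS)    # 5.0 Gb/blade

    # 1:1 scenario: 8 uplinks per chassis to EACH fabric = 16 x 10 Gb,
    # matching 8 blades x 20 Gb (embedded dual 10 Gb port) of server BW.
    print(16 * LINK_GBPS / BLADES_PER_CHASSIS)   # 20.0 Gb/blade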

Cost
  http://www.slideshare.net/Ciscodatacenter/cisco-ucs-hp-and-ibm-a-blade-architecture-comparison?utm_source=slideshow01&utm_medium=ssemail&utm_campaign=share_slideshow
  To be verified.

Congestion control
  HP: Priority-based Flow Control; Quantized Congestion Notification (not yet implemented).
  Cisco: Data Center Bridging (PFC, but not QCN; QCN is not needed: http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns945/ns1060/snia_whitepaper.pdf).


Fibre Channel
  HP: FCoE is great but requires new switches to support it. The BladeSystem c7000 supports FCoE, then breaks the traffic out as Ethernet and Fibre Channel.
  Cisco: UCS has I/O modules in the fabric to provide FC connections. Blades can only use FCoE, which is then converted to FC at the fabric. In Switch mode, zoning is pretty basic (per-VSAN zoning), unless an MDS is connected to the fabric switch, in which case the fabric switch can download zoning tables. Similar for FCoE.
  Approach: iSCSI is routable across IP networks, while FCoE is not.

SAN
  HP: Up to 4 FC uplinks to the SAN, so at most 4 different SANs can be connected per switch. Servers are pinned automatically to SAN uplinks. NPIV supported.
  Cisco: All ports configurable as 1/10 Gigabit Ethernet or 1/2/4/8-Gbps Fibre Channel. NPIV supported.

Networking standards
  HP: Ethernet, Fibre Channel, Fibre Channel over Ethernet, iSCSI, InfiniBand, Serial Attached SCSI (SAS).
  Cisco: Ethernet, FC, FCoE, iSCSI.
  Approach: InfiniBand is valuable for its ultra-low latency.

Extended feature set
  HP: The HP 5900v is a virtual switch that facilitates VM migration and integrates switch management down to the VM level (worldwide availability in October 2013).
  Cisco: Cisco provides additional features either directly or through partnerships with other major players, e.g. N1Kv for VM-FEX, VMware NSX for distributed virtual firewalling, distributed L3 switching, etc.
  Approach: The Cisco/VMware relationship is getting complex after the deployment of NSX.

Power management
  HP: Nothing found on this.
  Cisco: Power can be controlled through power priority and capping, applied to power groups.

Power consumption
  To be analyzed using the available Cisco and HP power calculators.

Traffic monitoring
  HP: Don't know.
  Cisco: SPAN in UCS can be attached to logical elements, not only physical ones, e.g. vNIC, vHBA, VM, VM-FEX.

IPv6
  HP: Supported.
  Cisco: UCS works with IPv6; only the management plane is IPv4-only (IPv6 is on the roadmap).

Cisco DC technologies

A fully competitive approach to Cisco in the server space should broaden its view to the entire DC technology portfolio. Cisco provides a large set of architectures, solutions, products and technologies covering every aspect of a DC's life cycle. A complete competitive analysis should be done to highlight areas of HP superiority and areas for improvement, and to create customer messages while asking technical marketing and internal development to provide counter-actions. The following summarizes Cisco DC capabilities.


http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/white_paper_c11-706418.pdf

Gartner gives HP the highest ranking for completeness of vision in DC network infrastructure, and positions HP as the best alternative to Cisco, especially for the work done on SDN, network management and automation. http://www.gartner.com/technology/reprints.do?id=1-1E20SLP&ct=130212&st=sb

On Nov 6, 2013 Cisco launched a series of products, technologies and partnerships to fully support the Application-Centric Infrastructure.
http://seekingalpha.com/news-article/8090762-cisco-pioneers-real-time-application-delivery-in-global-data-centers-and-clouds-to-enable-greater-business-agility?source=email_rt_mc_body&app=n
http://www.cisco.com/en/US/solutions/ns340/ns517/ns224/ns945/app-centric-infrastructure.html?CAMPAIGN=Insieme&COUNTRY_SITE=us&POSITION=PR&REFERRING_SITE=Press_Release&CREATIVE=PR+to+ACI+launch+landing

The SDN concept has been taken to the extreme, with a new kind of controller capable of transforming an application's set of requirements into computing, networking, storage and virtualization components, all managed centrally through both open APIs and high-performance hardware.