© 2011 IBM Corporation
IBM Power Systems
Virtualization for IBM i
John Bizon
jbizon@us.ibm.com
Agenda
Virtualization
– How do you define virtualization?
– The benefits of virtualization
– PowerVM
IBM i Virtualization
– IBM i Hosting
– VIOS Hosting
– Comparison of IBM i Virtualization
Related Topics
– Consoles for Virtualization
– Virtual Partition Manager
– Externally attached disk
– Partition mobility
What is Virtualization?
VIRTUALIZATION is a term that refers to the abstraction of computer resources.
• Some types of System i virtualization:
– Processor
– Memory
– Disk storage
– Network
Virtualization is important to your boss
In a survey asking CIOs to select their ten most important visionary plan elements, 76% cited “implementing a virtualized computing environment” as part of their plans to enhance competitiveness.
Virtualization – Increase Utilization
Reduce CPU overcapacity
– Infrequent peak capacity needs
– Accurately sizing new workloads
– Headroom for unexpected growth
– Acquisition granularity
Consolidate low-use servers
– Test, development, QA, HA, DR
– Staging for new upgrades
Benefits
– Lower hardware and environmental costs
– Lower core-based software costs
Minimum I/O footprint for:
– Connectivity to different segments
– Availability
– Queuing
– Bandwidth
Can often add new workloads without additional I/O footprint
[Diagram: hypervisor hosting virtual images with shared connections to LAN 1–3 and storage]
Virtualization – Improve Quality of Service
Decouple logical from physical
– Dynamically add/remove resources
– Reduce OS variation
Simplify disaster recovery
– Utilize live LPAR migration
Improve network performance
– Use low-latency virtual networks
Re-size an instance to match changing requirements
– Up or down
– CPU, memory, and I/O all changed independently
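Dynamic re-sizing of a running partition is driven from the HMC as a dynamic LPAR (DLPAR) operation. A minimal command-line sketch, assuming a managed system named sys1 and a partition named lpar1 (both hypothetical names):

```shell
# Add 1 GB of memory to the running partition (quantity is in MB)
chhwres -r mem -m sys1 -o a -p lpar1 -q 1024

# Add 0.5 processing units to a shared-processor partition
chhwres -r proc -m sys1 -o a -p lpar1 --procunits 0.5
```

The same command with `-o r` removes resources, which is the "up or down" flexibility the slide refers to.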
[Diagram: hypervisor mapping the logical resources of multiple OS instances onto physical resources]
Virtualization – Improve Flexibility and Time to Value
Rapidly deploy new workloads without:
– Acquiring and racking a new server
– Cabling
Simplify automated provisioning
– Physical activities such as cabling are hard to automate
– Virtualization is key to a dynamic infrastructure
Re-purpose assets
– Virtual resources can be re-purposed to handle future requirements
[Diagram: hypervisor hosting multiple OS instances]
PowerVM built on 40 years of virtualization leadership
1967 – IBM develops the hypervisor that would become VM on the mainframe
1973 – IBM announces the first machines to do physical partitioning
1987 – IBM announces LPAR on the mainframe
1999 – IBM announces LPAR on POWER™
2004 – IBM introduces the POWER Hypervisor™ for System p™ and System i™
2007 – IBM announces POWER6™, the first UNIX® servers with Live Partition Mobility
2008 – IBM announces PowerVM
And virtualization innovation continues with PowerVM™
PowerVM is the foundation for shared infrastructure
– Multi-OS support: UNIX, IBM i, and Linux
– Over 15,000 applications
– Share processor, memory, and I/O across operating environments
PowerVM Editions are tailored to client needs
PowerVM Editions offer a unified virtualization solution for all Power workloads
PowerVM Express Edition
– Evaluations, pilots, PoCs; single-server projects
– Concurrent VMs: 2 per server
– Virtual I/O Server, Shared Processor Pools, NPIV
PowerVM Standard Edition
– Production deployments; server consolidation
– Concurrent VMs: 10 per core (up to 1000)
– Adds Suspend/Resume, Thin Provisioning, Shared Storage Pools+
PowerVM Enterprise Edition
– Multi-server deployments; cloud infrastructure
– Concurrent VMs: 10 per core (up to 1000)
– Adds Live Partition Mobility (with performance improvements), Active Memory Sharing, Active Memory Deduplication**, Network Balancing
** Requires eFW7.4
[Chart: AIM7 Performance Benchmark, single-VM scaling (scale-up) from 1 to 8 virtual CPUs, jobs/min – vSphere 4 on HP DL380 G6 vs. PowerVM on Power 750]
PowerVM outperforms VMware by up to 65% on Power 750, running the same Linux workloads and virtualized resources*
PowerVM on POWER7 delivers virtualization without limits, with higher performance than VMware for the same virtual workloads
PowerVM runs workloads more efficiently than VMware, with far superior resource utilization, price/performance, resilience and availability
* “A Comparison of PowerVM and VMware Virtualization Performance”, April 2010
http://www.ibm.com/systems/power/software/virtualization/whitepapers/compare_perf.html
Flexibility Factors | VMware ESX 3.5 (in VMware Infrastructure 3) | VMware vSphere 4 & 5 | PowerVM
Dynamic virtual CPU changes in VM | No | Add (but not Remove) | Yes
Dynamic memory changes in VM | No | Add (but not Remove) | Yes
Dynamic I/O device changes in VM | No | Some | Yes
Direct access to I/O devices from within VM | No | Some (with VT-d enabled) | Yes
Integrated LPAR and WPAR support | No | No | Yes
PowerVM delivers superior flexibility to optimize IT resource utilization and improve responsiveness
Source: http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere5.pdf
Risk Management Factors | VMware ESX 3.5 (in VMware Infrastructure 3) | VMware vSphere 4 & 5 | PowerVM
Implementation of virtualization technology | Third-party software add-on | Third-party software add-on | Integrated into server firmware
Isolation of I/O drivers from hypervisor | No | No | Yes (using VIOS)
Built-in cross-platform virtualization support | No | No | Yes (using PowerVM Lx86)
Live migration across processor generations | No | Some (with Intel FlexMigration) | Yes (POWER6–POWER7)
PowerVM delivers superior security to help manage risk and maximize availability
Source: http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere5.pdf
IBM i Virtualization Methods
IBM i hosting
– IBM i client partition uses I/O resources from another IBM i host partition
– Best environment for a homogeneous IBM i environment
– Best option for Windows Integration on IBM i
– Familiar IBM i environment
– Can host AIX and Linux partitions
Virtual I/O Server (VIOS) hosting
– IBM i client partition uses I/O resources from a VIOS host partition
– Best environment for IBM i, AIX and Linux
– Typically requires the least amount of CPU
– NPIV, FCoE, storage systems, Partition Suspend/Resume
[Diagrams: host and client partitions on the POWER6/7 hypervisor – VIOS hosting IBM i, and IBM i hosting IBM i]
IBM i as a client
IBM i 6.1.1 or 7.1 can be a client to an IBM i 6.1.1 or 7.1 hosting partition
IBM i 6.1.1 or 7.1 can be a client to a VIOS hosting partition
Requires POWER6 or POWER7 server
[Diagrams: Option 1 – IBM i hosted by VIOS on a POWER6/7 blade or server; Option 2 – IBM i hosted by IBM i on POWER6/7]
IBM i Based Virtualization
IBM i Based Virtualization – as a client
– IBM i partition uses I/O resources hosted by another IBM i partition or by a VIOS partition
– Eliminates the requirement to buy adapters and disk drives for each IBM i partition
– Requires POWER6 or POWER7 systems
– Requires IBM i V6R1 or later
– Host partition can share physical or virtual optical devices as well as storage
Adds to IBM i storage virtualization capabilities
– OS/400 hosting of Linux™ partitions (2001)
– i5/OS added hosting of AIX® on POWER5
– Integrated BladeCenter and System x servers running VMware™, Windows™, or Linux
– IBM i hosting IBM i on POWER6/7
[Diagram: IBM i hosting IBM i on the POWER6/7 hypervisor]
Possible IBM i client implementation
Previously, many slots were required for a partition
– Minimum requirements included: console device, load source, alternate restart device
Now storage adapters can be virtualized. This can include disk storage, optical drives, and Ethernet LAN
[Diagram: partitions P1–P3 moving from dedicated I/O adapters to virtualized I/O hosted by an IBM i partition]
Client partition becomes a cartridge install
Client storage space can be pre-created and deployed as needed
Pre-creation can include SLIC, OS, LPPs, PTFs, applications, IDs, etc.
[Diagram: pre-built client storage spaces deployed to partitions P1–P3]
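On the hosting IBM i partition, such a client image lives in a network server storage space that is linked to the client's network server description (NWSD). A minimal CL sketch, with hypothetical object names CLIENT1A and CLIENT1:

```shell
/* Create a 40 GB storage space to hold the client's disk image (size in MB) */
CRTNWSSTG NWSSTG(CLIENT1A) NWSSIZE(40960)

/* Link the storage space to the client's network server description */
ADDNWSSTGL NWSSTG(CLIENT1A) NWSD(CLIENT1)

/* Present the virtual disks to the client by varying on the NWSD */
VRYCFG CFGOBJ(CLIENT1) CFGTYPE(*NWS) STATUS(*ON)
```

Because the storage space is an object on the host, a fully loaded image can be copied and linked to new NWSDs as clients are deployed.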
What is the VIOS?
A special-purpose appliance partition
– Provides I/O virtualization
– Advanced partition virtualization enabler
– Available since 2004
– Built on top of AIX, but not an AIX partition
IBM i first attached to VIOS in 2008 with IBM i 6.1
VIOS is licensed with PowerVM
Virtualizing storage with IBM i or VIOS
Single host provides access to SAN or internal storage
– AIX, IBM i, or Linux client partitions
– Protect data via RAID-5, RAID-6, or RAID-10
Redundant VIOS hosts provide multiple paths to attached SAN storage
– AIX, IBM i, and Linux client partitions
– One set of disk
Redundant IBM i or VIOS hosts provide access to SAN or internal storage
– AIX, IBM i, and Linux client partitions
– Client LPAR protects data via mirroring
– Two sets of disk and adapters
VIOS Virtualization Components
Virtual I/O Server (VIOS)
– Required to connect to open storage
– Part of PowerVM
DASD
– Fibre Channel adapter(s) assigned to the VIOS LPAR in the HMC
– LUNs are 512-byte open storage
– Each SAN LUN virtualized directly to the IBM i client
– Storage pools not used in VIOS for IBM i
– LUNs virtualized by creating virtual target SCSI devices
– Can use MPIO
Virtual SCSI adapters
– Created in the HMC
– Server SCSI adapter in VIOS, client SCSI adapter in IBM i
– Multiple pairs supported when the HMC is used
Optical
– IBM i can use any DVD drive connected to a supported VIOS adapter
– DVD drive in VIOS virtualized directly (OPTxx)
[Diagram: VIOS host with FC-attached 512B LUNs (hdiskX), DVD (/dev/cd0), and SEA over IVE, serving an IBM i client (DDxx disks, OPTxx optical, CMNxx LAN) through virtual SCSI (vtscsiX, vtoptX) and virtual LAN connections]
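The virtual target SCSI devices described above are created from the VIOS restricted shell (the padmin user) with mkvdev. A sketch, assuming hdisk2 is a SAN LUN and vhost0 is the virtual SCSI server adapter paired with the IBM i client (device names are examples and will differ per system):

```shell
$ lsdev -type disk     # identify the hdisk backing devices
$ lsmap -all           # show vhost adapters and any existing mappings

# Map a SAN LUN to the IBM i client as a virtual target SCSI device
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0

# Virtualize the physical DVD drive to the same client (appears as OPTxx in IBM i)
$ mkvdev -vdev cd0 -vadapter vhost0 -dev vtopt0
```

Running lsmap against the vhost adapter afterward confirms which backing devices the client sees.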
Dual VIOS Hosts – Supported
Duplicate the single-VIOS environment
– Virtual SCSI client/server adapter pair
– Separate set of LUNs in the 2nd VIOS
– Same size and number of LUNs
Adapter-level mirroring between the 2 sets of disk used in the client LPAR
– Mirroring between virtual SCSI client adapters
Client can withstand failure or scheduled downtime in either host
Can be used for multiple clients
Weigh against cost
[Diagram: two VIOS hosts, each presenting its own LUNs (hdiskX), DVD (/dev/cd0), and SEA to the same IBM i client over separate virtual SCSI connections]
N_Port ID Virtualization (NPIV)
N_Port ID Virtualization (NPIV) provides direct Fibre Channel connections from client partitions to SAN resources, simplifying SAN management
– Physical Fibre Channel adapter (IOA) is owned by the VIOS partition
– Supported with PowerVM Express, Standard, and Enterprise Editions
– Supports AIX 5.3, AIX 6.1, IBM i 6.1, IBM i 7.1, and Linux
– Requires POWER6 or POWER7 systems with an 8Gb PCIe Fibre Channel adapter or a 10Gb Fibre Channel over Ethernet (FCoE) adapter
– Enables use of existing storage management tools
– Simplifies storage provisioning (i.e. zoning, LUN masking)
– Enables access to SAN devices including tape libraries
IBM i requires:
– LIC 6.1.1 or LIC 7.1
– DS5000 or DS8000 storage subsystem, and/or supported tape/tape library devices
[Diagram: virtual FC adapters in client partitions mapped through the VIOS-owned physical FC adapter via the Power Hypervisor]
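Mapping a client's virtual Fibre Channel adapter onto a physical NPIV-capable port is also done in the VIOS padmin shell. A sketch, assuming fcs0 is the NPIV-capable physical port and vfchost0 is the virtual FC server adapter for the IBM i client (names are examples):

```shell
$ lsnports                               # list FC ports and their NPIV capability
$ vfcmap -vadapter vfchost0 -fcp fcs0    # bind the client's virtual FC adapter to fcs0
$ lsmap -npiv -vadapter vfchost0         # verify the mapping and the client WWPNs
```

Unlike vSCSI, no backing devices are defined in VIOS: the client's WWPNs shown by lsmap are zoned and LUN-masked on the SAN just like a physical host port.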
Comparisons: IBM i vs VIOS hosting – LPAR CPU utilization
Each host had ½ CPU and 2 GB memory; 36 disks virtualized to 2 IBM i client LPARs
[Chart: CPU utilization (0–60%) vs. client LPAR TPMs (0–60,000) for i host 1, i host 2, VIO host 1, and VIO host 2]
Comparisons: IBM i vs VIOS hosting – client IOPS
• Disk configuration in each host LPAR: 4 5903 IOAs, 18 drives, RAID-5 + hot spare protection, one vSCSI host, 6 vSCSI LUNs
• IBM i mirroring turned on in clients
[Chart: disk response time (0–25 ms) vs. client LPAR IOPS (0–6,000) for LPAR1/LPAR2 hosted by IBM i and by VIOS]
Virtualization Comparison
Factor | IBM i Host | Virtual I/O Server Host
Processor | PowerVM CPU virtualization | PowerVM CPU virtualization
Memory | – | Active Memory Sharing
Disk | IBM i internal storage; IBM i native-attached external storage; virtualized via Network Storage Space (vSCSI) | VIOS internal storage; VIOS-attached external storage; vSCSI or NPIV
SSD | Client is not aware of SSDs | Client is aware of SSDs
Network | Proxy ARP or NAT, and Layer-2 bridge | Layer-2 bridge
DVD | Yes | Yes
Tape | Yes – IBM i 7.1 TR2; not aware of tape library robotics | Yes – VIOS 2.2; needs NPIV to support tape libraries
Adapters | – | FCoE, 10Gb Ethernet
Partition Mobility | – | Partition Suspend/Resume
Skills | IBM i | AIX and IBM i skills
Console Options
– Integrated Virtualization Manager (IVM)
– Hardware Management Console (HMC)
– Systems Director Management Console (SDMC): browser interface; manages multiple systems; new user interface; enhanced functionality; supports 1000 LPARs; supports POWER6 & POWER7
Management Console Offering Highlights
Multiple offerings for flexibility and ease of use for Power Systems virtualization and HW service
HMC – hardware appliance (legacy)
– 7042-CR6, rack mount
SDMC – hardware appliance (next-gen)
– 7042-CR6 (7944-A2Y), System x3550 M3 commercial
– 2.53GHz quad-core Intel Xeon E5630, 8 GB memory
– HDD: 2x 500GB = 1 TB, rack mount
PowerVM IVM – part of PowerVM firmware
– Focus on blades & small servers
SDMC – software appliance (next-gen)
– Runs on VMware or KVM
– Customer-supplied IBM x86 hardware
– Part of Flex ITME
HMC SDMC Transition Roadmap (POWER7 servers)
SDMC (released 1H 2011)
– Built on Director 6.2.1.2
– HW/SW appliance on x86
– Supports POWER6 & POWER7 servers
– Allow ample transition time; add enhancements as appropriate
IVM transition
– IVM supports POWER7 today
– No further functional enhancements planned
HMC transition
– Full function through 1H 2011
– New I/O, including SR-IOV support
– No other virtualization enhancements
– HMC then enters maintenance mode (service fixes only)
Support for IBM Storage Systems with IBM i
[Table as of April 12, 2011: supported IBM storage systems (N Series, DS3200, DS3400, DS3500, DS3950, DS4700, DS4800, DS5020, Storwize V7000, DS5100/DS5300, DS6800, SVC, XIV, DS8100/DS8300/DS8700/DS8800) by IBM i version (5.4 / 6.1 / 7.1), POWER hardware (POWER5/6/7 rack/tower systems and Power blades), and attach method (Direct, VIOS vSCSI, VIOS NPIV, or IFS/NFS for N Series)]
Notes
– This table does not list more detailed considerations, for example required firmware or PTF levels, or configuration performance considerations
– POWER7 servers require IBM i 6.1 or later
– This table can change over time as additional hardware/software capabilities/options are added
# DS3200 only supports SAS connection; not supported on rack/tower servers, which use only Fibre Channel connections; supported on blades with SAS
## DS3500 has either SAS or Fibre Channel connection. Rack/tower uses only Fibre Channel. Blades in BCH support either SAS or Fibre Channel; blades in BCS use only SAS
### Not supported on IBM i 7.1, but see SCORE System RPQ 846-15284 for exception support
* Supported with Smart Fibre Channel adapters – NOT supported with IOP-based Fibre Channel adapters
** NPIV requires Machine Code level 6.1.1 or later and NPIV-capable HBAs (FC adapters) and switches
@ BCH supports DS3400, DS3500, DS3950; BCS supports DS3200, DS3500
@@ N Series can only be used as a file server: no load source/boot support, support only through IFS, no IBM i database support
% NPIV requires IBM i 7.1 TR2 (Technology Refresh 2) and the latest firmware (released May 2011 or later)
For more details, use the System Storage Interoperability Center: www.ibm.com/systems/support/storage/config/ssic/
Note: there are currently some differences between the above table and the SSIC; the SSIC should be updated to reflect the information above
Virtual Partition Manager
• IBM i based tool to create simple Linux partitions (HMC not required or present)
• Maximum of one IBM i partition with up to 4 Linux partitions and 4 virtual Ethernets
• Linux partitions must use all virtual I/O
• DST-type interface to create/manage
• Dynamic LPAR not supported
• Uncapped partitions supported
• With IBM i 7.1 TR3, the ability to create up to four IBM i partitions will be enabled in VPM
• Originally for iSeries POWER5 customers that want to get started with Linux
• No charge; included with IBM i
[Diagram: IBM i hosting Linux or IBM i partitions via virtual SCSI and virtual Ethernet]
Partition Suspend/Resume
Underlying technology required for Live Partition Mobility
Live Partition Mobility
Live Partition Mobility available for AIX and Linux
Requires PowerVM Enterprise Edition
Learn More About PowerVM
http://www.ibm.com/systems/power/software/virtualization
PowerVM Client
Success Stories
* Download from PowerVM
portal or order a hard copy
PowerVM portal on IBM Web site
Resources and references
Techdocs – http://www.ibm.com/support/techdocs
(updates to this presentation, tips & techniques, white papers, etc.)
PowerVM Virtualization on IBM System p: Introduction and Configuration, Fourth Edition – SG24-7940
http://www.redbooks.ibm.com/abstracts/sg247940.html?Open
PowerVM Virtualization on IBM System p: Managing and Monitoring – SG24-7590
http://www.redbooks.ibm.com/abstracts/sg247590.html?Open
IBM System p Advanced POWER Virtualization (PowerVM) Best Practices – REDP-4194
http://www.redbooks.ibm.com/abstracts/redp4194.html?Open
Power Systems: Virtual I/O Server and Integrated Virtualization Manager commands (iphcg.pdf)
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphcg/iphcg.pdf
The End, Thank You!
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in
other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM
offerings available in your area.
Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions
on the capabilities of non-IBM products should be addressed to the suppliers of those products.
IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give
you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY
10504-1785 USA.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives
only.
The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or
guarantees either expressed or implied.
All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the
results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations
and conditions.
IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions
worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment
type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal
without notice.
IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.
All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are
dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this
document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-
available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document
should verify the applicable data for their specific environment.
Revised September 26, 2006
Special notices
IBM, the IBM logo, ibm.com AIX, AIX (logo), AIX 6 (logo), AS/400, Active Memory, BladeCenter, Blue Gene, CacheFlow, ClusterProven, DB2, ESCON, i5/OS, i5/OS
(logo), IBM Business Partner (logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries,
Rational, RISC System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill,
Cloudscape, DB2 Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, GPFS, HACMP,
HACMP/6000, HASM, IBM Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power
Architecture, Power Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo),
POWER2, POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, POWER7, pureScale, System i, System p, System p5, System Storage, System z, Tivoli
Enterprise, TME 10, TurboCore, Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation
in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (®
or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be
registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at
www.ibm.com/legal/copytrade.shtml
The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.
Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are
trademarks of the Standard Performance Evaluation Corp (SPEC).
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
AltiVec is a trademark of Freescale Semiconductor, Inc.
Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Other company, product and service names may be trademarks or service marks of others.
Revised February 9, 2010
Special notices (cont.)
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.
IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html .
All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX
Version 4.3, AIX 5L or AIX 6 were used. All other systems used previous versions of AIX. The SPEC CPU2006, SPEC2000, LINPACK, and Technical Computing
benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of
these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++
Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN
and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other
software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto’s BLAS Library for Linux were also used in some benchmarks.
For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
TPC http://www.tpc.org
SPEC http://www.spec.org
LINPACK http://www.netlib.org/benchmark/performance.pdf
Pro/E http://www.proe.com
GPC http://www.spec.org/gpc
VolanoMark http://www.volano.com
STREAM http://www.cs.virginia.edu/stream/
SAP http://www.sap.com/benchmark/
Oracle Applications http://www.oracle.com/apps_benchmark/
PeopleSoft - To get information on PeopleSoft benchmarks, contact PeopleSoft directly
Siebel http://www.siebel.com/crm/performance_benchmark/index.shtm
Baan http://www.ssaglobal.com
Fluent http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
Ideas International http://www.ideasinternational.com/benchmark/bench.html
Storage Performance Council http://www.storageperformance.org/results
Revised March 12, 2009
Notes on benchmarks and values
Revised April 2, 2007
Notes on performance estimates
rPerf for AIX
rPerf (Relative Performance) is an estimate of commercial processing performance relative to other IBM UNIX systems. It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC and SPEC benchmarks. The rPerf model is not intended to represent any specific public benchmark results and should not be reasonably used in that way. The model simulates some of the system operations such as CPU, cache and memory. However, the model does not simulate disk or network I/O operations.
rPerf estimates are calculated based on systems with the latest levels of AIX and other pertinent software at the time of system announcement. Actual performance will vary based on application and configuration specifics. The IBM eServer pSeries 640 is the baseline reference system and has a value of 1.0. Although rPerf may be used to approximate relative IBM UNIX commercial processing performance, actual system performance may vary and is dependent upon many factors including system hardware configuration and software design and configuration. Note that the rPerf methodology used for the POWER6 systems is identical to that used for the POWER5 systems. Variations in incremental system performance may be observed in commercial workloads due to changes in the underlying system architecture.
All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM. Buyers should consult other sources of information, including system benchmarks, and application sizing guides to evaluate the performance of a system they are considering buying. For additional information about rPerf, contact your local IBM office or IBM authorized reseller.
========================================================================
CPW for IBM i
Commercial Processing Workload (CPW) is a relative measure of performance of processors running the IBM i operating system. Performance in customer environments may vary. The value is based on maximum configurations. More performance information is available in the Performance Capabilities Reference at: www.ibm.com/systems/i/solutions/perfmgmt/resource.html
Revised March 12, 2009
Notes on HPC benchmarks and values
The IBM benchmark results shown herein were derived using particular, well-configured, development-level and generally-available computer systems. Buyers should consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application-oriented testing. For additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller, or access the Web site of the benchmark consortium or benchmark vendor.
IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html.
All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX Version 4.3 or AIX 5L were used. All other systems used previous versions of AIX. The SPEC CPU2000, LINPACK, and Technical Computing benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++ Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors: KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck & Associates, and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto’s BLAS Library for Linux were also used in some benchmarks.
For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
SPEC http://www.spec.org
LINPACK http://www.netlib.org/benchmark/performance.pdf
Pro/E http://www.proe.com
GPC http://www.spec.org/gpc
STREAM http://www.cs.virginia.edu/stream/
Fluent http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
AMBER http://amber.scripps.edu/
FLUENT http://www.fluent.com/software/fluent/fl5bench/index.htm
GAMESS http://www.msg.chem.iastate.edu/gamess
GAUSSIAN http://www.gaussian.com
ANSYS http://www.ansys.com/services/hardware-support-db.htm
Click the "Benchmarks" icon in the left-hand frame to expand, then click the "Benchmark Results in a Table" icon for benchmark results.
ABAQUS http://www.simulia.com/support/v68/v68_performance.php
ECLIPSE http://www.sis.slb.com/content/software/simulation/index.asp?seg=geoquest&
MM5 http://www.mmm.ucar.edu/mm5/
MSC.NASTRAN http://www.mscsoftware.com/support/prod%5Fsupport/nastran/performance/v04_sngl.cfm
STAR-CD http://www.cd-adapco.com/products/STAR-CD/performance/320/index.html
NAMD http://www.ks.uiuc.edu/Research/namd
HMMER http://hmmer.janelia.org/
http://powerdev.osuosl.org/project/hmmerAltivecGen2mod