© Copyright IBM Corporation 2012
IBM System z
GSE z/OS workgroup meeting 2012/03/14
z196 and z114 HW Subsystems Update
Jean-Paul Goemaere, IBM Belgium / Luxembourg
I/O infrastructure
• New PCIe-based I/O infrastructure
• New PCIe I/O drawer
  – Increased port granularity
  – Designed for improved power and bandwidth compared to the I/O cage and I/O drawer
Storage
• New PCIe-based FICON Express8S features
• ESCON Statement of Direction
Networking
• New PCIe-based OSA-Express4S features
Coupling
• New 12x InfiniBand and 1x InfiniBand features (HCA3-O fanouts)
  – 12x InfiniBand – improved service times when using the 12x IFB3 protocol
  – 1x InfiniBand – increased port count
zEnterprise 114 and zEnterprise 196 GA2
I/O Subsystem Internal Bus Interconnect Speeds (GBps)
• STI (z900/z800) – 1 GBps
• STI (z990/z890) – 2 GBps
• STI (z9) – 2.7 GBps
• InfiniBand (z10, z196 GA1) – 6 GBps
• PCIe (z196 GA2, z114) – 8 GBps
PCIe: Peripheral Component Interconnect (PCI) Express; STI: Self-Timed Interconnect
IBM System z Balanced System Comparison for High End Servers
Balanced system: CPU, nWay, Memory, I/O Bandwidth*

Server        PCI for 1-way  Processors  Memory    System I/O Bandwidth
zSeries 900   300            16-way      64 GB     24 GB/sec
zSeries 990   450            32-way      256 GB    96 GB/sec
z9 EC         600            54-way      512 GB    172.8 GB/sec*
z10 EC        920            64-way      1.5 TB**  288 GB/sec*
z196 (2011)   1202           80-way      3 TB**    384 GB/sec*

* Servers exploit a subset of their designed I/O capability
** Up to 1 TB per LPAR
PCI – Processor Capacity Index
z196 GA2 I/O Infrastructure (PCIe Based) with PCIe I/O drawer
[Diagram: four books (Book 0–3), each with memory, PUs, and SC1/SC0 (FBC), connect through fanouts to the I/O subsystem. PCIe fanouts drive x16 PCIe Gen2 8 GBps interconnects to PCIe switches in the PCIe I/O drawers, which feed FICON Express8S and OSA-Express4S cards over 4 GB/s PCIe Gen2 x8. HCA2 fanouts drive 6 GBps InfiniBand interconnects to IFB-MP cards (with Redundant I/O Interconnect, RII) in the I/O cage and I/O drawer, which feed FICON Express8 channels and OSA-Express3 ports over 2 GBps or 1 GBps mSTI links.]
z114 and z196 at GA2 support two different internal I/O infrastructures
• The current InfiniBand I/O infrastructure first made available on z10
  – InfiniBand fanouts supporting the current 6 GBps InfiniBand I/O interconnect
  – InfiniBand I/O card domain multiplexers with Redundant I/O Interconnect in:
    • The 14U, 28-slot, 7-domain I/O cage (z196 only)
    • The 5U, 8-slot, 2-domain I/O drawer (z114 and z196)
  – Selected legacy I/O feature cards
    • Carry forward and new build
• The new PCI Express I/O infrastructure
  – PCIe fanouts supporting a new 8 GBps PCIe I/O interconnect
  – PCIe switches with Redundant I/O Interconnect for I/O domains in a new 7U, 32-slot, 4-domain PCIe I/O drawer (z114 and z196 GA2)
  – New FICON Express8S and OSA-Express4S I/O feature cards, designed to:
    • Reduce purchase granularity (fewer ports per card)
    • Improve performance
    • Increase I/O port density
    • Save energy
PCIe I/O Drawer
PCIe I/O drawer and PCIe I/O features
• Increased infrastructure bandwidth
  – PCI Express 2 x16 – 8 GBps interconnect (compared to the 6 GBps 12x InfiniBand DDR interconnect)
  – PCI Express 2 x8 – 4 GBps available to PCIe I/O feature cards (compared to 2 GBps or less available to older I/O feature cards)
• Compact
  – Two 32-slot PCIe I/O drawers occupy the same space as one 28-slot I/O cage
  – Increases I/O port density 14% (equivalent to an increase from 28 to 32 slots)
• Improved I/O feature purchase granularity
  – “Half high” I/O feature cards compared to older I/O feature cards
  – Two FICON Express8S channels per feature (four on FICON Express8)
  – One or two OSA-Express4S ports per feature (two or four on OSA-Express3)
• Reduced power consumption
• Designed for improved Reliability, Availability, and Serviceability
  – Concurrent field MES install and repair
  – Symmetrical, redundant cooling across all cards and power supplies
  – Temperature monitoring of critical ASICs
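The 14% density figure above is simple arithmetic; the sketch below (Python, with the slot counts taken from the slide) checks it:

```python
# Two 7U PCIe I/O drawers occupy the same 14U footprint as one 28-slot I/O cage.
CAGE_SLOTS = 28             # one 14U, 28-slot I/O cage
DRAWER_SLOTS_14U = 2 * 32   # two 7U, 32-slot PCIe I/O drawers in the same 14U

def density_increase(old_slots: int, new_slots: int) -> float:
    """Percentage increase in I/O slots for the same rack space."""
    return (new_slots - old_slots) / old_slots * 100

# Per-cage-equivalent comparison: 28 -> 32 slots in the same share of space.
assert round(density_increase(28, 32)) == 14   # the "14%" on the slide
```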
New 32-slot PCIe I/O drawer
• Supports only the new PCIe I/O cards introduced with z114 and z196 GA2.
• Supports 32 PCIe I/O cards, 16 front and 16 rear, vertical orientation, in four 8-card domains (0 to 3; domains 0 and 2 in front, 3 and 1 in the rear).
• Requires four PCIe switch cards, each connected to an 8 GBps PCIe I/O interconnect, to activate all four domains.
• To support Redundant I/O Interconnect (RII) between front-to-back domain pairs 0-1 and 2-3, the two interconnects to each pair must come from two different PCIe fanouts. (All four domains in one of these drawers can be activated with two fanouts.)
• Concurrent field install and repair.
• Requires 7 EIA units of space (12.25 inches ≈ 311 mm).
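The RII cabling rule above can be expressed as a small check. This is a hypothetical sketch – the `rii_ok` helper and the fanout names are invented for illustration:

```python
# Rule from the slide: the two interconnects feeding a front/back domain
# pair (0-1 and 2-3) must come from two different PCIe fanouts, so either
# fanout can take over the whole pair if the other fails.
DOMAIN_PAIRS = [(0, 1), (2, 3)]

def rii_ok(cabling: dict) -> bool:
    """cabling maps domain number -> id of the fanout feeding its switch card."""
    return all(cabling[a] != cabling[b] for a, b in DOMAIN_PAIRS)

# Two fanouts are enough to activate all four domains:
assert rii_ok({0: "fanout-A", 1: "fanout-B", 2: "fanout-B", 3: "fanout-A"})
# Feeding a pair from a single fanout defeats the redundancy:
assert not rii_ok({0: "fanout-A", 1: "fanout-A", 2: "fanout-B", 3: "fanout-A"})
```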
z114 & z196 GA2 I/O Drawers and Cages
• PCIe I/O drawer (FC 4003) – z114 & z196
  – 4 I/O domains
  – 32 I/O slots (PCIe I/O cards only)
  – At least two PCIe fanouts (4 ports per drawer)
  – 7 EIA units
• I/O drawer (FC 4000) – z114, z196, & z10 BC
  – 2 I/O domains
  – 8 I/O slots (legacy I/O cards only)
  – At least 2 HCA2-C fanouts (2 ports per drawer)
  – Up to two drawers on a pair of fanouts
  – 5 EIA units
• I/O cage – z196 & earlier (except z10 BC and z800); not supported on z114
  – 7 I/O domains
  – 28 I/O slots (legacy I/O cards only)
  – Up to 4 fanouts (z9 and later systems) for all 7 domains
  – 14 EIA units
z196 and z114 I/O Features
*All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party's sole risk and will not create liability or obligation for IBM.
ESCON Statement of Direction* – July 12, 2011
• The IBM zEnterprise 196 and the IBM zEnterprise 114 are the last System z servers to support ESCON channels. IBM plans to not offer ESCON channels as an orderable feature on System z servers that follow the z196 (machine type 2817) and z114 (machine type 2818). In addition, ESCON channels cannot be carried forward on an upgrade to such follow-on servers.
• This plan applies to channel path identifier (CHPID) types CNC, CTC, CVC, and CBY and to features 2323 and 2324.
• IBM Global Technology Services offers an ESCON to FICON Migration solution, Offering ID #6948-97D, to help facilitate migration from ESCON to FICON. This offering is designed to help customers simplify and manage a single physical and operational environment – FICON channels on the mainframe with continued connectivity to ESCON devices.
• This is a restatement of general direction already published. Refer to Hardware Announcement 111-012, dated February 15, 2011.
PRIZM Basics
• PRIZM is a purpose-built appliance designed exclusively for IBM System z
• PRIZM converts native FICON (FC) protocol to native ESCON (CNC) protocol, allowing ESCON and B/T devices to connect to FICON channels
• 2U appliance; ESCON ports: MT-RJ; FICON ports: LC Duplex
Where Does PRIZM Fit in the Data Center?
[Diagram: PRIZM bridges FICON channels to an ESCON device pool, alongside native FICON tape & DASD, in three configurations:]
1. Point-to-Point FICON
2. Switched FICON
3. Cascaded and Channel Extended FICON (over ISLs or IP-based FICON channel extension)
*PRIZM supports all ESCON control units: tape, printers, com devices, FEPs, etc.
z196 GA2 and z114 I/O Features supported
• Features – PCIe I/O drawer (32 I/O slots)
  – FICON Express8S
    • SX and LX
  – OSA-Express4S
    • 10 GbE LR and SR
    • GbE SX and LX
• Features – 28-slot I/O cage (not on z114) and 8-slot I/O drawer
  – Crypto Express3
  – ESCON (240 or fewer)
  – FICON Express8 (carry forward or RPQ 8P2534 to fill empty slots)
  – FICON Express4 (carry forward only)
  – ISC-3
  – OSA-Express3 1000BASE-T
  – OSA-Express3 (carry forward or RPQ 8P2534 to fill empty slots)
    • 10 GbE, GbE
  – OSA-Express2 (carry forward only)
    • GbE, 1000BASE-T
  – PSC (carry forward or new build, no MES add)
FICON Express8S – PCIe I/O drawer
# 0409 – 10KM LX, # 0410 – SX
• For FICON, zHPF, and FCP environments
• CHPID types: FC and FCP
• 2 PCHIDs/CHPIDs
• Auto-negotiates to 2, 4, or 8 Gbps
• Increased performance compared to FICON Express8
• 10KM LX – 9 micron single mode fiber
  – Unrepeated distance – 10 kilometers (6.2 miles)
  – Receiving device must also be LX
• SX – 50 or 62.5 micron multimode fiber
  – Distance varies with link data rate and fiber type
  – Receiving device must also be SX
• 2 channels of LX or SX (no mix)
• Small form factor pluggable (SFP) optics
• Concurrent repair/replace action for each SFP
[Card diagram: each of the two 2/4/8 Gbps channels has its own SFP+ optic, HBA ASIC, flash, and IBM ASIC behind the PCIe switch.]
FICON performance on System z
[Chart 1: I/O driver benchmark, I/Os per second, 4k block size, channel 100% utilized]
• ESCON (z10) – 1,200
• FICON Express4 and FICON Express2 (z10) – 14,000
• FICON Express8 (z196/z10) – 20,000
• FICON Express8S (z196/z114) – 20,000
• zHPF, FICON Express4 and FICON Express2 (z10) – 31,000
• zHPF, FICON Express8 (z196/z10) – 52,000
• zHPF, FICON Express8S (z196/z114) – 92,000 (77% increase)
[Chart 2: I/O driver benchmark, MegaBytes per second, full-duplex, large sequential read/write mix]
• FICON Express4, 4 Gbps (z9/z10) – 350
• zHPF, FICON Express4, 4 Gbps (z196/z10) – 520
• FICON Express8, 8 Gbps (z196/z10) – 620
• zHPF, FICON Express8, 8 Gbps (z196/z10) – 770
• FICON Express8S, 8 Gbps (z196/z114) – 620
• zHPF, FICON Express8S, 8 Gbps (z196/z114) – 1,600 (108% increase)
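The percentage callouts on the chart follow directly from the bar values; a quick check (Python, values taken from the slide):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# zHPF 4k I/Os per second: FICON Express8 (52,000) vs FICON Express8S (92,000)
assert round(pct_increase(52000, 92000)) == 77   # the "77% increase"
# Full-duplex MBps: zHPF FICON Express8 (770) vs zHPF FICON Express8S (1,600)
assert round(pct_increase(770, 1600)) == 108     # the "108% increase"
```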
FCP performance on System z
[Chart 1: I/Os per second, read/write mix, 4k block size, channel 100% utilized]
• 15,750 and 31,500 (FICON Express4, 4 Gbps, z9) → 60,000 (z10) → 84,000 (FICON Express8, 8 Gbps, z10 and z196) → 92,000 (FICON Express8S, 8 Gbps, z196/z114) – 10% increase
[Chart 2: MegaBytes per second (full-duplex), large sequential read/write mix]
• 520 (FICON Express4, 4 Gbps, z10) → 770 (FICON Express8, 8 Gbps, z196/z10) → 1,500 (FICON Express8S, 8 Gbps, z196/z114) – 95% increase
FCP channels to support T10-DIF for enhanced reliability
• System z Fibre Channel Protocol (FCP) has implemented support of the American National Standards Institute's (ANSI) T10 Data Integrity Field (DIF) standard.
  – Data integrity protection fields are generated by the operating system and propagated through the storage area network (SAN).
  – System z helps to provide added end-to-end data protection between the operating system and the storage device.
• An extension to the standard, Data Integrity Extensions (DIX), provides checksum protection from the application layer through the host bus adapter (HBA), where cyclic redundancy check (CRC) protection is implemented.
• T10-DIF support by the FICON Express8S and FICON Express8 features, when defined as CHPID type FCP, is exclusive to z196 and z114.
• Exploitation of the T10-DIF standard requires support by the operating system and the storage device.
  – z/VM V5.4 and V6.1 with PTFs, and V6.2, for guest exploitation
  – Linux on System z distributions: IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.
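As an illustration of the T10-DIF mechanism (an 8-byte integrity field – 2-byte guard tag, 2-byte application tag, 4-byte reference tag – appended to each 512-byte block), here is a hedged Python sketch. The real guard tag is a CRC-16 with polynomial 0x8BB7; `zlib.crc32` stands in here only to show the shape of the check, and the helper names are invented:

```python
import struct
import zlib

BLOCK = 512  # T10-DIF appends an 8-byte integrity field per 512-byte block

def append_dif(block: bytes, ref_tag: int, app_tag: int = 0) -> bytes:
    """Append an illustrative DIF: 2-byte guard, 2-byte app tag, 4-byte ref tag."""
    guard = zlib.crc32(block) & 0xFFFF          # stand-in for CRC-16/T10-DIF
    return block + struct.pack(">HHI", guard, app_tag, ref_tag)

def verify_dif(protected: bytes, ref_tag: int) -> bool:
    """Recompute the guard and check the reference tag, as a receiver would."""
    block, dif = protected[:BLOCK], protected[BLOCK:]
    guard, _app, ref = struct.unpack(">HHI", dif)
    return guard == (zlib.crc32(block) & 0xFFFF) and ref == ref_tag

# A matching reference tag and intact data verify; a mismatched tag does not.
assert verify_dif(append_dif(bytes(BLOCK), ref_tag=7), ref_tag=7)
```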
OSA-Express4S fiber optic – PCIe I/O drawer
# 0406 – 10 GbE LR, # 0407 – 10 GbE SR
# 0404 – GbE LX, # 0405 – GbE SX
• 10 Gigabit Ethernet (10 GbE)
  – CHPID types: OSD, OSX
  – Single mode (LR) or multimode (SR) fiber
  – One port of LR or one port of SR
  – 1 PCHID/CHPID
• Gigabit Ethernet (GbE)
  – CHPID types: OSD (OSN not supported)
  – Single mode (LX) or multimode (SX) fiber
  – Two ports of LX or two ports of SX
  – 1 PCHID/CHPID
• Small form factor optics – LC Duplex
[Card diagram: PCIe interface, IBM ASIC, and FPGA per card.]
OSA-Express4S 10 GbE performance (laboratory)
[Charts: OSA-E3 vs OSA-E4S throughput in MBps]
• Inbound streams, 1492-byte MTUs: 615 → 1120 (80% increase)
• Mixed streams, 1492-byte MTUs: 680 → 1180 (70% increase)
• Inbound streams, 8000-byte MTUs: 1240 → 2080 (70% increase)
• Mixed streams, 8000-byte MTUs: 1180 → 1680 (40% increase)
Notes:
• 1 megabyte per second (MBps) is 1,048,576 bytes per second
• MBps represents payload throughput (does not count packet and frame headers)
• MTU – maximum transmission unit
z196/z114 HiperSockets – doubled the number
[Diagram: HiperSockets connecting z/OS LP1, Linux on System z LP2, z/VSE LP3, z/VM LP4 (Layer 3), and z/VM LP5 (Layer 2)]
• High-speed “intraserver” network
• Independent, integrated, virtual LANs
• Communication path – system memory
• Communication across LPARs
  – Single LPAR – connect up to 32 HiperSockets
• Support for multiple LCSSs & spanned channels
• Virtual LAN (IEEE 802.1q) support
• HiperSockets Network Concentrator
• Broadcast support for IPv4 packets
• IPv6
• HiperSockets Network Traffic Analyzer (HS NTA)
• No physical cabling or external connections required
Support for HiperSockets Completion Queue
• HiperSockets Completion Queue (Statement of Direction, July 12, 2011)
  – IBM plans to support transferring HiperSockets messages asynchronously, in addition to the current synchronous manner, on z196 and z114. This could be especially helpful in burst situations. The Completion Queue function is designed to allow HiperSockets to transfer data synchronously if possible and asynchronously if necessary, thus combining ultra-low latency with more tolerance for traffic peaks. HiperSockets Completion Queue is planned to be supported in the z/VM and z/VSE environments in a future deliverable.
• Operating System Support
  – z/OS V1.13 (toleration, no exploitation)
  – Linux on System z distributions
    • Red Hat Enterprise Linux (RHEL) 6.2
    • Novell SUSE Linux Enterprise Server (SLES) 11 SP2
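The "synchronously if possible, asynchronously if necessary" idea above can be sketched as a tiny send path. All names here are invented for illustration:

```python
import queue

def hs_send(msg, receiver_ready: bool, completion_q: queue.Queue) -> str:
    """Try an immediate (synchronous) hand-off; on a burst, queue the
    message for later completion processing instead of failing."""
    if receiver_ready:
        return "sync"            # ultra-low-latency path
    completion_q.put(msg)        # tolerate the traffic peak
    return "async"

cq = queue.Queue()
assert hs_send("pkt1", True, cq) == "sync"
assert hs_send("pkt2", False, cq) == "async" and cq.get() == "pkt2"
```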
Support for HiperSockets Integration with the IEDN
• HiperSockets integration with the IEDN (Statement of Direction, July 12, 2011)
  – Within a zEnterprise environment, it is planned for HiperSockets to be integrated with the intraensemble data network (IEDN), extending the reach of the HiperSockets network outside of the central processor complex (CPC) to the entire ensemble, appearing as a single Layer 2 network. HiperSockets integration with the IEDN is planned to be supported in z/OS V1.13 and z/VM in a future deliverable*
• Operating System Support:
  – z/OS V1.13 with PTFs
  – z/VM V6.2 with PTFs planned to be available April 13, 2012
[Diagram: a CPC with a native z/OS LPAR and a native Linux LPAR (each with OSX NICs and IQDX NICs on the HiperSockets (IQDX) network under PR/SM), plus two z/VM LPARs (A and B) whose virtual switches (VSwitch A and VSwitch B, primary and secondary bridge) connect Linux and z/OS guests via QEBSM, an OSX uplink port, and a HiperSockets bridge port.]
Inbound Workload Queuing for Enterprise Extender
[Diagram: a z/OS instance (TCP/IP stack) with four input queues – Other, Bulk, SD, EE – each serviced by its own CPU, fed by an OSA-Express device attached to the WAN. EE traffic separation optimizes host inbound EE processing.]
• OSA separates inbound network traffic into multiple input queues (based on type of workload) on the same host (device) interface
• Each input queue can be serviced concurrently by separate processors
• Stack receives pre-sorted packets, and host processing is optimized based on the unique requirements of each type of workload
• New – Enterprise Extender traffic is added as a new input queue (new IWQ)!
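The queue-separation idea can be sketched as follows. The queue names mirror the diagram (Bulk, SD, EE, Other), while the classification key is invented for illustration:

```python
# Hedged sketch of inbound workload queuing: inbound traffic is classified
# and placed on per-workload input queues so each queue can be serviced by
# a separate processor. Unrecognized traffic lands on the default queue.
QUEUES = {"bulk": [], "sd": [], "ee": [], "other": []}

def enqueue(packet: dict) -> str:
    """Place a packet on its workload's input queue; return the queue name."""
    kind = packet.get("workload", "other")
    name = kind if kind in QUEUES else "other"
    QUEUES[name].append(packet)
    return name

assert enqueue({"workload": "ee", "payload": b"x"}) == "ee"   # the new EE queue
assert enqueue({"workload": "telnet"}) == "other"             # default queue
```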
Coupling HW
z114 and z196 GA2 InfiniBand HCA3 Fanouts
[Diagram: HCA3-O for 12x IFB & 12x IFB3 – up to 16 CHPIDs across 2 ports*; HCA3-O LR for 1x IFB – up to 16 CHPIDs across 4 ports*]
* Performance considerations may reduce the number of CHPIDs per port.
• New 12x InfiniBand and 1x InfiniBand fanout cards
• Exclusive to zEnterprise 196 and zEnterprise 114
  – HCA3-O fanout for 12x InfiniBand coupling links
    • CHPID type – CIB
    • Improved service times with the 12x IFB3 protocol
    • Two ports per feature
    • Fiber optic cabling – 150 meters
    • Supports connectivity to HCA2-O (no connectivity to System z9 HCA1-O)
    • Link data rate of 6 GBps
  – HCA3-O LR fanout for 1x InfiniBand coupling links
    • CHPID type – CIB
    • Four ports per feature
    • Fiber optic cabling – 10 km unrepeated, 100 km repeated
    • Supports connectivity to HCA2-O LR
    • Link data rate server-to-server 5 Gbps
    • Link data rate with WDM: 2.5 or 5 Gbps
Note: The InfiniBand link data rates do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload.
12x InfiniBand Coupling IFB3 Protocol (HCA3-O fanout)
• Two protocols
  1. 12x IFB – HCA3-O to HCA3-O or HCA2-O
  2. 12x IFB3 – improved service times for HCA3-O to HCA3-O
     – 12x IFB3 service times are designed to be 40% faster than 12x IFB
• 12x IFB3 protocol activation requirements
  – Four or fewer CHPIDs per HCA3-O port
    • If more than four CHPIDs are defined per port, the CHPIDs will use the IFB protocol and run at 12x IFB service times
[Diagram: HCA3-O for 12x IFB & 12x IFB3 – up to 16 CHPIDs across 2 ports*]
* Performance considerations may reduce the number of CHPIDs per port.
Note: The InfiniBand link data rates do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload.
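The activation rule above reduces to a small decision per port; a sketch (function name invented for illustration):

```python
# 12x IFB3 runs only when the peer is also HCA3-O AND four or fewer CHPIDs
# are defined on the port; otherwise the port falls back to 12x IFB.
def port_protocol(chpids_defined: int, peer_fanout: str) -> str:
    if peer_fanout == "HCA3-O" and chpids_defined <= 4:
        return "12x IFB3"
    return "12x IFB"

assert port_protocol(4, "HCA3-O") == "12x IFB3"
assert port_protocol(5, "HCA3-O") == "12x IFB"   # too many CHPIDs on the port
assert port_protocol(2, "HCA2-O") == "12x IFB"   # peer is not HCA3-O
```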
STP Recovery Enhancements – Going-away signal
• The going-away signal is a reliable, unambiguous signal indicating that the CPC is about to enter a check-stopped state
• In an STP-only Coordinated Timing Network, when a going-away signal from the CTS is received by the BTS:
  – The BTS safely takes over as CTS
  – The going-away signal has priority over OLS in a 2-server CTN
  – The BTS can also use the going-away signal to take over as CTS for CTNs with 3 or more servers, without communicating with the Arbiter
• Dependencies on OLS and CAR removed in a 2-server CTN
• Dependency on BTS-to-Arbiter communication removed in CTNs with 3 or more servers
• Prerequisites:
  – InfiniBand (IFB) links using
    • HCA3-O to HCA3-O – 12x IFB or 12x IFB3
    • HCA3-O LR to HCA3-O LR – 1x IFB
• The current recovery design is still used when the going-away signal is not received by the BTS, and for other failure types
Driver 93 only:
• Hiper EC.MCL N48165.053 (alert/circumvention)
• Hiper EC.MCL N48165.057 (alert/fix)
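The takeover decision described above can be summarized in a few lines (a sketch; the function name is invented):

```python
# On receipt of the going-away signal the BTS takes over as CTS directly,
# with no dependency on OLS (2-server CTN) or on BTS-to-Arbiter
# communication (3+ servers); otherwise the prior recovery design applies.
def bts_action(going_away_received: bool) -> str:
    if going_away_received:
        return "take over as CTS"
    return "use current recovery design"

assert bts_action(True) == "take over as CTS"
assert bts_action(False) == "use current recovery design"
```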
zBX Virtual Servers improved time
• SE's Battery Operated Clock (BOC)
• Server's Time-of-Day (TOD)
• Network Time Protocol (NTP)
• Universal Time Coordinated (UTC)
Latest news March 6th, 2012
Superior virtualization for today and the future: z/VM 6.2
“LGR is the very best z/VM software enhancement since 64-bit support became available.”
– Mark Shackelford, Vice President, Information Services, Baldor
• Increased flexibility with Live Guest Relocation (LGR) to move virtual servers without disruption.
• Increased management of resources with multi-system virtualization to allow up to four z/VM® instances to be clustered as a single system image.
• Increased scalability with up to four systems horizontally, even on mixed hardware generations.
• Increased availability through non-disruptively moving work to available system resources and non-disruptively moving system resources to work.
• Relief from the challenges associated with virtual machine sprawl on competitive systems.
New capabilities for IBM zEnterprise System and Unified Resource Manager
• Increased flexibility for deployment and management – APIs for Unified Resource Manager
  – Integration between Unified Resource Manager and the broader ecosystem of management tools
  – Programmatic access to the same functions provided by the Hardware Management Console
• Dynamic discovery of storage resources
  – Simplified configuration of storage resources and virtual servers
• Enhanced disaster recovery – Geographically Dispersed Parallel Sysplex™ (GDPS®)
  – End-to-end application CA/DR now available for all zEnterprise resources including zBX
New capabilities for IBM zEnterprise System and Unified Resource Manager
• Network simplification and enhancements
  – HiperSockets™ Completion Queue
  – Improved network monitoring and metrics
  – HiperSockets integration with the intraensemble data network (IEDN)
  – z/VM® V6.2 HiperSockets Virtual Switch Bridge support
  – Server/Application State Protocol (SASP) load balancing
• Support for additional configurations
  – Additional Fibre Channel optics for BladeCenter chassis
  – Support for BladeCenter HX5 blade with 192 and 256 GB memory
  – Up to 56 System x blades
  – SAP support for System x Linux and Microsoft Windows
System z Statements of Direction
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM.
zEnterprise – Statements of Direction (SODs)
• The IBM zEnterprise 196 and the zEnterprise 114 are the last System z servers to support the Power Sequence Controller (PSC) feature.
  – IBM intends to not offer support for the PSC (feature #6501) on future System z servers after the z196 (2817) and z114 (2818). PSC features cannot be ordered and cannot be carried forward on an upgrade to such a follow-on server.
• The IBM zEnterprise 196 and the zEnterprise 114 are the last System z servers to offer ordering of ISC-3.
  – Enterprises should begin migrating from ISC-3 features (#0217, #0218, #0219) to 12x InfiniBand (#0163 – HCA2-O or #0171 – HCA3-O fanout) or 1x InfiniBand (#0168 – HCA2-O LR or #0170 – HCA3-O LR fanout) coupling links.
• The IBM zEnterprise 196 and the zEnterprise 114 are the last System z servers to support ESCON channels.
  – IBM plans not to offer ESCON channels as an orderable feature on System z servers that follow the z196 (machine type 2817) and z114 (machine type 2818). In addition, ESCON channels cannot be carried forward on an upgrade to such follow-on servers. This plan applies to channel path identifier (CHPID) types CNC, CTC, CVC, and CBY and to features #2323 and #2324.
  – System z customers should continue migrating from ESCON to FICON. Alternate solutions are available for connectivity to ESCON devices.
zEnterprise – Statements of Direction (SODs) – continued
• The IBM zEnterprise 196 and the zEnterprise 114 are the last System z servers to support OSA-Express2 features.
  – Enterprises should begin migrating from OSA-Express2 features (#3364, #3365, #3366) to OSA-Express3/OSA-Express4S features.
• The IBM zEnterprise 196 and the zEnterprise 114 are the last System z servers to support dial-up modems for use with the Remote Support Facility (RSF) and the External Time Source (ETS) option of Server Time Protocol (STP).
  – The currently available Network Time Protocol (NTP) server option for ETS, as well as Internet time services available using broadband connections, can be used to provide the same degree of accuracy as dial-up time services.
  – Enterprises should begin migrating from dial-up modems to broadband for RSF connections.
zEnterprise – Statements of Direction (SODs) – October 12, 2011
• Removal of modem support
  – Beginning with the next System z server after the IBM zEnterprise 196 and 114, the new Hardware Management Console (HMC) is intended to no longer provide modem support. As a result, modems will not be allowed for use with the Remote Support Facility (RSF) and one of the External Time Source (ETS) options of Server Time Protocol (STP). Only broadband connections will be allowed. The new HMC driver is planned to provide enhanced security by providing Network Time Protocol (NTP) authentication support when an NTP server is accessed to get accurate time for the STP Coordinated Timing Network (CTN).
  – Note that the above changes will affect new orders of z196 and z114, as well as upgrades of HMC driver levels to this new version.
  – Enterprises using modems for RSF or STP should plan on migrating to broadband connections. The currently available NTP server option for ETS, as well as Internet time services available using broadband connections, can be used to provide the same degree of accuracy as dial-up time services.
  – Reference: Integrating the Hardware Management Console's Broadband Remote Support Facility into your Enterprise, SC28-6880
• Note: When implemented, the above changes are intended to apply to new HMC orders for z196 and z114, as well as upgrades of older HMCs to this new version of HMC LIC.
zEnterprise – Statements of Direction (SODs) – October 12, 2011
• IBM intends that the zEnterprise 196 and zEnterprise 114 will be the last servers to offer ordering of the external Ethernet switch.
  – As a result, it will no longer be possible to order, as server features on future System z servers, the Ethernet switches required by HMCs. Customers should plan to use existing supported switches or to acquire additional switches separately to implement the required HMC LAN connectivity.
  – Note: Ethernet switches are offered today as FC #0070 on zEnterprise servers.
zEnterprise – Statements of Direction (SODs) – October 12, 2011
• GDPS/Global Mirror clusters managed by SA AppMan:
  – GDPS plans to enhance its Distributed Cluster Management (DCM) support for IBM Tivoli System Automation Application Manager (SA AppMan) by extending it to the GDPS/Global Mirror (GM) offering, in addition to the GDPS/PPRC offering available today. This will allow for coordinated disaster recovery across System z and distributed servers at unlimited distances. With GDPS/GM managing replication of data for both System z and the distributed servers under SA AppMan control, this solution can also provide cross-platform data consistency across the System z and distributed servers.
• GDPS DCM support for stand-alone distributed servers:
  – GDPS plans to enhance its DCM support for SA AppMan by extending it to stand-alone distributed servers, building upon the support for clustered distributed servers available today. This capability can benefit distributed servers running on a zBX or on other distributed platforms that are not members of a clustered network, and will allow continuous availability and disaster recovery across heterogeneous platforms. Support is planned for GDPS/PPRC and GDPS/GM.
zEnterprise – Statements of Direction (SODs) – October 12, 2011
• HMC z/VM Tower Systems Management Support
  – z/VM 6.2 is intended to be the last release supported by the HMC z/VM Tower systems management support originally introduced with System z9. The alternative implementation for virtual server and virtual resource management for z/VM V6 continues to be supported by the zEnterprise Unified Resource Manager on zEnterprise or later.
zEnterprise – Statements of Direction (SODs) – October 12, 2011
• Global Resource Serialization (GRS) ring to support FICON channels
  – Many customers are migrating from ESCON channels to FICON channels, and in a Hardware Announcement dated July 12, 2011, IBM announced that the zEnterprise 196 (z196) and zEnterprise 114 (z114) generation of servers is intended to be the last to support ESCON channels. Although IBM recommends that you use GRS Star, and for GRS Ring environments recommends XCF communications for CTC management, IBM intends to extend z/OS Global Resource Serialization (GRS) Ring function to natively support FICON channel-to-channel (CTC) connections in the z/OS release following z/OS V1.13, and to make this support available on z/OS V1.11, V1.12, and V1.13.
• Revision of FICON Express4 support on future System z servers:
  – In previous statements of direction IBM stated that the IBM zEnterprise 196 and IBM zEnterprise 114 would be the last servers to support FICON Express4 channels. IBM now plans to support carry forward of the FICON Express4 features (#3321 and #3322 only) into the server after the zEnterprise System.
zEnterprise – Statements of Direction (SODs) – March 6, 2012
• Removal of support for Ethernet half-duplex operation and 10 Mbps link data rate
  – The next IBM mainframe product family announced after the IBM zEnterprise 196 (z196) and the IBM zEnterprise 114 (z114) is planned to be the last System z product family to support half-duplex operation and the 10 Mbps link data rate for copper Ethernet environments. The 1000BASE-T Ethernet feature will support full-duplex operation and auto-negotiation to 100 or 1000 Mbps exclusively.
Questions?
Thank you