Release Notice ETERNUS CS8000 V6 -...

Fujitsu ETERNUS CS8000 Version 6.0A SP02
May 16th, 2014
Release Note

All rights reserved, including intellectual property rights. Technical data is subject to modifications and delivery subject to availability. Any liability that the data and illustrations are complete, actual or correct is excluded. Designations may be trademarks and/or copyrights of the respective manufacturer, the use of which by third parties for their own purposes may infringe the rights of such owner.

© Fujitsu Technology Solutions GmbH 2014. All rights reserved.


ETERNUS CS8000 V6.0A SP02 Release Notes

1 General
1.1 Ordering
1.2 Delivery
1.3 Documentation
1.4 Licenses for Open Source Software
1.5 Useful links
1.5.1 Partner Portal for Fujitsu Service and Service Partners
1.5.1.1 Manuals
1.5.1.2 Release Notice
1.5.1.3 Support Bulletins
1.5.1.4 Support Matrix
1.5.2 Internet
1.5.2.1 Release Notice and manuals
1.5.2.2 General technical information about ETERNUS CS8000
1.6 Brief product description
1.6.1 CS VTL
1.6.2 CS ViNS
1.6.3 Coexistence of CS VTL and CS ViNS
1.6.4 Hardware Architecture
1.6.5 Graphical user interface
1.6.6 Monitoring of ETERNUS CS8000
2 Enhancements compared to the previous release V6.0A SP00
2.1 General
2.1.1 LT270-S2 native Support (FJ)
2.1.2 SAS update installation
2.1.3 BCU-CD all available FWs
2.2 CS ViNS functionality
2.2.1 ViNS on 8200
2.2.2 ViNS Ext. File Protection BA: Support 2 billion inodes (A0599392)
2.3 CS VTL functionality
2.3.1 Increase max Lun Mapping value from 512 to 2048 (PB_877)
2.3.2 Web GUI enhancement: EPG
2.3.3 Brocade_config extension (A0588378)
2.3.4 DeDuplication enhancements
2.3.5 DeDuplication on CS8200
2.3.6 Service Request 1617976 – CS8000 -> CS800 Cascading configuration
3 Technical information
3.1 Resource requirements
3.2 Product installation
3.3 Discontinued functions
3.3.1 Announcement for discontinuation of GUI-CD support
3.4 Incompatibilities
3.4.1 Firewall Configuration
3.4.2 Administration part of the Web-GUI
3.4.3 GXCC
3.4.4 Retrieval of V4.0 accounting data by means of specific scripts
3.4.5 Display of NAS-IDP/tape drives throughput
3.5 Restrictions
3.5.1 Documentation
3.5.2 General Restrictions and Known Issues
3.5.2.1 CAFS Message
3.5.2.2 Errors after rebooting a RAID system (firmware update etc.)
3.5.3 VTL-Interface specific Restrictions and Known Issues
3.5.3.1 Host Operation SCSI Timeouts and Aborts
3.5.3.2 Long Distance Volume Replication, bandwidth usage
3.5.3.3 Swap Memory overflow when using vjuk
3.5.4 ViNS-Interface specific Restrictions and Known Issues
3.5.4.1 New check for owner parameter for filegroups added (A0598675)
3.5.4.2 dsmmigrate jobs get hung if HPLS temporarily is out of scratch volumes - ViNS backend (A0598415)
3.5.4.3 Cancellation of ILMC Jobs
3.5.4.4 HSM failover environment differs between cluster nodes
3.5.4.5 dsmrecalld is not running
3.5.4.6 Recovery procedure for issues above
3.5.4.7 HSMC cannot be restarted on a NAS IDP
3.5.4.8 dsmwatchd is restarted every 30-60 seconds
3.5.4.9 Configuring file groups for non-HSM file systems
3.5.4.10 HSM failover environment contains two entries for the same ISP
3.5.4.11 ViNS on 8200
3.5.4.12 CS ViNS – Disaster Recovery for backend
3.5.4.13 FICON direct connections at z/OS mainframes
3.5.4.14 Configuration of DeDup LVGs
3.6 Procedure in the event of errors
4 ETERNUS CS8000 models


1 General

This Release Notes document summarizes the major functions, requirements and operating information for ETERNUS CS8000 V6.0A*.

A collection of hints and information which are necessary for Fujitsu Service and Service Partners can be found in the appropriate and subsequent Service Bulletins, if applicable.

The names used in this Release Notes document may be trademarks, whose use by third parties for their own purposes may violate the rights of the owners.

The release level is that of May 2014. Changes relative to the previous release level are marked with "*1". Starting with V6.0A SP01 the name "ETERNUS CS HighEnd" has been replaced by "ETERNUS CS8000" for newly shipped systems, and along with this the names of the ETERNUS CS8000 models have also been changed (see chapter 4). Unless stated otherwise, these Release Notes refer to all models running ETERNUS CS8000 V6.0A SP01; however, for the sake of readability the changed names are no longer marked within these Release Notes. This Release Notes document is shipped on the CD "ETERNUS CS8000 V6.0A Service Pack 02 - Documentation" and is available online at http://manuals.ts.fujitsu.com/.

1.1 Ordering

ETERNUS CS8000 V6.0A can be ordered from your local distributor. The product comprises coupled software and hardware components. The software product is supplied against a single payment.

A license is required for ETERNUS CS8000. Appropriate licenses must be ordered and installed in order to use the product.

1.2 Delivery

The ETERNUS CS8000 V6.0A files are delivered together with the hardware on the following DVDs or CD ROMs:

ETERNUS CS8000 V6.0A Service Pack 02 ES Base – Basic Installation

ETERNUS CS8000 V6.0A Service Pack 02 ES Patch 01 – Patch Installation

ETERNUS CS8000 V6.0A Service Pack 02 OSS – Open Source Software Sources

ETERNUS CS8000 V6.0A Service Pack 02 HS – HSMS components (NAS-IDP only)

ETERNUS CS8000 V6.0A Service Pack 02 SU – SAS Update

ETERNUS CS8000 V6.0A Service Pack 02 AO - CS-AddOn

ETERNUS CS8000 V6.0A - BIOS Configuration Utility BCU-LX

ETERNUS CS8000 V6.0A - Graphical User Interface GUI**

ETERNUS CS8000 V6.0A Service Pack 02 Documentation

* ETERNUS ® is a registered trademark of Fujitsu

** The latest GUI level can be installed on your workstation by means of the GXCC update tool



1.3 Documentation

Documentation in the form of online manuals is available at http://manuals.ts.fujitsu.com and additionally is shipped on the product medium "ETERNUS CS8000 V6.0A - Documentation". In addition to the documentation for ETERNUS CS8000 V6.0A, the documentation for the supported operating systems, system level products, backup products and for the supported robots and drives is strongly recommended. The GUI-CD contains the following README files:

README.txt: ASCII text to be displayed on Windows

README.unix: ASCII text to be displayed on Linux systems

README.pdf: platform independent PDF format

1.4 Licenses for Open Source Software

Open Source Software (OSS) is used in ETERNUS CS8000. The corresponding license agreements are gathered in the file "ThirdPartyLicenseReadme.txt". You will find this file in the root directory of the respective installation medium and on the documentation CD.

A medium with the sources of the OSS software is part of the ETERNUS CS media kit which is delivered to the customer. In the case of software update media (patch, hotfix) the corresponding source packages are part of the update media themselves (directory suse/src/).

The sources of the used OSS software are determined automatically during the production of the media. If the provided sources are incomplete by mistake, Fujitsu hereby offers to provide, for a charge of no more than the cost of physically performing source distribution, a copy of the missing source code on a medium used for software interchange. To obtain the missing sources please send an email to [email protected] quoting "Open Source in ETERNUS CS". Requests will be accepted for up to three years from the shipping date.

1.5 Useful links

1.5.1 Partner Portal for Fujitsu Service and Service Partners

1.5.1.1 Manuals

This Website contains the complete documentation related to ETERNUS CS8000 V6.0A.

https://partners.ts.fujitsu.com/com/service/ps/Storage/ETERNUS_CS/ETERNUS_CS_HighEnd/ETERNUS_CS60/Pages/TechnicalInformationMaintenance.aspx

1.5.1.2 Release Notice

This Website contains the release notice for ETERNUS CS8000 V6.0A.

https://partners.ts.fujitsu.com/com/service/ps/Storage/ETERNUS_CS/ETERNUS_CS_HighEnd/ETERNUS_CS60/Pages/ServiceImplementation.aspx

1.5.1.3 Support Bulletins

Support bulletins contain detailed information about the operation of ETERNUS CS8000 V6.0A.

https://partners.ts.fujitsu.com/com/service/ps/Storage/ETERNUS_CS/ETERNUS_CS_HighEnd/ETERNUS_CS60/Pages/SupportBulletinsFCO.aspx

1.5.1.4 Support Matrix


The ETERNUS CS8000 Support Matrix contains the following information:

an up-to-date listing of the backup and archiving applications supported by ETERNUS CS8000, as well as a listing of the verified robot and device emulations

a listing of all real drives and real robots supported at the ETERNUS CS8000 backend

a listing of operating system versions on which the GUI-CD can be installed

an overview of browsers and operating systems that can be used for the Web-GUI

Besides the generally released connection modes, all those released on request for specific projects and those planned for later general release are listed.

1.5.2 Internet

1.5.2.1 Release Notice and manuals

On this website the User Guide and this Release Notice related to ETERNUS CS8000 V6.0A are available.

http://manuals.ts.fujitsu.com/index.php?id=9662-9665-15290

1.5.2.2 General technical information about ETERNUS CS8000

http://www.fujitsu.com/fts/products/computing/storage/solutions/eternus/high-end/index.html

This website contains an overview of the technical parameters of ETERNUS CS8000 and additionally hosts links to documents with further information, the most important of which are the "Datasheet ETERNUS CS8000 series V6" and the "Technical Concepts of ETERNUS CS8000 series".

1.6 Brief product description

ETERNUS CS8000 provides a long-term, cost-effective basis for modern storage management.

For storage virtualization ETERNUS CS8000 offers two interfaces:

CS Virtual Tape Library (CS VTL) as the tape interface

CS Virtual Network Storage (CS ViNS) as the file interface (optionally with HSM)

ETERNUS CS8000 systems can be used either as mixed systems (CS VTL and CS ViNS) or as VTL-only or ViNS-only systems. A mixed system can also be used in a loopback configuration.

1.6.1 CS VTL

With CS VTL, a virtual tape robot system is placed in front of the real tape robot system (with the real drives and cartridges). Thus the host and the real archive are fully decoupled. The virtual tape robot system knows what are referred to as virtual (logical) drives and virtual (logical) volumes. The core element here consists principally of a disk system acting as data cache, which not only guarantees extremely high-speed access to the data, but also enables clearance of bottlenecks which might occur in a real robot system. This is achieved by the large number of virtual drives (up to 1280) and logical volumes (up to 3 million) which can be generated. With ETERNUS CS8000 V6.0, de-duplication technology is directly integrated into the internal storage cache, while the disk cache of course also remains available without de-duplication. By integrating de-duplication technology, enterprises can additionally use internal disk storage capacity as an extremely efficient final backup target, as the necessary disk space can be reduced by a factor of 10 to 30 or more.


ETERNUS CS8000 provides the connected operating systems with specific virtual MTC drives in the virtual archive systems. Simultaneous connection to ETERNUS CS8000 is possible both on mainframes and on Open System hosts via SAN, based on Fibre Channel or FICON. A detailed listing of the supported applications, server operating systems, etc. can be found in the ETERNUS CS8000 Support Matrix, which is available to Fujitsu Service and Service Partners.

1.6.2 CS ViNS

With CS ViNS a file-based interface is provided, which gives applications direct access to virtualized file systems in ETERNUS CS8000 based on the standard protocols NFS V2/V3/V4 and CIFS (V1.0). One part of the CS-internal RAID system is exclusively assigned to CS ViNS by means of dedicated NAS file systems. When hierarchical storage management (HSM) is used, the disk space of the CS-internal RAID reserved for ViNS can be extended, transparently for the applications, via tape libraries at the ETERNUS CS ViNS/HSM backend. HSM functionality is optional. ETERNUS CS8000 V6.0 additionally provides a replication function for NAS file systems. ViNS data are replicated asynchronously from a source CS ViNS system to a target CS ViNS system. The replica of a file system is activated and used for productive operation in disaster cases, e.g. when the original copy is lost. It is also possible to switch operation to the replica in maintenance scenarios. A detailed listing of the supported applications can be found in the ETERNUS CS8000 Support Matrix, which is available to Fujitsu Service and Service Partners.

1.6.3 Coexistence of CS VTL and CS ViNS

In addition to its function of providing access to NAS file systems, and optionally the hierarchical storage management, the ETERNUS CS8000 cluster also offers the classical VTL services.

Within the cluster the two functions - CS VTL and CS ViNS or CS ViNS/HSM - run in parallel, because they hardly share any components. As a result the two services are largely separated in logical terms.

In ETERNUS CS8000 communication between the various control units takes place via the LAN and the user data is transported to and from the RAID system via the SAN. The internal LAN and SAN and the central components VLP, SAS, console, etc. are used jointly by CS VTL and CS ViNS(/HSM). All the ISPs of CS VTL and CS ViNS(/HSM) also form a joint CAFS cluster. From the hardware viewpoint CS VTL and CS ViNS(/HSM) share the rack(s) and the RAID systems.

The backend services of CS VTL and CS ViNS/HSM are largely separated from each other. No components of CS VTL run on a NAS-IDP and vice versa. The services of CS VTL and CS ViNS(/HSM) can jointly access tape libraries (depending on the library used). Tape drives and media must, however, be separated between CS VTL and CS ViNS/HSM, as a result of which the volumes of CS ViNS are not known to CS VTL and vice versa. In a CS ViNS loopback configuration, the backend services of CS ViNS/HSM are connected to the frontend services of the CS VTL system.


1.6.4 Hardware Architecture

ETERNUS CS8000 consists of an internal LAN network of ISPs (Integrated Service Processors) and the connected RAID system. Depending on its role an ISP is referred to as VTL-ICP, NAS-ICP, VTL-IDP, NAS-IDP, IUP, VTC, VLP or TBP:

VTL-ICP = Integrated Channel Processor for CS VTL, may also act as Dedup-Client

NAS-ICP = Integrated Channel Processor for CS ViNS, may also run ViNS replication services

VTL-IDP = Integrated Device Processor for CS VTL, may also act as Dedup-Server

NAS-IDP = Integrated Device Processor for CS ViNS

IUP = ISP with ICP and IDP functionality

VTC = ISP with ICP-, IDP- and VLP functionality

VLP = Virtual Library Processor

TBP = Tie Breaker Processor (only in models ETERNUS CS8400 and ETERNUS CS8800)

As service access, an additional server is connected to the internal LAN network:

SAS = Service Access System

All ISPs except the TBP in models ETERNUS CS8400 and ETERNUS CS8800 are connected in an FC-SAN network, which is implemented with FC switches.

VTL-ICP

The emulations of the virtual tape drives run on the Integrated Channel Processor (ICP). An ICP supports five connection technologies:

o OCLINK on Fujitsu MSP and XSP hosts
o FICON on z/OS and OS/390 hosts
o FCLINK on Fujitsu MSP hosts
o FC on hosts with Fibre Channel connection, e.g. BS2000, i5/OS, AIX, HP-UX, LINUX, SOLARIS, WINDOWS
o ETHER for "Long Distance Volume Replication" with other ETERNUS CS8000 systems

A maximum of 10 ICPs is available for connection to the hosts, where each ICP can control a maximum of 128 virtual drives. The essential task of an ICP is the emulation of tape drives and the subsequent conversion to an internal disk format or, after having been field-upgraded, additionally to run Dedup-Client services such as hashing. All tape drives emulated by ETERNUS CS8000 are listed in the Support Matrix. Compression is optional for all tape emulations. Hint: For the connection with the NAS-IDPs in a loopback configuration, 2 FC ports are connected to the internal SAN.

NAS-ICP

Within the ETERNUS CS cluster the NAS-ICP forms the interface to the customer LAN for accessing network file systems which have been shared via NFS and/or CIFS. A NAS-ICP is equipped with 4 LAN ports (GBit LAN). For the connection to the NAS cache, the NAS-ICP has two internal FC ports. The central task of the NAS-ICP is to make file systems available to applications which run on hosts in the customer LAN. The CTDB service ensures high availability by means of IP failover within IP pools. No compression is available for any shares. The NAS-ICP is additionally used to run asynchronous replication of NAS file systems. It periodically selects data to be replicated and copies the data to a second, remote CS ViNS system.


VTL-IDP

The control processes for the real drives in the robot connected via FC/SCSI and for the FC/SCSI robot controls run on the Integrated Device Processor (IDP). An IDP is connected to the remaining ETERNUS CS8000 components via FC. Up to 10 IDPs with up to 16 real drives each are available. A maximum of 4 FC drives can be connected to one HBA port. This means: equipped with 2 dual-port FC boards, up to 16 drives can be used on each IDP. An IDP may also be field-upgraded to act as a Dedup-Server which interacts with one or more VTL-ICPs that have likewise been upgraded to act as Dedup-Clients.

NAS-IDP

The services of the hierarchical storage management run on the NAS-IDP. These require direct access to the resources of the backend storage. The NAS-IDP is responsible for communication with the tape libraries and tape drives in order to store copies of files from NAS file systems on tape volumes and to restore them from these tape volumes. It is equipped with the SAN and/or LAN interfaces required for this purpose. LAN libraries as well as FC/SCSI libraries can be connected. LAN libraries connected to the NAS-IDP must be of type STKCSC. A maximum of 4 active and 4 warm stand-by NAS-IDPs is available. Hint: For the connection with the VTL-ICPs in a loopback configuration, the FC ports are connected to the internal SAN.

IUP (CS VTL only)

In ETERNUS CS8200 models both ISPs have ICP as well as IDP functionality; such an ISP is identified as IUP.

VLP

The complete coordination of ETERNUS CS8000 takes place in the Virtual Library Processor (VLP) component, mainly by means of VLM and PLM.

TBP

In ETERNUS CS HA models (ETERNUS CS8400 or CS8800) an additional Tie Breaker Processor (TBP) is available. The TBP is necessary to avoid "split brain" situations in case of any disaster.

SAS

In all models except CS8200 at least one SAS computer is available outside of the FC-SAN network. The SAS is used for monitoring the ETERNUS CS8000 system. Additionally, in a CS ViNS/HSM configuration the SAS is used to store directory structures for disaster recovery.

RAID system

The heart of the entire virtual archive system is the RAID system. All data of CS VTL as well as of CS ViNS is stored here in separate file systems. The following file system types are used:

o TFS = Tape Volume Cache (TVC) of CS VTL
o NASFS = NAS file systems for data of CS ViNS
o CSIFS = database and administrative data for CS VTL
o NDBFS = database for CS ViNS/HSM


1.6.5 Graphical user interface

The graphical user interface of ETERNUS CS8000 is completely web-based. All roles (csservice, csadmin, csobserve) supported by the system are covered by the ETERNUS CS8000 Web-GUI. The well-known Tcl/Tk based GUI (GXCC) is integrated within the complete concept, i.e. it is even possible to launch the GXCC main window with System and Location Explorer as well as the Global Status window. A standard web browser is the only prerequisite necessary to administer an ETERNUS CS8000 system.

1.6.6 Monitoring of ETERNUS CS8000

On all models except ETERNUS CS8200, monitoring of ETERNUS CS8000 takes place on the SAS. Serious disruptions of ETERNUS CS8000 operation may be reported via various alarm paths:

via "call home" to a hotline with "AIS Connect" or Teleservice operation

via "hot messages" to BS2000 hosts with ROBAR connection

via SMS message

via email

Furthermore, ETERNUS CS8000 can be integrated into SNMP remote monitoring. An integration package for Windows is available to activate the application launch connection, especially for the "Unicenter" SNMP remote monitoring software from CA. By means of a so-called SNMP status concentrator, an ETERNUS CS8000 system can be observed.


2 Enhancements compared to the previous release V6.0A SP00

2.1 General

2.1.1 LT270-S2 native Support (FJ)

2.1.2 SAS update installation

2.1.3 BCU-CD all available FWs

2.2 CS ViNS functionality

2.2.1 ViNS on 8200

2.2.2 ViNS Ext. File Protection BA: Support 2 billion inodes (A0599392)

2.3 CS VTL functionality

2.3.1 Increase max Lun Mapping value from 512 to 2048 (PB_877)

2.3.2 Web GUI enhancement: EPG

2.3.3 Brocade_config extension (A0588378)

2.3.4 DeDuplication enhancements

2.3.5 DeDuplication on CS8200

2.3.6 Service Request 1617976 – CS8000 -> CS800 Cascading configuration


3 Technical information

3.1 Resource requirements

ETERNUS CS8000 V6.0A provides all required resources on its own hardware. Connection cables, libraries and physical drives/MTCs have to be present or have to be ordered separately by the customer. A summary of the connection modes at the frontend and backend side of ETERNUS CS8000 is contained in the ETERNUS CS8000 Support Matrix, which is available to Fujitsu Service and Service Partners.

3.2 Product installation

ETERNUS CS8000 V6.0A is installed and preconfigured in a customer-individual process by Fujitsu Service or another authorized service provider. Prior to starting operation, settings in the host, the application, ETERNUS CS8000 itself and the physical robot archive must be checked and modified if necessary. Upgrade installations from earlier ETERNUS CS HE software versions are generally possible as long as the existing hardware complies with the ETERNUS CS8000 V6.0A requirements. If hardware components have to be exchanged when upgrading to ETERNUS CS8000 V6.0A and the new hardware is not supported in the older system, then the upgrade will take the form of a data migration. Please contact your responsible Professional Services Manager.


3.3 Discontinued functions

3.3.1 Announcement for discontinuation of GUI-CD support

ETERNUS CS8000 version V6.0A is the last one to support the GUI-CD (GXCC). It is recommended to use the web-based user interface.

3.4 Incompatibilities

3.4.1 Firewall Configuration

Configuration of all firewall-related topics can only be done by means of the Web-GUI (with role "csservice").

3.4.2 Administration part of the Web-GUI

Login to the administration part of the Web-GUI is allowed for users assigned to the role "csadmin" (standard username "xtccuser"). Thus, when using the administration part of the Web-GUI, certain configuration changes can be applied to ETERNUS CS8000 which usually do not comply with the role "csadmin" but only with the role "csservice" (concerning the role concept of GXCC up to and including V5.1A).

The log files of the administration part of the Web-GUI are now located in the directory /var/log/fsc/CentricStor/web_gui.

The labels of the following input fields in the VLS configuration dialog have been changed; however, the semantics of the related input fields remain unchanged:

VLS Old label New label

VAMU ROBAR_HACC Port VAMU Port

VDAS DAS-Port VDAS Port

VACS RPC-Port VACS Port

VLMF RPC-Port VLMF Port


3.4.3 GXCC

The menu "Administration – Logical Volume Operation – Report Logical Volume" is no longer supported.

3.4.4 Retrieval of V4.0 accounting data by means of specific scripts

Use of project-specific scripts to retrieve V4.0 accounting data may cause problems, because CLI commands that are used within these scripts have been changed incompatibly. If you use a script supplied by Fujitsu Service, please contact your Fujitsu Professional Services representative.

3.4.5 Display of NAS-IDP/tape drives throughput

For NAS-IDPs no throughput data will be collected for IDP/tape drives. The following displays are affected:

Globstat:

o Performance area in main window: Live display for "DEVICES", "TOTAL"
o Menu element: Statistics -> History of -> Channel/Device Performance

Administration part of the Web-GUI:

o Tab "Overview", Accordion "Interfaces Throughput", section "Backend": charts "History" and "Live"
o Tab "Throughput", Accordion "Units"
o Tab "Throughput", Accordion "Virtual Network Storage", section "Backend": charts "History" and "Live"

get_hist.sh:

o History types "tape" and "total"

3.5 Restrictions

The following restrictions have to be observed when using ETERNUS CS8000 V6.0A.

3.5.1 Documentation

For ETERNUS CS8000 V6.0A the handbooks and the online help pages for the Web-GUI are available in English only, also on the "Documentation" CD.

3.5.2 General Restrictions and Known Issues

3.5.2.1 CAFS Message

There is a known error which may rarely cause the message

CAFS013 mmfs: Error=MMFS_PHOENIX, ID=0xAB429E38, Tag=5511241: Reason code 668 Failure Reason: Lost membership in cluster vlp0.cs-intern. Unmounting file systems.

to be encountered during shutdown of the entire cluster. In this case the message can be safely ignored.

3.5.2.2 Errors after rebooting a RAID system (firmware update etc.)

Rebooting a RAID system might be necessary, e.g. to activate the new firmware after an update. While the ETERNUS DX series supports an online update, the "older" ETERNUS 4000 must be rebooted to activate new firmware.

IMPORTANT: After a RAID system reboot, but BEFORE starting to resync file systems, you MUST rescan all SCSI busses on all ISPs with the following command:

mmdsh scansd -c scsiBus
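
After the rescan it is worth verifying that every ISP again sees its devices. A minimal sketch, assuming mmdsh distributes the command to all cluster nodes and that /proc/scsi/scsi is available on the ISPs:

mmdsh grep -c Vendor /proc/scsi/scsi   # rough per-node SCSI device count; compare against the expected value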


3.5.3 VTL-Interface specific Restrictions and Known Issues

3.5.3.1 Host Operation SCSI Timeouts and Aborts

A problem has been observed sporadically on Linux host systems (SLES11 SP01) where the SCSI timeout values are sometimes reset from 900 to 300 seconds. In high load situations it is possible for the ETERNUS CS to load the internal RAID system to such an extent that the time required to complete virtual tape operations that include a 'sync' operation exceeds 5 minutes. This can lead to the host system aborting the operation and cancelling the jobs. In such cases the timeout value on the host system must be checked and, if needed, reset.

The timeout values can be found on the host in the sys file system under the following path

/sys/class/scsi_tape/*/device/timeout

and can be reset using echo.
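
The check and the reset can be combined in one loop. A minimal sketch, assuming root access on the host and the default sysfs layout quoted above; the target value of 900 seconds is taken from the description above:

for t in /sys/class/scsi_tape/*/device/timeout; do
  echo "$t: $(cat "$t")"   # show the current timeout in seconds
  echo 900 > "$t"          # reset the timeout to 900 seconds
done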

3.5.3.2 Long Distance Volume Replication, bandwidth usage

When Long Distance Volume Replication is used, be aware that there are no mechanisms implemented on ETERNUS CS that provide bandwidth management or quality of service settings. Replication of volumes using parallel connections between source and target can use all available bandwidth. If the source and target systems are located on the same LAN, the bandwidth usage may be so aggressive that the network becomes saturated and other traffic is severely affected.

3.5.3.3 Swap Memory overflow when using vjuk

To avoid overflowing the swap area, the following configuration rules have to be adhered to (these rules are neither checked nor enforced by the configuration tools (GUI/CLI)):

- max #VJUK per ICP/IUP/VTC: 64
- max #VJUK per CS50: 64
- max #VJUK per CS500/CS1000/CS1500/CS2000 (2 ICPs): 128
- max #VJUK per CS3000 (3 ICPs): 192

3.5.4 ViNS-Interface specific Restrictions and Known Issues

3.5.4.1 New check for owner parameter for filegroups added (A0598675)

New behavior on a system without directory service (LDAP or Active Directory): The CLI and the Service Web-GUI now enforce that any referenced owner of a filegroup is known to the system. Otherwise the configuration (add or modify filegroup) will fail. The same check is done during a D&A. If a system already contains filegroups with references to non-existing users, a D&A will fail. After having updated to the current version, make sure before running the next D&A that all user names referred to by filegroups are known on the system. This can be done with the ecs-conf-chk-filegroup command.

Missing users will be reported by messages like "Error: User myuser does not exist." A missing user can be added to the system easily with "ecs-add-user", e.g. "ecs-add-user --user=myuser --role=nas-client".
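
The check and the corrections can be combined. A minimal sketch, assuming the error lines are printed exactly in the format quoted above and that nas-client is the appropriate role for all missing users:

ecs-conf-chk-filegroup 2>&1 \
  | sed -n 's/^Error: User \(.*\) does not exist\.$/\1/p' \
  | while read -r user; do
      ecs-add-user --user="$user" --role=nas-client   # add each reported missing user
    done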

New behavior on a system with directory service (LDAP or Active Directory): Similar to the Admin Web-GUI, the CLI and the Service Web-GUI will show a warning message if a user is configured for a filegroup who is not known to the system, e.g.: "Warning: The user name foo cannot be verified. Either it does not exist in the directory or the directory service is not available."

3.5.4.2 dsmmigrate jobs get hung if HPLS temporarily is out of scratch volumes - ViNS backend (A0598415)

To avoid dsmmigrate jobs getting hung when HPLS is temporarily out of scratch volumes, make sure the tape library has enough scratch volumes. If the hang situation has occurred (due to a shortage of scratch volumes), kill the hung jobs as a countermeasure.
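
A minimal sketch of the countermeasure, assuming the hung jobs are visible as dsmmigrate processes on the affected node:

pgrep -l -x dsmmigrate   # list the candidate jobs with their PIDs first
pkill -x dsmmigrate      # kill the hung jobs once enough scratch volumes are available again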

3.5.4.3 Cancellation of ILMC Jobs

Cancellation of ILMC jobs is mapped to killing the underlying HSMC dsm processes. This can take a long time, since the interrupted commands may complete ongoing transactions before exiting.


3.5.4.4 HSM failover environment differs between cluster nodes

Normally all cluster nodes running an HSM service (NAS IDPs and VLPs) take part in the HSM failover environment.

On each node, the command dsmmigfs q -f should produce output like this:

(Drossel:A)VLP0:~ # dsmmigfs q -f

Current status of the failover environment on this system: Node: VLP0 Node ID: 1 Status: active

Node: VLP1 Node ID: 2 Status: active

Node: IDP1 Node ID: 8 Status: active

Node: IDP2 Node ID: 9 Status: active

Node: IDP3 Node ID: 10 Status: active

Node: IDP4 Node ID: 11 Status: active

The text "Status: active" means that the given node is an active member of the HSM failover environment. The displayed data should be the same on all nodes of the HSM failover environment, given that they are up and running. For nodes which are missing in the list or whose failover status is not shown as "active", call dsmmigfs enablefailover on the respective node.

If the dsmmigfs enablefailover command generates an error

ANS9418W - File DSMNodeSet could not be acquired at the moment. It may be locked, or the var filesystem may be full. Will try to acquire the SDR lock again in a few seconds.

then try the recovery procedure listed below. The error message may be accompanied by further messages:

ANS9592E A SOAP TCP connection error has happened!
ANS9590E The SOAP error information: CommunicationPartner::Check failed, reason: Host not found

The recovery procedure is described below.

3.5.4.5 dsmrecalld is not running

The set of HSMC processes in the idle state (no HSM load) consists of

a single dsmwatchd process

a single dsmrootd process

three dsmrecalld processes
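
As a quick check of this process set, a minimal sketch (pgrep assumed to be available on the node):

for p in dsmwatchd dsmrootd dsmrecalld; do
  echo "$p: $(pgrep -c -x "$p") running"   # expected counts: 1, 1 and 3 in the idle state
done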

If the dsmrecalld processes are missing, but the others are present, run the command dsmmigfs restart.

The command may fail as follows:

# dsmmigfs restart
...
ANS9418W dsmmigfs: Cannot access the GPFS SDR for writing. It might be locked, or the var filesystem might be full.
ANS9414E dsmmigfs: Unable to create DSMNodeSet in the SDR. Aborting
...

If this error is seen, apply the recovery procedure described below.

3.5.4.6 Recovery procedure for issues above

This procedure must be run on each node which shows one of the above symptoms. Issue the following command on the affected node:

dsmmigfs sdrreset

The command is expected to display the following message:

TSM All HSM related locks in the SDR are now unlocked.

If this message is not shown, it is no use to continue the procedure.

As the next step, verify that the portmap service /sbin/rpcbind is running.

Then issue the following commands:

CS_V6.0A_SP02_ReleasNote_eng.docx 17

mmdsm dsmPutHsmdata DELETE

rm /etc/adsm/SpaceMan/config/DSMNodeSet*

rm /etc/adsm/SpaceMan/config/DSMSDRVersion*

rm /etc/adsm/SpaceMan/config/instance

dsmmigfs SDRReset

In case of problem (1), call

dsmmigfs enablefailover

In case of problem (2), call

dsmmigfs restart

The final dsmmigfs enablefailover or dsmmigfs restart call is expected to be successful. When problem (1) has occurred, the node becomes an active member of the HSM failover environment.

Please verify this with another call of dsmmigfs q -f.

In case of problem (2), the dsmrecalld processes should now be running. The node should also be a member of the HSM failover environment. Please check this by calling dsmmigfs q -f.

It may happen that this procedure is not successful because of an inconsistent CAFS cluster configuration. In case of failure check the messages contained in $LOGD/log_gpfs_adm at the VLP master node.

If the file contains messages like

mmdelfs: 6027-1632 The GPFS cluster configuration data on vlp0.cs-intern is different than the data on vlp1.cs-intern.
mmdelfs: 6027-1594 Run the mmchcluster -p LATEST command until successful.

then call the command

mmchcluster -p LATEST

and try the repair procedure again.
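
Since the message asks to run the command until it succeeds, the retry can be looped. A minimal sketch, assuming mmchcluster returns a non-zero exit code on failure:

until mmchcluster -p LATEST; do
  sleep 10   # wait briefly between attempts
done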

3.5.4.7 HSMC cannot be restarted on a NAS IDP

Symptom: D&A hangs on a NAS IDP.

Diagnosis: The command

# myps -sm brok

shows a hanging dsmmigfs restart command, and the ps command displays fewer than three dsmrecalld processes.

Repair:

1. Check if the issue is due to a GPFS SDR lock (see problem (2)). In this case, $LOGD/dsmerror.log contains the messages

ANS9418W dsmmigfs: Cannot access the GPFS SDR for writing. It might be locked, or the var filesystem might be full.
ANS9414E dsmmigfs: Unable to create DSMNodeSet in the SDR.

If these messages occur, repair the problem as described in section 3.5.4.6.

2. Otherwise try to fix the problem by node-local actions:

Kill the existing dsmrecalld processes and the dsmmigfs restart process

Then retry the dsmmigfs restart command.

3. Regardless of whether step 2 was successful or not, reboot the NAS IDP. The ETERNUS CS configuration should be successfully activated in the next startup phase.

3.5.4.8 dsmwatchd is restarted every 30-60 seconds


You can recognize this condition by the fact that the process ID of dsmwatchd changes frequently and that dsmwatchd sometimes disappears from the system process list. The dsmrecalld process is not started at all. This situation may be caused by an inconsistent /var/mmfs/gen/mmsdrfs file. The file exists on each node and should have been synchronized by GPFS. The file must contain the entry

~%DSM%%:920_HSMVERSION::1:1:::::::::::::::::::::::

If an entry with the string <nnn>_HSMVERSION is missing, it must be entered manually.

1. Stop the HSMC completely on the node which has the problem:

# dsmmigfs stop

Comment out the entry starting with “ghsm” from the file /etc/inittab. Then call

# kill -1 1
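
(The kill -1 1 call sends the SIGHUP signal to the init process, PID 1, causing it to re-read /etc/inittab.)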

Now the dsmwatchd is stopped, and no further restart attempts take place.

2. Enter the line

~%DSM%%:920_HSMVERSION::1:1:::::::::::::::::::::::

into the file /var/mmfs/gen/mmsdrfs before the first line which contains the string HSMDATA.

Example:

[...]
%%home%%:60_SG_DISKS:gpfs:2:gpfs2nsd:4194304:4001:dataAndMetadata:AC10010250D1D690:nsd:RHEL6-HSM-GPFS.tivlab.private::other::generic:cmd::::::system:RHEL6-HSM-GPFS.tivlab.private::
~%DSM%%:920_HSMVERSION::1:1:::::::::::::::::::::::
~%DSM%%:910_HSMDATA:%%home%%:1:<?xml version='1.0' encoding='ISO-8859-1' ?>:::::::::::::::::::::::
[...]
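
The insertion can also be scripted. A minimal sketch, assuming a standard awk; back up the file first:

cp /var/mmfs/gen/mmsdrfs /var/mmfs/gen/mmsdrfs.bak
awk '/HSMDATA/ && !done { print "~%DSM%%:920_HSMVERSION::1:1:::::::::::::::::::::::"; done=1 } { print }' \
  /var/mmfs/gen/mmsdrfs.bak > /var/mmfs/gen/mmsdrfs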

Uncomment the entry starting with "ghsm" in the file /etc/inittab. Then call

# kill -1 1

Now the dsmwatchd is started again. The modified contents of the /var/mmfs/gen/mmsdrfs file are distributed to the other nodes, and the dsmrecalld processes get started.

3.5.4.9 Configuring file groups for non-HSM file systems

The creation of NAS file systems which are not HSM-managed together with their file groups is not possible in the same configuration cycle. If a new non-HSM file system is defined in the ETERNUS CS configuration, this configuration must be activated in a D&A before file groups in the file system can be configured. The configuration of the new file groups must then be activated in a separate D&A procedure.

3.5.4.10 HSM failover environment contains two entries for the same ISP

Symptom: The HSM client failover environment as displayed by the command dsmmigfs q -f shows two entries for the same node:

(TRAUN:A)IDP0: # dsmmigfs q -f
...


Current status of the failover environment on this system:

Node: VLP0 Node ID: 1 Status: active

Node: VLP1 Node ID: 2 Status: active

Node: IDP0 Node ID: 6 Status: active

Node: IDP1 Node ID: 7 Status: active

Node: IDP0 Node ID: 8 Status: active

Node IDP0 appears twice in the above output, with node IDs 6 and 8, respectively. It has not been observed that this symptom leads to further errors, but it should be cleared in any case.

Repair: One of the entries contains a wrong node ID. To identify the correct one, call

(TRAUN:A)VLP0:~ # mmdsh /usr/lpp/mmfs/bin/mmdsm dsmGetNodeNumber

icp0: 3

vlp0: 1

idp0: 8

tbp0: 5

idp1: 7

vlp1: 2

icp1: 4

The output shows that node IDP0 has the ID 8. To remove the wrong entry from the HSM client data repository, call

dsmmigfs cleanupSDR <wrong_node_ID>

on all nodes running HSM clients (the VLP nodes and the NAS IDP nodes). <wrong_node_ID> must be 6 in the example. Then restart the HSM client on these nodes:

dsmmigfs restart
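
The per-node cleanup and restart can be scripted from one node. A minimal sketch, assuming the node names from the example above, working ssh connectivity between the nodes, and 6 as the wrong ID:

for n in vlp0 vlp1 idp0 idp1; do
  ssh "$n" 'dsmmigfs cleanupSDR 6 && dsmmigfs restart'   # remove the stale entry, then restart the HSM client
done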

Then call dsmmigfs q -f again to verify that the wrong entries have disappeared.

3.5.4.11 ViNS on 8200

Only one gateway can be configured per ISP. The IP addresses of all interfaces using the gateway must lie in the same subnet. On IUP-based systems this also applies to the administrative interface and the IP addresses of the IP pools (used for the NAS frontend and the VRB backend) residing on the same IUP.

3.5.4.12 CS ViNS – Disaster Recovery for backend

The Access Control Lists (ACLs) of file objects are not restored during recovery of a lost HSMS-managed NAS file system.

3.5.4.13 FICON direct connections at z/OS mainframes

With FICON direct connections it may occur that the FICON channel does not successfully reconnect after an upgrade of the ICP and the corresponding channels show a "definition error". In this case it is necessary to toggle the affected channels from the HMC console of the IBM z Series (single objects: configure off/configure on) for all LPARs and to reboot the ICP.

3.5.4.14 Configuration of DeDup LVGs

DeDup LVGs cannot be configured via the Admin Web-GUI. Configuration via the CLI and the Service Web-GUI is possible.


3.6 Procedure in the event of errors

Should an error occur, please proceed as follows:

Create and save the relevant error documentation.

Report the problem to the appropriate service provider.

A “Teleservice” connection with call back option or “AIS Connect” is essential for diagnostics! If this is not available, the service provider is entitled to invoice additional services rendered.

In case of any error the following diagnostic information is required:

A detailed description of the error condition, indicating whether and how the error can be reproduced.

The script 'vtrc' is to be called on the VLP with user ID diag to create the error documentation. All relevant data is then saved in the directory /var/opt/fsc/CentricStor/diag/vtrc. The snapshot tool provided in the GUI can be used as an alternative.

For most CS control processes (e.g. vlm, vacs) the Autostart configuration is set up by default so that diagnostic data is created by snapshot when a process stops abnormally (before the restart is processed). All relevant data is saved in the directory /var/opt/fsc/CentricStor/diag/snap and should be included with the error documentation.

If an output of GXCC is faulty, the script 'save_gui_diag.sh -a' provides diagnostic data in the tar archive /var/log/fsc/CentricStor/diag.yyyyddmm.hhmm.tar. The broker data that causes the error should also be saved via the menu item 'File/Save' in the GXCC main window. The output file should be appended to the error documentation.

If a complete RAID system fails in a CMF system (not the first RAID), ETERNUS CS8000 operation continues without disturbance but with a degraded mirror. The service provider is required to correct the error and to restart mirror operation.

Additional documents required are described in the User Manuals of the applications and operating systems.


4 ETERNUS CS8000 models

Starting with ETERNUS CS8000 V6.0A SP01 the following new model names have been introduced for newly shipped systems:

CS8200 – scale-up system (VTL only)

CS8200 – ViNS only system (no HSM backend)

CS8400 – Scale-out-single-site System (VTL only / ViNS only / VTL and ViNS)

CS8800 – Scale-out-split-site System (VTL only / ViNS only / VTL and ViNS)

An existing system, e.g. a CS2000, will remain a CS2000 even if upgraded to V6.0A SP01. In addition, dedicated ViNS-only or DL models will no longer be marketed under these names. However, a configuration as a ViNS-only or DL-style CS8000 model is still possible through selection or de-selection of the respective keys. Additionally the following, already shipped CS models are supported, as long as the existing hardware complies with the ETERNUS CS8000 V6.0A requirements:

Single VTC System (VTL only): CS50

Disk Library Systems: CS500 DL (VTL only), CS1500 DL (VTL and ViNS)

ViNS Systems: CS1500 ViNS, CS2000 ViNS

VTL/ViNS Systems: CS500, CS1000, CS1500, CS2000, CS3000, CS4000, CS5000
