
Transcript of Db2 Utility

Page 1: Db2 Utility


Getting the most out of your BMC DB2 Utilities

Steve Thomas
BMC Software

6th November 2007 • 14:00 – 15:00

Platform: DB2 for z/OS

Session: G06

This session will provide a set of tuning and usage recommendations which customers can adopt in order to optimize ease of use, performance, throughput and availability when using BMC DB2 for z/OS Utilities. It is aimed at Users of an Intermediate to Experienced level who have used the utilities for some time and are familiar with the general concepts but who may not be aware of all the fine tuning options available.

Subjects covered include controlling multitasking, minimizing I/O by efficient use of memory, analyzing and tuning SORT processes, sizing workfile datasets, and optimizing Dynamic Dataset Allocation.

Steve Thomas is a Principal Consultant at BMC Software, supporting customers in the UK, Northern Europe, the Middle East and Africa. He has been a Database specialist since 1985 and has worked with DB2 since 1989. Steve has presented on a wide range of topics at events across Europe and represents BMC on the European IDUG Conference Planning and UK DB2 User Group Committees.

Page 2: Db2 Utility


Agenda

• Introduction
• Managing Dynamic Dataset Allocation
• Analyzing and Tuning the SORT process
• Hints and Tips for COPY PLUS, RECOVER PLUS and REORG PLUS
  – Controlling Multitasking
  – Minimizing I/O and/or CPU
  – How to get more out of each utility

• Due to time constraints a follow on presentation will cover UNLOAD PLUS and LOADPLUS

This presentation will explain how to get the most out of your DB2 Utilities. The first half covers general topics relevant to all BMC utilities, including Dynamic Allocation and BMCSORT. The second half consists of hints and tips for COPY PLUS, RECOVER PLUS and REORG PLUS. I had originally hoped to cover UNLOAD PLUS for DB2 and LOADPLUS for DB2 as well, but there was far too much material for an hour, so I will discuss tips for these utilities in a future presentation.

I am also assuming that the reader has access to either SNAPSHOT UPGRADE FACILITY for DB2 (SUF) or Extended Buffer Manager (XBM), either of which enables the online capabilities of BMC utilities. SUF provides a subset of XBM functionality, but for the purposes of this presentation they can be regarded as identical. For simplicity I shall refer to either product as XBM for the remainder of this presentation, since that is usually the name of the Started Task associated with both products.

Page 3: Db2 Utility


Executing BMC Utilities

• PARM Syntax varies slightly – check manuals
• Can use individual DB2 Subsystem or Group Name
• Default utilid is Userid.Jobname
• Recommend using NEW/RESTART
• Use MSGLEVEL(1) if you can spare Spool
• DOPT = Default Options Module

//Step EXEC PGM=AMUUMAIN,REGION=0M,
// PARM='ssid,utilid,restart_parm,,MSGLEVEL(n),DOPT'

ADUUMAIN – UNLOAD
AMUUMAIN – LOAD
ARUUMAIN – REORG
AFRMAIN – RECOVER
ACPMAIN – COPY

If REGION=0M is not permitted and you are on DB2 V8 or 9, ensure the utility is allowed to use Storage above the Bar by coding MEMLIMIT, using an IEFUSI exit, or setting SMFPRMxx in PARMLIB.

BMC utilities are executed directly rather than running under DSNUTILB as with IBM utilities. Use REGION=0M if allowed; otherwise, when using DB2 V8 or DB2 9, you need to ensure the system allows the utility to allocate storage above the Bar by using one of the methods listed.

The exact syntax of the PARM option varies between the utilities – see the chapter titled 'Building and Executing Jobs' in the appropriate Reference manual for details (usually Chapter 4). However, many of the keywords are common:

• SSID is the DB2 subsystem. When using data sharing, all our utilities support the Group attach name. After a failure you can restart any utility using a different member of the Group than was used for the original execution.
• UTILID is the utility identifier, similar to the IBM utilities, and must be a unique entry in the BMCUTIL table. The default is Userid.Jobname.
• We recommend using NEW/RESTART for the restart parm. This will restart an existing utility if one exists or else start a new one, and allows most failed utilities to be resubmitted with no JCL changes, particularly if you are using Dynamic Allocation.
• MSGLEVEL(0) returns minimal output. MSGLEVEL(1) provides more information, including the Maintenance applied and the Default Options used, and can be very useful for tuning or debugging in the event of a failure. It will almost always be needed by our Support teams, so it's usually a good idea to use this if you can spare the space on your JES Spool. Note that RECOVER PLUS also supports MSGLEVEL(2), which provides even more data.
• The Default Options module or DOPT provides a user-customisable set of default values for the utility parameters and will be discussed next.
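Putting this together, a minimal execution step for COPY PLUS might look like the sketch below. This is illustrative only – the subsystem id DB2P, the utilid DAILYCPY and the library names are hypothetical placeholders, and your STEPLIB will depend on your installation:

//BMCCOPY  EXEC PGM=ACPMAIN,REGION=0M,
//  PARM='DB2P,DAILYCPY,NEW/RESTART,,MSGLEVEL(1)'
//STEPLIB  DD DSN=HLQ.DBLINK,DISP=SHR        BMC utility load library
//         DD DSN=DSN.SDSNLOAD,DISP=SHR      DB2 load library
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY TABLESPACE DBSRT.TS1
/*

With NEW/RESTART coded, the same JCL can be resubmitted unchanged after a failure.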

Page 4: Db2 Utility


Default Options Modules

• Customized Defaults for most keywords
• Macro Assembled and Linked into Load Module

Create using Install JCL library member xxx$OPTS

• Displayed in Output if MSGLEVEL > 0
• Documented in Reference Manual Appendix
• Update to suit your own environment

Saves coding Control Cards
Makes Utilities much easier to use

• New features sometimes disabled by default to maintain consistency across new releases

Definitely worth reviewing DOPTS for your site

The defaults for many BMC Utility Syntax keywords can be customized for your own site by setting up a Default Options Module or DOPT. During the installation we provide a Macro for each utility, together with a job to Assemble and Link this into your Load Library (HLQ.DBLINK). The default name is xxx$OPTS, where xxx is the product code – you can see what these are by looking at the Execution Load module names on the previous slide; for instance, COPY PLUS uses ACP$OPTS. You can override most defaults using Command Syntax, although this is not allowed where it makes no sense to do so, for example Plan Names. The DOPT settings used by each utility are displayed in the job output whenever you have a MSGLEVEL setting > 0, which is a good reason for using MSGLEVEL(1).

It's well worth the effort to review the DOPT settings for each utility to ensure they match the requirements of your organization. If they're correct then the utility syntax can often be reduced to very few statements saying what utility you want to run and on what objects, leaving the rest of the settings to default. This makes setting up and maintaining Utility JCL much simpler.

Another point worth mentioning is that we sometimes disable new features by default, and you need to explicitly switch them on either in the DOPT or by using utility syntax. Where this has been done it's usually to maintain consistency of operation between releases – an example was FASTSWITCH support in REORG PLUS, where the default remains NO.
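For example, once ACP$OPTS carries your site's preferred SHRLEVEL, stacking and buffer settings, a complete daily backup request can shrink to a sketch like this (the object pattern DBPROD.* and descriptor name are hypothetical; everything not coded comes from the DOPT):

OUTPUT LOCALP DSNAME &UID.&OBNOD.&TYPE(+1) UNIT SYSDA
COPY TABLESPACE DBPROD.* COPYDDN(LOCALP)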

Page 5: Db2 Utility


Some useful DOPT settings

DOPT                            Description
XBMID                           What XBM subsystem to use
COPYLVL                         Take Full or Partition level copies
SMAX                            How many concurrent Sorts can we run?
MAXTAPE or MAXDRIVE             Limit how many tape units can be used
KEEPDICTIONARY                  Whether to keep compression dictionary
INLINECP                        Whether to take Inline Copies
FASTSWITCH                      Fastswitch or Rename for Online Utilities
DRNDELAY, DRNRETRY & DRNWAIT    Control Drain processing
BMCHIST or HISTORY              Whether to save data in BMCHIST

This slide shows some of the more useful DOPT settings which can be defined for different utilities. Most of these are only relevant to certain utilities, although where the same parameter needs to be set we try to maintain consistency across the different products. There are a few instances where the keyword does vary: for instance, most utilities use HISTORY to define whether to save historical execution information, but REORG PLUS uses BMCHISTORY instead. Another example is the MAXDRIVE and MAXTAPE options. There are very few of these differences, and where they do exist it's usually for historical reasons – if we tried to change them now we would create problems for our existing customers, so we're stuck with what we have.

Page 6: Db2 Utility


Common Utility Database

• Default name BMCUTIL
• Contains critical data on activities
  – Used by all our utilities
• Recommendations
  – Treat with same importance as DB2 Catalog & Directory
    – Backup at the same time
    – Recover as soon as DB2 is back up, before any User data
  – Use only a single set of tables
    – Share between all BMC products and versions
  – Do not update Catalog statistics for these tables
    – HLQ.DBSAMP(xxxRESET) will reset stats if needed
  – Do not run BMC utilities against these objects
    – Use IBM utilities instead

BMC utilities use a DB2 Database in which to store data such as what utilities are executing and what objects are being processed. These tables are as vital to us as the Catalog and Directory are to DB2 itself, so if you are using our utilities you should treat this database with the same degree of respect. For example, it's normal to backup the database at the same time as the DB2 Catalog and Directory, and to restore it as soon as DB2 is back up during a Disaster Recovery, before any real User data is processed. You only need a single copy of the Common Utility Database regardless of how many BMC products you run and which versions are in use – the tables are designed to be backward version compatible and are updated automatically by our installation process.

The most important point to consider for the purposes of this presentation is that you should never update the Catalog Statistics for these tables. Our products are designed to operate with the default Statistics, and if you update these then performance may well be degraded. Should you ever accidentally update statistics, they can be reset by executing the SQL member xxxRESET, which will be found in the HLQ.DBSAMP library (or HLQ.CNTL in older releases before we used SMP/E for maintenance), where xxx is the respective product code; for example, COPY PLUS uses ACPRESET.

We also recommend that you do not attempt to run our BMC utilities against these tables as this can cause contention problems. Use the native IBM utilities against them instead – they should not be particularly large, with the possible exception of the BMCHIST and BMCXCOPY tables.
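For illustration, a native IBM image copy step for this database might be sketched as follows – the tablespace name BMCUTIL.BMCUTIL and the dataset names are hypothetical placeholders, so substitute the actual space names created at your site:

//COPYUTIL EXEC DSNUPROC,SYSTEM=DB2P,UID='BMCUTIL.COPY'
//SYSCOPY  DD DSN=HLQ.IC.BMCUTIL(+1),DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE)
//SYSIN    DD *
  COPY TABLESPACE BMCUTIL.BMCUTIL
       COPYDDN(SYSCOPY)
       SHRLEVEL REFERENCE
/*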

Page 7: Db2 Utility


Tables in BMCUTIL database

Table Name    Used to Store
BMCUTIL       What utilities are executing
BMCLGRNX      Log Ranges where an object was open for update
BMCXCOPY      BMC specific data similar to the Catalog table SYSIBM.SYSCOPY
BMCDICT       Compression dictionaries (REORG and LOAD)
BMCHIST       History of past executions
BMCSYNC       What Objects are being processed

This table shows the tables in the BMCUTIL database used by the utilities and describes the type of data we store in them. All the tables are important, but the ones to take particular care with are BMCUTIL, BMCSYNC and BMCXCOPY. The first two are our equivalents to the DB2 Directory object SYSUTILX and if you lost this data you would not be able to restart any in-flight BMC utilities. We use BMCXCOPY to store information about any non-standard Imagecopies which we may take, and losing this would mean that you may not be able to recover using these copies. Examples are Instant Snapshots and copies of Indexes where the Index was defined with the COPY NO attribute.

Page 8: Db2 Utility


Why use Dynamic Allocation?

• Supports wildcarding
• Simplifies JCL setup and maintenance
  – No need for DD statements
  – Can save hundreds of lines of JCL
• Disk datasets sized automatically
• Tape datasets can be stacked
• Automatic creation of GDG base if none exists
• Size based criteria can change allocation details
• Simplifies restart after any failure

There are a number of reasons why you might choose to use Dynamic Allocation. As with IBM utilities using LISTDEFs, the first and most obvious is that it supports wild cards, which prevent you from using static dataset allocation in JCL. However, dynamic allocation provides a number of other useful benefits:

• It avoids the need to code a DD card for each data set, which can save hundreds or even thousands of lines of JCL.
• Disk based datasets will be automatically sized correctly.
• Tape based datasets can be automatically stacked if desired.
• If a dataset is a GDG (generation data group) and the base does not exist, for example if you are processing a new object, we will create it automatically, based on a GDG base template which you provide to determine the number of cycles and other appropriate parameters.
• Many of the utilities provide options to change the dataset allocation based on run-time criteria such as the expected dataset size or the type of space being copied. For example, you can automatically direct larger objects to virtual tape instead of disk. This capability provides the basis for our Hybrid Copy, which is discussed later.
• Restarting after a failure is much simplified because there is no need to adjust the disposition, GDG numbers, or VOL=REF statements as can happen when using DD statements. You can literally re-submit the job as is.

Page 9: Db2 Utility


Specifying Dynamic Allocation

• We use 2 methods of specifying Dynamic Allocation
  – COPY, RECOVER & UNLOAD use OUTPUT DESCRIPTORS
  – LOAD and REORG use DDTYPE syntax
• Both are used in place of coding DD statements
  – Specify Allocation Options using Keywords similar to a DD card
• Default values specified in utility DOPT module
• Symbolic variables can be used for data set names
  – &DB, &TS, &DATE, &TIME, &PART, &OBNOD, &TSIX, &JOBNAME, &STEPNAME, &UID, &TASK etc.
• Option to use DD instead if present in JCL
  – Can also ignore DD and still allocate datasets dynamically

Largely for historical reasons, BMC utilities use two different methods of controlling dynamic allocation. COPY PLUS, RECOVER PLUS and UNLOAD PLUS use Output Descriptors, while LOADPLUS and REORG PLUS use DDTYPE syntax. However, both methods use largely the same keywords and achieve the same purpose. They are used in place of a DD statement and specify the allocation options for output data sets. They support all the keywords found in a DD statement and more, for example UNIT, DSNAME, SPACE and RETPD.

As with other utility keywords, these options have defaults which are defined in the relevant DOPT module. If the DOPT is coded correctly then you often don't need any keywords in your Syntax to obtain all the benefits of Dynamic Allocation. All you need to do is code any keywords that differ from the DOPT in your Utility Syntax.

Data set names can include a number of symbolic variables which are substituted at execution time. This enables wild carding and using a single OUTPUT descriptor or DDTYPE for many objects. Some of the more common symbolic variables are &DB, &TS, &DATE, &TIME, &TYPE and &PART – a full list can be found in the relevant Reference Manual.

You can specify that a DD card in the JCL is used if it is coded, or it can be ignored and Dynamic Allocation will still take place.

Page 10: Db2 Utility


A sample using each method

OUTPUT Descriptor

OUTPUT LOCALP
       DSNAME &UID.&OBNOD.&TYPE(+1)
       UNIT SYSDA

COPY TABLESPACE DBSRT.*
     INDEXES NO
     COPYDDN(LOCALP)
     RESETMOD NO
     SHRLEVEL CHANGE
     QUIESCE AFTER
     GROUP YES

DDTYPE

REORG TABLESPACE DBSRT.TS1
      COPY YES INLINE YES
      COPYLVL PART
      SHRLEVEL REFERENCE
      UNLOAD RELOAD
      DDTYPE LOCPFCPY ACTIVE YES
             IFALLOC USE
             UNIT (3390,VTAPE)
             THRSHLD 720000
             DSNPAT '&UID.&DB.&TSIX..P&PART.(+1)'
             GDGLIMIT 5

This slide shows an example of each type of dynamic output allocation.

In the left hand example, we are taking a SHRLEVEL CHANGE backup of all the Tablespaces in Database DBSRT, and generating a common consistency point after the backup has been completed. The Image copies use an output descriptor called LOCALP, which shows that the image copy datasets are going to be disk based GDG datasets. The &OBNOD variable in the dataset name expands to either database.tablespace or database.indexspace depending on the context in which it's used, which in this case would be the former as we're processing tablespaces. If the relevant GDG base does not exist when the job is submitted, we will dynamically create one using parameters found in the ACPGDG DD name in the JCL.

In the right hand example we are reorganizing a single tablespace in the same database using a Single Phase process, as specified by the UNLOAD RELOAD parameter. The Utility is taking an Inline copy at the partition level while the object is being reorganized. The DDTYPE LOCPFCPY handles the local primary imagecopy, and as you can see we are switching on dynamic allocation by using ACTIVE YES. By coding IFALLOC USE we are stating that if we have coded the correct DD cards in our JCL, the dynamic allocation request should be ignored and the datasets provided in the JCL used for the copies. The UNIT parameter, together with the threshold, tells us that if the copy dataset is expected to be below 1,000 cylinders it will be allocated to Disk, whereas if it's larger it will be sent to Virtual Tape. This decision is taken dynamically at execution time using the high Allocated RBA of the underlying dataset. Again, any GDG base that does not exist will be created, in this instance with a limit of 5 generations.

Page 11: Db2 Utility


BMCSORT

• We use our own Sort package – BMCSORT
  – Highly tuned Sort designed for use with BMC Utilities
  – IBM do the same with DFSORT from DB2 V8 onwards
• Uses own DOPT module
  – Primarily controls Dynamic Allocation
  – Only change settings under direction from BMC Support
• Recommend using recent versions of Utilities
  – Particularly REORG PLUS 8.1 and LOADPLUS 8.3 or later
  – Internal improvements in how BMCSORT is called
• 2 primary factors to consider for end user
  – How are Sortwork datasets allocated?
  – Ensuring Sorts have sufficient memory

All BMC utilities include a licence to use a customized Sort package called BMCSORT, which has been tuned to provide optimal utility performance. BMCSORT is automatically included during product Installation. It is neither intended for nor capable of replacing whatever system sort package your site uses, but you must use it when running our utilities. It's worth noting that IBM have adopted a similar approach with their own utilities, which always use DFSORT from DB2 V8 onwards.

BMCSORT uses a DOPT module similar to the utility DOPTs mentioned earlier. Its contents are primarily concerned with how Sortwork datasets are allocated. Most of the values can be overridden by individual utilities, and as a result there is usually no need to change the defaults provided unless you are either very experienced or are directed to do so by the BMC Support team.

Some internal changes made to the utilities and released in mid-2006 make BMCSORT more efficient and help when tuning it to process large or complex objects. In particular we have improved how memory is used when running large numbers of concurrent Sorts, something which is fairly common particularly when running LOAD and REORG. As a result I recommend you use the versions quoted or later so that you benefit from these changes.

The next few slides explain how to optimize BMCSORT, with a focus on Sortwork datasets and memory usage.

Page 12: Db2 Utility


SORTWORK datasets

• SORTWORK datasets can be:
  – Hard coded in your JCL
  – Allocated dynamically by the Utility
  – Allocated dynamically by BMCSORT
• Highly recommend that BMCSORT does this
  – Makes JCL simpler
  – Ensures we can run the optimal number of parallel Sorts
  – BMCSORT knows exactly how much data requires sorting
  – It will allocate more itself anyway if it's needed
• All you need do is code the SORTNUM option
  – Set the relevant Utility DOPT parameter
  – Default is 32 but can be up to 64 or 255 in recent versions
  – Don't forget to turn off utility SORTWORK Dynamic Allocation

There are three basic methods of providing SORTWORK datasets.

Hand coding them in your JCL is usually the least efficient method. The only real reason to do this is when your site is struggling for work space, in which case pre-allocating the datasets can help ensure space is available, but you should only do this in exceptional circumstances.

The next method is to get the utility to allocate the datasets itself by switching on Dynamic Allocation for the SORTWORK file type. While this method usually works well, you need to remember the utility estimates how much space is required during the Analysis phase, before the real work commences. While our estimates are usually pretty good, at the end of the day they are only estimates and they can always be wrong, resulting in either wasted space or degraded performance.

Getting BMCSORT itself to allocate your SORTWORK datasets is by far the best choice. When a Sort is needed the utility usually knows exactly how much data needs processing and passes this information to BMCSORT internally, which allows it to allocate the optimal amount of space. In most cases BMCSORT will allocate any extra SORTWORK datasets it needs itself anyway, so you may as well let it do the whole job. The easiest way to get BMCSORT to allocate Sortwork datasets is simply to code the SORTNUM option in your utility, which will override the setting provided in the BMCSORT DOPT module. The usual default for this option is 32, although it may be lower at your site if you've been using BMC utilities for some time and have never reviewed your DOPT settings. We won't always allocate 32 datasets, but this provides an upper limit. SORTNUM can go up to 64, or even 255 in recent versions, if needed. Don't forget to turn off Dynamic Allocation for the SORTWORK files in your utility options and syntax.
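As a sketch, assuming REORG PLUS keyword placement (whether SORTNUM sits on the utility statement or in the DOPT, and the exact DDTYPE coding used to disable utility-level SORTWORK allocation, vary by product and release – check the Reference Manual):

REORG TABLESPACE DBSRT.TS1
      SHRLEVEL CHANGE
      SORTNUM 32
      DDTYPE SORTWORK ACTIVE NO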

Page 13: Db2 Utility


Tuning BMCSORT Memory

• BMCSORT can degrade to Standard Path
  – Usual cause is insufficient memory
  – Causes increased elapsed time, CPU and EXCPs
• Key is to allocate enough storage
  – Use REGION=0M or ensure suitable MEMLIMIT
  – 1Mb of memory per 1Gb of data is a good rule of thumb
  – Get values from the largest SORT used by your utility
• Watch out for these messages:
  – WER164B 264K BYTES OF VIRTUAL STORAGE AVAILABLE..
  – IHJ000I CHECKPOINT job, step.step (????????) NOT TAKEN (11) MODULE = IHJACP00
  – Another hint is when >20K EXCPs to SORTWORK files
• All indicate we may have degraded to Standard path
  – If you see these call BMC Support for advice

As we have discussed, BMCSORT usually provides a highly optimized and efficient Sort mechanism tuned to improve utility performance. However, if there is insufficient memory available, particularly above the Bar, it can degrade to a Standard Path rather than failing completely. In many ways this is good news as it prevents unnecessary job failures, but the performance implications mean that it is better to avoid the situation by providing enough Storage in the first place.

BMCSORT is a 64-bit application and uses storage above the 2Gb bar. It will only use as much storage as needed, so running with REGION=0M is by far the best option. If this is not possible then ensure you use a large REGION size and make sure that the system allows you to use above-the-Bar memory (see slide 3 for options). A good rule of thumb is 1Mb of memory per Gigabyte of data sorted by your largest SORT. You can get these figures from the SORT messages in your utility job output.

If the path does degrade then you will almost always see one or both of the messages in red on the slide. You always get a WER164B message in the SORT output, so it's the 264K Bytes figure which is the critical factor. The IHJ000I message will be in the main job log, possibly more than once, and almost inevitably indicates problems. Another possible indicator to watch for is the number of EXCPs performed to your SORTWORK datasets by each individual SORT. If one shows more than 20K EXCPs for anything other than the largest sorts then it may warrant further investigation. A first step you can take on your own is to check that you have a decent region size, but failing this please call BMC Support for further advice.
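Where REGION=0M is not allowed, the rule of thumb translates into JCL along these lines (a sketch with illustrative figures – if the largest SORT in the job processes around 50Gb, it needs roughly 50Mb of above-the-Bar storage, so a MEMLIMIT of a few gigabytes leaves plenty of headroom):

//REORG   EXEC PGM=ARUUMAIN,REGION=512M,MEMLIMIT=4G,
//  PARM='DB2P,REORG01,NEW/RESTART,,MSGLEVEL(1)'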

Page 14: Db2 Utility


Optimizing COPY PLUS for DB2

• To save elapsed time:
  – Exploit Disk Hardware
  – Use Multi-tasking
  – Use Cabinet Copies
  – Use RESETMOD NO
  – Increase the number of Read/Write Buffers (NBRUFS)
• To save CPU:
  – Use CHECKLVL 0, SQUEEZE NO & COMPRESS NO
  – Do not collect Statistics during the Copy
• To reduce use of Output Media:
  – STACK YES and SMARTSTACK for Tape media
  – COMPRESS YES and SQUEEZE YES
• Incremental Copies can improve all categories
  – But think about recovery times...

If you’re primarily concerned with reducing elapsed times the most obvious options are to exploit your disk hardware and to ensure we process objects in parallel by invoking multi-tasking, both of which will be covered in more detail shortly. If you have the Recovery Management Solution using Cabinet Copies usually generates big savings. Other options to consider include using RESETMOD NO (you should always use this if you only ever take Full Copies), and increasing the number of Read/Write Buffers in the DOPT although this will increase memory utilization. If your target is to save CPU time then you should minimize Page checking and avoid compressing the copies using SQUEEZE and COMPRESS. Avoiding collecting Statistics during the COPY can also help, although if you currently run a separate Statistics collection job with the same frequency as your copies then the decision isn’t quite so straightforward. Running with a minimal number of Buffers will also reduce CPU but will increase your elapsed time.Reducing the use of output media is not usually such a big problem provided you Stack your backups when using Tape. Smartstack will assist recovery times when using Incremental Copies but it won’t save on media use by the backup process. Squeezing or Compressing the Backups will save media but will increase both CPU and Elapsed time. Finally taking Incremental Copies can save resources in all three categories but it may well have a significant impact on Recovery times so you need to be careful. It can be a very good choice if the circumstances are right. One of my larger customers uses Incremental Copies very successfully but they have a large database of around 40 Terabytes, most of which has a very low update rate relative to it’s size so using Incremental copies was a natural choice for them.

Page 15: Db2 Utility


Exploiting Disk Hardware

Option 1 – Snapshot Copies
• Specify SHRLEVEL CONCURRENT
• Creates Standard and Consistent imagecopy
  – May be used by any Recovery utility
• Brief Outage (Quiesce) to create Consistency Point
• Uses Volume/Dataset Snaps or Volume Mirrors
  – Hardware independent, may require Disk Vendor Software
• Software Snapshot also supported
  – XBM will determine which is used, not Utility Syntax
• Optional fallback to SHRLEVEL CHANGE backup
  – Use REQUIRED or PREFERRED keywords
• STARTMSG used for Automation

Snapshot Copies are standard Consistent imagecopies registered in SYSCOPY. They can use any type of media and may be used by any Recovery Utility. Using Snapshot may not shorten the elapsed time of the copy job, but the objects will be available for Read/Write processing while the copy is being taken, which effectively achieves the same end result.

COPY PLUS uses XBM services to exploit the capabilities of your Disk Subsystem. When taking a Snapshot Copy, a consistency point is established using an IBM Quiesce. While this has exclusive access to the objects, XBM establishes a source of consistent data which is then processed by the utility to create the Copy. In the case of a Hardware Snapshot this usually involves a Flashcopy or similar Snap operation, either at the dataset or Volume level. If permitted, we are also able to suspend a mirror so that the utility processes the suspended copy, although you naturally need to ensure we don't suspend any mirrors being used for Disaster Recovery purposes. COPY PLUS can achieve the same end result using XBM Software capabilities, where a Dataspace is used to cache pre-updated page images. You can optionally fall back to a Software Snapshot if a Hardware request fails.

The keyword specified after SHRLEVEL CONCURRENT determines what action to take if a Snapshot request fails. The default, PREFERRED, indicates that should a Snapshot request fail for whatever reason, the utility will revert to creating a SHRLEVEL CHANGE copy. The alternative is REQUIRED, which terminates the COPY job if the Snapshot request is unsuccessful. It is always the XBM Configuration parameters which determine what type of Snapshot is taken – there is no syntax or option within the utility itself to influence this.

The STARTMSG keyword inserts a message in the job output and the system log once the Snapshot operation is complete and the backup has started. This can be used to automate the submission of other jobs which can run alongside the COPY once the backup process is underway.
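A minimal Snapshot Copy request might therefore be sketched like this (descriptor and object names hypothetical; PREFERRED is the default and is shown only for emphasis):

COPY TABLESPACE DBSRT.TS1
     COPYDDN(LOCALP)
     SHRLEVEL CONCURRENT PREFERRED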

Page 16: Db2 Utility


Exploiting Disk Hardware

Option 2 – Instant Snapshots
• Specify DSSNAP YES or AUTO in Output Descriptor
  – Supports any SHRLEVEL including CONCURRENT
  – SHRLEVEL CHANGE Instant Snapshots are outage free
• Invokes Flashcopy or equivalent at the dataset level
  – Hardware Independent, may require Disk Vendor Software
• Creates non-standard Imagecopy
  – Registered in BMCXCOPY table rather than SYSCOPY
  – Can only be processed by BMC Utilities
  – Use COPY IMAGECOPY to create standard backup
• Multi-task to improve throughput
• As soon as Flashcopy is taken the Copy is complete
  – Backup is physical copy of the DB2 VSAM LDS

While a Snapshot Copy achieves savings by allowing other work to run alongside the copy, an Instant Snapshot reduces the actual elapsed time of the backup itself. We support any SHRLEVEL option, including SHRLEVEL CONCURRENT. A SHRLEVEL CHANGE Instant Snapshot will fully exploit your Disk Hardware with no outage whatsoever to your applications.

Instant Snapshot uses a Flashcopy or other Disk based Snap operation on the underlying Datasets. The Flashed copy is registered in BMCXCOPY. This type of Backup is non-standard and can only be used by a BMC utility such as RECOVER PLUS. If you don't own RECOVER PLUS, you can use the COPY IMAGECOPY facility of COPY PLUS to create a Standard Imagecopy which can then be processed by any utility of your choice.

Instant Snapshots can be multi-tasked to improve throughput, in which case more than one dataset will be processed concurrently. The limiting factor to throughput then becomes the capability of the Disk Subsystem rather than the utility itself. As soon as the Flashcopy operation has completed, the backup is registered and the utility completes. The process takes only a second or two per dataset, regardless of size. The backup itself is always Disk based and is a physical copy of the DB2 VSAM Linear Dataset. Recovery simply involves flashing the dataset back, so Restore times can be improved just as dramatically as the Backups. Most customers I see Copy their Instant Snapshots offline at a convenient time to create offsite backups and to free up the disk space for the next backup to be taken.
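A sketch of an Instant Snapshot request, reusing keywords shown elsewhere in this presentation (the dataset name pattern is elided, as in the Hybrid Copy sample later; RESETMOD NO is required for Instant Snapshots):

OUTPUT SNAPCOPY DSNAME ... UNIT 3390 DSSNAP YES
COPY TABLESPACE DBSRT.TS1
     COPYDDN(SNAPCOPY)
     RESETMOD NO
     SHRLEVEL CHANGE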

Page 17: Db2 Utility


Multi-tasking in COPY PLUS

• GROUP YES required for multi-tasking
• Quiesce for SHRLEVEL REFERENCE & CONCURRENT
• Specify MAXTASKS in DOPT or PARALLEL in syntax
  – Highest value used if both specified
  – Each subtask can perform tape stacking
  – ACPPRTnn DD dynamically allocated for messages
  – TASK n can be used to direct objects to specific tasks

OPTIONS MAXTASKS 3
OUTPUT OUTCPY UNIT CART STACK YES
COPY TABLESPACE DBSRT1.* COPYDDN(OUTCPY) TASK 1
     TABLESPACE DBSRT2.* COPYDDN(OUTCPY)
     GROUP YES RESETMOD NO FULL NO READTYPE AUTO

One of the most useful features of COPY PLUS is multi-tasking. GROUP YES is a pre-requisite if you wish to use this feature. Remember that if you're running SHRLEVEL REFERENCE or CONCURRENT copies and using GROUP YES, we will use a Quiesce in order to obtain a consistency point.

To invoke multi-tasking, specify either the MAXTASKS Default Option or the PARALLEL keyword in your syntax. I find that most customers use MAXTASKS, as in the example, perhaps because it's more descriptive. If both parameters are specified, COPY PLUS will use the highest value, but it will only ever start as many subtasks as it requires. Each subtask can perform tape stacking independently. COPY PLUS dynamically allocates ACPPRTnn output DD cards as necessary to store the output messages from each subtask.

If your data is significantly skewed you can use the TASK n syntax to direct individual objects or groups of objects to a specific task. This can help ensure that the largest objects are processed first, so the elapsed time of the utility is minimized. You can also use TASK syntax to stack the backups of objects belonging to a database together, as can be seen in the example.

When using Multi-tasking, remember that COPY PLUS is likely to have more than one open thread with DB2, so you need to ensure your CTHREAD, IDFORE and IDBACK DSNZPARM settings are large enough.

Page 18: Db2 Utility


Dynamic Allocation in COPY PLUS

• Different Output Descriptors based on Type of Copy
  – Incremental Copies use COPYDDN & RECOVERYDDN
  – Full copies use FULLDDN & FULLRECDDN if coded
  – Remember the SMARTSTACK option for Incremental Copies
• Different Output Descriptors based on size of Copy
  – OUTSIZE sets the threshold
  – Large objects use BIGDDN & BIGRECDDN for Full copies
  – Have priority over COPYDDN and FULLDDN when the OUTSIZE threshold is exceeded
• Copy Indexes based on size
  – INDEXES YES invokes Index backups along with Tablespaces
  – IXSIZE governs whether indexes are copied

There are a number of very useful parameters that can be used to manage the devices that Imagecopies use, as well as to control whether indexes are backed up.

The first set of options allows Incremental Copies to be placed onto different devices and to use different naming standards than Full copies. If FULLDDN and FULLRECDDN are not coded then both Full and Incremental Copies will use the standard COPYDDN and RECOVERYDDN output descriptors. Remember that we support the SMARTSTACK keyword to automatically stack incremental copies in the same order as their respective full copies.

We also allow Full copies for large objects to be placed into a different set of datasets than those for smaller objects. The threshold is defined using OUTSIZE, and if this is exceeded we will use the BIGDDN Descriptor for the Copy datasets in place of COPYDDN or FULLDDN.

If you're backing up Tablespaces, INDEXES YES can be used along with IXSIZE to determine whether or not to backup Indexes based on size. Remember that we support backing up indexes defined in DB2 as COPY NO.

Taken together these options provide a huge amount of flexibility when it comes to defining where your backups are stored for optimal Space utilization and Recovery Performance. They allow you to automatically place larger backups on Tape, or to use Instant Snapshot for these while using Disk (or even a Cabinet Copy) for smaller objects. This type of Copy is known within BMC as a Hybrid Copy, and has the benefit of being self managing – if an object increases or decreases in size, it will automatically be placed into the correct category of dataset the next time the Backup runs. An example of this type of backup can be found on the next slide.

Page 19: Db2 Utility


Sample Hybrid Copy Job

OPTIONS MAXTASKS 5
        XBMID(XBMP)
        OUTSIZE 100M
        IXSIZE 10M

OUTPUT CABCOPY DSNAME ... UNIT VTAPE STACK CABINET
OUTPUT INSTCOPY DSNAME ... UNIT 3390 DSSNAP YES

COPY TABLESPACE DBSRT.*
     INDEXES YES
     COPYDDN(CABCOPY)
     BIGDDN(INSTCOPY)
     RESETMOD NO
     SHRLEVEL CHANGE
     GROUP YES

Here is a sample Hybrid Copy job. It uses two Output Descriptors:

• CABCOPY, a Cabinet Copy used for the smaller datasets, which will go to a virtual tape device.
• INSTCOPY, an Instant Snapshot taken for any larger objects, in this case those over 100Mb. These will use Flashcopy technology and the backups will remain on disk, although we may choose to process them offline afterwards to free up space for the next day's copies.

The OPTIONS command tells us how many subtasks to use (5 in this instance), so we will be processing 5 objects at a time and may end up with up to 5 Cabinet Copy datasets, one for each subtask. It also defines the name of the XBM subsystem to be used for the Instant Snapshots and the size thresholds both for larger image copies and for whether Indexes will be processed.

The COPY step itself merely pulls together these components, as well as specifying RESETMOD NO (which is required for Instant Snapshots) and the SHRLEVEL to be used. GROUP YES is required in this instance because we are multi-tasking.

Page 20: Db2 Utility


Other uses for COPY PLUS

• Don't forget COPY PLUS can also be used for:
  – COPY IMAGECOPY
  – QUIESCE
  – MODIFY RECOVERY
• All support our Wildcards and special keywords
• MODIFY is particularly useful
  – DELETE based on maximum number of copies or using an SQL-like WHERE clause
  – Deletes and Uncatalogs old image copies
  – Tidies up BMCXCOPY as well as SYSCOPY
  – Verifies Object Recoverability
  – Checks time and log data volumes since last Copy
  – Can generate Copy of any unrecoverable or alerted objects

Don't forget that COPY PLUS can be used for a number of purposes other than simply taking backups. It also supports the COPY IMAGECOPY (equivalent to COPYTOCOPY), QUIESCE and MODIFY RECOVERY utility functions. All of these support our normal range of wildcards and special keywords, which can be useful. For example, although we invoke the IBM Quiesce utility to perform the actual Quiesce, you can use COPY PLUS as an easy-to-use front end if you prefer our wildcarding syntax to IBM's LISTDEF functionality, or to generate a Quiesce of the objects in a Recovery Manager Group.

Of the capabilities listed, MODIFY RECOVERY is probably the most useful. We provide additional keywords which allow features such as removing older Intermediate (Daily) copies from SYSIBM.SYSCOPY while retaining Weekly or Monthly copies. COPY PLUS will also verify the recoverability of objects and can be used to check the elapsed time and the number of Log datasets created since the last Full Copy. These checks can either generate warnings, or the product can automatically execute an Imagecopy to backup any such objects if required.
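For flavour, a MODIFY RECOVERY request might be sketched as below. The DELETE criteria are deliberately elided – the exact keywords for retaining a maximum number of copies or coding an SQL-like WHERE clause are release specific, so check the COPY PLUS Reference Manual before using this:

MODIFY RECOVERY TABLESPACE DBSRT.*
       DELETE ...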

Page 21: Db2 Utility


Optimizing RECOVER PLUS for DB2

• Recovery Plan created during UTILINIT/ANALYZE
  – ANALYZE ONLY describes plan and resources to be used
• Backout Recovery
  – Point-in-time Recovery without Image Copies
  – OK for Indexes even if no Image Copy or defined COPY NO
• Consider INDEXLOG AUTO
  – Recovers indexes if possible, otherwise Rebuilds
  – You may have to change some recovery JCL if you use this
• NOWORKDDN strategy
  – Eliminates SYSUT1 for Index Keys – piped straight to SORT
• Multiple Log readers
• Consider UNLOADKEYS/BUILDINDEX strategy
  – For large partitioned objects with non-clustering indexes

Recovery is something most people don't practice a lot, so running one needs to be as simple as possible. RECOVER PLUS develops a Recovery Plan during the first phase of execution. A number of reports on what we intend to do, together with a summary of the objects affected and the recovery resources that will be used, are generated. How much detail is provided in your job output can be adjusted using the MSGLEVEL parameter.

The following are some options to consider to improve recovery times. Most are either DOPTs or the default behaviour, so you don't need to specify them explicitly each time:

• Backout Recovery will be covered shortly, but provides the ability to recover to a PIT without Image Copies by reading backwards through the log. It is available for Indexes that have never been backed up, as well as for indexes defined using the COPY NO attribute.
• When using INDEXLOG AUTO we attempt to recover indexes using backups and logs, and automatically convert the request to a REBUILD INDEX if this is not possible. The default for this Option setting is INDEXLOG NO so that all existing recovery jobs continue to work, as some types are not eligible for conversion (see the Manual for details).
• The NOWORKDDN strategy avoids the need for SYSUT1 datasets to store index keys by piping them directly into the SORT process. This can affect restart processing but saves resources and improves recovery times. To invoke this, either specify NOWORKDDN or avoid coding a WORKDDN in your job (this is the default).
• Using Multiple Log readers and the UNLOADKEYS/BUILDINDEX strategy are covered later in the presentation.
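A sketch combining two of these recommendations (object name hypothetical; INDEXLOG AUTO is shown on the OPTIONS statement by analogy with the BACKOUT example on a later slide – confirm the placement in your Reference manual):

OPTIONS INDEXLOG AUTO

RECOVER TABLESPACE PAYROLL.EMPLOYEE
RECOVER INDEX (ALL) TABLESPACE PAYROLL.EMPLOYEE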

Page 22: Db2 Utility


Point-in-Time Recovery - BACKOUT

[Diagram: log timeline – Image copy at RBA X'000000000100', Quiesce at X'000000000900' (the PIT_RBA), a bad update at X'000000001000', and the current end of log at X'000000001200' (the START_RBA). BACKOUT reads the log backwards from the START_RBA to the PIT_RBA instead of restoring the image copy and rolling forward.]

RECOVER TABLESPACE EMP.PAYROLL
        TOLOGPOINT X'000000000900'
        BACKOUT

This diagram shows how you might choose to use BACKOUT to avoid having to mount an imagecopy, apply logs and rebuild indexes.

Page 23: Db2 Utility

23

23

Why choose BACKOUT?

• Main benefit is speed of recovery
  – No Mounts for Image Copies required
  – No key sort for index rebuilds

• Also saves resources in most cases

OPTIONS BACKOUT INDEXLOG YES

RECOVER TABLESPACE PAYROLL.EMPLOYEE
        TABLESPACE PAYROLL.RULES
        TOLOGPOINT LASTQUIESCE

RECOVER INDEX (ALL) TABLESPACE PAYROLL.EMPLOYEE
RECOVER INDEX (ALL) TABLESPACE PAYROLL.RULES

The main advantage of using BACKOUT processing is the speed of recovery. It's almost always going to be quicker to process the logs backwards than to restore an imagecopy, apply the logs forward to a known consistency point and then rebuild the associated indexes. In the example job, I am recovering two objects to a known Consistency Point. I have also specified that all the indexes should be recovered as well; since their associated tablespaces are being processed in the same recovery job using TOLOGPOINT syntax, the indexes will automatically be recovered to the same point. This would also happen if I had used TORBA or TOCOPY in my recovery syntax.

Page 24: Db2 Utility


But remember...

• The Space must be physically undamaged
• Process whole log for COPY NO indexes
• You cannot Backout
  – Through the range of a LOAD, REORG or REBUILD utility
  – If object is in a restricted status such as RECP or LPL
  – A segmented TS through a DROP TABLE or Mass Delete
    – Unless Data Capture Changes is ON or the segments affected have not been reused by later inserts
  – LOBs, Not Logged Tablespaces, an index defined using an expression or Compressed indexes
  – Using Keywords such as OBIDXLAT and OUTCOPY ONLY
• An index cannot be recovered through a Backout
  – OUTCOPY YES, run copy afterwards (SHRLEVEL CHANGE) or rebuild index if further recovery needed

One of the requirements of BACKOUT is that the space must be physically undamaged and in a state which reflects all updates made up to the current time (i.e., the space cannot have been restored with DSN1COPY or some other process outside DB2). If COPY NO indexes are being processed, all the log between the target LRSN and the current point will be read, because SYSLGRNX does not record Update ranges for these objects.

There are some restrictions listed, but if you plan to use this feature in your system you should review the Reference manual for your release for exact details. Most of the restrictions are SYSCOPY events. RECOVER PLUS detects these during analysis and fails before performing any processing. Examples are any REORG or LOAD Utility (even if LOG YES), or a REBUILD INDEX. Also detected at analysis time are statuses which are unacceptable, like DEFER, LPL, GRECP, REFP, or RECP, as well as any attempt to BACKOUT into the middle of an existing PIT.

RECOVER PLUS will fail during execution if one of a number of conditions is encountered on a page image being recovered. The most common is that a mass delete or DROP TABLE has been done on a table which is not defined as DATA CAPTURE CHANGES, where the segment logically deleted has been reused (so the data page images are not available). If you encounter such a case, a normal forward recovery must be performed.

An index cannot be forward recovered later through a BACKOUT. You can specify OUTCOPY YES during the index BACKOUT (but this causes all the pages to be processed and so will impact performance), you can start a SHRLEVEL CHANGE copy immediately after completion of the BACKOUT, or you can continue processing and take the risk that you will have to REBUILD if a recovery of the index becomes necessary before it is next image copied.

Page 25: Db2 Utility


Using Multiple Log Readers

• Concurrent Log Reading is Always a Good Thing
  – BMC Recommends setting MAXLOGS to 6
  – Check MAXDRIVES setting
  – Experiment with OPTION statement first
  – Then update AFR$OPTS Installation Options
    – MAXLOGS (default is 1)
    – MAXDRIVE (default is 0 => unlimited tape drives)

[Chart: total recovery elapsed time (0:00:00 to 2:00:00) against MAXLOGS value – 1: 1:18:27, 2: 55:34, 3: 51:06, 4: 45:15, 5: 42:04, 6: 40:56]

BMC Software recommends a value of 6 for MAXLOGS; this seems to achieve optimal results in many shops. The default value is 3.

Unless you want to possibly allocate 6 tape drives during a recovery, ensure that MAXDRIVES is set to a number lower than 6. The default value of 0 for MAXDRIVE specifies that tape drive usage is unlimited. It does not mean that no tapes are used!

You may want to experiment with different values of MAXLOGS and MAXDRIVES on the OPTION statement to find the best combination of settings for your environment. Once you are satisfied with these values, you can define them as default values in the AFR$OPTS macro.

The chart shows that on a lightly loaded system with large archive log files on tape, a value of 6 for MAXLOGS reduces elapsed time by 50% compared to using the value of 1. In practice, the value of MAXLOGS is usually limited by the fact that the effect on elapsed time decreases as the value of MAXLOGS is increased.

In this benchmark, which is a few releases old now, 15.8 billion log records were read from 29 archive logs and 2 active logs; 4 million of these records (520 million bytes) were sorted and applied to spaces. The elapsed time values in the chart are for the total recovery execution time.
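In syntax terms the experiment is a one-liner; a sketch using the recommended reader count while capping tape drives (the OPTION statement name follows the notes above – some utilities spell it OPTIONS):

OPTION MAXLOGS 6 MAXDRIVES 2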

Page 26: Db2 Utility


Sorted UNLOADKEYS - Step 1

• Benefits
  – Greater Concurrency
  – Smaller Sorts
  – Saves Unloaded Keys

[Diagram: three parallel UNLOADKEYS jobs – PARTS 1-4 write to SKEYDDN1, PARTS 5-8 to SKEYDDN2, PARTS 9-12 to SKEYDDN3]

If you need to REBUILD a large non-clustering Index on a partitioned object, the UNLOADKEYS/BUILDINDEX strategy allows you to extract the keys concurrently, saving time during the unload phase. Each job uses its own sort task, which will be smaller and more efficient. A side benefit is that this allows you to save the files with the unloaded keys for future builds. We usually recommend using the sorted UNLOADKEYS strategy if your sort would need more than 1 GB of storage.

You must first set up your jobs to unload the keys. In this example, we have a 12 part space that will be unloaded in 3 separate jobs. Each job produces a key file for the partitions being processed. Here is the first of the jobs, for partitions 1-4:

//BMCRCVR1 EXEC PGM=AFRMAIN,REGION=0M,PARM='DHN1,DMBRCVR1,NEW'

//STEPLIB DD DSN=SYS2.DB2V81M.DSNLOAD,DISP=SHR

// DD DSN=AFR.RUNLIB.LOAD,DISP=SHR

//SKEYDDN1 DD DSN=HLQ.SKEYDDN1,DISP=(NEW,CATLG),

// UNIT=SYSDA,SPACE=(CYL,(800,100),RLSE),VOL=SER=DMB004

//SYSOUT DD SYSOUT=*

//SYSPRINT DD SYSOUT=*

RECOVER UNLOADKEYS(ALL) TABLESPACE DMB16.TSB16 PART 1

RECOVER UNLOADKEYS(ALL) TABLESPACE DMB16.TSB16 PART 2

RECOVER UNLOADKEYS(ALL) TABLESPACE DMB16.TSB16 PART 3

RECOVER UNLOADKEYS(ALL) TABLESPACE DMB16.TSB16 PART 4

Page 27: Db2 Utility


Sorted UNLOADKEYS - Step 2

[Diagram: a single BUILDINDEX job merges SKEYDDN1, SKEYDDN2 and SKEYDDN3 to build the index for PARTS 1-12]

After the unloads have completed, you then submit another job to merge the keys from the Key files and build the index:

//BMCRCVR2 EXEC PGM=AFRMAIN,REGION=0M,PARM='DHN1,DMBRCVRB,NEW'

//STEPLIB DD DSN=SYS2.DB2V81M.DSNLOAD,DISP=SHR

// DD DSN=AFR.RUNLIB.LOAD,DISP=SHR

//SKEYDDN1 DD DSN=HLQ.SKEYDDN1,DISP=OLD

//SKEYDDN2 DD DSN=HLQ.SKEYDDN2,DISP=OLD

//SKEYDDN3 DD DSN=HLQ.SKEYDDN3,DISP=OLD

//SYSOUT DD SYSOUT=*

//SYSPRINT DD SYSOUT=*

RECOVER BUILDINDEX(ALL) TABLESPACE DMB16.TSB16

The merge referenced here is RECOVER PLUS code. Do not confuse it with the merge from your sort package.

Page 28: Db2 Utility


Optimizing REORG PLUS for DB2

• Use SHRLEVEL REFERENCE or CHANGE
  – Uses more space but avoids need to recover after failure
• Use Dynamic Allocation... but not for SORTWORK files
  – Use SORTNUM 32 (Default) to get BMCSORT to allocate these
• COPY YES INLINE YES
  – Set COPYLVL PART (default DOPT is FULL)
    – Also ensures best use of multi-tasking
    – Maybe not if you're processing a large number of partitions
  – Also use ICTYPE UPDATE when using Tape
• Consider Single Phase Reorg UNLOAD RELOAD
  – Especially for Online Reorgs
    – Utility terminates before UTILTERM phase anyway

This first slide on REORG PLUS focuses on all types of reorganization; a later section covers Online Reorgs in more detail.

My first recommendation is to always use one of the non-disruptive techniques – in other words, try to use SHRLEVEL REFERENCE or CHANGE rather than NONE. The reason is that the operational impact of any failure is much reduced, because the shadow objects can simply be discarded, whereas with an Offline Reorg the original objects are placed into Recover Pending if the utility is terminated. The obvious downside is the increased disk space used by the shadow datasets, but you only need this for a short time while the Reorg is running, and the benefits obtained usually far outweigh this additional requirement.

As with all our utilities, we recommend using Dynamic Allocation to maximize multi-tasking opportunities and to simplify the JCL. The exception is SORTWORK files, which were covered in the section on BMCSORT.

When using REORG you almost always want to use Inline copy. Provided you can stand backing up to Disk (or have enough tape units available), using Partition level copies also helps maximize multi-tasking opportunities, but you may want to reconsider this if you're processing large numbers of partitions together. One quick tip if you are using Tape backups during an Online Reorg is to specify ICTYPE UPDATE rather than the normal AUTO. We will then append the pages updated during the Log apply to the end of the copy, rather than running an Incremental copy during the LOGFINAL phase when some objects are in a restricted state.

Using a Single Phase Reorg by specifying UNLOAD RELOAD will improve performance, although it may impact restart, particularly for SHRLEVEL NONE. It also makes the use of SYSREC and SYSUT1 datasets optional, so you can turn off Dynamic Allocation for these two dataset types if you wish to save disk space. A Single Phase Reorg should be an almost automatic choice for SHRLEVEL CHANGE as it does not affect Restartability.
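Pulling these recommendations together, a tuned Online Reorg might be sketched as follows (object name hypothetical; every keyword is one discussed above):

REORG TABLESPACE DBSRT.TS1
      SHRLEVEL CHANGE
      UNLOAD RELOAD
      COPY YES INLINE YES
      COPYLVL PART
      ICTYPE UPDATE
      SORTNUM 32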

Page 29: Db2 Utility


Some DOPTS to review

• COPYDDN use 4 characters (BMCCPY,BMCCPZ)
  – Allows Dynamic Copy datasets for >999 partitions
• DELFILES=YES (NO)
• FASTSWITCH=YES (NO)
• KEEPDICTIONARY=YES (NO)
  – Provided your data is relatively static
  – May be worth rebuilding the Dictionary occasionally
• REDEFINE=NO (YES)
  – Saves CPU but may not achieve optimal allocation
• STAGEDSN DSN (BMC)
  – Avoids unnecessary messages
• Don't change Multi-tasking Options (ending in MAX)
  – Except if asked to do so by BMC Support

This slide lists some Installation Default Options for REORG PLUS which you might want to review.

The first of these is COPYDDN, which usually defaults to (BMCCPY,BMCCPZ). If you have more than 99 partitions and you're using Partition level copies, this will result in an invalid name, so you may want to reduce this to a 4 character string to allow for >999 parts. Don't forget the name still needs to be unique within the job when the partition number is added to it!

Most remaining options are fairly self explanatory, so I will only cover them briefly:

• Using DELFILES=YES is safe – we only try to delete the files if we know they're not going to be needed again. Using this can save you a lot of manual effort afterwards.
• If the nature of your data does not change rapidly then using KEEPDICTIONARY=YES will save CPU. We ignore this if we're performing partition rebalancing during the Reorg, and we will also build a compression dictionary if one does not exist, even if you specify KEEPDICTIONARY=YES.
• If the main purpose of your Reorg is to reorganize the data rather than save space, then using REDEFINE=NO may well make sense and again saves CPU cycles, especially if there are a large number of datasets involved in the Reorg. It's unlikely to have much effect during an Online Reorg as the staging datasets are likely to be redefined anyway.
• For largely historical reasons we still use the old default of BMC for the STAGEDSN option. This option is ignored anyway for Online or Reference Reorgs using Fastswitch, and you can save a warning message by changing the default back to the IBM standard of DSN.
• As long as you're using Dynamic Allocation, REORG PLUS multi-tasking is controlled by a set of DOPTs whose names end in MAX, notably TASKMAX and SMAX. These normally work best using the defaults, so don't change them unless requested to do so by BMC Support.
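By way of illustration, the corresponding fragment of a customized REORG PLUS options module source might look like this sketch (the macro invocation format is an assumption based on the xxx$OPTS convention described earlier, with assembler continuation columns omitted for readability – the Install JCL member for your release shows the exact form):

ARU$OPTS DELFILES=YES,
         FASTSWITCH=YES,
         KEEPDICTIONARY=YES,
         REDEFINE=NO,
         STAGEDSN=DSN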

Page 30: Db2 Utility


Online Reorganizations

[Diagram: Online Reorg processing phases – Init, Analyze, Unload (reading the original TS and IX objects), Reload/Build/Copy (creating the new objects), LogApply, LogFinal & Copy Update, SIX Copy*, Switch and Term. A Log Control task captures log records for the objects being reorganized into a RID Map to drive the log apply. * SIX Copy copies NPIs to staging datasets for Partial Reorg only.]

This diagram runs through the processing phases undertaken by our Online Reorganization. I will discuss tuning this process shortly, but this is an opportunity to briefly mention two other features you may wish to use.

First is SIXSNAP, or Snapshot Index Copy. If you're performing partial Reorgs of partitioned objects with NPIs, we create a shadow copy of the NPI in order to avoid the need for a BUILD2 phase. If you have the appropriate hardware this can be achieved using the SIXSNAP feature, which is invoked by setting the SIXSNAP DOPT to either YES or AUTO in place of the default value of NO. See the manual for more details, as I won't have time to go into this during the session.

The second point is how to send commands to the Utility, either to switch it into LOGFINAL if you're using MAXRO DEFER or simply to see how the work is progressing. You can do this in two ways: by using the XBM online interface, or by sending MVS commands to the utility job itself via the Console or Automated Operations. Again this is too complex an area for the hour we have available, so please see the REORG PLUS Reference Manual or the XBM User Guide for more details.

Page 31: Db2 Utility


Providing Highest Availability

• DOPT keywords provide all the control you need
  – Can be changed using Syntax (shown in brackets, along with the default)
• DRNWAIT=n, UTIL, SQL or NONE (DRAIN_WAIT,UTIL)
  – Using NONE provides best Availability option
  – Using UTIL gives REORG the best chance of completing
• DRNRETRY=n (RETRY,10)
• DRNDELAY=n (RETRY_DELAY,3)
• DRAINTYP=WRITERS or ALL (DRAIN,WRITERS)
  – Use ALL if heavy SQL Update or Long running Read UOWs and you're getting many -911's
  – Slightly longer outage but can be less disruptive in the end
• DSPLOCKS=NONE, RETRY or DRNFAIL (DSPLOCKS,NONE)
  – Provides information on active URIDs if Drain failures
• MAXRO=n or DEFER (MAXRO,300)
  – 300 seconds is an awfully long time!

These are the primary keywords we provide to allow you to completely control how REORG PLUS will impact your applications during an Online Reorg. Most of the control concerns the time we take to obtain the Drains we need: at the start of the utility when we invoke XBM services, and at the end when we start LOGFINAL and subsequently the SWITCH phase. The main keywords to consider changing are DRNWAIT, DRAINTYP and MAXRO.

DRNWAIT tells us how long a Drain request should wait for the access it needs before the request is cancelled and the utility waits to try again. The default for this is UTIL, which is the utility timeout from your DB2 DSNZPARMs (IRLMRWT x UTIMOUT); this is usually over 4 minutes, far too long to prevent SQL failures. If your applications are more important than the Reorg then consider setting this to NONE, so that a Drain times out immediately if it is not successful. You could increase the Retry count and Delay to compensate; at most customers we find this is the preferred setting.

DRAINTYP is relevant at the end of the Log Apply phase, when the utility is about to go into LOGFINAL. It tells us whether to Drain just the Writers before LOGFINAL, or all SQL. In a system with heavy update activity or long running Read UOWs you may find it less disruptive to change this to DRAINTYP ALL. Contact me if you'd like to know more.

MAXRO is the time we estimate it will take to complete the final Log apply process. Dropping below this estimate triggers the start of LOGFINAL. You can use DEFER here and trigger the switch yourself at a convenient time, but whatever you do, 300 seconds sounds too long to me – Read UOWs will time out if you leave it there. Something below your SQL timeout figure sounds more reasonable.
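An availability-first Online Reorg might therefore be sketched like this, using the syntax keywords given in brackets on the slide (the MAXRO of 60 seconds is an illustrative value chosen to sit below a typical SQL timeout):

REORG TABLESPACE DBSRT.TS1
      SHRLEVEL CHANGE
      DRAIN_WAIT NONE
      RETRY 10
      RETRY_DELAY 3
      DRAIN ALL
      MAXRO 60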

Page 32: Db2 Utility


Other uses for REORG PLUS

• All these are available using any SHRLEVEL
• Rebalancing Partitions
  – Provide limit keys manually using a DDLIN dataset
  – Or use the new REBALANCE keyword
• Archiving, Updating or Deleting rows
  – Rows can be deleted using SELECT or DELETE syntax
    – Deleted rows can be saved in the SYSARC dataset
  – Rows can be Updated using UPDATE syntax
• Resize Datasets during REORG
  – Change Primary and Secondary quantities
  – Reorder or only use subset of volumes in a STOGROUP
    – Cannot add new volumes
  – Redefine Yes can be changed to Redefine No at object level

I don't have time to go into the details of these features within the hour we have available, but if you are interested in any of them then please hunt me out during the remainder of the Conference, or feel free to send me an email afterwards – my address is on the last slide of the presentation material.

The first option is to rebalance the partition boundaries of a partitioned tablespace. This works for both Table controlled and Index controlled partitions. You provide the new limits via a DD card called DDLIN, or you can ask us to rebalance up to 255 ranges of logically contiguous partitions.

The next feature allows you to Delete (or archive) rows during a Reorg. You specify which ones using relatively simple SQL-like syntax contained in a SELECT or DELETE statement (depending on which is easier to code). Rows that have not been reloaded can be placed into an Archive dataset for subsequent processing, such as loading into a long term history table or an archive database. We can also update columns in a table during a REORG, again using fairly simple syntax. It's worth noting that these options do not check any referential constraints before processing the data, nor do they set the Check Pending flag, so they do need to be used with a degree of caution. However they can be very useful.

Finally, we are able to resize an object during a Reorg, as well as make a number of other changes to the way the dataset is reallocated. The simplest use of this feature is to change the primary and secondary quantities, but we can also reorder the volumes in a STOGROUP or even limit the dataset to a subset of the volumes. A Reorg that runs using REDEFINE YES can also be dynamically changed to REDEFINE NO at the object level using the same mechanism.
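As a final sketch, a rebalancing Reorg using the new keyword (any part-range operands and related parameters are omitted – the full REBALANCE syntax is in the Reference Manual):

REORG TABLESPACE DBSRT.TS1
      SHRLEVEL REFERENCE
      REBALANCE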

Page 33: Db2 Utility


Steve Thomas
BMC Software

[email protected]

Session G06 – Getting the most out of your BMC DB2 Utilities