Dynamic SQL – Performance tuning Challenges

Heart of America DB2 User Group, Sep 10, 2007, 1:15 pm – 2:30 pm – Billy Sundarrajan, Lead DBA, Fifth Third Bank, [email protected]


Transcript of Dynamic SQL – Performance tuning Challenges

Page 1: Dynamic SQL – Performance tuning Challenges

Dynamic SQL – Performance tuning Challenges

Heart of America DB2 User Group, Sep 10, 2007, 1:15 pm – 2:30 pm

Billy Sundarrajan

Lead DBA

Fifth Third Bank

[email protected]


Page 2: Dynamic SQL – Performance tuning Challenges


Agenda

• Fifth Third Bank Environment

• Performance tuning challenges

• DB2 V8 Enhancements

• DB2 9 Enhancements

• SQL Performance Warehouse

• Summary

• Questions?

This slide gives a high-level overview of the agenda/outline for the presentation.

- Brief discussion of the Fifth Third environment – primarily dynamic SQL, multi-terabyte tables

- Discuss performance tuning challenges in a dynamic SQL intensive environment

- Enhancements in DB2 V8 which help tackle performance issues

- Enhancements in DB2 9

- Brief discussion of the SQL Performance Warehouse

- Recap of the enhancements

Page 3: Dynamic SQL – Performance tuning Challenges


Fifth Third Bank Environment

• Data Sharing (2-way in Dev/Test/Load, 2-way in Prod-DW, 6-way in Prod-OLTP)

• 22 TB DASD

• DB2 V8 NFM

• Data Marts, Data Warehouse, Reporting, OLTP

• Stored Procedures (SQL and COBOL)

• Majority of the SQL workload is dynamic SQL (~22 million dynamic SQLs)

• Majority of large tables populated by LOAD SHRLEVEL CHANGE or MQ

At Fifth Third, the majority of the DB2 work is done in the form of dynamic SQL. Most applications are either web based or assembler based.

The dynamic SQLs get executed under functional/application IDs from a variety of application servers. 80% of the space is occupied by 20+ tables.

There are 6 multi-terabyte tables.

Page 4: Dynamic SQL – Performance tuning Challenges


Dynamic SQL Performance Tuning Challenges

Page 5: Dynamic SQL – Performance tuning Challenges


Dynamic SQL performance tuning challenges

• Identifying the “actual” end user performing a dynamic SQL – most dynamic SQLs get executed by an application server/common authorization ID

• Ability to drill down accounting information to a more granular level than just the authorization ID – middleware such as Business Objects and WebSphere connect to DB2 using a common authorization/RACF ID

• Identifying the contents of the dynamic SQL cache and the access paths being used

• Reducing the number of accounting (SMF 101) records for DISTSERV

This slide addresses the most important and most common challenges faced by a DBA in a dynamic SQL environment.

- Distributed applications usually connect to the database using a functional/application server ID. Every user appears the same to the database administrator. This poses a challenge to the DBA as well as to DB2 Connect administrators and auditors.

- DBAs are often faced with the challenge of identifying resource intensive SQLs. This can be quite challenging if the SQLs do not use parameter markers (i.e., the SQLs are not using the prepared form). A bad query (that is not prepared) which runs 200,000 times (with different host variables) will not show up in the dynamic SQL cache.

- Often information is needed at a more granular level than just the authorization ID.

- Too many SMF 101 records are cut in a dynamic SQL intensive environment.

- The dynamic SQL cache cannot be externalized easily in DB2 V7 to GTF/SMF.

Page 6: Dynamic SQL – Performance tuning Challenges


Dynamic SQL performance tuning challenges

(addressed by DB2 9 Enhancements)

• Using the Resource Limit Facility to control the service units for dynamic SQL – at a more granular level than AUTHID/PLAN/PACKAGE

• Using DB2 traces for dynamic SQL applications – how to control the unit of work being traced

• Optimizing prepared dynamic SQL where the host variables change

• Securing the application server user ID

This slide addresses additional challenges (which DB2 9 enhancements address) faced by a DBA in a dynamic SQL environment.

- Using the Resource Limit Facility, the DBA is able to control the number of service units consumed by an AUTHID/PLAN/PACKAGE. However, this granularity may not be adequate. In a dynamic SQL based application, different portions of the application may need different service unit limits.

- Tracing dynamic SQL can be a challenging task – as most dynamic applications use a single application server ID to connect to DB2, the amount of SMF data produced can be extremely voluminous, and OPx buffers can be flooded. There is also a CPU overhead associated with tracing unnecessary threads.

- Using a common application server ID poses security challenges, as the ID can be used outside of the application server.

Page 7: Dynamic SQL – Performance tuning Challenges


Identifying the “actual” user in a 3-tier architecture

Page 8: Dynamic SQL – Performance tuning Challenges


Identifying the “actual” end user:

• Most dynamic SQL intensive applications use a functional ID/common authorization ID
– A single ID avoids the creation and maintenance of a large number of RACF IDs in a web-based application

– The DBA does not have additional information about the “actual” end user

– In most instances, the IP address in the DB2 monitor is the IP address of the DB2 Connect gateway

As the number of web based DB2 applications, and the number of users of those applications, increased, DBAs and RACF administrators soon realized that it was not pragmatic to create a RACF ID for every user. A front-end/app server/LDAP user validation followed by a single user ID logging on to DB2 became the preferred approach.

While helpful, this approach posed its own set of challenges. The DBA often did not know anything more than the application server/functional ID, and the IP address in the accounting records (IFCID 0003) was usually the IP address of the DB2 Connect gateway.

CLI APIs were provided in DB2 V7 – but the information passed by the APIs to the accounting record could not be easily trapped.

Page 9: Dynamic SQL – Performance tuning Challenges


Identifying the “actual” end user:

• DB2 V7 provided the ability to pass “client accounting”/“client identification” information to SMF 101 records.

• V7 – enhancements to supply client accounting information to the SMF 101 records and to the monitor:
– SQLESETI

– RRSAF

– JDBC Universal Driver

• What does V8 provide?

V8 provides four new registers to trap the information set by the sqleseti APIs.

Page 10: Dynamic SQL – Performance tuning Challenges


Client Identification registers – V8:

• V8 provides new special registers:
– CURRENT CLIENT_ACCTNG

– CURRENT CLIENT_APPLNAME

– CURRENT CLIENT_USERID

– CURRENT CLIENT_WRKSTNNAME

• Allows applications to intercept and use the information supplied/set by a web interface

V8 provides four new registers to trap the information set by the sqleseti APIs. These registers can trap the information set by the client programs. The four pieces of information are: accounting information, application (i.e., transaction) name, end user ID, and end user workstation. Most of the information set is available in the correlation header QWHC and additionally in QMDA.
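
As a quick sanity check, the new registers can be inspected with an ordinary query (SYSIBM.SYSDUMMY1 is the standard one-row table; the values returned are whatever the client set for this connection):

```sql
-- Returns the client information in effect for the current connection;
-- blanks/defaults if the application never set it.
SELECT CURRENT CLIENT_ACCTNG,
       CURRENT CLIENT_APPLNAME,
       CURRENT CLIENT_USERID,
       CURRENT CLIENT_WRKSTNNAME
  FROM SYSIBM.SYSDUMMY1;
```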

Page 11: Dynamic SQL – Performance tuning Challenges


Client Accounting Information register – V8:

• CURRENT CLIENT_ACCTNG (QMDASUFX – VARCHAR(200)) contains the value of the accounting string from the client information that is specified for the connection. Set using:
– sqleseti (SQLE_CLIENT_INFO_ACCTSTR)

– RRS DSNRLI

– DB2Connection.setDB2ClientAccountingInformation(String info)

• In a distributed environment, if the value set by the API exceeds 200 bytes, it is truncated to 200 bytes.

CLIENT ACCTNG – this information consists of a standard piece of accounting information plus a variable portion which gets passed on to QMDASUFX. The suffix is also stored in QWHCTOKEN (22 bytes).

Information can be set using:

• Set Client Information (sqleseti)

• DB2Connection.setDB2ClientAccountingInformation(String info)

• The RRS DSNRLI SIGNON, AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID function

For RRS applications, options are available to set either the accounting string or the accounting token. If the accounting string is set, then the DDF suffix cannot be retrieved. If the accounting string is not set, then the accounting token value is stored in QWHCTOKEN and can be retrieved by using CURRENT CLIENT_ACCTNG. You can also change the value of the DB2 accounting token with RRS SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set.

Page 12: Dynamic SQL – Performance tuning Challenges


Client Userid register – V8:

• CURRENT CLIENT_USERID (QWHCEUID – CHAR(16)) contains the value of the client user ID from the client information that is specified for the connection. Set using:
– sqleseti (SQLE_CLIENT_INFO_USERID)

– RRS DSNRLI

– DB2Connection.setDB2ClientUser(String info)

• In a distributed environment, if the value set by the API exceeds 16 bytes, it is truncated to 16 bytes.

Client user ID – most installations use this field to pass on the network/actual login information used by the person accessing the application. Applications that need row level security can exploit this register by creating a view with a WHERE predicate like (col1 = CURRENT CLIENT_USERID).

Can be set using:

• sqleseti

• DB2Connection.setDB2ClientUser(String info)

• The RRS DSNRLI SIGNON, AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID function
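
The row level security idea above can be sketched as a view; the table and column names here are hypothetical:

```sql
-- Each connection sees only the rows tagged with the end user ID
-- that the application server set via sqleseti/JDBC.
CREATE VIEW APPUSER.V_MY_ACCOUNTS AS
  SELECT ACCT_NO, ACCT_BALANCE
    FROM APPUSER.ACCOUNTS
   WHERE OWNER_ID = CURRENT CLIENT_USERID;
```

Granting SELECT on the view (and not on the base table) ensures the filter cannot be bypassed by the common authorization ID.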

Page 13: Dynamic SQL – Performance tuning Challenges


Client Workstation register – V8:

• CURRENT CLIENT_WRKSTNNAME (QWHCEUWN – CHAR(18)) contains the value of the workstation name from the client information that is specified for the connection. Set using:
– sqleseti (SQLE_CLIENT_INFO_WRKSTNNAME)

– RRS DSNRLI

– DB2Connection.setDB2ClientWorkstation(String info)

• In a distributed environment, if the value set by the API exceeds 18 bytes, it is truncated to 18 bytes.

This field could be used to identify the workstation/IP address of the person originating the transaction. In an application with internal and external users, this feature is very useful for differentiating between the two.

Can be set using:

• Set Client Information (sqleseti)

• DB2Connection.setDB2ClientWorkstation(String info)

• The RRS DSNRLI SIGNON, AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID function

Page 14: Dynamic SQL – Performance tuning Challenges


Client Application Identifier register – V8:

• CURRENT CLIENT_APPLNAME (QWHCEUTX – CHAR(32)) contains the value of the application name from the client information that is specified for the connection. Set using:
– sqleseti (SQLE_CLIENT_INFO_APPLNAME)

– RRS DSNRLI

– DB2Connection.setDB2ClientApplicationInformation(String info)

• In a distributed environment, if the value set by the API exceeds 32 bytes, it is truncated to 32 bytes.

This field can be used to set additional information about the application (usually a transaction identifier). If set correctly, this field can be used to break down the accounting report for dynamic SQLs to a much more granular level than the authorization ID.

This field should not be mistaken for the correlation identifier, which is usually set to the name of the executable.

Page 15: Dynamic SQL – Performance tuning Challenges


EXCSQLSET – DRDA Applications – V8:

• DRDA applications can use EXCSQLSET:
– SET CLIENT USERID ‘my_userid’

– SET CLIENT WRKSTNNAME ‘my_wrkstn’

– SET CLIENT APPLNAME ‘my_applname’

• The additional information allows the ability to group dynamic SQL by client specific information

• EXCSAT EXTNAM/RRS is used for passing the correlation ID – stored in QWHCCV

DRDA requesters can use the “SET CLIENT” syntax to pass client specific information to the accounting records.

Page 16: Dynamic SQL – Performance tuning Challenges


Drilling down on accounting information

• V7 – no easy mechanism to drill down information at a more detailed level than the authorization ID

• V8 –
– With the additional CLIENT registers that are available to the application, additional client information can be easily externalized into a DB2 table

– Reduces the need to wait until SMF records are available

With the additional pieces of information sent to the accounting records, DBAs can use tools like MXG, Omegamon, PM, etc. to report and group the accounting information by client specific information.

Page 17: Dynamic SQL – Performance tuning Challenges


What is in my Dynamic SQL cache?

Page 18: Dynamic SQL – Performance tuning Challenges


What’s in my Dynamic SQL cache?

• The dynamic SQL cache (DSC) contains statements that are prepared and inserted into the global cache (except SQLs using GTTs)
– CACHEDYN=YES

• With IFCID 318 (monitor trace) turned on, the cache contains information about SQL execution, CPU time, getpages, etc.

• Until V8, information on cached statements and access paths could not be easily externalized
– Required a tool such as DB2 PE, Omegamon, etc.

– Reading IFCID 22 (SQL statement)/63 (mini-plan)

• What does V8 provide?

Unlike static SQL, which is available in the catalog tables, dynamic SQL poses significant challenges to a DBA. With the ZPARM CACHEDYN set to YES, most dynamic statements go into a global cache, with the exception of statements that use DGTTs, etc.

The dynamic SQL statement cannot be externalized in V7 to GTF/SMF; it is available only to IFI monitor programs.

Page 19: Dynamic SQL – Performance tuning Challenges


What’s in my Dynamic SQL cache – V8

• Unlike static SQL, DBAs do not have easy access to the dynamic SQL statements run by an application

• V8 provides additional enhancements to EXPLAIN to view information in the DSC

• New EXPLAIN options:
– EXPLAIN STMTCACHE ALL

– EXPLAIN STMTCACHE STMTID

– EXPLAIN STMTCACHE STMTTOKEN

• Differs from traditional EXPLAIN – does not re-explain to obtain the access path

• Can be run using SPUFI, DSNTEP2, or Visual Explain

V8 provides three new options for the EXPLAIN statement to intercept and obtain dynamic SQL information and access path information from the dynamic SQL cache.

Page 20: Dynamic SQL – Performance tuning Challenges


New EXPLAIN options – V8

• EXPLAIN STMTCACHE ALL
– Dumps the SQL statement cache

– Populates a table <current sqlid>.DSN_STATEMENT_CACHE_TABLE

– DB2 V8 NFM required

– Uses LOBs

– Member specific – in a data sharing environment it should be run on all members

EXPLAIN STMTCACHE ALL dumps the entire statement cache into the DSN_STATEMENT_CACHE_TABLE. If you have multiple members in a data sharing group, you will have to perform the EXPLAIN STMTCACHE for each member. In TSO this is easily achieved by connecting to the LPAR where the member(s) are operational. In a DDF environment, this can be achieved by establishing multiple aliases from your desktop or on the DB2 Connect gateway, or by using a type-4 driver to connect directly to the member. This statement merely externalizes the cache; it does not externalize the access paths for all the statements.

The Optimization Service Center (OSC) allows the ability to filter the dynamic SQL cache and save it as a workload. The workload can then be used as input for the EXPLAIN process, which can be done “en masse”.
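
A minimal sketch of the workflow; the statistics columns (STAT_EXEC, STAT_CPU, STMT_TEXT) are shown as documented for V8, but should be verified against your DSN_STATEMENT_CACHE_TABLE DDL:

```sql
-- Snapshot the cache (repeat on each data sharing member):
EXPLAIN STMTCACHE ALL;

-- Then mine the snapshot, e.g. the ten statements with the
-- highest accumulated CPU since IFCID 318 was started:
SELECT STMT_ID, STAT_EXEC, STAT_CPU, STMT_TEXT
  FROM DSN_STATEMENT_CACHE_TABLE
 ORDER BY STAT_CPU DESC
 FETCH FIRST 10 ROWS ONLY;
```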

Page 21: Dynamic SQL – Performance tuning Challenges


New EXPLAIN options – V8

• EXPLAIN STMTCACHE ALL
– Information identical to information from IFCID 316 and IFCID 318

– Collection and reset of statistics is controlled by IFCID 318

– Visual Explain provides an easy to use interface to perform the EXPLAIN command

This option dumps the entire statement cache. If you have multiple members in a data sharing group, you will have to do this multiple times. The IFCID 318 trace, which can be started under class 30, controls the collection and reset of statistics. The statistics and traces are member specific. This EXPLAIN option does not provide access path information on the statements.

Page 22: Dynamic SQL – Performance tuning Challenges


New EXPLAIN options – V8

• EXPLAIN STMTCACHE STMTID
– Specifies that the cached statement associated with the statement ID is to be explained.

– The statement ID is an integer that uniquely determines a statement that has been cached in the dynamic statement cache. The statement ID can be retrieved through IFI monitor facilities from IFCID 316 or 124, and is shown in some diagnostic IFCID trace records such as 172, 196, and 337.

This option obtains the statement as well as the access path information. Information is retrieved for the statement that matches the STMTID.

Page 23: Dynamic SQL – Performance tuning Challenges


New EXPLAIN options – V8

• EXPLAIN STMTCACHE STMTID
– The QUERYNO column is given the value of the statement ID in every row inserted into the plan table, statement table, or function table by the EXPLAIN statement.

– Writes information to the DSN_STATEMENT_CACHE_TABLE and PLAN_TABLE

This option obtains the statement as well as the access path information. Information is retrieved for the statement that matches the STMTID.
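
Putting the two slides together; the statement ID 1234 is a made-up value that would come from IFCID 316 or a monitor:

```sql
-- Explain one cached statement by its ID:
EXPLAIN STMTCACHE STMTID 1234;

-- The access path rows arrive in PLAN_TABLE with QUERYNO = 1234:
SELECT QBLOCKNO, PLANNO, METHOD, ACCESSTYPE, ACCESSNAME
  FROM PLAN_TABLE
 WHERE QUERYNO = 1234
 ORDER BY QBLOCKNO, PLANNO;
```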

Page 24: Dynamic SQL – Performance tuning Challenges


New EXPLAIN options – V8

• EXPLAIN STMTCACHE STMTTOKEN
– Explains all cached statements associated with a statement token (up to 240 bytes)

– The STMTTOKEN is associated with the cached statement by the application program that originally prepares and inserts the statement into the cache.

This option obtains the statement as well as the access path information. Information is retrieved for statements that match the STMTTOKEN. STMTTOKEN is a new column in the PLAN_TABLE.

Page 25: Dynamic SQL – Performance tuning Challenges


New EXPLAIN options – V8

• EXPLAIN STMTCACHE STMTTOKEN
– Application programs set the token using the RRSAF SET_ID function, or the sqleseti API from a remotely connected program.

– The STMTTOKEN column is given the value of the statement token, and the QUERYNO column is given the value of the statement ID for the cached statement with that statement token.

– Writes information to the DSN_STATEMENT_CACHE_TABLE and PLAN_TABLE

This option obtains the statement as well as the access path information. Information is retrieved for statements that match the STMTTOKEN. STMTTOKEN is a new column in the PLAN_TABLE.

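
For completeness, a token-based sketch; the token string is hypothetical and would be whatever the application set via RRSAF SET_ID or sqleseti:

```sql
-- Explain every cached statement tagged with this token:
EXPLAIN STMTCACHE STMTTOKEN 'ORDER-ENTRY';

-- Retrieve the access paths by token:
SELECT QUERYNO, STMTTOKEN, ACCESSTYPE, ACCESSNAME
  FROM PLAN_TABLE
 WHERE STMTTOKEN = 'ORDER-ENTRY';
```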

Page 26: Dynamic SQL – Performance tuning Challenges


EXPLAIN versus EXPLAIN STMTCACHE

• Unlike EXPLAIN, EXPLAIN STMTCACHE does not go through the EXPLAIN process

• DB2 writes the current access path to the PLAN_TABLE; re-explaining the statement can provide different results

This slide talks about the differences between a traditional EXPLAIN and the EXPLAIN STMTCACHE STMTID/STMTTOKEN options.

PROGNAME will contain DSNDCACH for cached statements that are explained.

Page 27: Dynamic SQL – Performance tuning Challenges


PLAN_TABLE – How do I identify a cached statement?

• The PROGNAME column in the PLAN_TABLE and the DSN_STATEMENT_CACHE_TABLE will contain the value DSNDCACH for cached statements

• The new STMTTOKEN column in the PLAN_TABLE contains the statement token value for the statement being explained

This slide talks about the differences between a traditional EXPLAIN and the EXPLAIN STMTCACHE STMTID/STMTTOKEN options.

PROGNAME will contain DSNDCACH for cached statements that are explained.

Page 28: Dynamic SQL – Performance tuning Challenges


How do I reduce the volume of DISTSERV SMF101s?

Page 29: Dynamic SQL – Performance tuning Challenges


Too many SMF101 records for DISTSERV?

• Most installations of DB2 use type 2 inactive connection support (i.e., CMTSTAT=INACTIVE)

• In DB2 V7, with CMTSTAT=INACTIVE, an SMF 101 record is cut every time a thread becomes inactive

• The number of SMF records could exceed several million (we cut 22 million+ DB2 SMF 101 records in a day for DISTSERV)

This slide addresses one of the most common problems faced by installations using CMTSTAT=INACTIVE.

CMTSTAT=INACTIVE results in a significant increase in the volume of SMF 101 records in a dynamic SQL intensive environment.

Page 30: Dynamic SQL – Performance tuning Challenges


Too many SMF101 records for DISTSERV?

• Large volumes inhibit the ability to extract useful information for DDF/RRSAF threads

• Large volumes result in increased batch run times (for processing SMF records)

• What does V8 provide?

This slide addresses one of the most common problems faced by installations using CMTSTAT=INACTIVE.

CMTSTAT=INACTIVE results in a significant increase in the volume of SMF 101 records in a dynamic SQL intensive environment.

Page 31: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – V8

• New dynamic ZPARMs ACCUMACC and ACCUMUID roll up accounting information from DDF/RRSAF threads

• Can be turned on/off dynamically

• Allows the DBA to obtain detailed information if needed, without disruption

• Accounting data accumulated by either, or a combination of:
– User ID

– Transaction name

– Workstation name

With V7, CMTSTAT=INACTIVE controlled when an accounting record was cut.

With V8, the new ZPARMs ACCUMACC and ACCUMUID provide the ability to roll up accounting information.

These ZPARMs are dynamic – so the DBA has the ability to turn them off when detailed information is needed.

Page 32: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – V8

• How do I set the ZPARMs and the registers used for roll-up?
– ACCUMACC

– ACCUMUID

– User ID

– Transaction name

– Workstation name

This slide provides a transition to the slides that address how the ZPARMs ACCUMACC and ACCUMUID, and the special registers used by ACCUMUID, are set.

Page 33: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – V8

• ZPARM – “DDF/RRSAF ACCUM” ACCUMACC (panel DSNTIPN)

• Controls whether DB2 accounting data should be accumulated for DDF and RRSAF threads. The type of accumulation is controlled by the ZPARM ACCUMUID.

The ACCUMACC ZPARM serves as an on/off switch that controls the aggregation of accounting records. If NO is specified, accounting records continue to be written the way they are in V7. If ACCUMACC has an integer 2–65535, DB2 rolls up accounting records for every “n” occurrences of the aggregation field.

The aggregation fields are controlled by the ZPARM ACCUMUID.

Page 34: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – V8

• ZPARM – “DDF/RRSAF ACCUM” ACCUMACC (panel DSNTIPN)
– If “10” is specified (the default), DB2 writes an accounting record for every 10 occurrences of the aggregation field

– If “NO” is specified, an accounting record is cut when a DDF thread is made inactive, or when signon occurs for an RRSAF thread

– If 2–65535 is specified, DB2 writes an accounting record every 'n' occurrences of the aggregation field on the thread, where 'n' is the number specified for this parameter

– The aggregation field is defined by ACCUMUID

The ACCUMACC ZPARM serves as an on/off switch that controls the aggregation of accounting records. The default is 10. If NO is specified, accounting records continue to be written the way they are in V7. If ACCUMACC has an integer 2–65535, DB2 rolls up accounting records for every “n” occurrences of the aggregation field.

The aggregation fields are controlled by the ZPARM ACCUMUID.

Page 35: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – V8

• ACCUMUID – ZPARM – defines the aggregation fields to be used for DDF and RRSAF accounting record rollup

• Aggregation is based on the following three fields that are provided by the application:
– ID of the end user (QWHCEUID, 16 bytes)

– End user transaction name (QWHCEUTX, 32 bytes) – contains the name of the executable

– End user workstation name (QWHCEUWN, 18 bytes)

This slide talks about the special registers used by ACCUMUID to control roll-up. The fields used are the QWHCEUxx fields.

Page 36: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – V8

• Client accounting fields can be set using:
– “Server Connect” and “Set Client” (sqleseti) calls

– RRSAF threads via the RRSAF SIGNON, AUTH SIGNON, and CONTEXT SIGNON functions

– Java programs when using the new Java Universal Driver

This slide addresses the techniques used to set the special registers used by ACCUMUID.

Page 37: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – V8

• ACCUMUID values:
– 0: end user ID AND end user transaction name AND end user workstation name

– 1: end user ID

– 2: end user transaction name

– 3: end user workstation name

– 4: end user ID AND end user transaction name

– 5: end user ID AND workstation name

– 6: end user transaction name AND workstation name

• The default value is 0 (zero). The ACCUMUID value is ignored if ACCUMACC=NO (no DDF/RRSAF rollup).

This slide discusses the various possible values for ACCUMUID.

Page 38: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – V8

• DB2 can override ACCUMACC in certain situations:
– The storage threshold is reached for the accounting rollup blocks

– No updates have been made to the rollup block for 10 minutes

– Not all of the fields specified in the aggregation criteria are supplied

• May provide relief to the “SMF flooding” issue

• May want to check for CPU changes

This slide addresses the situations in which the roll-up ZPARM settings are overridden by DB2.

Whenever the threshold is reached, it is reported in the accounting report.

Page 39: Dynamic SQL – Performance tuning Challenges


SMF 101 records rollup – Advantages

• Provides useful accounting information

• Example:
– Trap CPU intensive SQL – an application executes a single SQL statement 2 million times (using a different literal each time)

– Rolling up information by “end user” allows the DBA to drill down and obtain CPU utilization information

This slide addresses the advantages of rolling up dynamic SQL accounting information. By rolling up information, DBAs are able to narrow down the CPU utilization and tune the SQLs which provide the best returns.

Page 40: Dynamic SQL – Performance tuning Challenges


DB2 v8 and DB2 9 Enhancements

Page 41: Dynamic SQL – Performance tuning Challenges


Other DB2 v8 Enhancements

• Dynamic SQL statements – like static SQL – were limited to 32 KB; in Version 8 the limit changed to 2 MB

• “EXECUTE IMMEDIATE” can use a LOB/CLOB

• The entire statement text is provided by IFCID 350 – similar to IFCID 63

• Invalidate the dynamic SQL cache – RUNSTATS REPORT NO UPDATE NONE, CREATE INDEX

• New ZPARM EDMSTMTC – controls the size of the dynamic SQL cache

This slide addresses some of the “other” enhancements that are available to the DBAs when monitoring and tuning dynamic SQL
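
The cache invalidation trick above looks like this in a utility job; the database and tablespace names are hypothetical:

```sql
-- Invalidates cached dynamic statements that reference objects in
-- this tablespace, without recollecting or updating any statistics:
RUNSTATS TABLESPACE MYDB.MYTS REPORT NO UPDATE NONE
```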

Page 42: Dynamic SQL – Performance tuning Challenges


DB2 9 Enhancements

• Enhancement to the resource limit facility (DSNRLST) based on client variables

• REOPT(AUTO) enhancement

• -START TRACE enhancement

• Trusted context – prevents the application server ID from being used outside of the app server

This slide addresses some of the “other” enhancements that are available to the DBAs when monitoring and tuning dynamic SQL

Page 43: Dynamic SQL – Performance tuning Challenges


Using RLF enhancements – DB2 9

• DB2 9 allows the RLF to be set at a more granular level than AUTHID/PLAN/PACKAGE
– For dynamic SQL the PLAN is always DISTSERV, which does not provide adequate granularity

– Utilizes a table, DSNRLMTxx – the middleware resource limit facility

• Allows the option to specify:
– RLFEUAN – end user application name

– RLFEUWN – end user workstation name

– RLFEUID – end user ID

– RLFIP – the IP address from which the dynamic SQL originated

• -START RLIMIT starts the RLST as well as the RLMT tables

• Only one RLMT table can be active at any point in time

DB2 9 introduces a new set of tables which complement the DSNRLST tables. These tables are specifically geared towards the implementation of resource limits for distributed applications.
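
A sketch of a middleware RLF row, assuming the documented DSNRLMTxx columns; the application name, limit, and RLFFUNC value ('8' for reactive governing) should be checked against your release's documentation:

```sql
-- Cap dynamic SQL arriving from the hypothetical ORDER-ENTRY
-- application at 5000 service units; blank columns act as wildcards.
INSERT INTO SYSIBM.DSNRLMT01
       (RLFFUNC, RLFEUAN, RLFEUID, RLFEUWN, RLFIP, ASUTIME)
VALUES ('8', 'ORDER-ENTRY', ' ', ' ', ' ', 5000);
```

The table is then activated with -START RLIMIT, which starts the RLST and RLMT tables together.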

Page 44: Dynamic SQL – Performance tuning Challenges


REOPT(AUTO) – DB2 9

• Enhancement to REOPT – REOPT(AUTO)
– DB2 determines if a new access path is needed based on the parameter values

– DB2 saves the access path in the dynamic SQL cache

– Combines the advantages of REOPT(ALWAYS) and REOPT(ONCE)

DB2 9 introduces an enhancement to the REOPT option, which was first introduced in DB2 V5. REOPT(AUTO) tries to combine the benefits of REOPT(ALWAYS) and REOPT(ONCE) by re-preparing the statement when the host variables have changed.

Page 45: Dynamic SQL – Performance tuning Challenges


-START TRACE enhancements (DB2 9)

• -START TRACE has new keywords:
– USERID – end client user ID

– APPNAME – application name

– WRKSTN – client workstation name

– CONNID – connection ID

– CORRID – correlation ID

• The constraint block is modified to include a specific end client user ID, application name, workstation name, connection ID, or correlation ID

• The filtering block allows XUSERID, XAPPNAME, XWRKSTN, XCONNID, and XCORRID – to exclude threads matching specific end clients

• The X options are allowed with wildcards, but X(*) is not allowed

This slide addresses enhancements to the -START TRACE command that are specifically useful for dynamic SQL intensive applications.

Starting traces for dynamic SQL can be challenging, as the common functional ID used can create vast amounts of data or flood the OPx buffers.

DB2 9 for z/OS allows the ability to start traces for specific end clients. In addition, a filtering block allows the option to exclude specific end clients.
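
A sketch of the new filtering, using the keywords listed above; the user ID and application name are illustrative, and the first command traces only end clients whose user ID begins with JSMITH while the second excludes threads from application names beginning with BATCH:

```
-START TRACE(MON) CLASS(1) DEST(SMF) USERID(JSMITH*)

-START TRACE(MON) CLASS(1) DEST(SMF) XAPPNAME(BATCH*)
```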

Page 46: Dynamic SQL – Performance tuning Challenges


TRUSTED CONTEXT

• Addresses the security/audit challenges of using a common middleware/functional ID to connect to DB2
– A database entity which allows for a unique set of interactions based on an authorization ID and connection attributes

– Two types of trusted context:

• Implicit – no application changes are needed

• Explicit – application changes are needed

– Allows authorization IDs to connect from a specific IP address without authentication, and assigns default roles

– An explicit trusted context allows for user ID switching

– Switching user IDs allows the actual user to be recorded when logging all database changes

A trusted context addresses issues in a three-tier architecture where a common functional/application server ID is used to connect to the database. Creating a trusted network context helps address these issues. A trusted context is a database entity which allows for a unique set of interactions based on an authorization ID and connection attributes.
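
A DDL sketch under hypothetical names (APPSRV1, APPROLE, and the address are made up):

```sql
-- APPSRV1 is trusted only when it connects from this address;
-- the connection then runs with the privileges granted to APPROLE.
CREATE TRUSTED CONTEXT CTX_APPSRV
  BASED UPON CONNECTION USING SYSTEM AUTHID APPSRV1
  ATTRIBUTES (ADDRESS '10.1.2.3')
  DEFAULT ROLE APPROLE
  ENABLE;
```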

Page 47: Dynamic SQL – Performance tuning Challenges


TRUSTED CONNECTION

• Addresses the challenges of using a common middleware/functional ID to connect to DB2
– A ROLE is a database entity to which authorities can be granted

– A TRUSTED CONNECTION is a database connection established in a TRUSTED CONTEXT

– Misuse of application IDs can be alleviated by granting security on DB2 objects to roles

– Common application IDs can be granted the ROLE under a TRUSTED CONTEXT

A database connection which matches a TRUSTED CONTEXT’s attributes is known as a trusted connection. Default roles can be assigned to authorization IDs using a trusted connection. By assigning privileges to a ROLE, and assigning the ROLE to a common middleware functional ID under a trusted context, we can alleviate the misuse of the common user ID. The common functional ID will not have any privileges unless connecting from the middleware server.

Page 48: Dynamic SQL – Performance tuning Challenges


Performance Warehouse

Page 49: Dynamic SQL – Performance tuning Challenges


Performance Warehouse

• The performance warehouse is a collection of DB2 performance data
– Performance information (summary and exception) at a plan, package, and SQL level, stored in custom tables, serves as a tuning repository

– Plan/package information can be extracted from SMF 101 (IFCID 3 and IFCID 239 records)

– Information is augmented with client specific information

– Tables for the top “n” dynamic and static SQL

– SQL level data is extracted from SQL level monitoring tools such as IBM DB2 Query Monitor, the dynamic SQL cache, and DB2/OSC monitor profiles (DB2 9 onwards)

It is extremely important that performance information at a Plan/Package/SQL level is extracted from DB2 Accounting records and stored in tables.

Two types of information should be extracted: Summary, and Exception.

Information from DB2 should be augmented with “shop” specific information.

Tables can be created to capture Top “n” static and dynamic SQL.

SQL level performance can be extracted from IBM DB2 Query Monitor, Dynamic SQL cache, Optimization service center/DB2 Monitor profiles.

Page 50: Dynamic SQL – Performance tuning Challenges


Summary

• V8/9 provide a plethora of tools to “tame” dynamic SQL:
– Special registers

– SQLESETI/JDBC/RRS signon to set client variables

– ACCUMACC/ACCUMUID to reduce SMF records

– IFCID 350

– RUNSTATS REPORT NO UPDATE NONE to invalidate the dynamic SQL cache

– ZPARM EDMSTMTC

– -START TRACE, DSNRLMT, REOPT(AUTO)

This slide summarizes in a nutshell the advances in dynamic SQL and how they help the database administrator in tuning dynamic SQL.

Page 51: Dynamic SQL – Performance tuning Challenges


Questions?
