Transcript of SAP® MaxDB™ Administration

André Bögelsack, Stephan Gradl, Manuel Mayer, Helmut Krcmar

SAP® MaxDB™ Administration

Bonn · Boston


Contents at a Glance

1 Introduction to SAP MaxDB ....................................... 11

2 Overview of SAP MaxDB ............................................ 21

3 SAP MaxDB and SAP .................................................. 89

4 Administration Tasks ................................................... 115

5 Performance Tuning .................................................... 207

6 Problem Situations ...................................................... 281

7 Summary and Outlook ................................................ 305

A Command Reference dbmcli ....................................... 309

B The Authors ................................................................. 317


Contents

1 Introduction to SAP MaxDB ........................................ 11

1.1 History ............................................................ 11
1.2 SAP MaxDB Features ........................................ 12

1.2.1 General Features .......................................... 12
1.2.2 Flexibility during Operation .......................... 14
1.2.3 SQL Modes and Interfaces ............................ 15
1.2.4 Areas of Use ................................................. 16

1.3 Useful Internet Sources ..................................... 16
1.3.1 Official SAP MaxDB Website ......................... 17
1.3.2 SAP MaxDB Wiki on the SAP Developer Network ... 17
1.3.3 SAP MaxDB FAQ ........................................... 17
1.3.4 SAP MaxDB Forum ....................................... 17

1.4 Structure of this Book ....................................................... 18

2 Overview of SAP MaxDB ............................................. 21

2.1 SAP MaxDB — Instance Types ........................... 21
2.1.1 OLTP and OLAP ............................................ 21
2.1.2 SAP liveCache ............................................... 23

2.2 SAP MaxDB — Software .................................... 26
2.2.1 The X Server ................................................. 27
2.2.2 Database Studio ........................................... 27
2.2.3 Database Manager GUI ................................. 30
2.2.4 Database Manager CLI .................................. 34
2.2.5 SQL Studio ................................................... 35
2.2.6 SQL CLI ........................................................ 36
2.2.7 Web SQL ...................................................... 37
2.2.8 Other Utilities ............................................... 38

2.3 SAP MaxDB — User Concept ............................. 45
2.3.1 MaxDB Users ................................................ 45
2.3.2 Operating System Users ................................ 52
2.3.3 Security Aspects ........................................... 54

2.4 Database Concepts ........................................... 55
2.4.1 Kernel Threads ............................................. 56
2.4.2 Caches ......................................................... 63
2.4.3 Data and Log Volumes .................................. 70
2.4.4 Savepoints and Snapshots ............................. 76


2.4.5 Locking ........................................................ 78
2.4.6 Directory Structure ....................................... 80
2.4.7 Operational States ........................................ 83
2.4.8 Database Parameters .................................... 84
2.4.9 Configuration Files ....................................... 86

2.5 Summary .......................................................................... 87

3 SAP MaxDB and SAP ................................................... 89

3.1 SAP Architectures ............................................. 89
3.1.1 ABAP and Java Stack .................................... 90
3.1.2 Architecture Levels ....................................... 91

3.2 Communication with SAP MaxDB ..................... 93
3.2.1 SAP MaxDB Interfaces .................................. 96
3.2.2 Communication with SAP Systems ................ 99

3.3 Important Transactions ..................................... 102
3.3.1 Transaction DB50 — Database Assistant ........ 103
3.3.2 Transaction DB13 ......................................... 109
3.3.3 Transaction RZ20 ......................................... 112

3.4 Summary .......................................................................... 114

4 Administration Tasks ................................................... 115

4.1 Server Software Installation and Upgrade .......... 115
4.1.1 SDBINST/SDBSETUP ..................................... 116
4.1.2 SDBUPD ....................................................... 126

4.2 Creating and Initializing the Database ............... 130
4.2.1 Planning the Database .................................. 130
4.2.2 Creating the Database via the GUI ................ 132
4.2.3 Creating the Database via the dbmcli Tool ..... 140
4.2.4 Interaction with SAPInst ............................... 144

4.3 Configuring the Database .................................. 146
4.3.1 Adding and Deleting Data/Log Volumes ........ 146
4.3.2 Configuring Log Volumes and Log Mode ....... 154
4.3.3 Updating the System Tables .......................... 158
4.3.4 Parameter Changes ....................................... 159

4.4 Database Backup .............................................. 163
4.4.1 Backup Concepts .......................................... 163
4.4.2 Creating a Backup Medium ........................... 166
4.4.3 Incremental and Complete Backup ................ 171
4.4.4 Log Backups ................................................. 174


4.4.5 Snapshots ..................................................... 177
4.4.6 Checking Backups ........................................ 182

4.5 Database Recovery ........................................... 186
4.5.1 Recovery Types ............................................ 186
4.5.2 Recovery Strategy ........................................ 187
4.5.3 Recovery/Recovery with Initialization ........... 188
4.5.4 Reintegrating Faulty Log Mirrors .................. 192
4.5.5 Bad Indexes ................................................. 193

4.6 Consistency Checks .......................................... 196
4.6.1 General Description ..................................... 196
4.6.2 Checking the Database Structure .................. 197

4.7 Deleting the Database ...................................... 200
4.7.1 Deleting the Database .................................. 200
4.7.2 Server Software Uninstallation ..................... 203

4.8 Summary .......................................................................... 206

5 Performance Tuning ..................................................... 207

5.1 Performance Optimization ................................ 208
5.2 Indexes ............................................................ 208

5.2.1 B* Trees: Theory .......................................... 209
5.2.2 Primary and Secondary Key .......................... 211

5.3 The Database Optimizer ................................... 218
5.3.1 Basic Principles ............................................ 218
5.3.2 Criteria for Selecting Specific Access Strategies ... 223

5.4 Caches ............................................................. 226
5.4.1 Background .................................................. 226
5.4.2 The Various Caches ...................................... 227
5.4.3 The Appropriate Size of the Caches ............... 230
5.4.4 The Most Important Information in Caches ... 230
5.4.5 Critical Region Statistics ............................... 235

5.5 Analysis Tools .................................................. 237
5.5.1 Database Analyzer ........................................ 238
5.5.2 Resource Monitor ........................................ 249
5.5.3 Command Monitor ....................................... 253
5.5.4 SQL Explain ................................................. 260

5.6 Performance with SAP NetWeaver AS ............... 263
5.6.1 SAP NetWeaver AS Performance Analysis ..... 264
5.6.2 Load Analysis ............................................... 266
5.6.3 Database Analysis in SAP NetWeaver AS ....... 267

5.7 Summary .......................................................................... 280


6 Problem Situations ...................................................... 281

6.1 Diagnostic Files ................................................ 281
6.1.1 Dev Traces ................................................... 282
6.1.2 SQL Trace .................................................... 282
6.1.3 SQLDBC Trace .............................................. 283
6.1.4 X Server Log: xserver_<hostname>.prt .......... 284
6.1.5 appldiag ....................................................... 285
6.1.6 dbm.prt ........................................................ 285
6.1.7 KnlMsg (knldiag) .......................................... 286
6.1.8 KnlMsgArchive (knldiag.err, dbm.utl) ........... 287
6.1.9 dbm.knl ....................................................... 288
6.1.10 dbm.ebp ..................................................... 289
6.1.11 dbm.ebl ..................................................... 289
6.1.12 rtedump ..................................................... 290
6.1.13 knltrace ...................................................... 291
6.1.14 knldump ..................................................... 292

6.2 Error Types and Analysis .................................. 293
6.2.1 Installation Problems ................................... 293
6.2.2 Connection Problems ................................... 294
6.2.3 Log Full/Data Full ........................................ 295
6.2.4 System Crash/System Error ........................... 299
6.2.5 System Blockade .......................................... 300
6.2.6 Backup/Recovery Error ................................. 302
6.2.7 Hardware Error ............................................ 303

6.3 Summary .......................................................................... 304

7 Summary and Outlook ................................................. 305

Appendices ........................................................................ 307

A Command Reference dbmcli ....................................... 309
B The Authors ................................................................ 317

Index ............................................................................................. 319


Caches, indexes, and analysis tools and how they're used efficiently — this chapter provides background information and describes how you can identify and eliminate the causes of performance bottlenecks.

5 Performance Tuning

Databases ensure both the persistency and integrity of data. That databases are widely used in nearly all IT areas is largely a result of the very fast and flexible access options to stored information. This chapter discusses the theoretical and technical principles that enable this high-performance access. Furthermore, it introduces the means and methods for recognizing, analyzing, and eliminating performance bottlenecks.

Section 5.1 describes the performance concept and defines the database administrator's options for optimizing performance. Section 5.2 introduces the structure of the database storage concept, the theoretical background of the search structure used in SAP MaxDB — the B* tree — and its characteristics for primary and secondary indexes. When and how you use these search structures when accessing data, and how you can provide the necessary information for optimized access, is explained in Section 5.3. This section also describes how you can accelerate the execution of slow SQL statements. The section on caches (Section 5.4) explains why accessing data on disk is — despite these search structures — considerably slower than reading data from main memory. You'll also learn how to benefit from the speed advantage of main memory in SAP MaxDB. Section 5.5 provides information on how you can monitor the database using the Database Analyzer. In addition, this chapter illustrates how you can use the Resource Monitor to identify the SQL statements that cause the greatest load on the database, how you can use the Command Monitor to search for single expensive SQL statements, and how you can analyze these statements using the SQL Explain statement. The last section covers the analysis process with SAP NetWeaver AS. It describes how you can use a transaction of an SAP system to identify performance bottlenecks and analyze and eliminate the causes of these bottlenecks.


5.1 Performance Optimization

A central aspect of database performance is the speed with which SQL statements are processed. The faster queries are processed, the greater the performance of the database system. That means that you can influence the performance of the database by ensuring that the database supports the expected queries in the best possible way. This way, SQL statements incur less cost; that is, they become less expensive.

Queries are Expensive …

- if they query large datasets with a potentially high percentage of redundant data.
- if one or several tables need to be scanned for their execution.

The database developer is entirely responsible for the first scenario. You can only accelerate this type of query by physically clustering the data appropriately in background memory. This, however, isn't supported by many database systems for secondary indexes — including SAP MaxDB.

The second type of cost-intensive queries can be optimized and thus made less expensive by tuning the database appropriately. To optimize the second query type, you first need to understand how the database system usually executes queries. This is done using an execution plan. The system generates a new execution plan for each request and defines the type of access to the data. Scanning the entire table is one intuitive option. Another option is to use data structures, which can considerably accelerate the scanning process, particularly for large tables. The following section also discusses the corresponding data structures and their usage.
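To make the difference between a table scan and an index access concrete, the following sketch uses Python's built-in sqlite3 module as a stand-in for a generic SQL database (SAP MaxDB itself isn't scriptable this way from the standard library; the table and index names are invented for illustration). The execution plan for the same query changes once an index exists:

```python
import sqlite3

# sqlite3 stands in for a generic SQL database; the table is hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inhabitants (zip TEXT PRIMARY KEY, city TEXT)")
con.executemany("INSERT INTO inhabitants VALUES (?, ?)",
                [("98104", "Seattle"), ("97306", "Salem"), ("77004", "Houston")])

query = "SELECT * FROM inhabitants WHERE city = 'Seattle'"

# Without an index on city, the plan is a full table scan ("SCAN ...").
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# With an index, the plan switches to an index search ("SEARCH ... city_idx ...").
con.execute("CREATE INDEX city_idx ON inhabitants (city)")
print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```

The exact plan wording varies between SQLite versions, but the switch from a scan to an index search is the same decision a cost-based optimizer makes in any database system.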

5.2 Indexes

Due to the size of today's databases — in some cases they're several petabytes in size — you must store the data on hard disks, because it can't be stored in main memory. Because accessing data on hard disks is significantly slower than accessing data in main memory, the number of disk accesses that are required to read a data record is a main criterion for performance. Therefore, the SQL optimizers described in Section 5.3, The Database Optimizer, were implemented in all database systems to decide which access strategy requires the least number of disk accesses and consequently has the highest performance. The following text first explains the theoretical concepts that enable you to read a data record in background memory with a guaranteed maximum number of disk accesses. After discussing the theoretical principles, it then describes the properties of B* trees in SAP MaxDB.

5.2.1 B* Trees: Theory

As the name implies, this data structure is a tree. A tree is a robust and powerful data structure that is integrated into nearly all modern database systems, including the leading ones. It's often referred to as the data structure of the relational database model.

In the internal nodes, the B* tree only uses reference keys, which don't have to correspond to real keys. For SAP MaxDB, the reference keys correspond to real keys, but this isn't a prerequisite for this data structure and depends on the implementation of the database manufacturer. Because each node occupies a complete page in the background memory, the system can store many of these reference keys in one node. Even for large datasets, there are thus only a few levels in the tree, so fewer disk accesses are required to find a data record. Real keys are assigned to the data at the lowest level, that is, at the leaf level. At this level, the system implements another optimization of the data structure for sequential reading: each background memory page contains additional references to the previous and the next page. That means that if you've found the entry point, you only have to follow the sequential references until the search predicate is no longer met (see Figure 5.1).

The algorithms for adding and deleting data are structured in such a way that the tree is always balanced. This means that the distance from the root of the tree to any leaf — that is, to any data record — is always the same.


[Figure: schematic node layout of a B* tree — separator values V0 … Vn interleaved with page references R1 … Rn, free-space information, and the sequential search path along the leaf level versus the index search path down the tree.]

Figure 5.1 Schematic Structure of a B* Tree

The following illustrates the benefits of the B* tree by comparing it with the B tree. Because the B tree is an internal search tree that also stores data in its inner nodes, it's less adapted to the properties of background memory than the B* tree, which has the same height but references an even larger number of data pages. The smaller the tree, the fewer accesses are required to find a data record.

If a tree has four levels, and each internal node can accommodate 200 reference keys, it references at least 1.6 × 10^7 items, that is, data records. For the same height — that is, for the same maximum number of disk accesses that are required to find a data record — this tree can expand to a size of 2.56 × 10^10 items without losing performance. Because a portion of an index is in the cache, the system frequently only needs two to three disk accesses to find a data record from a dataset of 10 billion data records. At 1KB per data record, this corresponds to a table size of about 10TB.
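The numbers above can be reproduced with a small sketch. The fill-factor assumptions are mine, chosen so the formula matches the figures in the text: every node holds between 200 and 400 references (the classic 50% fill guarantee), and the root has at least two children.

```python
def btree_capacity(min_refs: int, height: int) -> tuple[int, int]:
    """Minimum and maximum number of data records a tree of the given
    height can reference, assuming each node holds between min_refs and
    2 * min_refs references and the root has at least two children."""
    minimum = 2 * min_refs ** (height - 1)   # every node only half full
    maximum = (2 * min_refs) ** height       # every node completely full
    return minimum, maximum

low, high = btree_capacity(min_refs=200, height=4)
print(f"{low:.1e}")    # 1.6e+07
print(f"{high:.2e}")   # 2.56e+10
```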


Having described the theoretical properties of the B* tree in this section, in the following section we’ll describe where B* trees are integrated and how you use them.

5.2.2 Primary and Secondary Key

SAP MaxDB stores tables directly in B* trees. Figure 5.2 shows the structure of a B* tree in SAP MaxDB. The tree is created from the bottom, that is, from the leaf level. This lowest level contains the data records in ascending order according to the primary key.

The index level nodes are determined from the values of the leaf level. If the system reaches the end of a page at the leaf level, it creates a new entry at the index level. This entry contains the characters that distinguish the first entry of the new page from the last entry of the previous page. In our example, this applies to Seattle: in the list of cities, Salem would be the last entry of the previous page, so the entry at the index level is “SEA.” The creation of the primary index continues until all references fit on one page, the root page.
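The “SEA” entry can be understood as a separator key: a short string that routes a search to the correct page. A minimal sketch of this idea follows (my own simplification; the actual key SAP MaxDB stores may include more characters than the strict minimum, as the “SEA” in the example suggests):

```python
def separator_key(last_on_prev_page: str, first_on_new_page: str) -> str:
    """Shortest prefix of the new page's first key that is strictly
    greater than the previous page's last key, so that all keys on the
    previous page sort below it and all keys on the new page at or above."""
    for i in range(1, len(first_on_new_page) + 1):
        prefix = first_on_new_page[:i]
        if prefix > last_on_prev_page:
            return prefix
    return first_on_new_page

print(separator_key("SALEM", "SEATTLE"))   # SE
print(separator_key("HOUSTON", "SALEM"))   # S
```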

[Figure: a B* tree with root level, index level, and leaf level; the leaf pages hold the city records (Albany, …, Frankfort, Houston, Richmond, Salem, Seattle, …), while the index level holds reference keys such as “SEA”.]

Figure 5.2 Sample Storage of a Table in a B* Tree

If you add data records when you use the table and the references no longer fit on the page at the root level, the system divides the root page and converts the two resulting pages into index pages to which a new root page refers. Figures 5.3 and 5.4 illustrate an example of this.


[Figure: a two-level tree whose root page references the leaf page containing Houston, Salem, and Seattle via the key “Sea”.]

Figure 5.3 Situation Before the Data Records Have Been Added

[Figure: after the insertion, the root page holds the keys “Sea” and “F” and references two leaf pages; Frankfort is stored on the new page.]

Figure 5.4 Situation After the Data Records Have Been Added

All entries at all levels are linked via sequential links that enable the system to also execute range queries with high performance. The maximum number of table entries is limited because the B* tree of the primary index in SAP MaxDB is restricted to a height of four index levels and one root level. However, because a logical page has a size of 8KB, sufficiently large tables can be managed.

On the data pages, the entries aren't sorted according to the index but stored in their historical order in the initial area of the data page. The sort order regarding the primary key is maintained using an item list, which is located at the end of each data page. This item list is arranged from right to left so that the item list and the data continuously approach each other. If the system now searches for a data record, it can find and read the record using the item list. Figure 5.5 shows the schematic structure of a data page.
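The item-list mechanism can be sketched as follows. This is a deliberately simplified model, not MaxDB's actual page format: records stay in arrival order, while a separate position list is kept sorted by key and used for lookups.

```python
import bisect

class DataPage:
    """Toy model of a data page: records in insertion (historical) order
    plus a sorted item list of record positions kept at the page end."""

    def __init__(self) -> None:
        self.records = []    # (key, payload) in arrival order, never resorted
        self.item_list = []  # positions into self.records, sorted by key

    def _sorted_keys(self):
        return [self.records[pos][0] for pos in self.item_list]

    def insert(self, key: str, payload: str) -> None:
        self.records.append((key, payload))                # append only
        where = bisect.bisect_left(self._sorted_keys(), key)
        self.item_list.insert(where, len(self.records) - 1)

    def lookup(self, key: str):
        keys = self._sorted_keys()
        i = bisect.bisect_left(keys, key)                  # binary search
        if i < len(keys) and keys[i] == key:
            return self.records[self.item_list[i]][1]
        return None

page = DataPage()
for city in ["Houston", "Albany", "Seattle", "Chicago"]:
    page.insert(city, f"row for {city}")

print([key for key, _ in page.records])   # still in insertion order
print(page.lookup("Chicago"))             # row for Chicago
```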


If the system is supposed to read a data record of the table using a request, the index only supports this request optimally if the WHERE condition filters on exactly the fields that are indexed by this index. Because SAP MaxDB creates a B* tree index for each primary key, a request for this example could be as follows:

Select * from inhabitants where city = 'Seattle'

[Figure: a data page whose data entries (Houston, Los Angeles, San Francisco, Springfield, Albany, Boston, Seattle, Chicago, Salem, Detroit, New York) are stored unsorted in arrival order, while the item list at the end of the page lists their positions sorted by key.]

Figure 5.5 Structure of a Data Page

Figure 5.6 illustrates the access to a data record via a primary index. First, the system scans the root page. When the searched value is smaller than an entry on the root page, the system follows the reference of this entry to the next index level. The system now scans the node reached at this level using the same concept. If the system reaches the end of the page without having found an entry that is logically greater than the search key, it uses the last reference on this page. This procedure is repeated until the system reaches the leaf level and finds the value via the already mentioned item list on the data pages.
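The descent just described can be sketched with plain dictionaries (the node layout is my own simplification, not MaxDB's page format):

```python
def find_leaf(node: dict, key: str) -> dict:
    """Descend from the root: follow the reference of the first entry
    whose separator the key is smaller than; if the end of the page is
    reached without such an entry, follow the last reference."""
    while not node.get("leaf"):
        for separator, child in node["entries"]:
            if key < separator:
                node = child
                break
        else:                              # end of page reached
            node = node["entries"][-1][1]  # follow the last reference
    return node

left = {"leaf": True, "keys": ["Albany", "Houston"]}
right = {"leaf": True, "keys": ["Salem", "Seattle"]}
root = {"entries": [("M", left), ("Z", right)]}

print("Houston" in find_leaf(root, "Houston")["keys"])   # True
print("Seattle" in find_leaf(root, "Seattle")["keys"])   # True
```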

To store field content of the LONG type, the system uses specific B* trees, depending on the respective length. Here, you distinguish between two types of LONG values: short LONG values, which fit on one logical page, and long LONG values, which require more than one logical page. The system manages all short LONG values in one B* tree. As a result, the data page of the table contains a reference to this B* tree of the short LONG values instead of the value of the LONG field. If the content of the LONG field exceeds one logical page, the system creates a separate B* tree for this value. The entry on the data page then references the B* tree of this single value. Figure 5.7 shows a diagram of this concept.

[Figure: descending the B* tree for the search key “Seattle”: on the root page the system checks whether “Seattle” is less than an entry; because it isn't less than “F”, it follows the last determined link down through the index level to the leaf page containing Seattle.]

Figure 5.6 Accessing a Data Record

[Figure: the base table City (columns ZIP, City, Information) stores references <long1> … <long12> instead of the LONG values themselves; all short LONG values live together in one shared B* tree, while each long LONG value gets a B* tree of its own.]

Figure 5.7 Storing LONG Values


The system automatically creates the previously mentioned indexes for each table in SAP MaxDB. That means that it creates the corresponding B* trees for the primary key of a table and for the LONG values. You can also add indexes on additional columns of a table. This is often done for secondary keys because relational modeling logically links tables with other tables using these keys. This logical link would have a strong negative effect on performance if additional accesses to data records via secondary keys — and thus B* trees — weren't supported. In general, the structure of a B* tree for additional indexes is identical to the structure of B* trees for primary keys.

However, a difference exists when it comes to the relational modeling of tables. The field or fields of the primary key uniquely identify each data record, and the primary index relies on this condition. For secondary keys, this uniqueness isn't guaranteed. The following illustrates this using address data as an example.

Table 5.1 uses the ZIP code as the primary key and the name of the city and a description as additional fields. This table was deliberately designed as simply as possible and lists every city only once, although, of course, larger cities have numerous ZIP codes.

ZIP City …

48217 Detroit …

84113 Salt Lake City …

97306 Salem …

33149 Miami …

77004 Houston …

98104 Seattle …

75201 Dallas …

46205 Indianapolis …

08079 Salem …

12865 Salem …

80216 Denver …

94102 San Francisco …

Table 5.1 Example for Data Records with Identical City Names and Different ZIP Codes


ZIP City …

19118 Philadelphia …

30316 Atlanta …

74354 Miami …

89044 Las Vegas …

01106 Springfield …

53227 Milwaukee …

Table 5.1 Example for Data Records with Identical City Names and Different ZIP Codes (Cont.)

If the system should now also support access to the data records of this table via the “City” field, there may be several ZIP codes for one city name because several cities have the same name. As a result, the system uses inverted lists for this case, as shown in Table 5.2. These lists can be stored at the leaf level as long as they fit on one data page.

City ZIP

Houston 77004

Dallas 75201

San Francisco 94102

Detroit 48217

Denver 80216

Philadelphia 19118

Miami 33149,74354

Salem 97306,08079,12865

Seattle 98104

Indianapolis 46205

Salt Lake City 84113

Springfield 01106

Milwaukee 53227

Las Vegas 89044

Table 5.2 Inverted List for the Index via the Column “City”


Thus, this B* tree has a unique search criterion for the additional index. If the inverted list for a city becomes too long, the list is relocated and managed in a separate B* tree. In the original index that manages the inverted lists, the system then creates a reference to the B* tree created for this inverted list at the position reserved for this entry.
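The inverted list in Table 5.2 can be rebuilt in a few lines (city and ZIP values taken from the tables above; the dictionary merely stands in for the secondary-index B* tree):

```python
from collections import defaultdict

# A few (primary key, secondary key) pairs from Table 5.1.
rows = [("48217", "Detroit"), ("97306", "Salem"), ("33149", "Miami"),
        ("77004", "Houston"), ("08079", "Salem"), ("74354", "Miami"),
        ("12865", "Salem")]

# Inverted list: each city maps to every primary key it occurs under.
inverted = defaultdict(list)
for zip_code, city in rows:
    inverted[city].append(zip_code)

print(inverted["Salem"])   # ['97306', '08079', '12865']
print(inverted["Miami"])   # ['33149', '74354']
```

Note that, as the box below emphasizes, the list entries are primary keys, not copies of the data records themselves.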

[Figure: the index on City for the table with primary key ZIP: the inverted list for Miami has grown too long, so it has been relocated into its own small B* tree holding the ZIP codes 33149 and 74354, and the city index references that tree.]

Figure 5.8 Additional Index

Important!

Generally, SAP MaxDB stores data only in the B* tree of the primary key. In a B* tree of a secondary index, the inverted lists don’t store the values again but create references to the primary key. These references contain the entire primary key of the referenced data record. This is particularly critical for the selection of the access strategy and thus for the acceleration of data accesses.

The execution costs indicate how important it is to optimally support requests using high-performance — that is, selective — indexes. Without index support, execution can be more expensive, up to 1,000 times more in some cases. Conversely, this means that the cost of an expensive SQL statement may be reduced to a thousandth by optimizing the indexes and/or changing the statement. Note, however, that additional indexes also require resources, because when changes are made to the data you must also maintain them and store them in the data cache. As a result, you should first check the statement and the code of the application to determine whether you can solve or alleviate the problem there.


5.3 The Database Optimizer

The maintenance and provision of effective indexes is important for high-performance queries. A program in the database, the optimizer, decides whether an index is used or — if there are multiple indexes — which index is used to search for data. Performance can depend significantly on how the database processes requests. To illustrate these processes, the following sections first introduce the database optimizer, which is also often referred to as the SQL Query Optimizer. They describe the basic properties of the optimizer and explain which criteria are used to evaluate indexes. Furthermore, they introduce the most important strategies using typical examples of SQL queries and discuss why the optimizer chooses them.

5.3.1 Basic Principles

The execution plan is created by a database program, the database optimizer. Two types exist: the Rule Based Optimizer (RBO) and the Cost Based Optimizer (CBO). Of the database systems certified for use with SAP, only Oracle lets you use an RBO; all others use a CBO. The following sections therefore illustrate the steps and behavior of a CBO.

A CBO decides which strategy is used to access data. The system first determines all possible access strategies and then their costs, which derive from the number of page accesses. Among others, the following criteria are used as a basis for a decision of whether an index is used:

- Storage on the physical medium: How effective an index is depends on the distribution of the data across the storage medium. If the data is highly distributed, the system needs more slow read accesses than would be necessary if it could read a lot of the required data with one read access.

- Distribution of the field content: The database optimizer also considers the distribution of the searched field content within a table, because it's critical for the decision whether the content is evenly distributed across the table or stored in clusters.

- Number of different values of indexed fields: The more different values an indexed field contains, the more efficient the corresponding index and the higher its selectivity. Selectivity refers to the number of different values of a column in relation to the total number of rows. The literature says that the database optimizer only uses indexes if this reduces the dataset to be scanned to around 5–10%.

- Table size: If the tables are small, it may be less expensive to scan the entire table because this reduces the number of read accesses (that is, the costs).
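As a rough illustration of selectivity (invented data, plain Python rather than MaxDB), the ratio of distinct values to total rows can be computed directly:

```python
# Invented sample rows: (first name, last name, birth date).
rows = [
    ("John", "Doe", "10/12/1970"),
    ("Jane", "Doe", "03/01/1982"),
    ("John", "Ray", "07/22/1975"),
    ("Anna", "Lee", "11/05/1990"),
]

def selectivity(rows, col):
    # Selectivity: number of distinct values relative to the total row count.
    distinct = len({r[col] for r in rows})
    return distinct / len(rows)

print(selectivity(rows, 0))  # first names: 3 distinct of 4 rows -> 0.75
print(selectivity(rows, 2))  # birth dates: 4 distinct of 4 rows -> 1.0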

Using Optimizer Statistics

The SQL database optimizer uses optimizer statistics only for joins or operations on views to select the appropriate execution strategy. Views are usually tables that are linked via particular columns; this means that, technically speaking, they are also joins.

In part, the database stores this information for the optimizer statistics in the internal file directory itself. The creation and updating of additional statistical information on the existing database tables must be initiated by the database administrator. The information is then stored in the database catalog. You should update these statistics at least once a week or, at the latest, when the content of a table has significantly changed. You can update the statistical information manually or automatically using the Database Manager GUI or directly via the command line. Note that only the first 1,022 bytes of a column value are considered. This may lead to small uncertainties if the column values match in the first 1,022 bytes.
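The 1,022-byte limitation can be illustrated directly; the constant comes from the text above, and the values are invented:

```python
# Two values that differ only after byte 1,022 look identical to the
# statistics, because only the first 1,022 bytes are considered.
TRUNC = 1022
a = "x" * TRUNC + "A"
b = "x" * TRUNC + "B"

print(a != b)                    # the full values differ
print(a[:TRUNC] == b[:TRUNC])    # the considered prefixes do not
```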

The DBMGUI enables you to create these statistics for single tables or all required tables, as well as for all tables for which creating statistics is possible. Figure 5.9 shows the dialog box in which you can configure the necessary settings.

Figure 5.9 Settings for Updating the Optimizer Statistics in the DBMGUI

To navigate to the screen displayed in Figure 5.9 and update the optimizer statistics, proceed as follows:

1. In the DBMGUI, connect to the database instance.

2. Select Instance • Tuning • Optimizer Statistics.

3. Select the desired tables.

4. Start the search by selecting Search in the Actions menu item.

5. Configure the update process.

6. Start the update via Actions • Execute.

The three columns Search, Estimate, and Advanced serve to configure the update process of the optimizer statistics. If you use the default settings, the system lists all tables for which an update is required.

However, if you want to display all tables that can be updated, you must select the Select From Tables option in the Advanced area. If you want to do this for single tables, you can search for the respective table or a single column via Search.

Depending on the size of the tables and the level of distribution, you may have to change the scope of the sample in the Estimate column. For a size of 1 billion data records or more, SAP recommends setting the sample to 20% to obtain a sufficiently reliable result. In rare cases, you may have to increase the size of the sample to 100%. If you want to exclude a table from the update run, you can do so by specifying a value of 0% for this field.

As already mentioned, you can also have the system schedule the update of the optimizer statistics automatically. Figure 5.10 shows the screen in which you can confi gure this setting.

Perform the following steps:

1. In the DBMGUI, connect to the database instance.

2. Select Instance • Automatic Statistics Update… .

3. Click on the On button.

The columns and tables that are listed in the SYSUPDSTATWANTED system table are now event-controlled; that is, the optimizer statistics are automatically updated.

Figure 5.10 Automatically Updating the Optimizer Statistics in the DBMGUI


You can also carry out these functions manually at the command line. The update_statistics_statement uses the parameters outlined in Table 5.3.

Parameter             Description
schema_name           Name of the database schema
table_name            Table name of a basis table
column_name           Column name
sample_definition     ESTIMATE <sample_definition> ::=
                        SAMPLE <unsigned_integer> ROWS
                        | SAMPLE <unsigned_integer> PERCENT
AS PER SYSTEM TABLE   Causes the statistics for all tables that are listed
                      in the SYSUPDSTATWANTED system table to be updated
identifier            Name of a basis table

Table 5.3 update_statistics_statement Parameters

Note that for this statement, a user can only update tables and fields for which he has access rights. You can then select the statistics values from the OPTIMIZERINFORMATION system table. Here, each row maps the statistics values of indexes, columns, or sizes of a table.

To update the optimizer statistics for all basis tables, proceed as follows:

1. Connect to the database instance with:

/opt/sdb/programs/bin/dbmcli -u <SYSDBA user>,<password> -d <database> [-n <database_host>]

2. Update the statistics of all tables:

UPDATE STATISTICS *

You can manually control the number of data records that should be analyzed for each table by passing a sample_definition to the Estimate parameter. This enables you to configure how many table rows or what percentage of the table or column values the system scans. If you don't specify a sample_definition, the system selects the data records at random.

The size of the sample may considerably affect the runtime of the update run. If you don't specify this parameter, the system imports the size of the sample from the definition of the table. You should thus also consider this aspect when creating tables, because it's critical for the performance of the database. Because tables and their usage can change over time, you can also change or correct this value retroactively using the Alter Table statement. You can also exclude a table from the entire optimization run by setting the size of the sample to 0 using the Alter Table statement. If you don't specify a value for the Estimate parameter, the system scans the entire table, which may lead to long runtimes for comprehensive tables.
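The trade-off behind the sample size can be sketched in plain Python (invented demo data, not MaxDB code): a 20% sample is far cheaper to scan than the full table, at the price of an approximate result.

```python
import random

# Invented demo data: 100,000 "rows" drawn from 500 distinct values.
random.seed(0)
table = [random.randrange(500) for _ in range(100_000)]

def sample_fraction(table, percent):
    # Draw a random sample sized as a percentage of the table.
    n = len(table) * percent // 100
    return random.sample(table, n)

full_distinct = len(set(table))        # exact statistics: full scan
sample = sample_fraction(table, 20)    # 20% sample, as suggested for huge tables
sample_distinct = len(set(sample))     # approximate statistics: 1/5 of the work

print(full_distinct, sample_distinct)
```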

If you use the AS PER SYSTEM TABLE option of UPDATE STATISTICS, the system updates the statistics of the tables that are listed in the SYSUPDSTATWANTED system table (similar to the variant with the DBMGUI). When this process completes successfully, the system deletes the table names from this system table.

To schedule the update of the optimizer statistics automatically via the command line, you can use the auto_update_statistics statement:

1. Connect to the database instance with:

/opt/sdb/programs/bin/dbmcli -u <SYSDBA user>,<password> -d <database> [-n <database_host>]

2. Start the automatic, event-controlled update process:

auto_update_statistics <mode>

Three modes are available for the update:

- On: Enables the automatic update function. Note that this is event-controlled and based on the frequently mentioned SYSUPDSTATWANTED system table. Because this DBM command also requires a separate event task, ensure that the size of the _MAXEVENTTASKS database parameter is sufficient.

- Off: Disables the automatic update function.

- Show: Returns the current status of the automatic update function; possible values include:

  - On: The automatic update function is enabled.
  - Off: The automatic update function is disabled.
  - Unknown: The system couldn't determine the status of the automatic update function.


5.3.2 Criteria for Selecting Specific Access Strategies

Only for join operations are up-to-date optimizer statistics critical for the optimizer to select the correct access strategy. This section illustrates several significant query examples and describes why the respective access strategy has been selected.

Which access strategy is selected depends on numerous factors:

- What kind of query is it; that is, between which columns does the WHERE clause differentiate?

- Do indexes exist, and what selectivity do they have?

The optimizer considers all of these aspects when it selects the access strategy.

The Sample Table

A table (Table) with seven columns (Column1 to Column7) that has a primary key of three columns (Column1, Column2, Column3) and an additional index for the fifth column (Column5) will serve as an example. The columns of the primary key have different selectivity: Column1 has a very low selectivity, while Column3 has a very high selectivity. Column2 has an average selectivity. Column5, which has an additional index, has a very high selectivity, similar to Column3.

Access via the Primary Key

For queries on tables, you should, in general, use all fields of the primary key in the query:

select * from table where Column1 = 'John' AND Column2 = 'Doe' AND Column3 = '10/12/1970'

This query is executed with the equal condition for key column execution strategy, that is, the system accesses the required data record(s) via the primary key. Because the data is also physically stored according to the order of the primary key, the primary key is ideal for supporting queries that don’t use all fields of the primary key.

select * from table where Column1 = 'John' AND Column2 = 'Doe'


For this query, the system also uses the primary key. In this case, due to the physical arrangement of the data according to the primary key, the system can access the data via the first two key fields and identify the required data records in the primary key index, which includes all fields of the primary key. The strategy that implements this behavior is called range condition for key column.
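A hedged sketch of this idea in SQLite (not SAP MaxDB; the table and column names follow the invented example): a query on a leading prefix of a composite primary key can still be answered via the key rather than a full scan.

```python
import sqlite3

# WITHOUT ROWID makes SQLite store rows in primary-key order,
# loosely analogous to the physical key order described in the text.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (Column1 TEXT, Column2 TEXT, Column3 TEXT, Column4 TEXT, "
    "PRIMARY KEY (Column1, Column2, Column3)) WITHOUT ROWID"
)
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                 [(f"a{i % 10}", f"b{i % 100}", f"c{i}", "v") for i in range(1000)])

# Only the first two of the three key columns appear in the WHERE clause.
plan = " ".join(row[-1] for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE Column1 = 'a1' AND Column2 = 'b11'"))
print(plan)
```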

Primary Key versus Index

However, the execution plan mentioned isn't necessarily effective. In many tables in the SAP environment, the client is part of the primary key. If a system only has one client, which is often the case for BI, a query for all users from client "800" with street "Main Street" may result in a full table scan:

select * from table where Column1 = '800' AND Column4 = 'Main Street'

For this query, the range condition for key column strategy is used, but the system has to scan all data records of the table. You can accelerate this query significantly by using an additional index for the Column4 column. This index would likely have a high selectivity. A major advantage of an index for Column4 is the structure of secondary indexes: You can use the values of the primary key, which are stored in the secondary index, to select the data. In this example, if you create a secondary index for Column4, the access strategy wouldn't use the primary key. Instead, the access takes place via the index for Column4 with the equal condition for indexed column strategy.

It’s also possible that the system uses the index for Column4 for the access, despite its presumably bad selectivity and the very high selectivity of the column Column1. This is the case when, during the check of the various access strategies, the system determines that Column4 doesn’t contain the searched value and that the result set therefore is empty.

Access Strategies

This chapter has distinguished between two strategies so far: The equal condition for indexed column strategy is a search strategy that evaluates data in a comparison operation but uses an inverted list. This strategy directly addresses table entries. For the range condition for key column strategy, the system scans portions of the table sequentially. In addition to the search strategies discussed here, you can view additional strategies using the Explain statement.


Index versus Full Table Scan

The system uses a full table scan if the query isn't sufficiently supported by the primary key or additional indexes. A full table scan is also used if a table is very small and the system needs to load fewer pages than it would for access via an index. After all, accessing an index also incurs costs, and for small tables the system still has to scan all data records.

To support queries, you can often avoid full table scans by using even fields that, individually, have a very low selectivity:

select * from table where Column4 = 'Financial Accounting' AND Column6 = 'Team Lead'

To considerably accelerate the execution of this statement, you can use a composite index for the columns Column4 and Column6. Individually, each column has a very low selectivity; however, in combination, they can represent an acceptable decision criterion. As a result, this index can provide a sufficiently high selectivity to increase performance compared to a full table scan when accessing data. For small tables, you can determine this by proceeding as follows:

1. Open an SQL dialog via SQL Studio or via the dbmcli tool.

2. Enter the following statement:

Select distinct Column4,Column6 from table

The statement provides all combinations of the values of the two columns, Column4 and Column6. If the result set contains many values, you can assume that an index for these columns has enough selectivity.
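The distinct-combination check described in the steps above can be mimicked in plain Python (invented sample rows; Column4/Column6 follow the example):

```python
# Invented sample data for (Column4, Column6).
rows = [
    ("Financial Accounting", "Team Lead"),
    ("Financial Accounting", "Clerk"),
    ("Sales", "Team Lead"),
    ("Sales", "Clerk"),
    ("Sales", "Assistant"),
]

col4_distinct = len({r[0] for r in rows})     # 2 values: poor selectivity alone
col6_distinct = len({r[1] for r in rows})     # 3 values: poor selectivity alone
combined = len({(r[0], r[1]) for r in rows})  # 5 combinations: better together

print(col4_distinct, col6_distinct, combined)
```

The more distinct combinations the composite yields relative to either column alone, the stronger the case for the composite index.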

Joins

Joins are database queries that link several tables using the values of one or more columns. It would go far beyond the scope of this book to describe the execution strategies for joins or queries on database views (equivalent to join queries). Remember that optimizer statistics assume a central role in selecting the execution strategy. Although the statistics aren't used to access basis tables, they form a critical basis for the decision on the execution strategy of joins. If you come across unexpected execution strategies when analyzing joins or queries on database views, obsolete optimizer statistics may be the reason. In this case, update the statistics for all tables that are used by the join or view.


Furthermore, you should generally provide an index with a sufficient selectivity for those table columns you want to use for a join. If this is impossible, you can also create an index across several columns, which, due to the combination, provides sufficient selectivity. However, you then have to adapt the join condition to the new index. Unfortunately, there is no definite solution to this problem, because each problem usually has several, often very individual, approaches to its solution.

5.4 Caches

Among other things, the caching strategies used at the database level are also responsible for the high access speeds of today’s database systems. An incorrect configuration of these caches can have very negative effects on performance. This section again introduces the various caches of SAP MaxDB and their use and describes how you can analyze the optimal hit ratio. In addition, it covers the problem of the appropriate cache size.

5.4.1 Background

"Disk access is excruciatingly slow." This statement from Database – Principles, Programming and Performance by O'Neil (2001) captures the core problem responsible for the existence of caches. To read data from a hard disk, the read/write heads must first be positioned on the right track; this is called the seek time. Because of the rotation, the read head then has to wait until it's positioned above the correct page. This time is also called response time. It's followed by the read time, also called transfer time, during which the required pages are read. Because all of these processes are mechanical actions, the access time is "painfully" slow compared to main memory access time.

In a direct comparison, reading several thousand bytes from disk takes approximately 0.003 seconds. The same amount of data can be loaded from main memory in about 0.00000001 seconds. It's thus beneficial to keep data you need frequently in main memory, in caches. However, main memory doesn't ensure persistent storage of data, because it's lost in the event of power outages or when the computer is shut down. Because there is less space available in main memory than on hard disks, how to optimally assign main memory space to different applications is an issue.
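The two figures can be put in relation with a one-line calculation; the access times are the rough values from the text, not measurements:

```python
disk_access = 0.003       # seconds, several thousand bytes from disk
ram_access = 0.00000001   # seconds, the same amount from main memory

ratio = disk_access / ram_access
print(ratio)  # main memory is roughly 300,000 times faster here
```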


5.4.2 The Various Caches

SAP MaxDB uses three caches: I/O buffer cache, catalog cache, and log I/O queue. These caches are divided into different regions to enable parallel access and thus increase the write rate. When a region is accessed, it's locked against usage by a different user task. Collisions for access to regions lead to wait times until the regions are released; frequent collisions indicate a heavy CPU load. Usually, these locks are released within 1 microsecond. However, if the processor is experiencing a high load, the operating system dispatcher may withdraw the CPU from the user kernel thread (UKT) while the UKT still holds a lock. This increases the risk of collisions or queues.

Data Cache

Due to the large data caches, more than 98% of the read and write accesses in today's live SAP MaxDB installations are processed via the cache. Because it's very likely that the data in the cache will be modified again, all data changes are performed in the cache and made persistent by defining an entry in the redo log. The system then writes the data records from the data cache to the data volumes, and thus to the disk, at regular intervals. If the system can't find data in the cache, it reads the entire page from the data volumes and writes it to the data cache so that the page can be reused from there. Because access to data in the data volumes is very slow and consequently expensive, a maximum data cache hit rate is always beneficial.

A hit rate of 99% or more is nevertheless not a sufficient criterion, because the large number of statements that are processed via the data cache can hide a transaction with low performance. If a single statement has to load 10 pages with 1,000 data records to read a record, and the next 990 queries can then be processed from the cache, the hit rate is 99%; still, this single statement has low performance. As long as enough physical main memory is available, the size of the I/O buffer cache should be as large as possible, because the data read times in a large cache do not differ from the read times in a small cache. However, the "risk" of physical data accesses is reduced.
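The worked example above, restated numerically:

```python
# 10 physical page reads for one slow statement,
# 990 further requests answered from the cache.
cache_hits = 990
physical_reads = 10
total_accesses = cache_hits + physical_reads

hit_rate = cache_hits / total_accesses * 100
print(hit_rate)  # 99.0 -- yet the statement behind the 10 reads is slow
```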

Several reasons can exist for the data cache hit rate to remain below 99% over a long period of time. In most cases, the cache is too small and/or the SQL statements are inefficient. Section 5.5, Analysis Tools, describes how you can determine the cause.


Converter Cache

Because the database uses only logical pages, a mechanism is required that assigns logical pages to physical pages on the hard disk. The converter is responsible for this. The system imports the entire assignment table into the cache when the instance starts. You can't configure the size of this cache, because the system automatically assigns the required size at startup. If memory requirements increase during operation because new data volumes were dynamically added, the I/O buffer cache assigns memory to this cache.

Catalog Cache

The catalog cache stores SQL statement information. This includes information on the parse process, input parameters, and output values. If the SHAREDSQL parameter has the value "no," the system stores these values for each user individually; if the same SQL statement is triggered by various users, the system then stores the statement several times. For each user task, the system reserves a specific area in the catalog cache and releases it as soon as the user session is completed. If this cache has reached its maximum fill level, the system moves the information to the data cache. The catalog cache should have a hit rate of more than 90%.

OMS Cache

The OMS cache is only used in the MaxDB liveCache instance type. This cache stores and manages data in a heap data structure, which consists of several linked trees. In this context, the system stores local copies of the OMS data, which are written to the heap when the system accesses a consistent view for the first time. The database system copies the data of each OMS version to the heap when it's read. To read a persistent object, SAP MaxDB first scans this heap. If it doesn't find the object, it scans the data cache. Finally, the system writes the searched data from the data area to the data cache and then to the heap. Here, the heap serves as a work area where the data is changed and rewritten to the data cache when a COMMIT is triggered. Because this buffer assumes a central role for liveCache instances, you should provide it with memory generously, within the scope of your hardware capabilities.


Log I/O Queue

To avoid having to write data changes directly to the data volumes (which has a negative effect on the performance of write processes), the system stores data changes in a redo log. The system writes to this redo log sequentially, which leads to write processes with high performance. Because the system stores all data changes in the redo log, you must use high-performance disks for this log volume. To accelerate the write processes to the redo log, the system caches them in log queues. The MAX_LOG_QUEUE_COUNT parameter defines the maximum number of log queues. The database, or the administrator using the LOG_QUEUE_COUNT parameter, determines how many queues are used. The LOG_IO_QUEUE parameter defines the size of the log queue(s) in pages of 8KB.

The problem of the appropriate memory size applies to this cache as well. It should be large enough to buffer write process peaks in the redo log. The Database Analyzer, described later, enables you to determine whether log queue overflows have occurred. These indicate that the log queue is full before the system can write the data to the log volumes. Such situations lead to performance bottlenecks. In this case, check the hardware speed. If the hardware speed is too low for the amount of data that should be processed, expanding the log queue only delays the overflow situation. To avoid this situation, you can use the MaxLogWriterTasks parameter to increase the number of tasks that can simultaneously write data to the log volumes. If you combine this with locating the log volumes on different hard disks, you increase performance and thus prevent log queue overflows.

You can solve the log queue overflow performance problem by expand-ing the log queue only if the hardware on which the log volumes are located is fast enough overall and if the overflows occur as a result of single peaks of the dataset that should be processed.
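The argument above can be sketched with a toy simulation (plain Python, invented numbers, not MaxDB internals): if pages arrive faster than the log writer can flush them, a larger queue only delays the first overflow; if the writer keeps up, no overflow occurs at all.

```python
# Toy queue model; all rates and sizes are invented for illustration.
def first_overflow(queue_size, arrive_per_tick, flush_per_tick, ticks=10_000):
    filled = 0
    for t in range(1, ticks + 1):
        filled += arrive_per_tick                 # pages arriving in the log queue
        if filled > queue_size:
            return t                              # tick of the first overflow
        filled = max(0, filled - flush_per_tick)  # pages the log writer flushes
    return None                                   # no overflow in the window

# Writer slower than the arrival rate: a 10x larger queue only delays the overflow.
small = first_overflow(queue_size=50, arrive_per_tick=10, flush_per_tick=8)
large = first_overflow(queue_size=500, arrive_per_tick=10, flush_per_tick=8)
# Writer fast enough: no overflow.
fast = first_overflow(queue_size=50, arrive_per_tick=10, flush_per_tick=12)
print(small, large, fast)
```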

You can also determine the maximum number of log queue pages the system has used so far. This information indicates the quality of the con-figured log queue size. If this value is significantly below the number of available pages in the cache over a long period of time, you can release main memory for other applications or caches by decreasing the size of this cache. However, you should keep a margin of safety for possible load peaks.


5.4.3 The Appropriate Size of the Caches

An insufficient cache size has a negative effect on SAP MaxDB performance. As a rule of thumb, 66% of the entire main memory should be used by caches. If you configure more cache than is physically available on the hardware, this leads to swapping. This situation should be avoided at all costs because it decreases system performance. SAP MaxDB allocates the configured cache (the memory space in the main memory of the server) during startup, that is, at the beginning of the "Admin" phase. This means that the configured cache is no longer available for other applications. If you configure too much cache, this may lead to memory bottlenecks for other applications. In general, the following is true: As long as the system provides enough main memory, a cache that's too large doesn't do any harm. The duration of a search for a data record in main memory doesn't depend on the size of the cache.

5.4.4 The Most Important Information in Caches

This section is a reference to enable you to quickly obtain the necessary cache information. It explains how you can obtain critical cache values — such as their size and hit rates — in the SAP system, in the DBMGUI, and via dbmcli.

In the SAP system, Transaction DB50 provides a useful tool to acquire a quick and detailed overview of the current cache states. This is also possible using the DBMGUI. Unfortunately, requesting the cache status via dbmcli isn't particularly convenient. Nonetheless, it's described as a possible option.

Viewing Caches in Transaction DB50

Transaction DB50 (see Figure 5.11) provides detailed cache and cache utilization information. To navigate to this data, proceed as follows:

1. First, log on to the SAP system.

2. Call Transaction DB50 to display the current status of SAP MaxDB. Next, access the overview screen.

3. Now, follow the path Current Status • Memory Areas • Caches.

The top area of the overview displays the cache sizes as bytes and pages. These values are very useful because you can’t explicitly configure the size of some caches.


Figure 5.11 Cache Information Overview

Viewing Caches in the DBMGUI

In the DBMGUI, you can find the same values (see Figure 5.12) as in Transaction DB50 described previously. The only difference relates to the unit of the cache sizes: The DBMGUI uses megabytes, rounded to two decimal places, whereas Transaction DB50 displays the values in kilobytes. At first glance, the values seem to be different; however, this is due to the rounding and conversion. The values are in fact identical.

Figure 5.12 The Most Critical Cache Values as Displayed in the DBMGUI


To obtain this information, perform the following steps in the DBMGUI:

1. Double-click on an instance to connect to the database.

2. Next, open the cache overview via the Information • Caches menu path.

3. Use the Refresh button at the top to update the values because they may change during operation. The DBMGUI outputs the same data as Transaction DB50.

Viewing the Caches via dbmcli

You can also view the cache data via dbmcli at the command line. This, however, involves more effort, because the system provides the data mentioned in the two previous sections in tables, which must therefore be queried using SQL commands. This process is less user-friendly than in SQL Studio. Nonetheless, this section introduces these queries and their results using the dbmcli tool. The following SQL statement illustrates that some of the values are included in the IOBUFFERCACHES table. Because you can't explicitly configure the sizes of the data and converter caches, you can't obtain these values by outputting parameters. Instead, the database must provide them using tables.

To have the system display cache data, proceed as follows:

1. Connect to the database:

/opt/sdb/programs/bin/dbmcli -d MAXDB -n <host> -u <user>,<password>

2. Execute the following SQL command, which outputs the cache data. You don't have to place the statement inside quotation marks; simply write it after the sql_execute command:

/opt/sdb/programs/bin/dbmcli ON MAXDB> sql_execute
Select TOTALSIZE AS IOBUFFERCACHE_kB,
round(TOTALSIZE/8,0) AS TOTALSIZE_Pages,
DATACACHEUSEDSIZE AS DATACACHE_kB,
round(DATACACHEUSEDSIZE/8) AS DATACACHE_Pages,
CONVERTERUSEDSIZE AS CONVERTERCACHE_kB,
round(CONVERTERUSEDSIZE/8) AS CONVERTERUSEDSIZE_Pages,
(TOTALSIZE-DATACACHEUSEDSIZE-CONVERTERUSEDSIZE) AS MISC,
round((TOTALSIZE-DATACACHEUSEDSIZE-CONVERTERUSEDSIZE)/8,4) AS MISC_Pages
From IOBUFFERCACHES

Figure 5.13 shows sample output. It lists the individual selected values sequentially. However, this output is hard to read, and you must be able to interpret the values accordingly. You should consequently log the values at regular intervals to create analyses and to determine and eliminate bottlenecks at an early stage.

Figure 5.13 Result of an SQL Query on the Size of the Data and Converter Caches

Reading Additional Caches via dbmcli

In addition to the already described caches, you can directly configure the size of additional caches. As shown in Figure 5.14, you can easily read these sizes from the database parameters. Proceed as follows:

1. Connect to the database:

/opt/sdb/programs/bin/dbmcli -d MAXDB -n <host> -u <user>,<password>

2. Execute the following commands to output the current sizes of the caches:

param_directget CAT_CACHE_SUPPLY
param_directget SEQUENCE_CACHE


Figure 5.14 Reading the Sizes of the Remaining Caches from the Database Parameters

Reading Cache Hit Rates via dbmcli

Reading the hit rates of the various caches is much easier. To do so, you again need SQL, because the data changes dynamically during operation and is thus provided in tables by the database. To make the data easier to evaluate, the system provides descriptions of the individual values: The DESCRIPTION column contains a brief description of the respective value.

1. Connect to the database:

/opt/sdb/programs/bin/dbmcli -d MAXDB -u control,control

2. Execute the following command to output the current cache hit rates:

sql_execute select * from monitor_caches

The result of this query is illustrated in Figure 5.15. In contrast to the previous statements, this statement doesn't involve additional calculation work because the system can determine the hit rate from the ratio of successful accesses to the number of all accesses. This result is stored in the monitor_caches system table. The values of the OMS caches indicate that this example is not a liveCache instance: For example, the size of the OMS cache is zero.
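The hit-rate arithmetic described above can be illustrated with a short helper. This is a sketch for illustration only; the rounding to two decimals and the parameter names are assumptions, not the actual monitor_caches column layout.

```python
# Illustration of the hit-rate calculation behind monitor_caches:
# the rate is the share of successful accesses among all accesses.
def hit_rate(successful, total):
    """Cache hit rate in percent; 0.0 when the cache was never accessed."""
    if total == 0:
        return 0.0
    return round(successful * 100 / total, 2)
```

For example, 950 successful accesses out of 1,000 total accesses yield a hit rate of 95 percent.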

Displaying hit rates via dbmcli


Page 35: Sappress Max Db Admin is Strati On

235

Caches 5.4

Figure 5.15  Cache Hit Rates from the monitor_caches Table

5.4.5  Critical Region Statistics

The caches are divided into different access areas — also referred to as critical regions — to accelerate concurrent accesses that use locks for data areas. This section describes how you identify critical regions using the most important tools and transactions.

Critical Regions in Transaction DB50

You can use Transaction DB50 to display critical regions as a table. Figure 5.16 shows sample output.

Recognizing critical regions


Page 36: Sappress Max Db Admin is Strati On

236

5 Performance Tuning

Figure 5.16  Statistics of the Critical Regions in Transaction DB50

To navigate to an overview such as the one shown in Figure 5.16, proceed as follows:

1. Log on to the SAP system.

2. Start Transaction DB50.

3. Navigate to the overview of critical regions via Current Status • Critical Regions.

If you determine that the collision rate shown in the overview is too high, you should take appropriate countermeasures, such as increasing the size of the cache.

Displaying Critical Regions via dbmcli

Like the data on cache sizes, the data on access statistics for critical regions isn't static but is logged regularly by SAP MaxDB and provided in aggregated form in the REGIONSTATISTICS table.

To have the system display the region data via the command line, proceed as follows:

1. Connect to the database:

/opt/sdb/programs/bin/dbmcli -d MAXDB -u control,control

2. Execute the following command to output the current region statistics:

dbmcli> sql_execute

Displaying critical regions


Page 37: Sappress Max Db Admin is Strati On

237

Analysis Tools 5.5

select REGIONID AS ID, REGIONNAME AS Name,
round((COLLISIONCOUNT*100)/ACCESSCOUNT,2) AS CollisionRate,
WAITCOUNT AS Waits, ACCESSCOUNT AS Accesses
from REGIONSTATISTICS where ACCESSCOUNT > 0

This example uses the WHERE condition to exclude all rows that would result in a division by zero. This doesn't affect the information content: the system divides by the value of the ACCESSCOUNT column, and if this value is zero, the critical region hasn't been accessed and thus didn't cause any wait times. Figure 5.17 shows the output of this SQL statement.

Figure 5.17  Critical Region Access Statistics
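The same guard against a division by zero can be sketched outside SQL as well. The tuple layout below (name, collision count, access count) is a hypothetical illustration, not the actual REGIONSTATISTICS schema.

```python
# Sketch mirroring the SQL above: compute collision rates per critical region
# and, like the WHERE ACCESSCOUNT > 0 condition, skip never-accessed regions.
def collision_rates(rows):
    """rows: iterable of (region_name, collision_count, access_count)."""
    rates = {}
    for name, collisions, accesses in rows:
        if accesses > 0:  # mirrors WHERE ACCESSCOUNT > 0
            rates[name] = round(collisions * 100 / accesses, 2)
    return rates
```

A region that was never accessed simply does not appear in the result, just as it is filtered out of the SQL output.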

5.5  Analysis Tools

When you have correctly configured all indexes and sufficiently sized all caches, it may be possible that — due to data growth and changes in usage

Avoiding a division by zero


Page 38: Sappress Max Db Admin is Strati On

321

Index

32-bit, 14

A

ABAP stack, 90
Absolute path, 81
ACTION, 126
ADABAS, 12
ADA_SQLDBC, 107
After image, 67
Analysis tool, 237
appldiag, 285
Application level, 91
Application server, 89
Area of use, 16
AS ABAP, 90
Asdev thread, 58
AS Java, 90
Automatic log backup, 139, 174

B

Backup, 163
  Backup strategy, 166
  Backup types, 164
  Call state, 173
  Check last backup, 183
  Check medium/template, 184
  Check via the dbmcli, 184
  Check via the DBMGUI, 182
  Concepts, 163
  Duration, 172
  History, 165, 180, 191
  Implement via the dbmcli, 172
  Implement via the DBMGUI, 171
  Incremental, 171
  State of check, 184
  Template, 166

Backup medium, 166
  Create via the dbmcli, 169
  Create via the DBMGUI, 167
  Delete, 170
  Delete via the dbmcli, 170
  Delete via the DBMGUI, 170
  Parallel medium, 168
  Properties, 168
  Types of media, 166

Bad index, 193
  Recognize, 193
  Remove via the DBMGUI, 194
  Resolve via the dbmcli, 195

Before image, 67
B* tree, 65, 210, 211

C

Cache, 63, 226, 230
  Hit rate via the dbmcli, 234
  Size, 70, 230

CacheMemorySize, 141
Catalog cache, 69, 227, 228
Central user administration, 54
Client-server architecture, 92
Command, 253
Command Monitor, 254, 256, 276

  Configuration, 256
Command reference, 35
Component group, 116
Configuration file, 86, 87
Configuration type

  Custom, 133, 135
  Desktop PC/Laptop, 133
  My Templates, 133

Configuration Type, 133
Configuration type My Templates, 139
Consistency check, 196
  Check database structure via the dbmcli, 199
  Database structure, 197

Console thread, 57
Control user, 128
Converter, 64, 65
Converter cache, 228
Cooperative multitasking, 60
Coordinator thread, 56
Critical region, 235
Cyclical writing, 73

D

Data area extension, 298


Page 39: Sappress Max Db Admin is Strati On

322

Index

Database
  Activate, 142
  Assistant, 103
  Configure, 146
  Console, 106
  Create on command line, 140
  Create via GUI, 132
  Create via script, 143
  Delete via the dbmcli, 202
  Delete via the Installation Manager, 200
  Instance, 55, 56, 84
  Level, 92
  Operators, 47, 48
  Parameter, 84
  Plan, 130
  Trace, 106

Database Analyzer, 42, 238, 268
  Configuration file, 241
  Log file, 247
  Start via the command line, 240
  Start via the dbmcli, 239
  Start via the DBMGUI, 238

Database manager
  Operator, 46, 136

Database Manager
  CLI, 34, 84
  GUI, 30, 83

Database Monitor, 112, 113
Database optimizer, 218

  Cost based optimizer, 218
  Optimizer statistics, 219
  Rule based optimizer, 218
  Size of the sample, 221
  Updating optimizer statistics, 219

Database Studio, 27, 29, 83
Database system administrator, 138, 201
Data cache, 64, 227
Data export, 43
Data import, 43
Data record lock, 300
Data transport, 43
Data volume, 71, 131

  Add via the dbmcli, 148
  Add via the DBMGUI, 146
  Adjust, 138
  Create, 137
  Create a dynamic data volume, 149
  Delete, 153
  Properties, 147
  Volume restriction, 131

Data warehouse, 22

DB50, 230, 235, 268
DBA, 49
DBA history, 109
DBA Planning Calendar, 106, 110
dbmcli, 34
dbm.ebl, 289, 303
dbm.ebp, 289, 303
DBMGETF, 44
DBMGui, 30
dbm.knl, 288
DBM operator, 47
dbm.prt, 285
dbm.utl, 287
DB Time, 266
Dependent program path, 80, 81, 120
Dev thread, 57
Dev trace, 282
Diagnosis file, 281
Directory structure, 80, 82
Dispatcher, 90
Documentation, 17
Drill down, 22

E

Equal condition for index column, 224
Event, 62
Exclusive lock, 79
Execution costs, 217
Execution plan, 208
EXPLAIN, 260

F

FILE, 76
File directory, 64, 66
Full table scan, 225

G

Garbage collector, 25
GETDBROOT, 45

H

Hard disk, 55
History of origins, 11
Hot standby, 33

I

Independent data path, 80, 81, 120
Independent program path, 80, 81, 120, 125, 147


Page 40: Sappress Max Db Admin is Strati On

323

Index

Indexes, 208
  B* tree, 209
  Execution costs, 217
  Inverted list, 215
  LONG values in B* trees, 213
  Primary key, 211
  Secondary key, 211

Installation
  In the background, 121
  In the dialog, 116
  Log file, 125
  Manager, 123, 200
  Phase, 119
  Troubleshooting, 125
  Type, 124

Installation profile, 117, 118
INSTALLER_INFO, 125
Instance type, 21
Interface, 15, 96
Inverted list, 215, 216
I/O buffer cache, 24, 64, 227
I/O worker thread, 58
IPC (Inter-Process Communication), 99
Isolation level, 79

J

Java stack, 90
JDBC interface, 38
JDBC (Java Database Connectivity), 97
Joins, 225

K

Kernel, 55
  Thread, 56
  Trace, 291
  Variant, 84

knldiag, 286
knldiag.err, 287
knldump, 292
KnlMsg, 286
KnlMsgArchive, 287
knltrace, 291

L

License, 12
LINK, 76
Linux, 14, 28
liveCache, 23, 228
Load analysis, 266
Load balancing, 62

Loader, 42
Lock escalation, 79, 80
Lock list, 79
Log backup, 74, 111, 174

  Automatic, 174
  Implement via the dbmcli, 177
  Implement via the DBMGUI, 175

Log file, 247
Log full, 75
Log I/O queue, 227, 229
Log mode

  Configuration, 156
  Overwrite mode, 156
  Redo log management, 157

Log partition, 73
Log queue, 67
Log segment, 75
Log volume, 72

  Adjust, 138
  Create, 137
  Create via the DBMGUI, 150
  Create via the dbmcli, 151
  Mirror, 154
  Overwrite mode, 142
  Properties, 150
  Reintegrate mirrors, 192

Log writer, 62

M

Main memory, 55
MaxCPUs, 59
Microsoft Windows, 14
Mirroring, 74
Monitor, 244
MSG, 126

N

.NET Wrapper, 98

O

Object identifier, 24
ODBC, 96
Offline mode, 129
OLAP cubes, 22
OLAP (Online Analytical Processing), 16, 21, 22
OLTP (Online Transaction Processing), 16, 21
OMS cache, 228
OMS heap, 25


Page 41: Sappress Max Db Admin is Strati On

324

Index

OMS (Object Management System), 24
One-layer architecture, 92
Operating systems, 14
Operating system user, 52
Operational state, 33, 83
Optimistic lock, 79
Optimizer statistics, 219, 220, 225
Optimizer types, 218
Overwrite mode, 139

P

Page, 24
Page chain, 24
Pager, 62
Parallelization, 61
Parameter

  Change, 159
  Change via the dbmcli, 161
  Change via the DBMGUI, 160
  Commit, 141
  Copy, 162
  Copy to another database, 162
  Group, 85
  Initialize, 141
  _IOPROCS_PER_DEV, 130
  Parameter category, 160
  Session, 86
  Start parameter session, 140

Parameter initialization, 136
  Copy parameters from existing database, 136
  Initialize parameters with default value, 136
  Restore parameters from a backup, 136
  Use current parameters, 137

Performance, 208
Perl, 99
pgm/kernel, 129
PHP, 99
Pointer, 24
Port, 100
Position index, 24
Preparing phase, 119
Presentation level, 91
Primary key, 223
Problem situation

  Connection problems, 294
  Data full situation, 295, 297
  Hardware error, 303

  Log full situation, 295
  System blockade, 300
  System crash, 299

Python, 99

R

RAID, 71, 131
Range condition for key column, 224
RAW, 76
RAW device, 132
Recovery, 186

  Implement via the dbmcli, 190
  Implement via the DBMGUI, 188
  Strategy, 187
  Type, 186
  With initialization, 188, 191
  Without initialization, 191

Requestor thread, 56
Resource Monitor, 249, 250, 251, 271, 272
ROLAP (Relational OLAP), 22
Role concept, 50
Roll up, 22
root, 53
Root page, 211
rtedump, 290
RUNDIRECTORY, 163

S

SAP, 94, 96
SAP architecture, 89
SAPCAR, 116
SAP CCMS, 103, 112
SAP Content Server, 16
SAP DB, 12
SAP Developer Network, 17
SAPInst, 72, 144

  Error case, 146
  Log file, 145
  Log file for MaxDB installation, 145
  Phases, 144

SAP landscape, 101
SAP NetWeaver AS, 263, 267
SAProuter, 101
SAP standard user, 51
Savepoint, 64, 71, 76
sdb, 53
sdba, 53
SDBINST, 115, 116, 121


Page 42: Sappress Max Db Admin is Strati On

325

Index

SDBREGVIEW, 44
SDBSETUP, 115, 123, 204
SDBUPD, 126, 128, 129
Search criterion, 217
Security aspects, 54
Selectivity, 258
Sequence cache, 70
Server landscapes, 29
Server software

  Uninstallation, 203
  Uninstallation via sdbuninst, 205

Server task, 61
Service session, 184
Servlet container, 38
Shadow page mechanism, 64
Shared lock, 79
Shared SQL cache, 69
Slice and dice, 22
Snapshot, 177

  Create via the dbmcli, 179
  Create via the DBMGUI, 178
  Delete via the dbmcli, 181
  Delete via the DBMGUI, 181
  Functionality, 178
  Revert via the dbmcli, 181
  Revert via the DBMGUI, 180

Software component group, 117
sql6, 100
sql30, 100
SQL CLI, 36
SQLDBC, 97
SQLDBC trace, 283
SQLDBC Trace, 107
SQL editor, 29
SQL Explain, 260
SQL interface, 102
SQL mode, 15
SQL Studio, 35, 106
SQL trace, 282
SQL user, 49
Standard SAP user, 94
Star schema, 22
STDIN, 125
STDOUT, 125
Striping, 72
Support groups, 54
SYS, 125
SYSDBA user, 46, 50
SYSDB user, 45
SYSMONITOR, 254, 255

System table, 129, 142, 158
  Load, 159
System table category, 158

T

Table editor, 29
TCP port, 32
Template, 134
Three-layer architecture, 92
Timer, 62
Timer thread, 57, 63
Tomcat, 38
Trace file, 108
Trace writer, 62
Transaction, 102

  CCMS, 304
  DB12, 108
  DB13, 109
  DB50, 103, 230, 267, 271, 276, 279
  DBCO, 95
  RZ20, 112
  ST03N, 264, 266

Transaction profile, 264
Tutorial data, 139
Two-layer architecture, 92

U

Uninstallation Summary, 204
UNIX, 14
Update, 126
Upgrade, 129
User kernel thread, 61
User rights, 47, 48
User task, 60, 94
User type, 46
Utility, 62
Utility session, 177, 191

  Open, 173

V

Version name, 14, 15
View, 242
Visual query editor, 29
Vwait, 301

W

Watchdog process, 58
Web Database Manager, 38


Page 43: Sappress Max Db Admin is Strati On

326

Index

Web SQL, 37
Work process, 94

X

X_CONS, 39, 59
XINSTINFO, 44

X_PING, 45
X Server, 27, 100, 127, 128
X Server log, 284
XUSER, 40

Marty McCormick, Matt Stratford

Content Integration with SAP NetWeaver Portal

A must-read for SAP professionals who are looking to take their NetWeaver Portal implementations to the next level. Using this book as your exclusive guide, explore the various architectural and developmental impacts of implementing NetWeaver Portal content for several SAP applications, including Composite Applications and Business Packages. Readers will learn the intricate details of Federated Portals and where it makes sense to use them in portal content integration scenarios such as Business Intelligence, Employee & Manager Self Service, SRM, and more. In addition, readers can leverage the

book's examples as a basis for their own specific requirements.

388 pp., 2008, 79,95 Euro / US$ 79.95

ISBN 978-1-59229-226-4

>> www.sap-press.de/1842

Gain expert insights on the various options for and impacts of integrating content into SAP NetWeaver Portal

Learn about organizational, technical, and architectural requirements and restrictions to achieve high-performance solutions

Explore best practices on SAP ERP, CRM, and SRM, SAP NetWeaver BI, Business Packages, and Composite Applications

www.sap-press.com
