1
Implementation of Relational Operators
2
Steps of processing a high-level query
[Diagram: High-Level Query → Parser → Parsed Query → Query Optimizer (consults Statistics and Cost Model) → QEP → Query Evaluator → Query Result; the Query Evaluator runs against the Database]
3
Relational Operations. We will consider how to implement:
– Selection (σ): selects a subset of rows from a relation.
– Projection (π): deletes unwanted columns from a relation.
– Join (⋈): allows us to combine two relations.
– Set-difference (−): tuples in reln. 1, but not in reln. 2.
– Union (∪): tuples in reln. 1 and in reln. 2.
– Aggregation (SUM, MIN, etc.) and GROUP BY
4
Paradigm
Iteration
Index
– B+-tree, Hash; assume index entries to be (rid, pointer) pairs
– Clustered, Unclustered
Sort
Hash
5
Schema for Examples
Reserves (R):
– pR tuples per page, M pages. pR = 100, M = 1000.
Sailors (S):
– pS tuples per page, N pages. pS = 80, N = 500.
Cost metric: # of I/Os (pages)
– We will ignore output costs in the following discussion.
Sailors (sid: integer, sname: string, rating: integer, age: real)
Reserves (sid: integer, bid: integer, day: dates, rname: string)
6
Equality Joins With One Join Column
In algebra: R ⋈ S. Most frequently used operation; very costly operation. join_selectivity = join_size / (#R tuples × #S tuples)
SELECT *
FROM Reserves R, Sailors S
WHERE R.sid = S.sid
7
Join Example
sid sname rating age
22  dustin  7  45.0
28  yuppy   9  35.0
31  lubber  8  55.5
44  guppy   5  35.0
58  rusty  10  35.0

sid bid day rname
31  101  10/11/96  lubber
58  103  11/12/96  dustin
8
Join Example
sid sname rating age
22  dustin  7  45.0
28  yuppy   9  35.0
31  lubber  8  55.5
44  guppy   5  35.0
58  rusty  10  35.0

sid bid day rname
31  101  10/11/96  lubber
58  103  11/12/96  dustin

sid sname rating age  bid day      rname
31  lubber  8     55.5 101 10/11/96 lubber
58  rusty  10     35.0 103 11/12/96 dustin
9
Simple Nested Loops Join
For each tuple in the outer relation R, we scan the entire inner relation S.
– I/O Cost?
– Memory?
foreach tuple r in R do
  foreach tuple s in S do
    if r.sid == s.sid then add <r, s> to result
10
Simple Nested Loops Join
For each tuple in the outer relation R, we scan the entire inner relation S.
– Cost: M + pR * M * N = 1000 + 100*1000*500 I/Os.
foreach tuple r in R do
  foreach tuple s in S do
    if r.sid == s.sid then add <r, s> to result
11
Simple Nested Loops Join
For each tuple in the outer relation R, we scan the entire inner relation S.
– Cost: M + pR * M * N = 1000 + 100*1000*500 I/Os.
– Memory: 3 pages!
foreach tuple r in R do
  foreach tuple s in S do
    if r.sid == s.sid then add <r, s> to result
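The pseudocode and cost formula above can be sketched in Python. The dict-based relations and the `sid` field follow the slides' schema; this is an in-memory illustration, not a page-oriented DBMS implementation:

```python
def simple_nested_loops_join(R, S, key=lambda t: t["sid"]):
    """Tuple-at-a-time nested loops: scan all of S for each tuple of R."""
    result = []
    for r in R:
        for s in S:
            if key(r) == key(s):
                result.append({**r, **s})
    return result

def snl_io_cost(M, pR, N):
    """Slide's I/O cost: read R once (M pages), scan S (N pages) per R tuple."""
    return M + pR * M * N

R = [{"sid": 31, "bid": 101}, {"sid": 58, "bid": 103}]
S = [{"sid": 31, "sname": "lubber"}, {"sid": 58, "sname": "rusty"},
     {"sid": 22, "sname": "dustin"}]
print(len(simple_nested_loops_join(R, S)))  # 2 matching pairs
print(snl_io_cost(M=1000, pR=100, N=500))   # 50,001,000 I/Os for the running example
```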
12
Block Nested Loops Join
Use one page as an input buffer for scanning the inner S, one page as the output buffer, and use all remaining pages to hold a ``block’’ of outer R.
– For each matching tuple r in the R-block and s in the S-page, add <r, s> to the result. Then read the next R-block, scan S, etc.
[Diagram: R & S on disk; hash table for block of R (k < B-1 pages); input buffer for S; output buffer; join result]
13
Examples of Block Nested Loops
Cost: Scan of outer + #outer blocks * scan of inner
– #outer blocks?
14
Examples of Block Nested Loops
Cost: Scan of outer + #outer blocks * scan of inner
– #outer blocks = no. of pages in outer relation / block size
15
Examples of Block Nested Loops
Cost: Scan of outer + #outer blocks * scan of inner
– #outer blocks = no. of pages in outer relation / block size
With R as outer, block size of 100 pages:
– Cost of scanning R is 1000 I/Os; a total of 10 blocks.
– Per block of R, we scan S; 10*500 I/Os.
– If the block size is just 90 pages of R, we scan S 12 times.
16
Examples of Block Nested Loops
Cost: Scan of outer + #outer blocks * scan of inner
– #outer blocks = no. of pages in outer relation / block size
With R as outer, block size of 100 pages:
– Cost of scanning R is 1000 I/Os; a total of 10 blocks.
– Per block of R, we scan S; 10*500 I/Os.
– If the block size is just 90 pages of R, we scan S 12 times.
With a 100-page block of S as outer?
17
Examples of Block Nested Loops
Cost: Scan of outer + #outer blocks * scan of inner
– #outer blocks = no. of pages in outer relation / block size
With R as outer, block size of 100 pages:
– Cost of scanning R is 1000 I/Os; a total of 10 blocks.
– Per block of R, we scan S; 10*500 I/Os.
– If the block size is just 90 pages of R, we scan S 12 times.
With a 100-page block of S as outer:
– Cost of scanning S is 500 I/Os; a total of 5 blocks.
– Per block of S, we scan R; 5*1000 I/Os.
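The numbers above can be checked with a small helper that implements the slide's cost formula; ceiling division gives the number of outer blocks:

```python
from math import ceil

def bnl_io_cost(outer_pages, inner_pages, block_size):
    """Block nested loops: scan outer once, scan inner once per outer block."""
    n_blocks = ceil(outer_pages / block_size)
    return outer_pages + n_blocks * inner_pages

print(bnl_io_cost(1000, 500, 100))  # R as outer: 1000 + 10*500 = 6000
print(bnl_io_cost(1000, 500, 90))   # 90-page blocks: 12 scans of S -> 7000
print(bnl_io_cost(500, 1000, 100))  # S as outer: 500 + 5*1000 = 5500
```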
18
Index Nested Loops Join
If there is an index on the join column of one relation (say S), we can make it the inner and exploit the index.
– Cost: M + ((M*pR) * cost of finding matching S tuples). For each R tuple, the cost of probing the S index is about 1.2 for a hash index, 2-4 for a B+ tree. The cost of then finding S tuples (assuming leaf data entries are pointers) depends on clustering.
– Clustered index: 1 I/O (typical); unclustered: up to 1 I/O per matching S tuple.
foreach tuple r in R do
  search index of S on sid using search-key = r.sid
  for each matching key do
    retrieve s; add <r, s> to result
19
Examples of Index Nested Loops
Hash-index on sid of S (as inner):
– Scan R: 1000 page I/Os, 100*1000 tuples.
– For each R tuple: 1.2 I/Os to get the data entry in the index, plus 1 I/O to get the (exactly one) matching S tuple. Total: 220,000 I/Os.
Hash-index on sid of R (as inner)?
20
Examples of Index Nested Loops
Hash-index on sid of S (as inner):
– Scan R: 1000 page I/Os, 100*1000 tuples.
– For each R tuple: 1.2 I/Os to get the data entry in the index, plus 1 I/O to get the (exactly one) matching S tuple. Total: 220,000 I/Os.
Hash-index on sid of R (as inner):
– Scan S: 500 page I/Os, 80*500 tuples.
– For each S tuple: 1.2 I/Os to find the index page with data entries, plus the cost of retrieving matching R tuples.
– Assuming uniform distribution, 2.5 reservations per sailor (100,000 / 40,000). The cost of retrieving them is 1 or 2.5 I/Os depending on whether the index is clustered.
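Both variants fit the slide's formula M + (M·pR)·(probe + fetch) per outer tuple; a quick sketch evaluates them (the second call assumes the clustered case, ~1 I/O to fetch the ~2.5 matching tuples):

```python
def inl_io_cost(outer_pages, outer_tuples, probe_cost, fetch_cost):
    """Index nested loops: scan outer, then probe inner's index per tuple."""
    return outer_pages + outer_tuples * (probe_cost + fetch_cost)

# S as inner, hash index on sid (key: exactly one match, 1 I/O to fetch it)
print(inl_io_cost(1000, 100_000, 1.2, 1.0))  # ~221,000 I/Os
# R as inner, clustered index: ~1 I/O to fetch the matching group
print(inl_io_cost(500, 40_000, 1.2, 1.0))    # ~88,500 I/Os
```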
21
Sort-Merge Join
Sort R and S on the join column, then scan them to do a ``merge’’ (on the join col.), and output result tuples.
– Advance the scan of R until current R-tuple >= current S-tuple, then advance the scan of S until current S-tuple >= current R-tuple; do this until current R-tuple = current S-tuple.
– At this point, all R tuples with the same value in Ri (current R group) and all S tuples with the same value in Sj (current S group) match; output <r, s> for all pairs of such tuples.
– Then resume scanning R and S.
R is scanned once; each S group is scanned once per matching R tuple. (Multiple scans of an S group are likely to find needed pages in the buffer.)
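The merge step described above can be sketched with two cursors over sorted inputs. For simplicity this sketch sorts in memory and emits the cross product of each matching R group and S group:

```python
def sort_merge_join(R, S, key):
    """Sort both inputs on the join key, then merge matching groups."""
    R, S = sorted(R, key=key), sorted(S, key=key)
    out, i, j = [], 0, 0
    while i < len(R) and j < len(S):
        if key(R[i]) < key(S[j]):
            i += 1
        elif key(R[i]) > key(S[j]):
            j += 1
        else:
            # current R group and current S group share a join value:
            # output all pairs from the two groups
            v, j0 = key(R[i]), j
            while i < len(R) and key(R[i]) == v:
                j = j0
                while j < len(S) and key(S[j]) == v:
                    out.append((R[i], S[j]))
                    j += 1
                i += 1
    return out

R = [{"sid": 22}, {"sid": 31}, {"sid": 58}]
S = [{"sid": 31, "bid": 101}, {"sid": 31, "bid": 102}, {"sid": 58, "bid": 103}]
print(len(sort_merge_join(R, S, key=lambda t: t["sid"])))  # 3 result tuples
```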
22
Example of Sort-Merge Join
Cost?
sid sname rating age
22  dustin  7  45.0
28  yuppy   9  35.0
31  lubber  8  55.5
44  guppy   5  35.0
58  rusty  10  35.0

sid bid day rname
28  103  12/4/96   guppy
28  103  11/3/96   yuppy
31  101  10/10/96  dustin
31  102  10/12/96  lubber
31  101  10/11/96  lubber
58  103  11/12/96  dustin
23
Example of Sort-Merge Join
Cost: 2M*K1 + 2N*K2 + (M+N)
– K1 and K2 are the number of passes to sort R and S respectively
– The cost of scanning, M+N, could be M*N (very unlikely!)
sid sname rating age
22  dustin  7  45.0
28  yuppy   9  35.0
31  lubber  8  55.5
44  guppy   5  35.0
58  rusty  10  35.0

sid bid day rname
28  103  12/4/96   guppy
28  103  11/3/96   yuppy
31  101  10/10/96  dustin
31  102  10/12/96  lubber
31  101  10/11/96  lubber
58  103  11/12/96  dustin
24
Example of Sort-Merge Join
Cost: 2M*K1 + 2N*K2 + (M+N)
– K1 and K2 are the number of passes to sort R and S respectively
– The cost of scanning, M+N, could be M*N (very unlikely!)
With 35, 100 or 300 buffer pages, both R and S can be sorted in 2 passes; total join cost: 7500.
sid sname rating age
22  dustin  7  45.0
28  yuppy   9  35.0
31  lubber  8  55.5
44  guppy   5  35.0
58  rusty  10  35.0

sid bid day rname
28  103  12/4/96   guppy
28  103  11/3/96   yuppy
31  101  10/10/96  dustin
31  102  10/12/96  lubber
31  101  10/11/96  lubber
58  103  11/12/96  dustin
(BNL cost: 2500 to 15000 I/Os)
25
GRACE Hash-Join
Operates in two phases:
– Partition phase: partition relation R using hash fn h; partition relation S using hash fn h. R tuples in partition i will only match S tuples in partition i.
– Join phase: read in a partition of R, hash it using h2 (<> h!), then scan the matching partition of S, searching for matches.
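The two phases can be sketched in memory; h partitions on the join value and Python's dict plays the role of the second hash h2. A real system writes the partitions to disk and reads them back one at a time:

```python
def grace_hash_join(R, S, key, n_parts=4):
    """GRACE hash join sketch: partition with h, then join partition by partition."""
    h = lambda v: hash(v) % n_parts            # partitioning hash fn h
    parts_R = [[] for _ in range(n_parts)]
    parts_S = [[] for _ in range(n_parts)]
    for r in R:                                 # partition phase
        parts_R[h(key(r))].append(r)
    for s in S:
        parts_S[h(key(s))].append(s)
    out = []
    for pr, ps in zip(parts_R, parts_S):        # join phase, per partition
        table = {}                              # per-partition table (h2 = dict hash)
        for r in pr:
            table.setdefault(key(r), []).append(r)
        for s in ps:
            for r in table.get(key(s), []):
                out.append((r, s))
    return out

R = [{"sid": 31}, {"sid": 58}, {"sid": 22}]
S = [{"sid": 31, "bid": 101}, {"sid": 58, "bid": 103}]
print(len(grace_hash_join(R, S, key=lambda t: t["sid"])))  # 2
```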
26
GRACE Hash-Join
[Diagram: tuples of R and S hashed into partitions 0-3, with bucketID = X mod 4]
27
Partitioning Phase
[Diagram: Partitioning phase — the original relation is read through an input buffer; a hash function h directs each tuple to one of B-1 output buffers, producing partitions 1..B-1 on disk]
28
Joining Phase
[Diagram: Joining phase — for each partition, build a hash table for Ri (k < B-1 pages) using hash fn h2, probe it via an input buffer for Si, and write the join result through an output buffer]
29
Cost of Hash-Join
In the partitioning phase, read + write both relations: 2(M+N) I/Os.
In the matching phase, read both relations: M+N I/Os.
In our running example, this is a total of 4500 I/Os.
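The total is thus 3(M+N); for the running example:

```python
def hash_join_io_cost(M, N):
    """Partition phase reads and writes both relations (2(M+N));
    the join phase reads them once more (M+N)."""
    return 3 * (M + N)

print(hash_join_io_cost(1000, 500))  # 4500 I/Os
```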
30
Observations on Hash-Join
#partitions k ≤ B-1 (why?), and B-2 ≥ size of the largest partition to be held in memory. Assuming uniformly sized partitions, and maximizing k, we get:
– k = B-1, and M/(B-1) ≤ B-2, i.e., B must be ≥ √M
If we build an in-memory hash table to speed up the matching of tuples, a little more memory is needed.
If the hash function does not partition uniformly, one or more R partitions may not fit in memory. We can apply the hash-join technique recursively to join this R-partition with the corresponding S-partition.
What if B < √M ?
31
General Join Conditions
Equalities over several attributes (e.g., R.sid=S.sid AND R.rname=S.sname):
– Join on one predicate, and treat the rest as selections.
– For Index NL, build an index on <sid, sname> (if S is inner); or use existing indexes on sid or sname.
– For Sort-Merge and Hash Join, sort/partition on the combination of the two join columns.
Inequality join (R.sid < S.sid)?
32
Simple Selections
Of the form: σ_{R.attr op value}(R). Size of result = |R| × selectivity.
With no index, unsorted: must essentially scan the whole relation; cost is M (#pages in R).
With an index on the selection attribute: use the index to find qualifying data entries, then retrieve the corresponding data records. (A hash index is useful only for equality selections.)
SELECT *
FROM Reserves R
WHERE R.rname < ‘C%’
33
Using an Index for Selections
Cost depends on #qualifying tuples, and clustering.
– Cost of finding qualifying data entries (typically small) plus cost of retrieving records (could be large w/o clustering).
– In the example, assuming a uniform distribution of names, about 10% of tuples qualify (100 pages, 10000 tuples). Clustered index? Unclustered index?
34
Using an Index for Selections
Cost depends on #qualifying tuples, and clustering.
– Cost of finding qualifying data entries (typically small) plus cost of retrieving records (could be large w/o clustering).
– In the example, assuming a uniform distribution of names, about 10% of tuples qualify (100 pages, 10000 tuples). Clustered index: ~100 I/Os. Unclustered: up to 10000 I/Os!
35
Two Approaches to General Selections
First approach: find the most selective access path, retrieve tuples using it, and apply any remaining terms that don’t match the index:
– Most selective access path: an index or file scan that we estimate will require the fewest page I/Os.
– Terms that match this index reduce the number of tuples retrieved; other terms are used to discard some retrieved tuples, but do not affect the number of tuples/pages fetched.
– Consider day<8/9/94 AND bid=5 AND sid=3. A B+ tree index on day can be used; then bid=5 and sid=3 must be checked for each retrieved tuple. Similarly, a hash index on <bid, sid> could be used; day<8/9/94 must then be checked.
36
Intersection of Rids
Second approach (if we have 2 or more matching indexes, assuming leaf data entries are pointers):
– Get sets of rids of data records using each matching index.
– Then intersect these sets of rids (we’ll discuss intersection soon!).
– Retrieve the records and apply any remaining terms.
– Consider day<8/9/94 AND bid=5 AND sid=3. If we have a B+ tree index on day and an index on sid, we can retrieve rids of records satisfying day<8/9/94 using the first, rids of records satisfying sid=3 using the second, intersect, retrieve records and check bid=5.
37
The Projection Operation(Duplicate Elimination)
An approach based on sorting:
– Modify Pass 0 of external sort to eliminate unwanted fields. Thus, runs are produced, but tuples in runs are smaller than input tuples. (Size ratio depends on # and size of fields that are dropped.)
– Modify the merging passes to eliminate duplicates. Thus, the number of result tuples is smaller than the input. (Difference depends on # of duplicates.)
– Cost: In Pass 0, read the original relation (size M), write out the same number of smaller tuples. In the merging passes, fewer tuples are written out in each pass.
Hash-based scheme?
SELECT DISTINCT R.sid, R.bid
FROM Reserves R
38
Set Operations
Intersection and cross-product are special cases of join. Union (Distinct) and Difference are similar.
Sorting-based approach to union:
– Sort both relations (on the combination of all attributes).
– Scan the sorted relations and merge them.
Hash-based approach to union?
39
Set Operations
Intersection and cross-product are special cases of join. Union (Distinct) and Difference are similar.
Sorting-based approach to union:
– Sort both relations (on the combination of all attributes).
– Scan the sorted relations and merge them.
Hash-based approach to union:
– Partition R and S using hash function h.
– For each S-partition, build an in-memory hash table (using h2), scan the corresponding R-partition and add tuples to the table while discarding duplicates.
40
Aggregate Operations (AVG, MIN, etc.)
Without grouping:
– In general, requires scanning the relation.
– Given an index whose search key includes all attributes in the SELECT or WHERE clauses, we can do an index-only scan.
With grouping:
– Sort on the group-by attributes, then scan the relation and compute the aggregate for each group.
– A similar approach is based on hashing on the group-by attributes.
– Given a tree index whose search key includes all attributes in the SELECT, WHERE and GROUP BY clauses, we can do an index-only scan; if the group-by attributes form a prefix of the search key, we can retrieve data entries/tuples in group-by order.
41
Iterators for Implementation of Operators
Most operators can be implemented as an iterator. An iterator allows a consumer of the result of the operator to get the result one tuple at a time.
– Open – starts the process of getting tuples, but does not get a tuple. It initializes any data structures needed.
– GetNext – returns the next tuple in the result and adjusts the data structures as necessary to allow subsequent tuples to be obtained. It may call GetNext one or more times on its arguments. It also signals whether a tuple was produced or there were no more tuples to be produced.
– Close – ends the iteration after all tuples have been obtained.
42
More on Iterators
Why iterators?
– Do not need to materialize intermediate results.
– Many operators are active at once, and tuples flow from one operator to the next, thus reducing the need to store intermediate results.
In some cases, almost all the work would need to be done by the Open function, which is tantamount to materialization.
We shall regard Open, GetNext, Close as overloaded names of methods.
– Assume that for each physical operator there is a class whose objects are the relations that can be produced by this operator. If R is a member of such a class, then we use R.Open(), R.GetNext(), and R.Close() to apply the functions of the iterator for R.
43
An iterator for the table-scan operator:

Open(R) {
  b := first block of R;
  t := first tuple of block b;
  Found := TRUE;
}

Close(R) { }

GetNext(R) {
  IF (t is past the last tuple on b) {
    b := next block;
    IF (there is no next block) {
      Found := FALSE;
      RETURN;
    }
    ELSE
      t := first tuple in b;
  }
  oldt := t;
  t := next tuple of b;
  RETURN oldt;
}
44
An iterator for the tuple-based nested-loops join operator:

Open(R,S) {
  R.Open();
  S.Open();
  s := S.GetNext();
}

Close(R,S) {
  R.Close();
  S.Close();
}

GetNext(R,S) {
  REPEAT
    r := R.GetNext();
    IF (NOT Found) {          /* R is exhausted for the current s */
      R.Close();
      s := S.GetNext();
      IF (NOT Found) RETURN;  /* both inputs exhausted */
      R.Open();
      r := R.GetNext();
    }
  UNTIL (r and s join);
  RETURN the join of r and s;
}
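The Open/GetNext/Close protocol maps naturally onto Python generators; the sketch below mirrors the pseudocode (S drives the outer loop, R is re-opened per S tuple), with generator exhaustion standing in for the Found flag. The relations and their field layout are made up for illustration:

```python
def table_scan(blocks):
    """Table-scan iterator: yield each tuple of each block in turn."""
    for block in blocks:
        for t in block:
            yield t

def nested_loops_join(R_blocks, S_blocks, pred):
    """Tuple-based nested-loops join as an iterator. Starting a fresh scan
    of R per S tuple mirrors the pseudocode's R.Close()/R.Open()."""
    for s in table_scan(S_blocks):        # outer: S
        for r in table_scan(R_blocks):    # inner: fresh scan of R
            if pred(r, s):
                yield (r, s)

R = [[(31, "lubber"), (58, "rusty")]]     # one block of two tuples
S = [[(31, 101)], [(58, 103)]]            # two blocks of one tuple each
print(list(nested_loops_join(R, S, lambda r, s: r[0] == s[0])))
```

Because the join is itself a generator, its output can be consumed one tuple at a time by a parent operator without materializing the result.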
45
Impact of Buffering
If several operations are executing concurrently, estimating the number of available buffer pages is guesswork.
Repeated access patterns interact with buffer replacement policy.– e.g., Inner relation is scanned repeatedly in Simple Nested Loop Join.
With enough buffer pages to hold inner, replacement policy does not matter. Otherwise, MRU is best, LRU is worst (sequential flooding).
– Does replacement policy matter for Block Nested Loops?– What about Index Nested Loops? Sort-Merge Join?
46
Summary A virtue of relational DBMSs: queries are composed of a few basic
operators; the implementation of these operators can be carefully tuned (and it is important to do this!).
Many alternative implementation techniques for each operator; no universally superior technique for most operators.
Must consider available alternatives for each operation in a query and choose best one based on system statistics, etc. This is part of the broader task of optimizing a query composed of several ops.
Overview of Query Optimization
Plan: tree of R.A. ops, with a choice of algorithm for each op.
– Each operator is typically implemented using a `pull’ interface: when an operator is `pulled’ for the next output tuples, it `pulls’ on its inputs and computes them.
Two main issues:
– For a given query, what plans are considered? Algorithm to search the plan space for the cheapest (estimated) plan.
– How is the cost of a plan estimated?
Ideally: want to find the best plan. Practically: avoid the worst plans! We will study the System R approach.
Highlights of System R Optimizer
Impact:
– Most widely used currently; works well for < 10 joins.
Cost estimation: approximate art at best.
– Statistics, maintained in system catalogs, are used to estimate the cost of operations and result sizes.
– Considers a combination of CPU and I/O costs.
Plan Space: too large, must be pruned.
– Only the space of left-deep plans is considered. Left-deep plans allow the output of each operator to be pipelined into the next operator without storing it in a temporary relation.
– Cartesian products avoided.
Schema for Examples
Similar to the old schema; rname added for variations.
Reserves:
– Each tuple is 40 bytes long, 100 tuples per page, 1000 pages.
Sailors:
– Each tuple is 50 bytes long, 80 tuples per page, 500 pages.
Sailors (sid: integer, sname: string, rating: integer, age: real)
Reserves (sid: integer, bid: integer, day: dates, rname: string)
Motivating Example
Cost: 500 + 500*1000 I/Os. By no means the worst plan! Misses several opportunities: selections could have been `pushed’ earlier, no use is made of any available indexes, etc.
Goal of optimization: to find more efficient plans that compute the same answer.
SELECT S.sname
FROM Reserves R, Sailors S
WHERE R.sid=S.sid AND R.bid=100 AND S.rating>5
[RA tree and plan: π_sname(σ_bid=100 ∧ rating>5(Reserves ⋈_sid=sid Sailors)); the plan uses Simple Nested Loops, with the selections and the projection applied on-the-fly]
Alternative Plans 1 (No Indexes)
Main difference: push selects. With 5 buffers, cost of plan:
– Scan Reserves (1000) + write temp T1 (10 pages, if we have 100 boats, uniform distribution).
– Scan Sailors (500) + write temp T2 (250 pages, if we have 10 ratings).
– Sort T1 (2*2*10), sort T2 (2*3*250), merge (10+250): total = 1800.
– Total: 3560 page I/Os.
If we used BNL join, join cost = 10+4*250, and total cost = 2770.
If we `push’ projections, T1 has only sid, T2 only sid and sname:
– T1 fits in 3 pages, cost of BNL drops to under 250 pages, total < 2000.
[Plan: σ_bid=100 on Reserves (scan; write to temp T1) and σ_rating>5 on Sailors (scan; write to temp T2), combined with a Sort-Merge Join on sid=sid, with π_sname applied on-the-fly]
Alternative Plans 2With Indexes
With a clustered index on bid of Reserves, we get 100,000/100 = 1000 tuples on 1000/100 = 10 pages.
INL with pipelining (outer is not materialized).
The decision not to push rating>5 before the join is based on the availability of the sid index on Sailors.
Cost: selection of Reserves tuples (10 I/Os); for each, we must get the matching Sailors tuple (1000*1.2); total 1210 I/Os.
Join column sid is a key for Sailors.
– At most one matching tuple, so an unclustered index on sid is OK.
– Projecting out unnecessary fields from the outer doesn’t help.
[Plan: σ_bid=100 on Reserves (use hash index; do not write result to temp), Index Nested Loops join with pipelining on sid=sid against Sailors, with σ_rating>5 and π_sname applied on-the-fly]
Query Blocks: Units of Optimization
An SQL query is parsed into a collection of query blocks, and these are optimized one block at a time.
Nested blocks are usually treated as calls to a subroutine, made once per outer tuple. (This is an over-simplification, but serves for now.)
SELECT S.sname
FROM Sailors S
WHERE S.age IN (SELECT MAX (S2.age)
                FROM Sailors S2
                GROUP BY S2.rating)
(Here the subquery is the nested block and the enclosing query is the outer block.)
For each block, the plans considered are:
– All available access methods, for each relation in the FROM clause.
– All left-deep join trees (i.e., all ways to join the relations one-at-a-time, with the inner relation in the FROM clause, considering all relation permutations and join methods).
Cost Estimation
For each plan considered, we must estimate its cost:
– Must estimate the cost of each operation in the plan tree. Depends on input cardinalities. We’ve already discussed how to estimate the cost of operations (sequential scan, index scan, joins, etc.).
– Must estimate the size of the result for each operation in the tree! Use information about the input relations. For selections and joins, assume independence of predicates.
We’ll discuss the System R cost estimation approach.
– Very inexact, but works OK in practice.
– More sophisticated techniques are known now.
Statistics and Catalogs
Need information about the relations and indexes involved. Catalogs typically contain at least:
– # tuples (NTuples) and # pages (NPages) for each relation.
– # distinct key values (NKeys) and NPages for each index.
– Index height, low/high key values (Low/High) for each tree index.
Catalogs are updated periodically.
– Updating whenever data changes is too expensive; lots of approximation anyway, so slight inconsistency is OK.
More detailed information (e.g., histograms of the values in some field) is sometimes stored.
Size Estimation and Reduction Factors
Consider a query block:
SELECT attribute list
FROM relation list
WHERE term1 AND ... AND termk
Maximum # tuples in the result is the product of the cardinalities of the relations in the FROM clause.
The reduction factor (RF) associated with each term reflects the impact of the term in reducing result size. Result cardinality = Max # tuples × product of all RFs.
– Implicit assumption that terms are independent!
– Term col=value has RF 1/NKeys(I), given index I on col.
– Term col1=col2 has RF 1/MAX(NKeys(I1), NKeys(I2)).
– Term col>value has RF (High(I)-value)/(High(I)-Low(I)).
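These RF rules can be sketched directly in code. The catalog numbers below (NKeys, Low/High) follow the deck's Sailors/Reserves example; the resulting estimate is only as good as the independence assumption:

```python
def rf_eq_value(nkeys):             # term: col = value
    return 1 / nkeys

def rf_eq_cols(nkeys1, nkeys2):     # term: col1 = col2
    return 1 / max(nkeys1, nkeys2)

def rf_gt_value(value, low, high):  # term: col > value
    return (high - value) / (high - low)

def estimate_card(max_tuples, rfs):
    """Result cardinality = max tuples * product of RFs (independence)."""
    est = max_tuples
    for rf in rfs:
        est *= rf
    return est

# Sailors ⋈ Reserves with rating > 5: max tuples = 40,000 * 100,000,
# RF(sid=sid) with NKeys 40,000 on both sides, RF(rating>5) with Low=1, High=10
est = estimate_card(40_000 * 100_000,
                    [rf_eq_cols(40_000, 40_000), rf_gt_value(5, 1, 10)])
print(round(est))  # estimated result cardinality
```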
Relational Algebra Equivalences
Allow us to choose different join orders and to `push’ selections and projections ahead of joins.
Selections:
– σ_{c1 ∧ ... ∧ cn}(R) ≡ σ_{c1}(...(σ_{cn}(R))...) (Cascade)
– σ_{c1}(σ_{c2}(R)) ≡ σ_{c2}(σ_{c1}(R)) (Commute)
Projections:
– π_{a1}(R) ≡ π_{a1}(...(π_{an}(R))...) (Cascade)
Joins:
– R ⋈ (S ⋈ T) ≡ (R ⋈ S) ⋈ T (Associative)
– (R ⋈ S) ≡ (S ⋈ R) (Commute)
Show that: R ⋈ (S ⋈ T) ≡ (T ⋈ R) ⋈ S
More Equivalences
A projection commutes with a selection that only uses attributes retained by the projection.
A selection between attributes of the two arguments of a cross-product converts the cross-product to a join.
A selection on just attributes of R commutes with the join R ⋈ S (i.e., σ(R ⋈ S) ≡ σ(R) ⋈ S).
Similarly, if a projection follows a join R ⋈ S, we can `push’ it by retaining only attributes of R (and S) that are needed for the join or are kept by the projection.
Enumeration of Alternative Plans
There are two main cases:
– Single-relation plans
– Multiple-relation plans
For queries over a single relation, queries consist of a combination of selects, projects, and aggregate ops:
– Each available access path (file scan / index) is considered, and the one with the least estimated cost is chosen.
– The different operations are essentially carried out together (e.g., if an index is used for a selection, projection is done for each retrieved tuple, and the resulting tuples are pipelined into the aggregate computation).
Cost Estimates for Single-Relation Plans
Index I on primary key matches selection:– Cost is Height(I)+1 for a B+ tree, about 1.2 for hash index.
Clustered index I matching one or more selects:– (NPages(I)+NPages(R)) * product of RF’s of matching selects.
Non-clustered index I matching one or more selects:– (NPages(I)+NTuples(R)) * product of RF’s of matching selects.
Sequential scan of file:– NPages(R).
Note: Typically, no duplicate elimination on projections! (Exception: Done on answers if user says DISTINCT.)
Example
If we have an index on rating:
– (1/NKeys(I)) * NTuples(R) = (1/10) * 40000 tuples retrieved.
– Clustered index: (1/NKeys(I)) * (NPages(I)+NPages(R)) = (1/10) * (50+500) pages are retrieved. (This is the cost.)
– Unclustered index: (1/NKeys(I)) * (NPages(I)+NTuples(R)) = (1/10) * (50+40000) pages are retrieved.
If we have an index on sid:
– Would have to retrieve all tuples/pages. With a clustered index, the cost is 50+500; with an unclustered index, 50+40000.
Doing a file scan:
– We retrieve all file pages (500).
SELECT S.sid
FROM Sailors S
WHERE S.rating=8
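A quick arithmetic check of the rating-index numbers, using the slide's catalog values (NKeys(I) = 10, NPages(I) = 50, NPages(R) = 500, NTuples(R) = 40,000):

```python
nkeys_i, npages_i, npages_r, ntuples_r = 10, 50, 500, 40_000

# clustered index: RF * (NPages(I) + NPages(R))
print((npages_i + npages_r) // nkeys_i)   # 55 page I/Os
# unclustered index: RF * (NPages(I) + NTuples(R))
print((npages_i + ntuples_r) // nkeys_i)  # 4005 page I/Os
# full file scan, for comparison
print(npages_r)                           # 500 page I/Os
```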
Queries Over Multiple Relations
Fundamental decision in System R: only left-deep join trees are considered.
– As the number of joins increases, the number of alternative plans grows rapidly; we need to restrict the search space.
– Left-deep trees allow us to generate all fully pipelined plans. Intermediate results are not written to temporary files. Not all left-deep trees are fully pipelined (e.g., SM join).
[Diagram: alternative shapes of join trees over relations A, B, C, D (left-deep vs. bushy)]
Enumeration of Left-Deep Plans
Left-deep plans differ only in the order of relations, the access method for each relation, and the join method for each join.
Enumerated using N passes (if N relations are joined):
– Pass 1: Find the best 1-relation plan for each relation.
– Pass 2: Find the best way to join the result of each 1-relation plan (as outer) to another relation. (All 2-relation plans.)
– Pass N: Find the best way to join the result of an (N-1)-relation plan (as outer) to the N’th relation. (All N-relation plans.)
For each subset of relations, retain only:
– The cheapest plan overall, plus
– The cheapest plan for each interesting order of the tuples.
Enumeration of Plans (Contd.)
ORDER BY, GROUP BY, aggregates, etc. are handled as a final step, using either an `interestingly ordered’ plan or an additional sorting operator.
An (N-1)-way plan is not combined with an additional relation unless there is a join condition between them, or unless all predicates in WHERE have been used up.
– i.e., avoid Cartesian products if possible.
In spite of pruning the plan space, this approach is still exponential in the # of tables.
Example
Pass 1:
– Sailors: B+ tree matches rating>5, and is probably cheapest. However, if this selection is expected to retrieve a lot of tuples and the index is unclustered, a file scan may be cheaper. Still, the B+ tree plan is kept (because tuples are in rating order).
– Reserves: B+ tree on bid matches bid=500; cheapest.
Indexes — Sailors: B+ tree on rating, hash on sid. Reserves: B+ tree on bid.
Pass 2:
– We consider each plan retained from Pass 1 as the outer, and consider how to join it with the (only) other relation. e.g., Reserves as outer: the hash index can be used to get Sailors tuples that satisfy sid = outer tuple’s sid value.
[RA tree: π_sname(σ_bid=100 ∧ rating>5(Reserves ⋈_sid=sid Sailors))]
Nested Queries
The nested block is optimized independently, with the outer tuple considered as providing a selection condition.
The outer block is optimized with the cost of `calling’ the nested block computation taken into account.
The implicit ordering of these blocks means that some good strategies are not considered. The non-nested version of the query is typically optimized better.
SELECT S.sname
FROM Sailors S
WHERE EXISTS (SELECT *
              FROM Reserves R
              WHERE R.bid=103 AND R.sid=S.sid)
Nested block to optimize:
SELECT *
FROM Reserves R
WHERE R.bid=103 AND S.sid = outer value
Equivalent non-nested query:
SELECT S.sname
FROM Sailors S, Reserves R
WHERE S.sid=R.sid AND R.bid=103
Summary Query optimization is an important task in a
relational DBMS. Must understand optimization in order to
understand the performance impact of a given database design (relations, indexes) on a workload (set of queries).
Two parts to optimizing a query:– Consider a set of alternative plans.
Must prune search space; typically, left-deep plans only.
– Must estimate cost of each plan that is considered. Must estimate size of result and cost for each plan node. Key issues: Statistics, indexes, operator implementations.
Summary (Contd.) Single-relation queries:
– All access paths considered, cheapest is chosen.– Issues: Selections that match index, whether index key has all
needed fields and/or provides tuples in a desired order. Multiple-relation queries:
– All single-relation plans are first enumerated. Selections/projections considered as early as possible.
– Next, for each 1-relation plan, all ways of joining another relation (as inner) are considered.
– Next, for each 2-relation plan that is `retained’, all ways of joining another relation (as inner) are considered, etc.
– At each level, for each subset of relations, only best plan for each interesting order of tuples is `retained’.
69
Query Tuning
Two issues:
– Does the query issue too many disk accesses (e.g., a scan for a point query)?
– Are relevant indexes being used?
70
Queries
Settings:
employee(ssnum, name, dept, salary, numfriends);
student(ssnum, name, course, grade);
techdept(dept, manager, location);

clustered index i1 on employee (ssnum);
nonclustered index i2 on employee (name);
nonclustered index i3 on employee (dept);
clustered index i4 on student (ssnum);
nonclustered index i5 on student (name);
clustered index i6 on techdept (dept);

– 100000 rows in employee, 100000 students, 10 departments; cold buffer
– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.
71
Redundant DISTINCT
SELECT DISTINCT ssnum
FROM Employee
WHERE dept = ‘computer sc’;
72
Subqueries
SELECT ssnum
FROM Employee
WHERE dept IN (SELECT dept FROM Techdept)

SELECT ssnum
FROM Employee, Techdept
WHERE Employee.dept = Techdept.dept
73
Temporary tables
SELECT * INTO Temp
FROM Employee
WHERE Salary > 40000

SELECT ssnum
FROM Temp
WHERE Temp.dept = ‘information’

-- query optimization problem, updates of catalog etc.
74
Correlated Query Rewriting
SELECT ssnum
FROM Employee e1
WHERE salary = (SELECT MAX(salary)
                FROM Employee e2
                WHERE e2.dept = e1.dept)
Problems?
75
Query Rewriting … cont
SELECT MAX(salary) as maxsalary, dept INTO Temp
FROM Employee
GROUP BY dept

SELECT ssnum
FROM Employee, Temp
WHERE salary = maxsalary AND Employee.dept = Temp.dept
4 - Relational Systems 76
Query Rewriting – Correlated Subqueries
SQL Server 2000 does a good job at handling correlated subqueries (a hash join is used, as opposed to a nested loop between query blocks).
– The techniques implemented in SQL Server 2000 are described in “Orthogonal Optimization of Subqueries and Aggregates” by C. Galindo-Legaria and M. Joshi, SIGMOD 2001.
[Bar chart: throughput improvement (percent) from rewriting the correlated subquery, comparing SQL Server 2000, Oracle 8i, and DB2 V7.1; the improvements for the latter two systems exceed 1000% and 10000%]
77
Join on clustering index, and integer
SELECT Employee.ssnum
FROM Employee, Student
WHERE Employee.name = Student.name
-->
SELECT Employee.ssnum
FROM Employee, Student
WHERE Employee.ssnum = Student.ssnum
78
Having
SELECT AVG(salary) as avgsalary, dept
FROM Employee
GROUP BY dept
HAVING dept = ‘information’;

SELECT AVG(salary) as avgsalary
FROM Employee
WHERE dept = ‘information’
GROUP BY dept;
79
Idiosyncrasies
– OR may stop the index from being used; break the query apart and use UNION.
– The order of tables may affect the join implementation.
– A view can cause a query to be executed inefficiently.
80
Queries – View on Join
View Techlocation:
create view techlocation as
  select ssnum, techdept.dept, location
  from employee, techdept
  where employee.dept = techdept.dept;
Queries:
– Original:
  select dept from techlocation where ssnum = ?;
– Rewritten:
  select dept from employee where ssnum = ?;
4 - Relational Systems 81
Query Rewriting - Views
All systems expand the selection on a view into a join
The difference between a plain selection and a join (on a primary key-foreign key) followed by a projection is greater on SQL Server than on Oracle and DB2 v7.1.
[Chart: throughput improvement percent from rewriting the selection on the view – SQL Server 2000, Oracle 8i, DB2 V7.1]
82
Aggregate Maintenance -- data
Settings:

orders( ordernum, itemnum, quantity, purchaser, vendor );
create clustered index i_order on orders(itemnum);

store( vendor, name );

item( itemnum, price );
create clustered index i_item on item(itemnum);

vendorOutstanding( vendor, amount );

storeOutstanding( store, amount );
– 1000000 orders, 10000 stores, 400000 items; cold buffer
– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.
83
Aggregate Maintenance -- triggers
Triggers for Aggregate Maintenance:

create trigger updateVendorOutstanding on orders for insert as
update vendorOutstanding
set amount =
  (select vendorOutstanding.amount + sum(inserted.quantity * item.price)
   from inserted, item
   where inserted.itemnum = item.itemnum)
where vendor = (select vendor from inserted);

create trigger updateStoreOutstanding on orders for insert as
update storeOutstanding
set amount =
  (select storeOutstanding.amount + sum(inserted.quantity * item.price)
   from inserted, item
   where inserted.itemnum = item.itemnum)
where store = (select store.name
               from inserted, store
               where inserted.vendor = store.vendor);
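The same idea can be sketched in SQLite on made-up data. Note the difference: SQL Server triggers above are statement-level and see an "inserted" table, while SQLite triggers are row-level and see a NEW pseudo-row, so the sketch below is an analogue, not a port.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (ordernum INTEGER, itemnum INTEGER, quantity INTEGER,
                         purchaser TEXT, vendor TEXT);
    CREATE TABLE item (itemnum INTEGER, price REAL);
    CREATE TABLE vendorOutstanding (vendor TEXT, amount REAL);
    INSERT INTO item VALUES (7825, 2.0);
    INSERT INTO vendorOutstanding VALUES ('vendor4', 0.0);

    -- Row-level trigger keeping the redundant aggregate up to date.
    CREATE TRIGGER updateVendorOutstanding AFTER INSERT ON orders
    BEGIN
        UPDATE vendorOutstanding
        SET amount = amount + NEW.quantity *
            (SELECT price FROM item WHERE itemnum = NEW.itemnum)
        WHERE vendor = NEW.vendor;
    END;
""")

con.execute("INSERT INTO orders VALUES (1, 7825, 5, 'x', 'vendor4')")
print(con.execute("SELECT amount FROM vendorOutstanding").fetchone())  # (10.0,)
```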
84
Aggregate Maintenance -- transactions
Concurrent Transactions:
– Insertions
insert into orders values (1000350,7825,562,'xxxxxx6944','vendor4');
– Queries (first without, then with the redundant tables)

select orders.vendor, sum(orders.quantity * item.price)
from orders, item
where orders.itemnum = item.itemnum
group by orders.vendor;

vs. select * from vendorOutstanding;

select store.name, sum(orders.quantity * item.price)
from orders, item, store
where orders.itemnum = item.itemnum
  and orders.vendor = store.vendor
group by store.name;

vs. select * from storeOutstanding;
4 - Relational Systems 85
Aggregate Maintenance
SQLServer 2000 on Windows 2000
Using triggers for view maintenance
If the queries are frequent or important, aggregate maintenance pays off.
[Chart: percent of gain with aggregate maintenance – insert: -62.2; vendor total: 21900; store total: 31900]
86
Superlinearity -- data
Settings:

sales( id, itemid, customerid, storeid, amount, quantity );
item( itemid );
customer( customerid );
store( storeid );
A sale is successful if all foreign keys are present.
successfulsales(id, itemid, customerid, storeid, amount, quantity);
unsuccessfulsales(id, itemid, customerid, storeid, amount, quantity);
tempsales( id, itemid, customerid, storeid, amount,quantity);
87
Superlinearity -- indexes
Settings (non-clustering, dense indexes):

index s1 on item(itemid);
index s2 on customer(customerid);
index s3 on store(storeid);
index succ on successfulsales(id);
– 1000000 sales, 400000 customers, 40000 items, 1000 stores
– Cold buffer
– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb
drives (10000RPM), Windows 2000.
88
Superlinearity -- queries
Queries:
– Insert / create index / delete
insert into successfulsales
select sales.id, sales.itemid, sales.customerid, sales.storeid,
       sales.amount, sales.quantity
from sales, item, customer, store
where sales.itemid = item.itemid
  and sales.customerid = customer.customerid
  and sales.storeid = store.storeid;

insert into unsuccessfulsales select * from sales;
go
delete from unsuccessfulsales
where id in (select id from successfulsales)
89
Superlinearity -- batch queries
Queries:
– Small batches

DECLARE @Nlow INT;
DECLARE @Nhigh INT;
DECLARE @INCR INT;
set @INCR = 100000
set @Nlow = 0
set @Nhigh = @INCR
WHILE (@Nlow <= 500000)
BEGIN
  insert into tempsales select * from sales where id between @Nlow and @Nhigh
  set @Nlow = @Nlow + @INCR
  set @Nhigh = @Nhigh + @INCR
  delete from tempsales
  where id in (select id from successfulsales);
  insert into unsuccessfulsales select * from tempsales;
  delete from tempsales;
END
90
Superlinearity -- outer join
Queries:
– Outer join

insert into successfulsales
select sales.id, item.itemid, customer.customerid, store.storeid,
       sales.amount, sales.quantity
from ((sales left outer join item on sales.itemid = item.itemid)
      left outer join customer on sales.customerid = customer.customerid)
      left outer join store on sales.storeid = store.storeid;

insert into unsuccessfulsales
select * from successfulsales
where itemid is null
   or customerid is null
   or storeid is null;
go
delete from successfulsales
where itemid is null
   or customerid is null
   or storeid is null
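The outer-join trick validates all foreign keys in one pass: a NULL on the right side of any join marks the missing key. A small SQLite sketch on made-up data (two reference tables instead of three, for brevity):

```python
import sqlite3

# Hypothetical data: sale 2 has an unknown item, sale 3 an unknown customer.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (id INTEGER, itemid INTEGER, customerid INTEGER);
    CREATE TABLE item (itemid INTEGER);
    CREATE TABLE customer (customerid INTEGER);
    INSERT INTO sales VALUES (1, 10, 100), (2, 99, 100), (3, 10, 999);
    INSERT INTO item VALUES (10);
    INSERT INTO customer VALUES (100);
""")

# One pass: a NULL from any outer join marks a missing foreign key.
rows = con.execute("""
    SELECT sales.id, item.itemid, customer.customerid
    FROM sales
         LEFT OUTER JOIN item ON sales.itemid = item.itemid
         LEFT OUTER JOIN customer ON sales.customerid = customer.customerid
""").fetchall()

successful = [r[0] for r in rows if r[1] is not None and r[2] is not None]
unsuccessful = [r[0] for r in rows if r[1] is None or r[2] is None]
print(successful, unsuccessful)  # [1] [2, 3]
```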
4 - Relational Systems 91
Circumventing Superlinearity
SQL Server 2000: the outer join achieves the best response time. Small batches do not help because the overhead of crossing the application interface outweighs the benefit of joining with smaller tables.
[Chart: response time (sec) for small and large settings – insert/delete no index, insert/delete indexed, small batches, outer join]
5 - Tuning the API 92
Tuning the Application Interface
4GL
– Power++, Visual Basic

Programming language + Call-Level Interface
– ODBC: Open DataBase Connectivity
– JDBC: Java-based API
– OCI (C++/Oracle), CLI (C++/DB2), Perl/DBI
In the following experiments, the client program is located on the database server site. Overhead is due to crossing the application interface.
93
Looping can hurt -- data
Settings:

lineitem ( L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
           L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
           L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
           L_SHIPMODE, L_COMMENT );
– 600 000 rows; warm buffer.
– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb
drives (10000RPM), Windows 2000.
94
Looping can hurt -- queries
Queries:
– No loop:

sqlStmt = "select * from lineitem where l_partkey <= 200;"
odbc->prepareStmt(sqlStmt);
odbc->execPrepared(sqlStmt);

– Loop:

sqlStmt = "select * from lineitem where l_partkey = ?;"
odbc->prepareStmt(sqlStmt);
for (int i=1; i<100; i++) {
  odbc->bindParameter(1, SQL_INTEGER, i);
  odbc->execPrepared(sqlStmt);
}
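The same contrast can be sketched in Python with sqlite3 standing in for the ODBC client (data made up): one set-oriented statement versus one execution per parameter value. Both return the same rows; in a real client/server setting the loop crosses the application interface once per iteration.

```python
import sqlite3

# Hypothetical lineitem table: part keys 0..299, each appearing 10 times.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lineitem (l_partkey INTEGER, l_comment TEXT)")
con.executemany("INSERT INTO lineitem VALUES (?, ?)",
                [(i % 300, "c") for i in range(3000)])

# No loop: one statement, one crossing of the application interface.
no_loop = con.execute(
    "SELECT * FROM lineitem WHERE l_partkey <= 200").fetchall()

# Loop: 201 separate executions, each crossing the interface.
loop = []
for i in range(201):  # l_partkey = 0 .. 200
    loop += con.execute(
        "SELECT * FROM lineitem WHERE l_partkey = ?", (i,)).fetchall()

print(len(no_loop) == len(loop))  # True
```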
5 - Tuning the API 95
Looping can Hurt
SQL Server 2000 on Windows 2000
Crossing the application interface has a significant impact on performance.
Why would a programmer use a loop instead of relying on set-oriented operations? Perhaps out of object-oriented habit.
[Chart: throughput (records/sec) – loop vs. no loop]
96
Cursors are Death -- data
Settings:

employees( ssnum, name, lat, long, hundreds1, hundreds2 );
– 100000 rows; cold buffer
– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb
drives (10000RPM), Windows 2000.
97
Cursors are Death -- queries
Queries:
– No cursor

select * from employees;
– Cursor

DECLARE d_cursor CURSOR FOR select * from employees;
OPEN d_cursor
while (@@FETCH_STATUS = 0)
BEGIN
  FETCH NEXT from d_cursor
END
CLOSE d_cursor
go
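The row-at-a-time versus set-oriented contrast can be sketched in Python with sqlite3 (made-up data): fetchall() returns the whole result in one call, while a fetchone() loop makes one call per row, which in a real client/server setting means one boundary crossing per record.

```python
import sqlite3

# Hypothetical employees table with 100000 rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (ssnum INTEGER, name TEXT)")
con.executemany("INSERT INTO employees VALUES (?, ?)",
                [(i, "n") for i in range(100000)])

# Set-oriented: one call returns the whole result.
rows = con.execute("SELECT * FROM employees").fetchall()

# Cursor-style: one fetchone() call per row, i.e. 100000 crossings
# of the client/server boundary in a real client/server setting.
cur = con.execute("SELECT * FROM employees")
n = 0
while cur.fetchone() is not None:
    n += 1

print(len(rows), n)  # 100000 100000
```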
5 - Tuning the API 98
Cursors are Death
SQL Server 2000 on Windows 2000
Response time is a few seconds with a SQL query and more than an hour iterating over a cursor.
[Chart: throughput (records/sec) – cursor vs. SQL]
99
Retrieve Needed Columns Only - data
Settings:

lineitem ( L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
           L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
           L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
           L_SHIPMODE, L_COMMENT );
create index i_nc_lineitem on lineitem (l_orderkey, l_partkey, l_suppkey, l_shipdate, l_commitdate);
– 600 000 rows; warm buffer.
– Lineitem records are ~ 10 bytes long
– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb
drives (10000RPM), Windows 2000.
100
Retrieve Needed Columns Only - queries
Queries:
– All

select * from lineitem;

– Covered subset

select l_orderkey, l_partkey, l_suppkey, l_shipdate, l_commitdate
from lineitem;
5 - Tuning the API 101
Retrieve Needed Columns Only
Avoid transferring unnecessary data.

Retrieving only the needed columns may enable use of a covering index.

In the experiment the subset contains ¼ of the attributes.
– Reducing the amount of data that crosses the application interface yields a significant performance improvement.
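A covering index can be observed directly in SQLite's query plan: when the query touches only indexed columns, the optimizer answers it from the index alone. A sketch with a hypothetical subset of the lineitem index above:

```python
import sqlite3

# Hypothetical schema: the index covers the three columns the query needs.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE lineitem (l_orderkey INTEGER, l_partkey INTEGER,
                           l_suppkey INTEGER, l_comment TEXT);
    CREATE INDEX i_nc_lineitem ON lineitem (l_orderkey, l_partkey, l_suppkey);
""")

# The plan mentions a COVERING INDEX: the table itself is never read.
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT l_orderkey, l_partkey, l_suppkey
    FROM lineitem WHERE l_orderkey > 0
""").fetchall()
print(plan)  # detail column mentions "USING COVERING INDEX i_nc_lineitem"
```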
[Chart: throughput (queries/msec) for "all" vs. "covered subset", with and without the index]
Experiment performed on Oracle8iEE on Windows 2000.
102
Bulk Loading Data
Settings:

lineitem ( L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
           L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
           L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT,
           L_SHIPMODE, L_COMMENT );
– Initially the table is empty; 600 000 rows to be inserted (138Mb)
– The table sits on one disk. No constraint or index is defined.
– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb
drives (10000RPM), Windows 2000.
103
Bulk Loading Queries
Oracle 8i:

sqlldr directpath=true control=load_lineitem.ctl data=E:\Data\lineitem.tbl

load data
infile "lineitem.tbl"
into table LINEITEM append
fields terminated by '|'
(
  L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY,
  L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS,
  L_SHIPDATE DATE "YYYY-MM-DD", L_COMMITDATE DATE "YYYY-MM-DD",
  L_RECEIPTDATE DATE "YYYY-MM-DD",
  L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT
)
5 - Tuning the API 104
Direct Path
Direct path loading bypasses the query engine and the storage manager. It is orders of magnitude faster than conventional bulk load (commit every 100 records) and than plain inserts (commit for each record).
[Chart: throughput (rec/sec) – conventional load, direct path, insert]
Experiment performed on Oracle8iEE on Windows 2000.
5 - Tuning the API 105
Batch Size
Throughput increases steadily as the batch size grows to 100000 records; beyond that, throughput remains constant.

There is a trade-off between performance and the amount of data that has to be reloaded in case of a problem.
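Batched loading with one commit per batch can be sketched in Python with sqlite3 (table name and row contents made up): larger batches mean fewer commits, but also more work to redo if the load fails mid-batch.

```python
import sqlite3

def load(rows, batch_size):
    """Insert rows in batches, committing once per batch."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE lineitem (k INTEGER, v TEXT)")
    for start in range(0, len(rows), batch_size):
        con.executemany("INSERT INTO lineitem VALUES (?, ?)",
                        rows[start:start + batch_size])
        con.commit()  # one commit per batch: the performance/redo trade-off
    return con

# Hypothetical data: 50000 rows loaded in batches of 10000.
rows = [(i, "x") for i in range(50000)]
con = load(rows, batch_size=10000)
print(con.execute("SELECT COUNT(*) FROM lineitem").fetchone())  # (50000,)
```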
[Chart: throughput (records/sec) as a function of batch size, 0 to 600000 records]
Experiment performed on SQL Server 2000 on Windows 2000.