Transcript of Advanced Algorithms Analysis and Design, Lecture 10: Hashing, Heaps and Binomial Trees.

Page 1: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Advanced Algorithms Analysis and Design

Lecture 10

Hashing, Heaps and Binomial Trees

Page 2: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

HASHING

Page 3: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Hash Tables

• All search structures so far
  • Relied on a comparison operation
  • Performance O(n) or O(log n)

• Assume I have a function
  • f(key) → integer, i.e. one that maps a key to an integer

• What performance might I expect now?

Page 4: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Hash Tables

• Keys are integers
• Need a hash function h(key) → integer, i.e. one that maps a key to an integer
• Applying this function to the key produces an address
• If h maps each key to a unique integer in the range 0 .. m-1, then search is O(1)
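To make the O(1) claim concrete, here is a minimal C sketch (not from the slides) in which h(key) = key % m happens to map each key to a distinct address, so insertion and search are a single array access:

    #include <stdio.h>

    #define M 16                                /* table size: an assumption for the demo */

    typedef struct { int key; int used; } Slot;

    static int h(int key) { return key % M; }   /* hash: key -> address in 0 .. M-1 */

    int main(void) {
        Slot table[M] = { {0, 0} };

        /* insertion: one array access, since h is collision-free for these keys */
        int keys[] = { 3, 18, 37 };
        for (int i = 0; i < 3; i++) {
            int a = h(keys[i]);
            table[a].key  = keys[i];
            table[a].used = 1;
        }

        /* search: also a single array access, hence O(1) */
        int a = h(18);
        printf("18 %s at address %d\n",
               (table[a].used && table[a].key == 18) ? "found" : "not found", a);
        return 0;
    }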

Page 5: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Hash Tables - Hash functions

• Form of the hash function
• Example - using an n-character key:

      int hash( char *s, int n ) {
          int sum = 0;
          while( n-- )
              sum = sum + *s++;
          return sum % 256;
      }

  returns a value in 0 .. 255

• The xor function is also commonly used: sum = sum ^ *s++;

• Example
  • hash( "AB", 2 ) and hash( "BA", 2 ) return the same value! This is called a collision
• A variety of techniques are used for resolving collisions
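A quick check of this collision, reusing the additive hash from the slide above (the const qualifier and the main function are additions for the demo):

    #include <stdio.h>

    /* additive hash from the slide: sums the first n characters, mod 256 */
    int hash(const char *s, int n) {
        int sum = 0;
        while (n--)
            sum = sum + *s++;
        return sum % 256;
    }

    int main(void) {
        /* 'A' + 'B' == 'B' + 'A', so both strings hash to the same address */
        printf("hash(\"AB\", 2) = %d\n", hash("AB", 2));
        printf("hash(\"BA\", 2) = %d\n", hash("BA", 2));
        return 0;
    }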

Page 6: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Hashing: Collision Resolution Schemes

• Collision Resolution Techniques

• Separate Chaining

• Separate Chaining with String Keys

• Separate Chaining versus Open-addressing

• Implementation of Separate Chaining

• Introduction to Collision Resolution using Open Addressing

• Linear Probing

Page 7: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Collision Resolution Techniques

• There are two broad ways of collision resolution:

1. Separate Chaining: an array-of-linked-lists implementation.

2. Open Addressing: an array-based implementation.
   (i) Linear probing (linear search)
   (ii) Quadratic probing (nonlinear search)
   (iii) Double hashing (uses two hash functions)

Page 8: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Separate Chaining

• The hash table is implemented as an array of linked lists.

• Inserting an item, r, that hashes at index i is simply insertion into the linked list at position i.

• Keys that hash to the same index (collisions) are chained in the same linked list.

Page 9: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Separate Chaining (cont'd)

• Retrieval of an item, r, with hash address, i, is simply retrieval from the linked list at position i.
• Deletion of an item, r, with hash address, i, is simply deleting r from the linked list at position i.

• Example: Load the keys 23, 13, 21, 14, 7, 8, and 15, in this order, in a hash table of size 7 using separate chaining with the hash function h(key) = key % 7:

  h(23) = 23 % 7 = 2
  h(13) = 13 % 7 = 6
  h(21) = 21 % 7 = 0
  h(14) = 14 % 7 = 0   collision
  h(7)  =  7 % 7 = 0   collision
  h(8)  =  8 % 7 = 1
  h(15) = 15 % 7 = 1   collision
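For illustration only, a small C sketch of separate chaining for this example (the Node type, insertion at the head of each list, and the fixed table size are choices of the sketch; the slide itself gives only the hash values):

    #include <stdio.h>
    #include <stdlib.h>

    #define TABLE_SIZE 7

    typedef struct Node {
        int key;
        struct Node *next;
    } Node;

    Node *table[TABLE_SIZE];                    /* array of linked lists, initially all NULL */

    int h(int key) { return key % TABLE_SIZE; }

    /* insert at the head of the list at index h(key); colliding keys simply chain */
    void insert(int key) {
        int i = h(key);
        Node *n = malloc(sizeof(Node));
        n->key  = key;
        n->next = table[i];
        table[i] = n;
    }

    int main(void) {
        int keys[] = { 23, 13, 21, 14, 7, 8, 15 };
        for (int i = 0; i < 7; i++)
            insert(keys[i]);
        /* print each chain: 21, 14 and 7 share the list at index 0; 8 and 15 share index 1 */
        for (int i = 0; i < TABLE_SIZE; i++) {
            printf("%d:", i);
            for (Node *p = table[i]; p != NULL; p = p->next)
                printf(" %d", p->key);
            printf("\n");
        }
        return 0;
    }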

Page 10: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Separate Chaining with String Keys

• Recall that search keys can be numbers, strings or some other object.
• A hash function for a string s = c0 c1 c2 ... cn-1 can be defined as:

      hash = (c0 + c1 + c2 + ... + cn-1) % tableSize

  This can be implemented as:

      public static int hash(String key, int tableSize) {
          int hashValue = 0;
          for (int i = 0; i < key.length(); i++) {
              hashValue += key.charAt(i);
          }
          return hashValue % tableSize;
      }

• Example: The following class describes commodity items:

      class CommodityItem {
          String name;      // commodity name
          int quantity;     // commodity quantity needed
          double price;     // commodity price
      }

Page 11: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Separate Chaining with String Keys (cont'd)

• Use the hash function hash to load the following commodity items into a hash table of size 13 using separate chaining:

  onion     1  10.0
  tomato    1   8.50
  cabbage   3   3.50
  carrot    1   5.50
  okra      1   6.50
  mellon    2  10.0
  potato    2   7.50
  Banana    3   4.00
  olive     2  15.0
  salt      2   2.50
  cucumber  3   4.50
  mushroom  3   5.50
  orange    2   3.00

• Solution:

  hash(onion)  = (111 + 110 + 105 + 111 + 110) % 13 = 547 % 13 = 1
  hash(salt)   = (115 + 97 + 108 + 116) % 13 = 436 % 13 = 7
  hash(orange) = (111 + 114 + 97 + 110 + 103 + 101) % 13 = 636 % 13 = 12

Page 12: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Separate Chaining with String Keys (cont’d)

Figure: the hash table of size 13 with each item chained at its hash address:

  0  -> okra, potato
  1  -> onion, carrot
  4  -> cabbage
  6  -> mushroom
  7  -> salt
  9  -> cucumber
  10 -> tomato, mellon, olive
  11 -> Banana
  12 -> orange
  (indexes 2, 3, 5 and 8 are empty)

  Item       Qty  Price  h(key)
  onion       1   10.0      1
  tomato      1    8.50    10
  cabbage     3    3.50     4
  carrot      1    5.50     1
  okra        1    6.50     0
  mellon      2   10.0     10
  potato      2    7.50     0
  Banana      3    4.00    11
  olive       2   15.0     10
  salt        2    2.50     7
  cucumber    3    4.50     9
  mushroom    3    5.50     6
  orange      2    3.00    12

Page 13: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Separate Chaining versus Open-addressing

  Organization   Advantages                        Disadvantages
  Chaining       Unlimited number of elements;     Overhead of multiple linked lists
                 unlimited number of collisions

Page 14: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Introduction to Open Addressing

• All items are stored in the hash table itself.
• In addition to the cell data (if any), each cell keeps one of three states: EMPTY, OCCUPIED, DELETED.
• While inserting, if a collision occurs, alternative cells are tried until an empty cell is found.

• Deletion: (lazy deletion): When a key is deleted the slot is marked as DELETED rather than EMPTY otherwise subsequent searches that hash at the deleted cell will be unsuccessful.

• Probe sequence: A probe sequence is the sequence of array indexes that is followed in searching for an empty cell during an insertion, or in searching for a key during find or delete operations.

• The most common probe sequences are of the form:

  hi(key) = [h(key) + c(i)] % n,   for i = 0, 1, ..., n-1

  where h is a hash function and n is the size of the hash table.

• The function c(i) is required to have the following two properties:
  Property 1: c(0) = 0
  Property 2: The set of values {c(0) % n, c(1) % n, c(2) % n, ..., c(n-1) % n} must be a permutation of {0, 1, 2, ..., n-1}, that is, it must contain every integer between 0 and n-1 inclusive.
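As an illustration of this scheme, a generic open-addressing insert in C might look like the sketch below; the Cell type, the table size, and the choice c(i) = i (linear probing) are assumptions, and for double hashing c would also depend on the key:

    #include <stdio.h>

    #define N 13

    typedef enum { EMPTY, OCCUPIED, DELETED } State;
    typedef struct { State state; int key; } Cell;

    static int h(int key) { return key % N; }

    /* c(i) for linear probing; replace this definition for quadratic or double hashing */
    static int c(int i) { return i; }

    /* follows the probe sequence hi(key) = (h(key) + c(i)) % N until a non-occupied cell is found */
    int insert(Cell table[], int key) {
        for (int i = 0; i < N; i++) {
            int idx = (h(key) + c(i)) % N;
            if (table[idx].state != OCCUPIED) {    /* EMPTY or DELETED cells can be (re)used */
                table[idx].key   = key;
                table[idx].state = OCCUPIED;
                return idx;
            }
        }
        return -1;                                 /* table full */
    }

    int main(void) {
        Cell table[N] = { {EMPTY, 0} };
        printf("18 -> slot %d\n", insert(table, 18));   /* 18 % 13 = 5              */
        printf("31 -> slot %d\n", insert(table, 31));   /* 31 % 13 = 5, probes to 6 */
        return 0;
    }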

Page 15: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Introduction to Open Addressing (cont'd)

• The function c(i) is used to resolve collisions.

• To insert item r, we examine array location h0(r) = h(r). If there is a collision, array locations h1(r), h2(r), ..., hn-1(r) are examined until an empty slot is found.

• Similarly, to find item r, we examine the same sequence of locations in the same order.

• Note: For a given hash function h(key), the only difference in the open addressing collision resolution techniques (linear probing, quadratic probing and double hashing) is in the definition of the function c(i).

• Common definitions of c(i) are:

  Collision resolution technique    c(i)
  Linear probing                    i
  Quadratic probing                 ±i²
  Double hashing                    i*hp(key)

  where hp(key) is another hash function.

Page 16: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Introduction to Open Addressing (cont'd)

• Advantages of Open Addressing:
  • All items are stored in the hash table itself; there is no need for another data structure (i.e., no linked lists).
  • Open addressing is more efficient storage-wise.

• Disadvantages of Open Addressing:
  • The keys of the objects to be hashed must be distinct.
  • Performance depends on choosing a proper table size.
  • Requires a three-state (Occupied, Empty, or Deleted) flag in each cell.

Page 17: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Open Addressing Facts

• In general, choosing a good table size is the most important factor.
• With any open addressing method of collision resolution, as the table fills, there can be a severe degradation in table performance.
• Hashing has two parameters that affect its performance: initial capacity and load factor.
• The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created.
• The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. That is:
• When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the capacity is roughly doubled by calling the rehash method.
• As a general rule, the default load factor (0.75) offers a good tradeoff between time and space costs.
• The load factor of the table is m/N, where m is the number of distinct indexes used in the table (equivalently, in open addressing, the number of records currently in the table) and N is the size of the array used to implement it.
• Load factors between 0.6 and 0.7 are common.
• Load factors > 0.7 are undesirable.
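A small sketch of the resize rule just described; the 0.75 threshold and the doubling policy come from the text above, while the struct and function names are assumptions:

    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        int entries;       /* m: number of records currently stored */
        int capacity;      /* N: size of the underlying array       */
    } HashTableStats;

    /* load factor = m / N */
    double load_factor(const HashTableStats *t) {
        return (double) t->entries / (double) t->capacity;
    }

    /* capacity should be (roughly) doubled once entries exceed limit * capacity */
    bool needs_rehash(const HashTableStats *t, double limit) {
        return t->entries > limit * t->capacity;
    }

    int main(void) {
        HashTableStats t = { 10, 13 };
        printf("load factor = %.2f, rehash needed: %s\n",
               load_factor(&t), needs_rehash(&t, 0.75) ? "yes" : "no");
        return 0;
    }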

Page 18: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Open Addressing: Linear Probing (cont'd)

Example: Perform the operations given below, in the given order, on an initially empty hash table of size 13 using linear probing with c(i) = i and the hash function h(key) = key % 13:

insert(18), insert(26), insert(35), insert(9), find(15), find(48), delete(35), delete(40), find(9), insert(64), insert(47), find(35)

• The required probe sequences are given by:

  hi(key) = (h(key) + i) % 13,   i = 0, 1, 2, ..., 12
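A minimal C sketch of linear probing with lazy deletion, running exactly the sequence of operations listed above (the Cell type and the function names are choices of the sketch); its final contents match the table on the next slide:

    #include <stdio.h>

    #define N 13

    typedef enum { EMPTY, OCCUPIED, DELETED } State;
    typedef struct { State state; int key; } Cell;

    static int h(int key) { return key % N; }

    /* insert with the probe sequence (h(key) + i) % 13; EMPTY or DELETED slots are reused */
    void insert(Cell t[], int key) {
        for (int i = 0; i < N; i++) {
            int idx = (h(key) + i) % N;
            if (t[idx].state != OCCUPIED) {
                t[idx].key   = key;
                t[idx].state = OCCUPIED;
                return;
            }
        }
    }

    /* search stops at an EMPTY slot but keeps probing past DELETED ones */
    int find(Cell t[], int key) {
        for (int i = 0; i < N; i++) {
            int idx = (h(key) + i) % N;
            if (t[idx].state == EMPTY) return -1;
            if (t[idx].state == OCCUPIED && t[idx].key == key) return idx;
        }
        return -1;
    }

    /* lazy deletion: mark the slot DELETED rather than EMPTY */
    void delete(Cell t[], int key) {
        int idx = find(t, key);
        if (idx >= 0) t[idx].state = DELETED;
    }

    int main(void) {
        Cell t[N] = { {EMPTY, 0} };
        insert(t, 18); insert(t, 26); insert(t, 35); insert(t, 9);
        printf("find(15) = %d, find(48) = %d\n", find(t, 15), find(t, 48));  /* both -1     */
        delete(t, 35); delete(t, 40);
        printf("find(9)  = %d\n", find(t, 9));                               /* index 10    */
        insert(t, 64); insert(t, 47);
        printf("find(35) = %d\n", find(t, 35));                              /* -1: deleted */
        for (int i = 0; i < N; i++)            /* occupied: 0:26, 5:18, 8:47, 10:9, 12:64 */
            if (t[i].state == OCCUPIED) printf("%d: %d\n", i, t[i].key);
        return 0;
    }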

Page 19: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Linear Probing (cont'd)

The state of the table after this sequence of operations (O = occupied, E = empty, D = deleted):

  Index  Status  Value
  0      O       26
  1      E
  2      E
  3      E
  4      E
  5      O       18
  6      E
  7      E
  8      O       47
  9      D       35
  10     O       9
  11     E
  12     O       64

Page 20: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.


Disadvantage of Linear Probing: Primary Clustering

• Linear probing is subject to a primary clustering phenomenon.
• Elements tend to cluster around the table locations that they originally hash to.
• Primary clusters can combine to form larger clusters. This leads to long search sequences and hence deterioration in hash table efficiency.

Example of a primary cluster: Insert the keys 18, 41, 22, 44, 59, 32, 31, 73, in this order, into an originally empty hash table of size 13, using the hash function h(key) = key % 13 and c(i) = i:

  h(18) = 5
  h(41) = 2
  h(22) = 9
  h(44) = 5 + 1
  h(59) = 7
  h(32) = 6 + 1 + 1
  h(31) = 5 + 1 + 1 + 1 + 1 + 1
  h(73) = 8 + 1 + 1 + 1

Page 21: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

HEAPS

Page 22: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

• A heap is a special kind of rooted tree that can be implemented efficiently in an array without any explicit pointers.
• It can be used for heapsort and for the efficient representation of certain dynamic priority lists, such as the event list in a simulation or the list of tasks to be scheduled by an operating system.
• A heap is an essentially complete binary tree.

Page 23: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

• The figure illustrates an essentially complete binary tree containing 10 nodes. The five internal nodes occupy level 3 (the root), level 2, and the left side of level 1; the five leaves fill the right side of level 1 and then continue at the left of level 0.
• If an essentially complete binary tree has height k, then there is one node (the root) on level k, there are two nodes on level k-1, and so on; there are 2^(k-1) nodes on level 1, and at least 1 and not more than 2^k on level 0.
• A heap is an essentially complete binary tree, each of whose nodes includes an element of information called the value of the node, and which has the property that the value of each internal node is greater than or equal to the values of its children.
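As a small illustration (not part of the slides), a C check of this heap property for a 1-indexed array in which the children of T[i] are T[2i] and T[2i+1]:

    #include <stdio.h>
    #include <stdbool.h>

    /* checks that every internal node of T[1..n] is >= its children */
    bool is_heap(const int T[], int n) {
        for (int i = 1; 2 * i <= n; i++) {
            if (T[i] < T[2 * i]) return false;
            if (2 * i + 1 <= n && T[i] < T[2 * i + 1]) return false;
        }
        return true;
    }

    int main(void) {
        /* the 10-node heap used on the following slides, stored from index 1 */
        int T[] = { 0, 10, 7, 9, 4, 7, 5, 2, 2, 1, 6 };
        printf("is_heap: %s\n", is_heap(T, 10) ? "yes" : "no");
        return 0;
    }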

Page 24: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

An essentially complete binary tree

Figure: an essentially complete binary tree whose nodes are stored in the array positions T[1] .. T[10]; T[1] is the root, and the children of T[i] are T[2i] and T[2i+1].

Page 25: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

A heap

The figure shows an example of a heap with 10 nodes; level by level, the values are:

        10
      7    9
    4  7  5  2
   2  1  6

Page 26: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

• Now we have marked each node with its value.
• This same heap can be represented by the following array:

  10  7  9  4  7  5  2  2  1  6

• The crucial characteristic of this data structure is that the heap property can be restored efficiently if the value of a node is modified.
• If the value of a node increases to the extent that it becomes greater than the value of its parent, it suffices to exchange these two values, and then to continue the same process upwards in the tree if necessary until the heap property is restored.
• The modified value is percolated up to its new position in the heap.
• This operation is often called sifting up.
• If the value 1 in the figure is modified so that it becomes 8, we can restore the heap property by exchanging the 8 with its parent 4, and then exchanging it again with its new parent 7.

Page 27: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

The heap, after percolating 8 to its place:

        10
      8    9
    7  7  5  2
   2  4  6

Page 28: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

• If, on the contrary, the value of a node is decreased so that it becomes less than the value of at least one of its children, it suffices to exchange the modified value with the larger of the values in the children, and then to continue this process downwards in the tree if necessary until the heap property is restored.
• The modified value has been sifted down to its new position.

The heap, after sifting 3 (originally 10) down to its place:

        9
      8    5
    7  7  3  2
   2  4  6

Page 29: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

• The following procedures describe more formally the basic processes for manipulating a heap.

Procedure alter-heap(T[1..n], i, v)
  {T[1..n] is a heap. The value of T[i] is set to v and the heap property is
   re-established. Suppose that 1 ≤ i ≤ n.}
  x ← T[i]
  T[i] ← v
  if v < x then sift-down(T, i)
           else percolate(T, i)

Page 30: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Procedure sift-down(T[1..n], i)
  {This procedure sifts node i down so as to re-establish the heap property in
   T[1..n]. Suppose that T would be a heap if T[i] were sufficiently large and
   that 1 ≤ i ≤ n.}
  k ← i
  repeat
    j ← k
    {find the larger child of node j}
    if 2j ≤ n and T[2j] > T[k] then k ← 2j
    if 2j < n and T[2j+1] > T[k] then k ← 2j+1
    exchange T[j] and T[k]
    {if j = k, then the node has arrived at its final position}
  until j = k

Page 31: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Procedure percolate(T[1..n], i)
  {This procedure percolates node i up so as to re-establish the heap property
   in T[1..n]. Suppose that T would be a heap if T[i] were sufficiently small
   and that 1 ≤ i ≤ n. The parameter n is not used here.}
  k ← i
  repeat
    j ← k
    if j > 1 and T[j ÷ 2] < T[k] then k ← j ÷ 2
    exchange T[j] and T[k]
    {if j = k, then the node has arrived at its final position}
  until j = k

Page 32: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

• A heap is an ideal data structure for finding the largest element of a set, removing it, adding a new node, or modifying a node. These are exactly the operations we need to implement dynamic priority lists efficiently: the value of a node gives the priority of the corresponding event, the event with highest priority is always found at the root of the heap, and the priority of an event can be changed dynamically at any time. This is particularly useful in computer simulations and in the design of schedulers for an operating system.

• Some typical procedures are illustrated below.

Page 33: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

function find-max(T[1..n])
  {Returns the largest element of the heap T[1..n]}
  return T[1]

Procedure delete-max(T[1..n])
  {Removes the largest element of the heap T[1..n] and restores the heap
   property in T[1..n-1]}
  T[1] ← T[n]
  sift-down(T[1..n-1], 1)

Page 34: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Procedure insert-node(T[1..n], v)
  {Adds an element whose value is v to the heap T[1..n] and restores the heap
   property in T[1..n+1]}
  T[n+1] ← v
  percolate(T[1..n+1], n+1)
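A C rendering of the procedures above, assuming a max-heap of ints stored from index 1 (the array layout, the fixed capacity, and the function names are choices of this sketch):

    #include <stdio.h>

    /* 1-indexed heap: the children of node i are 2i and 2i+1, its parent is i/2 */

    static void swap(int T[], int a, int b) { int t = T[a]; T[a] = T[b]; T[b] = t; }

    /* sift-down: node i moves down while it is smaller than its larger child */
    void sift_down(int T[], int n, int i) {
        int k = i, j;
        do {
            j = k;
            if (2 * j <= n && T[2 * j] > T[k]) k = 2 * j;
            if (2 * j + 1 <= n && T[2 * j + 1] > T[k]) k = 2 * j + 1;
            swap(T, j, k);
        } while (j != k);
    }

    /* percolate (sift-up): node i moves up while it is larger than its parent */
    void percolate(int T[], int i) {
        int k = i, j;
        do {
            j = k;
            if (j > 1 && T[j / 2] < T[k]) k = j / 2;
            swap(T, j, k);
        } while (j != k);
    }

    int find_max(const int T[]) { return T[1]; }

    /* delete-max: move the last element to the root and sift it down; *n shrinks by one */
    void delete_max(int T[], int *n) {
        T[1] = T[*n];
        (*n)--;
        sift_down(T, *n, 1);
    }

    /* insert-node: place v after the last element and percolate it up; *n grows by one */
    void insert_node(int T[], int *n, int v) {
        (*n)++;
        T[*n] = v;
        percolate(T, *n);
    }

    int main(void) {
        int T[32] = { 0, 10, 7, 9, 4, 7, 5, 2, 2, 1, 6 };   /* the heap from the earlier slide */
        int n = 10;
        insert_node(T, &n, 8);
        printf("max = %d\n", find_max(T));                  /* 10 */
        delete_max(T, &n);
        printf("new max = %d\n", find_max(T));              /* 9 */
        return 0;
    }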

Page 35: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

• There exists a cleverer algorithm for making a heap. Suppose, for example, that our starting point is the following array, represented by the tree in the figure:

  1  6  9  2  7  5  2  7  4  10

The starting situation:

        1
      6    9
    2  7  5  2
   7  4  10

Page 36: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

• We first make each of the subtrees whose roots are at level 1 into a heap; this is done by sifting down these roots, as illustrated in the figure.

The level 1 subtrees are made into heaps:

        1
      6    9
    7  10  5  2
   2  4  7

Page 37: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Heaps

This figure shows the process for the left subtree. The other subtree at level 2 is already a heap. This results in an essentially complete binary tree corresponding to the array:

  1  10  9  7  7  5  2  2  4  6

One level 2 subtree is made into a heap (the other already is a heap):

       6                10               10
     7   10     →     7    6     →     7    7
    2 4   7          2 4    7         2 4    6

Page 38: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

It only remains to sift down its root to obtain the desired heap. This process thus goes as follows:

  10  1  9  7  7  5  2  2  4  6
  10  7  9  1  7  5  2  2  4  6
  10  7  9  4  7  5  2  2  1  6

The resulting heap:

        10
      7    9
    4  7  5  2
   2  1  6
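A C sketch of this bottom-up construction: every internal node, from the last one up to the root, is sifted down (sift_down is the same routine as in the earlier sketch):

    #include <stdio.h>

    static void swap(int T[], int a, int b) { int t = T[a]; T[a] = T[b]; T[b] = t; }

    void sift_down(int T[], int n, int i) {
        int k = i, j;
        do {
            j = k;
            if (2 * j <= n && T[2 * j] > T[k]) k = 2 * j;
            if (2 * j + 1 <= n && T[2 * j + 1] > T[k]) k = 2 * j + 1;
            swap(T, j, k);
        } while (j != k);
    }

    /* make-heap: the internal nodes are 1 .. n/2; sift each down, lowest level first */
    void make_heap(int T[], int n) {
        for (int i = n / 2; i >= 1; i--)
            sift_down(T, n, i);
    }

    int main(void) {
        int T[] = { 0, 1, 6, 9, 2, 7, 5, 2, 7, 4, 10 };     /* the starting array above */
        make_heap(T, 10);
        for (int i = 1; i <= 10; i++) printf("%d ", T[i]);  /* prints 10 7 9 4 7 5 2 2 1 6 */
        printf("\n");
        return 0;
    }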

Page 39: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Construct the heap using the array A = (16, 4, 10, 14, 7, 9, 3, 2, 8, 1)

The initial configuration:

        16
      4    10
   14  7  9  3
  2  8  1

Maintaining the heap property (sifting the 4 down) gives:

        16
     14    10
    8  7  9  3
   2  4  1

Page 40: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

How to Sort Heap

A = 16 14 10 8 7 9 3 2 4 1

  make-heap(T)
  for i ← n downto 2 do
    exchange T[1] and T[i]
    sift-down(T[1..i-1], 1)

The heap drawn as a tree:

        16
     14    10
    8  7  9  3
   2  4  1

Page 41: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

i = 10, exchange T[1] & T[10] and sift-down(T[1..9], 1)

  14   8  10   4   7   9   3   2   1  16

i = 9, exchange T[1] & T[9] and sift-down(T[1..8], 1)

  10   8   9   4   7   1   3   2  14  16

i = 8, exchange T[1] & T[8] and sift-down(T[1..7], 1)

   9   8   3   4   7   1   2  10  14  16

i = 7, exchange T[1] & T[7] and sift-down(T[1..6], 1)

   8   7   3   4   2   1   9  10  14  16

i = 6, exchange T[1] & T[6] and sift-down(T[1..5], 1)

   7   4   3   1   2   8   9  10  14  16

i = 5, exchange T[1] & T[5] and sift-down(T[1..4], 1)

Page 42: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

i = 5, exchange T[1] & T[5] and sift-down(T[1..4], 1)

   4   2   3   1   7   8   9  10  14  16

i = 4, exchange T[1] & T[4] and sift-down(T[1..3], 1)

   3   2   1   4   7   8   9  10  14  16

i = 3, exchange T[1] & T[3] and sift-down(T[1..2], 1)

   2   1   3   4   7   8   9  10  14  16

i = 2, exchange T[1] & T[2] and sift-down(T[1..1], 1)

   1   2   3   4   7   8   9  10  14  16

End of Sorting
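Putting the pieces together, a C sketch of heapsort as traced above: make-heap followed by repeated exchange and sift-down (the 1-indexed layout and the names are choices of the sketch):

    #include <stdio.h>

    static void swap(int T[], int a, int b) { int t = T[a]; T[a] = T[b]; T[b] = t; }

    void sift_down(int T[], int n, int i) {
        int k = i, j;
        do {
            j = k;
            if (2 * j <= n && T[2 * j] > T[k]) k = 2 * j;
            if (2 * j + 1 <= n && T[2 * j + 1] > T[k]) k = 2 * j + 1;
            swap(T, j, k);
        } while (j != k);
    }

    void make_heap(int T[], int n) {
        for (int i = n / 2; i >= 1; i--)
            sift_down(T, n, i);
    }

    /* heapsort: build a max-heap, then repeatedly move the maximum to the end */
    void heap_sort(int T[], int n) {
        make_heap(T, n);
        for (int i = n; i >= 2; i--) {
            swap(T, 1, i);                 /* exchange T[1] and T[i]        */
            sift_down(T, i - 1, 1);        /* restore the heap in T[1..i-1] */
        }
    }

    int main(void) {
        int T[] = { 0, 16, 4, 10, 14, 7, 9, 3, 2, 8, 1 };   /* 1-indexed, as in the slides */
        heap_sort(T, 10);
        for (int i = 1; i <= 10; i++) printf("%d ", T[i]);  /* 1 2 3 4 7 8 9 10 14 16 */
        printf("\n");
        return 0;
    }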

Page 43: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Sorted heap (the sorted array drawn as an essentially complete binary tree):

        1
      2    3
    4  7  8  9
   10 14 16

Page 44: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Binomial trees

Figure: the binomial trees B0 to B4.

Page 45: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Binomial trees

Page 46: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: a binomial heap containing 11 items. Each parent node is greater than its child nodes: a max binomial heap.

Page 47: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: linking two B2 trees to make a B3; the root of one B2 becomes a child of the root of the other.
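A minimal C sketch of this link step, using the common left-child/right-sibling node layout (the struct fields are assumptions of the sketch); it is written for a min-oriented heap, so the tree with the larger root becomes a child of the tree with the smaller root:

    #include <stdio.h>

    /* left-child, right-sibling representation of a binomial tree node */
    typedef struct BinomialNode {
        int key;
        int degree;                      /* k for a Bk tree rooted here           */
        struct BinomialNode *parent;
        struct BinomialNode *child;      /* leftmost child                        */
        struct BinomialNode *sibling;    /* next root, or next child to the right */
    } BinomialNode;

    /* Link two binomial trees of equal degree k into one tree of degree k+1:
       the tree rooted at 'larger' becomes the leftmost child of 'smaller'. */
    void binomial_link(BinomialNode *larger, BinomialNode *smaller) {
        larger->parent  = smaller;
        larger->sibling = smaller->child;
        smaller->child  = larger;
        smaller->degree++;
    }

    int main(void) {
        /* linking two B0 trees produces a B1 */
        BinomialNode a = { 5, 0, NULL, NULL, NULL };
        BinomialNode b = { 9, 0, NULL, NULL, NULL };
        binomial_link(&b, &a);           /* 9 > 5, so b becomes a child of a */
        printf("root %d now has degree %d and child %d\n", a.key, a.degree, a.child->key);
        return 0;
    }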

Page 48: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: merging two binomial heaps; trees of equal degree from the two heaps are linked pairwise.

Page 49: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

BINOMIAL-HEAP MERGE

Figure: two binomial heaps, head[H1] and head[H2], before their root lists are merged.

Note: check the heap type (max/min) before starting.

Page 50: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: the binomial heap head[H] obtained after merging the root lists of H1 and H2.

Page 51: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: head[H] after roots of equal degree have been linked, completing the union.

Page 52: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: deleting a node from a binomial heap.
(a) head[H], with the node with value 1 to be deleted.
(b) The heap is separated into two heaps.

Page 53: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: the node with value 1 has been deleted, leaving two heaps H and H'; merging heaps H and H' then yields a single binomial heap.

Page 54: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: the value of node y is decreased from 26 to 7; the heap is shown before and after the decreased key is moved up to its proper place.

Page 55: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: the binomial heap head[H] after the decrease-key operation is complete.

Page 56: Advanced Algorithms Analysis and Design Lecture 10 Hashing,Heaps and Binomial trees.

Figure: another example of a binomial heap, head[H].