WattDB – Energy-Proportionality on a Cluster Scale

Transcript of a talk by Daniel Schall, Volker Höfner, and Prof. Dr. Theo Härder, TU Kaiserslautern.

Page 1:

WattDB – Energy-Proportionality on a Cluster Scale

Daniel Schall, Volker Höfner, Prof. Dr. Theo Härder

TU Kaiserslautern

Page 2:

Outline

- Energy efficiency in database systems
- Multi-Core vs. Cluster
- WattDB: recent, current, and future work

Page 3:

Motivation

- More and more data
- Bigger servers
- In-memory technology
- Electricity cost

Page 4:

Power Breakdown

Load between 0 – 50 %, energy consumption 50 – 90 %!

"Analyzing the Energy Efficiency of a Database Server", D. Tsirogiannis, S. Harizopoulos, and M. A. Shah, SIGMOD 2010

"Distributed Computing at Multi-dimensional Scale", Alfred Z. Spector, keynote at Middleware 2008

Page 5:

Growth of Main Memory makes it worse

[Figure: power (Watt) vs. system utilization (0 – 100 %), contrasting power@utilization with ideal energy-proportional behavior]

In-memory data management assumes continuous peak loads! The energy consumption of memory grows linearly with its size and dominates all other components across all levels of system utilization.

Page 6:

Mission: Energy Efficiency! ("Green IT")

Energy cost > HW and SW cost

Energy Efficiency = Work / Energy Consumption
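The Work/Energy ratio from the slide can be made concrete with a small sketch. Only the formula itself comes from the slide; the power curve (a high idle floor plus a load-dependent part) and the transaction rate below are hypothetical numbers chosen for illustration.

```python
# Illustrative sketch: energy efficiency as work per unit of energy.
# Power and throughput figures are hypothetical, not measurements.

def energy_efficiency(work_done: float, energy_joules: float) -> float:
    """Energy efficiency = work / energy consumption (e.g., transactions per joule)."""
    return work_done / energy_joules

SECONDS = 3600            # one hour of operation
PEAK_TX_PER_S = 1000.0    # throughput at 100 % utilization

for utilization in (0.2, 0.5, 1.0):
    power = 200 + 100 * utilization          # non-proportional: 200 W idle floor
    work = PEAK_TX_PER_S * utilization * SECONDS
    energy = power * SECONDS                 # joules
    print(f"{utilization:>4.0%}: {energy_efficiency(work, energy):.3f} tx/J")
```

Because of the idle floor, the server does far less work per joule at 20 % utilization than at 100 %, which is exactly why low average utilization hurts.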

Page 7:

Average Server Utilization

- Google servers: load at about 30 %
- SPH AG: load between 5 and 30 %

Page 8:

Energy Efficiency - Related Work

Software:
- Delaying queries
- Optimizing external storage access patterns
- Forcing sleep states
- "Intelligent" data placement

Hardware:
- Sleep states
- Optimizing energy consumption when idle
- Selecting energy-efficient hardware
- Dynamic Voltage Scaling

Narrow approaches, only small improvements

Page 9:

Goal: Energy-Proportionality

[Figure: power (Watt) vs. system utilization (0 – 100 %), contrasting power@utilization with ideal energy-proportional behavior]

1) Reduce idle power consumption
2) Eliminate disproportional energy consumption
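The gap the figure shows between the two curves can be sketched numerically. The idle and peak wattages below are assumed for illustration; the point is only the shape: step 1 attacks the idle floor, step 2 attacks whatever remains above a straight line through the origin.

```python
# Hypothetical sketch: a typical non-proportional power curve vs. the
# energy-proportional ideal (power scaling linearly from zero with load).

IDLE_W, PEAK_W = 150.0, 300.0   # assumed figures, not from the slides

def actual_power(util: float) -> float:
    """High idle floor plus a load-dependent part."""
    return IDLE_W + (PEAK_W - IDLE_W) * util

def proportional_power(util: float) -> float:
    """Ideal behavior: zero watts at zero load, peak watts at full load."""
    return PEAK_W * util

for util in (0.1, 0.3, 0.5, 1.0):
    overhead = actual_power(util) / proportional_power(util)
    print(f"util {util:>4.0%}: {overhead:.1f}x the energy-proportional ideal")
```

At 10 % utilization the assumed server draws several times the proportional ideal; at 100 % the two curves meet, so the penalty is paid precisely in the low-load region where real systems spend most of their time.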

Page 10:

From Multi-Core to Multi-Node

[Diagram: a single multi-core server (cores with L1/L2/L3 caches and main memory) contrasted with a cluster of nodes, each with its own CPU, cache, and main memory, connected by a 1 Gb Ethernet switch]

[Figure: power (Watt) vs. system utilization (0 – 100 %), power@utilization]

Page 11:

A dynamic cluster of wimpy nodes = an energy-proportional DBMS

[Figure: load over time; the cluster powers nodes up and down to follow the load curve]
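The idea of a cluster that follows the load curve can be sketched with a minimal sizing policy. The per-node capacity, cluster limits, and ceiling-based rule below are hypothetical; the slides do not spell out WattDB's actual policy.

```python
import math

# Minimal sketch of a dynamic cluster: a master picks how many wimpy
# nodes to keep powered on so capacity tracks the current load.
# All numbers are illustrative assumptions.

NODE_CAPACITY = 100.0           # load units one node can serve
MIN_NODES, MAX_NODES = 1, 10    # cluster limits

def target_nodes(load: float) -> int:
    """Nodes needed to serve `load`, clamped to the cluster limits."""
    needed = math.ceil(load / NODE_CAPACITY)
    return max(MIN_NODES, min(MAX_NODES, needed))

# Load rising and falling over time; the active node count follows it:
for load in (50, 180, 420, 640, 300, 90):
    print(f"load {load:>3} -> {target_nodes(load)} active node(s)")
```

With powered-off nodes drawing (nearly) no energy, cluster power rises and falls in steps with the load, approximating energy-proportional behavior at the cluster scale.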

Page 12:

Cluster Overview

Lightweight nodes, low-power hardware. Each node:
- Intel Atom D510 CPU
- 2 GB DRAM
- 80plus Gold power supply
- 1 Gbit Ethernet interconnect
- 23 W (idle) – 26 W (100 % CPU); 41 W (100 % CPU + disks)

Considered Amdahl-balanced: scale the CPUs down to the disks and network!
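A back-of-the-envelope model using the per-node figures from the slide (23 W idle, 41 W with CPU and disks busy) shows why powering nodes off matters. The assumption that a powered-off node draws 0 W, and the node counts, are illustrative.

```python
# Cluster power from the per-node figures on the slide:
# 23 W idle, 41 W at 100 % CPU + disks. Powered-off nodes are assumed
# to draw 0 W, which is what makes the cluster roughly proportional.

IDLE_W, FULL_W = 23, 41

def cluster_power(active: int, busy: int) -> int:
    """`active` nodes are powered on; `busy` of them run at full load."""
    assert 0 <= busy <= active
    return busy * FULL_W + (active - busy) * IDLE_W

print(cluster_power(10, 10))  # whole 10-node cluster at full load
print(cluster_power(3, 3))    # low load: only 3 nodes powered on
```

Serving a low-load period with 3 nodes instead of keeping all 10 on (even idle, 10 nodes would draw 230 W) cuts cluster power to a fraction of the peak.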

Page 13:

Page 14:

Shared Disk AND Shared Nothing

Physical hardware layout: Shared Disk
- every node can access every page
- local vs. remote latency

Logical implementation: Shared Nothing
- data is mapped to nodes n:1
- exclusive access
- transfer of control

Combine the benefits of both worlds!
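The combination above can be sketched as a tiny ownership table: each partition is mapped to exactly one node (n:1) with exclusive access, and because every node can reach every page on the shared disk, repartitioning is a transfer of control rather than a physical data copy. The partition and node names are illustrative.

```python
# Sketch of the shared-nothing logical layer over shared-disk hardware.
# Names are hypothetical; only the n:1 mapping and transfer-of-control
# ideas come from the slide.

owner = {"part_a": "node1", "part_b": "node2", "part_c": "node3"}

def route(partition: str) -> str:
    """Queries on a partition go to its single owning node (exclusive access)."""
    return owner[partition]

def transfer_control(partition: str, new_node: str) -> None:
    """Reassign ownership, e.g., before powering the old owner down.
    No pages move: the new owner reads them from the shared disk."""
    owner[partition] = new_node

transfer_control("part_c", "node2")   # node3 can now be switched off
print(route("part_c"))                # -> node2
```

Cheap ownership transfer is what lets the cluster shrink and grow quickly, since scaling down only requires handing partitions over, not shipping their data.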

Page 15:

Recent Work

- SIGMOD 2010 Programming Contest: first prototype of a distributed DBMS
- BTW 2011 Demo Track: master node powering the cluster up/down according to load
- SIGMOD 2011 Demo Track: energy-proportional query processing

Page 16:

Current Work

- Incorporating GPU operators: improved energy efficiency? More tuples per Watt?
- Monitoring & load forecasting: for management decisions, act instead of react
- Energy-proportional storage: storage needs vs. processing needs
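"Act instead of react" implies forecasting load before it arrives, since powering a node up takes time. As a hypothetical sketch (the slides do not describe WattDB's forecasting model), simple exponential smoothing over recent load samples is one minimal way to get such a forecast:

```python
# Hypothetical sketch of load forecasting for proactive power management:
# exponential smoothing over past load samples. Not WattDB's actual model.

def smooth(history, alpha=0.5):
    """Exponentially smoothed forecast of the next load value.
    Higher alpha weights recent samples more heavily."""
    forecast = history[0]
    for x in history[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

loads = [100, 120, 160, 220]   # rising trend observed by the monitor
print(smooth(loads))           # forecast: power nodes up before demand hits
```

Feeding the forecast (rather than the last measured load) into the node-sizing policy means nodes are already booted when the load actually arrives.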

Page 17:

Future Work

- Policies for powering nodes up / down
- Load distribution and balancing among nodes
- Which use cases fit the proposed architecture, and which don't?
- Alternative hardware configurations: heterogeneous HW environments, SSDs, other CPUs
- Energy-efficient self-tuning

Page 18:

Current Work

[Diagram: a table split into three partitions, mapped to Node1, Node2, and Node3]

Page 19:

Future Work

[Diagram: the same table and partitions, with partition ownership reassigned among Node1, Node2, and Node3 as nodes power up or down]

Page 20:

Conclusion

- Energy consumption matters!
- Current HW is not energy-proportional
- Systems run at 20 – 50 % utilization most of the time
- WattDB is a prototype of an energy-proportional DBMS
- Several challenges ahead

Page 21:


Thank You!

Energy Proportionality on a Cluster Scale