Designing large-scale data collection applications...



1

Firehose Engineering: designing high-volume data collection systems

Josh Berkus, HiLoad, Moscow

October 2011

I created this talk because, for some reason, I've been working on a bunch of the same kind of data systems lately.

2

Firehose Database Applications (FDA)

(1) very high volume of data input from many producers

(2) continuous processing of incoming data

I call these “Firehose Database Applications”. They're characterized by two factors, and have a lot of things in common regardless of which industry they're used in.

1. These applications are focused entirely on the collection of large amounts of data coming in from many high-volume producers, and

2. they process data into some more useful form, such as summaries, continuously 24 hours a day.

Some examples ...

3

Mozilla Socorro

Mozilla's Socorro system, which collects Firefox and Thunderbird crash data from around the world.

4

Upwind

Upwind, which collects a huge amount of metrics from large wind turbine farms in order to monitor and analyze them.

5

Fraud Detection System

And numerous other systems, such as a fraud detection system which receives data from over 100 different transaction processing systems and analyzes it to look for patterns which would indicate fraud.

6

Firehose Challenges

regardless of how these firehose systems are used, they share the same four challenges, all of which need to be overcome in some way to get the system running and keep it running.

7

1. Volume

● 100s to 1,000s of facts/second
● GB/hour

The first and most obvious is volume. A firehose system is collecting many, many facts per second, which may add up to gigabytes per hour.

8

1. Volume

● spikes in volume
● multiple uncoordinated sources

You also have to plan for unpredicted spikes in volume which may be 10X normal.

Also, your data sources may come in at different rates, with interruptions or out-of-order data.

But, the biggest challenge of volume always is ...

9

1. Volume

volume always grows over time

… there is always, always, always more of it than there used to be.

10

2. Constant flow

since data arrives 24/7 …

while the user interface can be down, data collection can never be down

the second challenge is dealing with the all-day, all-week, all-year nature of the incoming data.

for example, it is often permissible for the user interface or reporting system to be out of service, but data collection must go on at all times without interruption. Otherwise, data would be lost.

this makes for different downtime plans than standard web applications, which can have "read-only downtimes". Firehose systems can't; they must arrange for alternate storage if the primary collection system is down.

11

ETL

2. Constant flow

● can't stop receiving to process
● data can arrive out of order

it also means that conventional Extract Transform Load (ETL) approaches to data warehousing don't work. You can't wait for the down period to process the data. Instead, data must be processed continuously while still collecting.

Data is also never at rest; you must be prepared for interruptions and out-of-order data.

12

3. Database size

● terabytes to petabytes
● lots of hardware
● single-node DBMSes aren't enough
● difficult backups, redundancy, migration
● analytics are resource-consumptive

another obvious challenge is the sheer size of the data.

once you start accumulating data 24x7, you will rapidly find yourself dealing with terabytes or even petabytes of data. This requires you to deal with lots of hardware and large clustered systems or server farms, including clustered or distributed databases.

this makes backups and redundancy either very difficult or very expensive.

performing analytics on the data and reporting on it becomes very resource-intensive and slow at large data sizes.

13

3. Database size

● database growth
● size grows quickly
● need to expand storage
● estimate target data size
● create data ageing policies

the database is also growing rapidly, which means that solutions which work today might not work next month. You need to estimate the target size of your data and engineer for that, not the amount of data you have now.

you will also have to create a data ageing or expiration policy so that you can eventually purge data and limit the database to some finite size. this is the hardest thing to do because ...

14

3. Database size

“We will decide on a data retention policy when we run out of disk space.”

– every business user everywhere

… users never ever ever want to “throw away” data. No matter how old it is or how infrequently it's accessed.

This isn't special to firehose systems except that they accumulate data faster than any other kind of system.
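When a retention or ageing policy finally does get agreed on, one common way to implement it in PostgreSQL is to keep the raw data in time-based partitions and drop the oldest ones wholesale. The sketch below only illustrates that pattern; the partition naming scheme, retention length, and connection string are hypothetical and are not taken from any of the systems in this talk.

```python
# Sketch of a data-ageing job: raw data is kept in monthly partitions named
# raw_data_YYYYMM, and partitions older than the retention window are dropped.
# Naming scheme, retention length, and DSN are illustrative assumptions.
import datetime
import psycopg2

RETENTION_MONTHS = 12

def purge_old_partitions(dsn="dbname=firehose"):
    today = datetime.date.today()
    # step back RETENTION_MONTHS whole months to find the cutoff key "YYYYMM"
    months = today.year * 12 + (today.month - 1) - RETENTION_MONTHS
    cutoff_key = "%04d%02d" % (months // 12, months % 12 + 1)

    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # list the monthly partitions from the system catalog
            cur.execute(
                "SELECT tablename FROM pg_tables "
                "WHERE tablename ~ '^raw_data_\\d{6}$'"
            )
            for (name,) in cur.fetchall():
                if name[len("raw_data_"):] < cutoff_key:
                    # dropping a whole partition is cheap; a DELETE against a
                    # terabyte-scale table is not (names come from the catalog,
                    # not from user input)
                    cur.execute('DROP TABLE %s' % name)
```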

15

4.

problem #4 is illustrated by this architecture diagram of the Socorro system. Firehose systems have lots and lots of components, often using dozens of servers and hundreds of application instances. In addition to cost and administrative overhead, what having that much stuff means is ...

16

many components = many failures

… every day of the week something is going to be crashing on you. You simply can't run a system with 90+ servers and expect them to be up all the time.

17

4. Component failure

● all components fail
● or need scheduled downtime
● including the network
● collection must continue
● collection & processing must recover

Even when components don't fail unexpectedly, they have to be taken down for upgrades, maintenance, replacement, or migration to new locations and software. But the data collection and processing has to go on while components are unavailable or offline. Even when processing has to stop, components must recover quickly when the system is ready again, and must do so without losing data.

18

solving firehose problems

So let's talk about how a couple of different teams solved some of these firehose problems.

19

socorro project

First we're going to talk about Mozilla's Socorro system. I'll bet a lot of you have seen this Firefox crash screen. The data it produces gets sent to the Socorro system at Mozilla, where it's collected and processed.

20

http://crash-stats.mozilla.com

At the end of the processing we get pretty reports like this, which let the Mozilla developers know when releases are stable and ready, and where frequent crashing and hanging issues are. You can visit it right now: it's http://crash-stats.mozilla.com

21

There's quite a rich set of data there, covering every aspect of what's making Firefox and other Mozilla products crash, individually and in the aggregate.

22

Mozilla Socorro

collectors

processors

webservers

reports

this is a simplified architecture diagram showing the very basic data flow of Socorro.

1. Crashes come in through the internet,
2. are harvested by a cluster of collectors,
3. raw crashes are stored in HBase,
4. a cluster of processors pulls raw crashes from HBase, processes them, and writes processed crashes and metadata to HBase and Postgres,
5. Postgres populates views which are used to support the web interface and the reports used internally by Mozilla developers.

23

a full architecture diagram would look like this one, but even this doesn't show all components.

24

socorro data volume

● 3000 crashes/minute
● avg. size 150K
● 40TB accumulated raw data
● 500GB accumulated metadata / reports

Socorro receives up to 3000 crashes every minute from around the world. Each of these crashes comes with 50 to 1000K of binary crash data which needs to be processed.

Within the target data lifetime of 6-12 months, Socorro has accumulated nearly 40 terabytes of raw data and half a terabyte of metadata, views, and reports.

25

dealing with volume

load balancers collectors

how does Mozilla deal with this large data volume? In one word:

parallelism

1. crashes come in from the internet
2. they hit a set of round-robin load balancers
3. the load balancers randomly assign them to one of six crash collectors
4. the crash collectors receive the crash information, and then write the raw crash to a 30-server HBase cluster.

26

dealing with volume

monitor processors

5. the raw crashes in HBase are pulled out in parallel by twelve processor servers running 10 processor instances each.

6. these processors do a lot of work to process the binary crashes and then write the processed crash results to HBase and the processed crash metadata and summary data to Postgres.

7. for scheduling and preventing duplication, the processors are coordinated by a monitor server, which runs a queue and monitoring system backed by Postgres.
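That monitor-plus-queue arrangement can be sketched roughly as a worker claiming one pending crash at a time from a queue table in Postgres. The table, columns, and connection string below are hypothetical, and FOR UPDATE SKIP LOCKED comes from much newer PostgreSQL releases than this 2011-era system; this is an illustration of the pattern, not Socorro's actual code.

```python
# Minimal sketch: a processor claiming work from a Postgres-backed queue.
# Table name, columns, and DSN are hypothetical, not Socorro's actual schema.
import time
import psycopg2

conn = psycopg2.connect("dbname=crashqueue user=processor")

def claim_next_job():
    """Atomically claim one pending crash so no two processors grab the same one."""
    with conn:
        with conn.cursor() as cur:
            cur.execute("""
                UPDATE pending_crashes
                   SET status = 'processing', claimed_at = now()
                 WHERE id = (
                       SELECT id FROM pending_crashes
                        WHERE status = 'pending'
                        ORDER BY received_at
                        LIMIT 1
                        FOR UPDATE SKIP LOCKED)
             RETURNING id, crash_key
            """)
            return cur.fetchone()   # None when the queue is empty

while True:
    job = claim_next_job()
    if job is None:
        time.sleep(1)               # idle politely instead of hammering the queue
        continue
    job_id, crash_key = job
    # ... pull the raw crash from HBase by crash_key, process it,
    # write the results back, then mark the row 'done' ...
```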

27

dealing with size

data: 40TB (expandable)

metadata / views: 500GB (fixed size)

Why have both HBase and PostgreSQL? Isn't one database enough?

Well, the 30-server HBase cluster holds 40 terabytes of data. It's good at doing this in a redundant, scalable way and is very fast for pulling individual crashes out of the billion or so which are stored there.

However, HBase is very poor at ad-hoc queries or simple browsing of data over the web. It also often requires downtime for maintenance.

So we use Postgres, which holds metadata and summary "materialized views" used by the web application and the reports to present analytics to users.

Postgres can't store the raw data though because it can't easily expand to the sizes needed.

28

dealing with component failure

Lots of hardware ...

● 30 HBase nodes
● 2 PostgreSQL servers
● 6 load balancers
● 3 ES servers
● 6 collectors
● 12 processors
● 8 middleware & web servers

… lots of failures

Of course, with the more than 60 servers we're dealing with, something is always going down. That's simply reality. Some of the worst failures have involved the entire network going out and all components losing communication with each other.

This means that we need to be prepared to have failures and to recover from them. How do we do that?

29

load balancing & redundancy

load balancers collectors

One obvious way is to have lots of redundant components for each role in the application. As we already saw, for incoming data there is more than one load balancer and there are six collectors. This means that we can lose several machines and still be receiving user data, and that in a catastrophic failure we only lose a minority of the data.

30

elastic connections

● components queue their data
● retain it if other nodes are down
● components resume work automatically when other nodes come back up

A more important strategy we've learned through hard experience is the requirement to have “elastic connections”. That is, each component is prepared to lose communication with the components on either side of it. They queue data through several mechanisms, and resume work when the next component in line comes back up.

31

elastic connections

collector:

receiver → local file queue → crash mover

For example, let's look at the collectors. They receive data from the web and write to HBase. But it's more complex than that.

1. The collectors can lose communication with HBase for a variety of reasons. It can be down for maintenance, or there can be network issues.
2. The crashes are received by receiver processes,
3. which write them to local file storage in a primitive, filename-based queue.
4. A separate “crashmover” process looks for files in the queue.
5. When HBase is available, the crashmover saves the files to HBase.
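The receiver / file queue / crashmover split described above can be sketched in a few dozen lines. The paths, names, and the store_in_hbase() callback below are illustrative assumptions, not Socorro's real implementation.

```python
# Illustrative sketch of an "elastic connection": the receiver always writes to
# a local file queue, and a separate mover drains it whenever HBase is reachable.
# Paths, names, and store_in_hbase() are hypothetical, not Socorro's real code.
import os, time, uuid

QUEUE_DIR = "/var/spool/collector"

def receive_crash(crash_payload: bytes) -> str:
    """Receiver: durably queue the crash locally, never block on HBase."""
    crash_id = str(uuid.uuid4())
    tmp_path = os.path.join(QUEUE_DIR, crash_id + ".tmp")
    final_path = os.path.join(QUEUE_DIR, crash_id + ".crash")
    with open(tmp_path, "wb") as f:
        f.write(crash_payload)
    os.rename(tmp_path, final_path)   # atomic: the mover never sees partial files
    return crash_id

def crash_mover(store_in_hbase):
    """Mover: drain the queue; if HBase is down, leave the files and retry later."""
    while True:
        for name in sorted(os.listdir(QUEUE_DIR)):
            if not name.endswith(".crash"):
                continue
            path = os.path.join(QUEUE_DIR, name)
            try:
                with open(path, "rb") as f:
                    store_in_hbase(name[:-6], f.read())   # hypothetical call
                os.remove(path)        # only delete once the write succeeded
            except Exception:
                break                  # HBase unreachable; keep the backlog
        time.sleep(5)
```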

32

server management

● Puppet
● controls configuration of all servers
● makes sure servers recover
● allows rapid deployment of replacement nodes

Another thing which has been essential to managing component failure is comprehensive server management through Puppet. Having servers under central management means that we can make sure services are up and configured correctly when they are supposed to be. And it also means that if we permanently lose nodes we can replace them quickly with identically configured nodes.

33

Upwind

Let's move on to a different team and different methods of facing firehose challenges: Upwind.

34

Upwind

● speed
● wind speed
● heat
● vibration
● noise
● direction

Power-generating wind turbines have onboard computers which produce a lot of data about the status and operation of the turbine. Upwind takes that data, collects it, summarizes it, and turns it into graphical summary reports for customers who own wind farms.

35

Upwind

1. maximize power generation

2. make sure turbine isn't damaged

the customers need this data for two reasons:

1. wind turbines are very expensive and they need to make sure they're maximizing power generation from them.
2. to intervene and keep the turbines from catching fire or otherwise becoming permanently damaged.

36

dealing with volume

each turbine:

90 to 700 facts/second

windmills per farm: up to 100

number of farms: 40+

est. total: 300,000 facts/second

(will grow)

turbines come with a lot of different monitors, between 90 and 700 of them depending on the model. each of these monitors produces a reading every second.

an individual “wind farm” can have up to 100 turbines, and Upwind expects more than 40 farms to make use of the service once it's fully deployed. Plus they want to get more customers.

this means an estimated 300,000 metrics per second total by next year.

37

dealing with volume

local storage → historian → analytic database → reports

local storage → historian → analytic database → reports

Upwind chose a very different route than Mozilla for dealing with this volume. Let's explain the data flow first:

1. Wind farms write metrics to local storage, which can hold data for a couple hours.

2. This data is then read by an industry-specific application called the historian which interprets the raw data and stores each second of data.

3. that data is then read from the historian by Postgres,
4. which produces nice summaries and analytics for the reports.

As it turns out, data is not shared between wind farms most of the time. So Upwind can scale simply by adding a whole additional service toolchain for each wind farm it brings online.

38

dealing with volume

local storage → historian → analytic database

local storage → historian → analytic database

master database

In the limited cases where we need to do inter-farm reports, we will have a master Postgres node which will use pl/proxy and/or foreign data wrappers to summarize data from multiple wind farms.

39

multi-tenant partitioning

● partition the whole application
● each customer gets their own toolchain
● allows scaling with the number of customers
● lowers efficiency
● more efficient with virtualization

this is what's known as multi-tenant partitioning, where you give each customer, or “tenant”, a full stack of their own. It's not very resource-efficient, but it does easily scale as you add customers, and eliminates the need to use complex clustered storage or databases.

cloud hosting and virtualization are also making this strategy easier to manage.
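As a minimal illustration of application-level multi-tenancy (not Upwind's actual code), routing can be as simple as mapping each tenant to its own independent database and toolchain. The tenant names, connection strings, and the hourly_rollup table below are hypothetical.

```python
# Minimal sketch of multi-tenant partitioning at the application layer:
# each tenant (wind farm customer) is routed to its own, independent stack.
# Tenant names, DSNs, and the hourly_rollup table are hypothetical.
import psycopg2

TENANT_DATABASES = {
    "farm_north_sea": "host=farm-ns-db dbname=upwind user=reports",
    "farm_baltic":    "host=farm-ba-db dbname=upwind user=reports",
}

def connection_for(tenant: str):
    """Open a connection to the tenant's dedicated analytic database."""
    return psycopg2.connect(TENANT_DATABASES[tenant])

def report_for(tenant: str, day: str):
    """Run a per-tenant report; cross-tenant queries go through the master DB instead."""
    with connection_for(tenant) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT turbine_id, avg(power_kw) FROM hourly_rollup "
                "WHERE day = %s GROUP BY turbine_id",
                (day,),
            )
            return cur.fetchall()
```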

40

dealing with: constant flow and size

historian → minute buffer → hours table → days table → months table → years table

In order to deal with the constant flow of data and the large database size, we continuously produce a series of time-based summaries of the data:

1. a buffer holding several minutes of data; this also accounts for lag and out-of-order data
2. hour summaries
3. day summaries
4. month summaries
5. annual summaries
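One hedged sketch of how a single append-only rollup step might look in PostgreSQL, wrapped in Python so it is runnable; the table and column names are hypothetical and illustrate the general pattern rather than Upwind's actual schema.

```python
# Rough sketch of one append-only, time-based rollup step: summarize the last
# completed hour from the minute buffer into the hours table.
# Table and column names are hypothetical, not Upwind's actual schema.
import psycopg2

ROLLUP_HOUR_SQL = """
INSERT INTO hours_table (turbine_id, metric, hour, avg_value, max_value, samples)
SELECT turbine_id,
       metric,
       date_trunc('hour', recorded_at) AS hour,
       avg(value), max(value), count(*)
  FROM minute_buffer
 WHERE recorded_at >= date_trunc('hour', now()) - interval '1 hour'
   AND recorded_at <  date_trunc('hour', now())
 GROUP BY turbine_id, metric, date_trunc('hour', recorded_at);
"""

def roll_up_last_hour(dsn="dbname=upwind"):
    """Append one hour of summaries; rows are only ever inserted, never updated."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(ROLLUP_HOUR_SQL)

# Day, month, and year rollups follow the same pattern, each reading only the
# level below it, so every step touches a small window and can run in parallel
# with the others.
```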

41

time-based rollups

● continuously accumulate levels of rollup
● each is based on the level below it
● data is always appended, never updated
● small windows == small resources

For efficiency, each level builds on the level beneath it. This means that no analytic operation ever has to work with a very large set, and so can be very fast.

Even better, the time-based summaries can be run in parallel, so that we can make maximum use of the processors on the server.

42

time-based rollups

● allows:
● very rapid summary reports for different windows
● retaining different summaries for different levels of time
● batch/out-of-order processing
● summarization in parallel

this allows us to have a set of “views” which summarize the data at the time windows users look at. This means almost “instant” reports for users.

43

firehose tips

based on my experiences with several of these firehose projects, here are some general rules for building this kind of application

44

data collection must be:

● continuous
● parallel
● fault-tolerant

data collection has to be 24x7. It must be done in parallel by multiple processes/servers, and above all must be fault-tolerant.

45

data processing must be:

● continuous
● parallel
● fault-tolerant

data processing has to be built the same way.

46

every component must be able to fail

● including the network
● without too much data loss
● other components must continue

the lesson most people don't seem to be prepared for is that every component must be able to fail without permanently downing the system, or causing unacceptable data loss.

47

5 tools to use

1. queueing software

2. buffering techniques

3. materialized views

4. configuration management

5. comprehensive monitoring

here are some classes of tools you should be using in any firehose application:

queuing and buffering for fault tolerance; materialized views to deal with volume and size; configuration management and comprehensive monitoring to keep the system up.

48

4 don'ts

1. use cutting-edge technology

2. use untested hardware

3. run components to capacity

4. do hot patching

some things you will be tempted to do and shouldn't:

* don't use cutting-edge hardware or software, because it's not reliable and will lead to repeated, unacceptable downtimes
* don't put hardware into service without testing it first, because it will be unreliable or perform poorly, dragging the system down
* never run components or layers of your stack at capacity, because then you have no headroom for when you have a surge in data
* and never do hot patching of production services, because you can't manage it and it will lead to mistakes and taking the system down.

49

firehose mastered?

So now that we know how to do it, it's time for you to build your own firehose applications. It's challenging and fun.

50

Contact
● Josh Berkus: josh@pgexperts.com
● blog: blogs.ittoolbox.com/database/soup
● PostgreSQL: www.postgresql.org
● pgexperts: www.pgexperts.com
● Upcoming Events
● PostgreSQL Europe: http://2011.pgconf.eu/
● PostgreSQL Italy: http://2011.pgday.it/

The text and diagrams in this talk are copyright 2011 Josh Berkus and are licensed under the Creative Commons Attribution license. Title slide image is licensed from iStockPhoto and may not be reproduced or redistributed. Socorro images are copyright 2011 Mozilla Inc.

contact and copyright information.

Questions?