
Cloud Computing and Cloud Data Management

Jiaheng Lu, Renmin University of China

www.jiahenglu.net

Outline

Overview of cloud computing

Google cloud computing technologies: GFS, Bigtable and MapReduce

Yahoo cloud computing technologies and Hadoop

Challenges of cloud data management

Cloud computing

Why do we use cloud computing?

Case 1:

You write a file and save it locally.

The computer goes down, and the file is lost.

Files stored in the cloud are always available and never lost.

Why do we use cloud computing?

Case 2:

To use IE: download, install, use

To use QQ: download, install, use

To use C++: download, install, use

……

With the cloud, you simply get the service from the cloud.

What is the cloud, and what is cloud computing?

Cloud: resources or services delivered on demand over the Internet, with the scale and reliability of a data center.

What is the cloud, and what is cloud computing?

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet.

Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.

Characteristics of cloud computing

Virtual: software, databases, Web servers, operating systems, storage and networking are provided as virtual servers.

On demand: add and subtract processors, memory, network bandwidth and storage as needed.

IaaS: Infrastructure as a Service

PaaS: Platform as a Service

SaaS: Software as a Service

Types of cloud service

Software delivery model

No hardware or software to manage
Service delivered through a browser
Customers use the service on demand
Instant scalability

SaaS

Examples

Your current CRM package is not managing the load, or you simply don't want to host it in-house: use a SaaS provider such as Salesforce.com.

Your email is hosted on an Exchange server in your office and it is very slow: outsource this using Hosted Exchange.

SaaS

Platform delivery model

Platforms are built upon infrastructure, which is expensive.

Estimating demand is not a science, and platform management is not fun!

PaaS

Examples

You need to host a large file (5 MB) on your website and make it available to 35,000 users for only two months: use CloudFront from Amazon.

You want to start storage services on your network for a large number of files and you do not have the storage capacity: use Amazon S3.

PaaS

Computer infrastructure delivery model

A platform virtualization environment

Computing resources, such as storage and processing capacity.

Virtualization taken a step further

IaaS

Examples

You want to run a batch job but you don’t have the infrastructure necessary to run it in a timely manner. Use Amazon EC2.

You want to host a website, but only for a few days. Use Flexiscale.

IaaS

Cloud computing and other computing techniques

The 21st Century Vision Of Computing

Leonard Kleinrock, one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET) project which seeded the Internet, said:

"As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country."

The 21st Century Vision Of Computing

Sun Microsystems co-founder Bill Joy

The 21st Century Vision Of Computing

Definitions

Utility computing is the packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility.

A computer cluster is a group of linked computers working together closely so that in many respects they form a single computer.

Grid computing is the application of several computers to a single problem at the same time, usually a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data.

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet.

Grid Computing & Cloud Computing

They share a lot in common: intention, architecture and technology.

They differ in programming model, business model, compute model, applications, and virtualization.

Grid Computing & Cloud Computing

The problems are mostly the same: manage large facilities;

define methods by which consumers discover, request and use resources provided by the central facilities;

implement the often highly parallel computations that execute on those resources.

Grid Computing & Cloud Computing

Virtualization:

Grids do not rely on virtualization as much as clouds do; each individual organization maintains full control of its resources.

For clouds, virtualization is an indispensable ingredient of almost every deployment.


Any questions or comments?

Outline

Overview of cloud computing

Google cloud computing technologies: GFS, Bigtable and MapReduce

Yahoo cloud computing technologies and Hadoop

Challenges of cloud data management

Google Cloud computing techniques

Cloud Systems

MapReduce-based, BigTable-like and DBMS-based systems: BigTable, HBase, HyperTable, Hive, HadoopDB, GreenPlum, CouchDB, Voldemort, PNUTS, SQL Azure (described in OSDI'06, VLDB'08 and VLDB'09 papers).

The Google File System

The Google File System(GFS)

A scalable distributed file system for large, distributed, data-intensive applications.

Multiple GFS clusters are currently deployed. The largest ones have:

1000+ storage nodes

300+ terabytes of disk storage

heavy access by hundreds of clients on distinct machines

Introduction

Shares many of the same goals as previous distributed file systems: performance, scalability, reliability, etc.

The GFS design has been driven by four key observations of Google's application workloads and technological environment.

Intro: Observations 1

1. Component failures are the norm: constant monitoring, error detection, fault tolerance and automatic recovery are integral to the system.

2. Huge files (by traditional standards): multi-GB files are common, so I/O operations and block sizes must be revisited.

Intro: Observations 2

3. Most files are mutated by appending new data. This is the focus of performance optimization and atomicity guarantees.

4. Co-designing the applications and APIs benefits the overall system by increasing flexibility.

The Design

A cluster consists of a single master and multiple chunkservers, and is accessed by multiple clients.

The Master

Maintains all file system metadata: namespace, access control info, file-to-chunk mappings, chunk (and replica) locations, etc.

Periodically communicates with chunkservers in HeartBeat messages to give instructions and check state

The Master

Helps make sophisticated chunk placement and replication decisions, using global knowledge.

For reading and writing, client contacts Master to get chunk locations, then deals directly with chunkservers

Master is not a bottleneck for reads/writes

Chunkservers

Files are broken into chunks. Each chunk has an immutable, globally unique 64-bit chunk handle.

handle is assigned by the master at chunk creation

Chunk size is 64 MB

Each chunk is replicated on 3 (default) servers
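Before reading or writing, a client converts a byte offset within a file into a chunk index, which it sends to the master to look up chunk locations. A minimal sketch of that translation, assuming the 64 MB chunk size above (the helper names are hypothetical, not GFS code):

# Minimal sketch: map a byte offset in a file to (chunk index, offset within chunk),
# the way a GFS-style client would before asking the master for chunk locations.
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, as stated above

def locate(offset_in_file: int):
    """Return (chunk_index, offset_within_chunk) for a byte offset."""
    chunk_index = offset_in_file // CHUNK_SIZE
    offset_in_chunk = offset_in_file % CHUNK_SIZE
    return chunk_index, offset_in_chunk

# Example: byte 200,000,000 falls in chunk 2 (the third chunk of the file).
print(locate(200_000_000))  # (2, 65782272)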

Clients

Linked to apps using the file system API.

Communicates with master and chunkservers for reading and writing

Master interactions only for metadata

Chunkserver interactions for data

Clients only cache metadata; data is too large to cache.

Chunk Locations

Master does not keep a persistent record of locations of chunks and replicas.

Instead, the master polls chunkservers for this information at startup and whenever chunkservers join or leave.

Stays up to date by controlling placement of new chunks and through HeartBeat messages (when monitoring chunkservers)

Operation Log

Record of all critical metadata changes

Stored on Master and replicated on other machines

Defines order of concurrent operations

Also used to recover the file system state

System Interactions: Leases and Mutation Order

Leases maintain a mutation order across all chunk replicas

Master grants a lease to a replica, called the primary

The primary chooses the serial mutation order, and all replicas follow this order.

Minimizes management overhead for the Master

Atomic Record Append

The client specifies the data to write; GFS chooses the offset, appends the data to each replica at least once, and returns the offset to the client.

Heavily used by Google's distributed applications.

No need for a distributed lock manager: GFS chooses the offset, not the client.

Atomic Record Append: How?

• Follows a control flow similar to regular mutations

• The primary tells secondary replicas to append at the same offset as the primary

• If the append fails at any replica, the client retries the operation

So replicas of the same chunk may contain different data, including duplicates, in whole or in part, of the same record.

Atomic Record Append: How?

• GFS does not guarantee that all replicas are bitwise identical.

Only guarantees that data is written at least once in an atomic unit.

Data must be written at the same offset for all chunk replicas for success to be reported.

Detecting Stale Replicas

• Master has a chunk version number to distinguish up to date and stale replicas

• Increase version when granting a lease

• If a replica is not available, its version is not increased

• The master detects stale replicas when chunkservers report their chunks and versions

• Remove stale replicas during garbage collection

Garbage collection

When a client deletes a file, the master logs the deletion like other changes and renames the file to a hidden name.

Master removes files hidden for longer than 3 days when scanning file system name space

metadata is also erased

During HeartBeat messages, each chunkserver sends the master a subset of the chunks it holds, and the master tells it which of those chunks no longer have metadata.

The chunkserver removes these chunks on its own.

Fault Tolerance: High Availability

• Fast recovery: the master and chunkservers can restart in seconds

• Chunk replication

• Master replication: "shadow" masters provide read-only access when the primary master is down; mutations are not considered done until recorded on all master replicas

Fault Tolerance: Data Integrity

Chunkservers use checksums to detect corrupt data.

Since replicas are not bitwise identical, each chunkserver maintains its own checksums.

For reads, the chunkserver verifies the checksum before sending the chunk.

Checksums are updated during writes.
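As a rough illustration of the idea (not GFS's actual code), a chunkserver can keep a checksum per fixed-size block of a chunk and recompute it on read; the GFS paper uses 64 KB blocks with 32-bit checksums, which this sketch borrows:

# Minimal sketch of per-block checksums: each 64 KB block of a chunk gets its own
# checksum, and data is verified before it is returned to a client.
import zlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks, as in the GFS paper

def block_checksums(data: bytes):
    return [zlib.crc32(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]

def verify(data: bytes, checksums) -> bool:
    """Recompute checksums and compare; any mismatch means corruption."""
    return block_checksums(data) == checksums

chunk = b"some chunk contents" * 10_000
sums = block_checksums(chunk)
assert verify(chunk, sums)
assert not verify(chunk[:-1] + b"X", sums)  # a flipped byte is detected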

Introduction to MapReduce

MapReduce: Insight

"Consider the problem of counting the number of occurrences of each word in a large collection of documents."

How would you do it in parallel?

MapReduce Programming Model

Inspired by the map and reduce operations commonly used in functional programming languages like Lisp.

Users implement an interface of two primary methods:
1. Map: (key1, val1) → (key2, val2)
2. Reduce: (key2, [val2]) → [val3]

Map Operation

Map, a pure function written by the user, takes an input key/value pair and produces a set of intermediate key/value pairs, e.g. (doc-id, doc-content).

Drawing an analogy to SQL, map can be visualized as the group-by clause of an aggregate query.

Reduce Operation

On completion of the map phase, all the intermediate values for a given output key are combined into a list and given to a reducer.

It can be visualized as an aggregate function (e.g., average) that is computed over all the rows with the same group-by attribute.

Pseudo-code

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
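For readers who want to run the example, here is a plain-Python equivalent of the pseudo-code above, with a tiny in-process substitute for the framework's shuffle step; it illustrates the programming model only and is not Google's implementation:

# Runnable word count in the MapReduce style, with a toy in-process "framework"
# that groups intermediate pairs by key (the shuffle) before the reduce phase.
from collections import defaultdict

def map_fn(doc_name, doc_contents):
    for word in doc_contents.split():
        yield word, 1                      # EmitIntermediate(w, 1)

def reduce_fn(word, counts):
    yield word, sum(counts)                # Emit(total count for the word)

def run_mapreduce(inputs, map_fn, reduce_fn):
    groups = defaultdict(list)
    for key, value in inputs:
        for k2, v2 in map_fn(key, value):  # map phase
            groups[k2].append(v2)          # shuffle: group by intermediate key
    output = []
    for k2, values in groups.items():      # reduce phase
        output.extend(reduce_fn(k2, values))
    return output

docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog the end")]
print(sorted(run_mapreduce(docs, map_fn, reduce_fn)))
# [('brown', 1), ('dog', 1), ('end', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 3)]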

MapReduce: Execution Overview

MapReduce: Example

MapReduce in Parallel: Example

MapReduce: Fault Tolerance

Handled via re-execution of tasks; task completion is committed through the master.

What happens if a mapper fails? Re-execute completed and in-progress map tasks.

What happens if a reducer fails? Re-execute in-progress reduce tasks.

What happens if the master fails? Potential trouble!

MapReduce: Walkthrough of One More Application

MapReduce: PageRank

PageRank models the behavior of a "random surfer".

C(t) is the out-degree of t, and d is the damping factor; (1 - d) accounts for the random jump.

The "random surfer" keeps clicking on successive links at random, not taking content into consideration.

Each page distributes its PageRank equally among all pages it links to.

The damping factor models the surfer "getting bored" and typing an arbitrary URL.

PR(x) = (1 - d) + d \sum_{i=1}^{n} \frac{PR(t_i)}{C(t_i)}

where t_1, …, t_n are the pages linking to x.

PageRank: Key Insights

The effect of each iteration is local: the (i+1)-th iteration depends only on the i-th iteration.

At iteration i, the PageRank of individual nodes can be computed independently.

PageRank using MapReduce

Use a sparse matrix representation (M).

Map each row of M to a list of PageRank "credit" to assign to out-link neighbours.

These prestige scores are reduced to a single PageRank value for a page by aggregating over them.

PageRank using MapReduce

Map: distribute PageRank "credit" to link targets.

Reduce: gather up PageRank "credit" from multiple sources to compute the new PageRank value.

Iterate until convergence.

Source of Image: Lin 2008

Phase 1: Process HTML

The map task takes (URL, page-content) pairs and maps them to (URL, (PR_init, list-of-urls)).

PR_init is the "seed" PageRank for the URL; list-of-urls contains all pages pointed to by the URL.

The reduce task is just the identity function.

Phase 2: PageRank Distribution

The reduce task gets (URL, url_list) and many (URL, val) values; it sums the vals, fixes up with d to get the new PR, and emits (URL, (new_rank, url_list)).

Check for convergence using a non-parallel component.
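A minimal, single-process sketch of one PageRank iteration expressed as map and reduce steps, following the two phases above; the toy graph, function names and damping factor are illustrative, and a real job would of course run as a distributed MapReduce:

# One PageRank iteration in MapReduce style: map distributes credit along
# out-links and passes the link structure through; reduce recombines them.
from collections import defaultdict

def map_page(url, state):
    rank, out_links = state
    yield url, ("LINKS", out_links)                 # pass the structure through
    for target in out_links:                        # distribute PageRank credit
        yield target, ("CREDIT", rank / len(out_links))

def reduce_page(url, values, d=0.85):
    out_links, credit = [], 0.0
    for kind, payload in values:
        if kind == "LINKS":
            out_links = payload
        else:
            credit += payload
    new_rank = (1 - d) + d * credit                 # PR(x) = (1-d) + d * sum(PR(t_i)/C(t_i))
    return url, (new_rank, out_links)

graph = {"A": (1.0, ["B", "C"]), "B": (1.0, ["C"]), "C": (1.0, ["A"])}
for _ in range(20):                                 # iterate until (roughly) convergence
    grouped = defaultdict(list)
    for url, state in graph.items():
        for key, value in map_page(url, state):
            grouped[key].append(value)
    graph = dict(reduce_page(url, values) for url, values in grouped.items())
print({u: round(r, 3) for u, (r, _) in graph.items()})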

MapReduce: Some More Apps

Distributed Grep.

Count of URL Access Frequency.

Clustering (K-means)

Graph Algorithms.

Indexing Systems

MapReduce Programs In Google Source Tree

MapReduce: Extensions and Similar Apps

Pig (Yahoo)

Hadoop (Apache)

DryadLINQ (Microsoft)

Large-Scale Systems Architecture using MapReduce

BigTable: A Distributed Storage System for Structured Data

Introduction

BigTable is a distributed storage system for managing structured data.

Designed to scale to a very large size: petabytes of data across thousands of servers.

Used for many Google projects: web indexing, Personalized Search, Google Earth, Google Analytics, Google Finance, …

A flexible, high-performance solution for all of Google's products.

Motivation: Lots of (semi-)structured data at Google

URLs: contents, crawl metadata, links, anchors, PageRank, …

Per-user data: user preference settings, recent queries/search results, …

Geographic locations: physical entities (shops, restaurants, etc.), roads, satellite image data, user annotations, …

Scale is large: billions of URLs, many versions per page (~20K/version); hundreds of millions of users, thousands of queries/sec; 100+ TB of satellite image data.

Why not just use commercial DB?

The scale is too large for most commercial databases.

Even if it weren't, the cost would be very high. Building internally means the system can be applied across many projects for low incremental cost.

Low-level storage optimizations help performance significantly; this is much harder to do when running on top of a database layer.

Goals

We want asynchronous processes to be continuously updating different pieces of data, and access to the most current data at any time.

Need to support: very high read/write rates (millions of ops per second); efficient scans over all or interesting subsets of the data; efficient joins of large one-to-one and one-to-many datasets.

We often want to examine data changes over time, e.g. the contents of a web page over multiple crawls.

BigTable

Distributed multi-level map; fault-tolerant, persistent.

Scalable: thousands of servers; terabytes of in-memory data; petabytes of disk-based data; millions of reads/writes per second; efficient scans.

Self-managing: servers can be added/removed dynamically; servers adjust to load imbalance.

Building Blocks

Google File System (GFS): raw storage
Scheduler: schedules jobs onto machines
Lock service: distributed lock manager
MapReduce: simplified large-scale data processing

BigTable's use of the building blocks:

GFS: stores persistent data (SSTable file format for storage of data)
Scheduler: schedules jobs involved in BigTable serving
Lock service: master election, location bootstrapping
MapReduce: often used to read/write BigTable data

Basic Data Model

A BigTable is a sparse, distributed persistent multi-dimensional sorted map

(row, column, timestamp) -> cell contents

Good match for most Google applications

WebTable Example

Want to keep a copy of a large collection of web pages and related information.

Use URLs as row keys and various aspects of each web page as column names.

Store the contents of web pages in the contents: column under the timestamps when they were fetched.
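To make the data model concrete, here is a toy sketch of the (row, column, timestamp) -> value map using a plain Python dict; the reversed-URL row key and the contents:/anchor: column families follow the WebTable example from the Bigtable paper, and the helper names are made up, not Bigtable's API:

# Toy model of a sparse (row, column, timestamp) -> value map.
webtable = {}

def put(row, column, timestamp, value):
    webtable[(row, column, timestamp)] = value

def get_latest(row, column):
    """Return the most recent value for a (row, column) pair, if any."""
    versions = [(ts, v) for (r, c, ts), v in webtable.items()
                if r == row and c == column]
    return max(versions)[1] if versions else None

# Row keys are reversed URLs; column families such as "contents:" and "anchor:".
put("com.cnn.www", "contents:", 1, "<html>v1</html>")
put("com.cnn.www", "contents:", 2, "<html>v2</html>")
put("com.cnn.www", "anchor:cnnsi.com", 2, "CNN")
print(get_latest("com.cnn.www", "contents:"))  # <html>v2</html>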

Rows

The row name is an arbitrary string; access to data in a row is atomic; row creation is implicit upon storing data.

Rows are ordered lexicographically; rows close together lexicographically are usually on one or a small number of machines.

Rows (cont.)

Reads of short row ranges are efficient and typically require communication with a small number of machines.

Can exploit this property by selecting row keys so they get good locality for data access.

Example: math.gatech.edu, math.uga.edu, phys.gatech.edu, phys.uga.edu

VS

edu.gatech.math, edu.gatech.phys, edu.uga.math, edu.uga.phys
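A small, hypothetical helper showing the key-reversal trick from the example above, so that pages from the same domain sort next to each other:

# Reverse host name components so that lexicographic row order groups a domain's pages.
def row_key(host: str) -> str:
    return ".".join(reversed(host.split(".")))

hosts = ["math.gatech.edu", "math.uga.edu", "phys.gatech.edu", "phys.uga.edu"]
print(sorted(row_key(h) for h in hosts))
# ['edu.gatech.math', 'edu.gatech.phys', 'edu.uga.math', 'edu.uga.phys']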

Columns

Columns have a two-level name structure: family:optional_qualifier

Column family: the unit of access control; has associated type information.

The qualifier gives unbounded columns: additional levels of indexing, if desired.

Timestamps

Used to store different versions of data in a cell. New writes default to the current time, but timestamps for writes can also be set explicitly by clients.

Lookup options: "return the most recent K values"; "return all values in a timestamp range (or all values)".

Column families can be marked with attributes: "only retain the most recent K values in a cell"; "keep values until they are older than K seconds".

Implementation – Three Major Components

A library linked into every client.

One master server, responsible for: assigning tablets to tablet servers; detecting the addition and expiration of tablet servers; balancing tablet-server load; garbage collection.

Many tablet servers: each handles read and write requests to its tablets, and splits tablets that have grown too large.

Implementation (cont.)

Client data doesn’t move through master server. Clients communicate directly with tablet servers for reads and writes.

Most clients never communicate with the master server, leaving it lightly loaded in practice.

Tablets

Large tables are broken into tablets at row boundaries; a tablet holds a contiguous range of rows.

Clients can often choose row keys to achieve locality. Aim for ~100 MB to 200 MB of data per tablet.

Each serving machine is responsible for ~100 tablets.

Fast recovery: 100 machines each pick up 1 tablet from a failed machine.

Fine-grained load balancing: migrate tablets away from an overloaded machine; the master makes load-balancing decisions.

Tablet Location

Since tablets move around from server to server, given a row, how do clients find the right machine? They need to find the tablet whose row range covers the target row.

Tablet Assignment

Each tablet is assigned to one tablet server at a time.

The master server keeps track of the set of live tablet servers and the current assignment of tablets to servers. It also keeps track of unassigned tablets.

When a tablet is unassigned, the master assigns it to a tablet server with sufficient room.

API

Metadata operations: create/delete tables and column families; change metadata.

Writes (atomic): Set() writes cells in a row; DeleteCells() deletes cells in a row; DeleteRow() deletes all cells in a row.

Reads: a Scanner reads arbitrary cells in a BigTable; each row read is atomic; can restrict returned rows to a particular range; can ask for data from just one row, all rows, etc.; can ask for all columns, just certain column families, or specific columns.

Refinements: Compression

Many opportunities for compression: similar values in the same row/column at different timestamps; similar values in different columns; similar values across adjacent rows.

Two-pass custom compression scheme: the first pass compresses long common strings across a large window; the second pass looks for repetitions in a small window.

Speed is emphasized, but space reduction is good (10-to-1).

Refinements: Bloom Filters

A read operation has to go to disk when the desired SSTable isn't in memory.

Reduce the number of accesses by specifying a Bloom filter, which lets us ask whether an SSTable might contain data for a specified row/column pair.

A small amount of memory for Bloom filters drastically reduces the number of disk seeks for read operations; most lookups for non-existent rows or columns do not need to touch disk.
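A minimal Bloom filter sketch illustrating the idea: a compact membership test that can return false positives but never false negatives, so a "definitely absent" answer lets the server skip the disk seek. The sizes and hashing scheme below are illustrative, not Bigtable's:

# Toy Bloom filter: k hash positions are set per key; a lookup that finds any
# unset bit proves the key was never added.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, key: str):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key: str) -> bool:
        return all(self.bits & (1 << pos) for pos in self._positions(key))

bf = BloomFilter()
bf.add("row7/contents:")
print(bf.might_contain("row7/contents:"))   # True  -> worth reading the SSTable
print(bf.might_contain("row9/anchor:"))     # False -> skip the disk seek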


Outline

Overview of cloud computing

Google cloud computing technologies: GFS, Bigtable and MapReduce

Yahoo cloud computing technologies and Hadoop

Challenges of cloud data management

Yahoo! Cloud Computing

Yahoo! Cloud Stack

(Figure: a stack of horizontal cloud services spanning EDGE (YCS, YCPI, Brooklyn), WEB (VM/OS, yApache, PHP App Engine), APP (VM/OS, Serving Grid), STORAGE (Sherpa, MObStor) and BATCH (Hadoop) layers, with cross-cutting Provisioning (self-serve), Monitoring/Metering/Security, and the Data Highway.)

Web Data Management

Large data analysis (Hadoop): scan-oriented workloads; focus on sequential disk I/O; $ per CPU cycle.

Structured record storage (PNUTS/Sherpa): CRUD; point lookups and short scans; index-organized tables and random I/Os; $ per latency.

Blob storage (SAN/NAS): object retrieval and streaming; scalable file storage; $ per GB.

The World Has Changed

Web serving applications need: scalability (preferably elastic), flexible schemas, geographic distribution, high availability, and reliable storage.

Web serving applications can do without: complicated queries and strong transactions.

PNUTS / SHERPA: To Help You Scale Your Mountains of Data

Yahoo! Serving Storage Problem

Small records – 100KB or less

Structured records – lots of fields, evolving

Extreme data scale - Tens of TB

Extreme request scale - Tens of thousands of requests/sec

Low latency globally - 20+ datacenters worldwide

High Availability - outages cost $millions

Variable usage patterns - as applications and users change


What is PNUTS/Sherpa?

A parallel database with geographic replication, a structured, flexible schema, and a hosted, managed infrastructure.

CREATE TABLE Parts (
  ID VARCHAR,
  StockNumber INT,
  Status VARCHAR,
  …
)

(Figure: an example table whose records are replicated across regions.)

What Will It Become?

(Figure: the example table replicated at several regions.)

Indexes and views

Design Goals

Scalability: thousands of machines; easy to add capacity; restrict the query language to avoid costly queries.

Geographic replication: asynchronous replication around the globe; low-latency local access.

High availability and fault tolerance: automatically recover from failures; serve reads and writes despite failures.

Consistency: per-record guarantees; timeline model; option to relax if needed.

Multiple access paths: hash table, ordered table; primary, secondary access.

Hosted service: applications plug and play; share operational cost.

Technology Elements

Applications

PNUTS API / Tabular API

PNUTS: query planning and execution; index maintenance

Distributed infrastructure for tabular data: data partitioning; update consistency; replication

YDOT FS: ordered tables

YDHT FS: hash tables

Tribble: pub/sub messaging

Zookeeper: consistency service

YCA: authorization

Data Manipulation

Per-record operations: Get, Set, Delete

Multi-record operations: Multiget, Scan, Getrange
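A hypothetical client session illustrating these operations against a toy in-memory store; the class and method names are invented for this sketch and are not the real PNUTS/Sherpa API:

# Toy record store exposing the per-record and multi-record operations above.
class ToyRecordStore:
    def __init__(self):
        self.table = {}

    # per-record operations
    def set(self, key, record):   self.table[key] = record
    def get(self, key):           return self.table.get(key)
    def delete(self, key):        self.table.pop(key, None)

    # multi-record operations
    def multiget(self, keys):     return {k: self.table.get(k) for k in keys}
    def scan(self):               return sorted(self.table.items())
    def getrange(self, lo, hi):   return [(k, v) for k, v in self.scan() if lo <= k < hi]

store = ToyRecordStore()
store.set("alice", {"status": "Busy"})
store.set("bob", {"status": "Free"})
print(store.get("alice"))
print(store.multiget(["alice", "bob"]))
print(store.getrange("a", "b"))   # ordered-table style range query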

Tablets—Hash Table

(Figure: an example table with Name, Description and Price columns whose rows are fruits — Apple, Avocado, Banana, Grape, Kiwi, Lemon, Lime, Orange, Strawberry, Tomato — with prices ranging from $1 to $900. In the hash-table organization, each name is hashed into the range 0x0000–0xFFFF and tablets cover contiguous hash ranges, with example split points at 0x2AF3 and 0x911F.)
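A minimal sketch of hash-table tablet routing as in the figure above: hash the record's name into a 16-bit space and pick the tablet whose interval covers it. The split points reuse the figure's example values; everything else is illustrative:

# Route a key to a tablet by hashing it into 16 bits and locating its interval.
import bisect, hashlib

BOUNDARIES = [0x0000, 0x2AF3, 0x911F, 0xFFFF]   # tablet split points from the figure

def key_hash(name: str) -> int:
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % 0x10000

def tablet_for(name: str) -> int:
    """Index of the tablet whose hash interval contains this key."""
    return min(bisect.bisect_right(BOUNDARIES, key_hash(name)) - 1,
               len(BOUNDARIES) - 2)

for fruit in ["Apple", "Lime", "Tomato"]:
    print(fruit, hex(key_hash(fruit)), "-> tablet", tablet_for(fruit))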

Tablets—Ordered Table

(Figure: the same Name/Description/Price table in the ordered-table organization: rows are stored sorted by name, and tablets cover contiguous key ranges with example split points at A, H, Q and Z.)

Flexible Schema

Posted date   Listing id   Item    Price
6/1/07        424252       Couch   $570
6/1/07        763245       Bike    $86
6/3/07        211242       Car     $1123
6/5/07        421133       Lamp    $15

Some records additionally carry fields such as Color (Red) or Condition (Good, Fair); records in the same table need not share the same set of columns.

Detailed Architecture

(Figure: clients access the system through a REST API; requests pass through routers to the storage units, whose tablet assignments are managed by a tablet controller; updates flow from the local region to remote regions via Tribble.)

Tablet Splitting and Balancing

Each storage unit has many tablets (horizontal partitions of the table).

Tablets may grow over time; overfull tablets split.

A storage unit may become a hotspot; shed load by moving tablets to other servers.

QUERY PROCESSING


Accessing Data

(Figure: a single-record read: (1) the client sends "get key k" to a router, (2) the router forwards it to the storage unit holding k, (3) the storage unit returns the record for k, (4) the router returns it to the client.)

Bulk Read

(Figure: a bulk read: (1) the client sends a set of keys {k1, k2, … kn} to a scatter/gather server, (2) which issues Get k1, Get k2, Get k3 in parallel to storage units 1, 2 and 3.)

Range Queries in YDOT: clustered, ordered retrieval of records

(Figure: the router maps key ranges such as Apple–Blueberry, Canteloupe–Lemon, Lime–Orange and Strawberry–Watermelon to storage units; a range query like "Grapefruit…Pear?" is split into sub-ranges ("Grapefruit…Lime?", "Lime…Pear?") and sent to the storage units holding those ranges.)

Updates

(Figure: the write path through routers, storage units and message brokers: the client's "write key k" passes through a router to the record's storage unit, the update is published to the message brokers and acknowledged with SUCCESS, the brokers asynchronously propagate it to the other replicas, and a sequence number for key k is returned through the router to the client.)

ASYNCHRONOUS REPLICATION AND CONSISTENCY


Asynchronous Replication


Goal: Make it easier for applications to reason about updates and cope with asynchrony

What happens to a record with primary key “Alice”?

Consistency Model

128

Time

Record inserted

Update Update Update UpdateUpdate Delete

Timev. 1 v. 2 v. 3 v. 4 v. 5 v. 7

Generation 1

v. 6 v. 8

Update Update

As the record is updated, copies may get out of sync.

Example: Social Alice

(Figure: a record for user Alice replicated in the West and East regions; her status changes between Busy and Free, and until an update propagates, readers in different regions may see different values.)

Record Timeline

(Figure: versions v.1 through v.8 of a record within generation 1; copies that have not yet received recent updates are stale versions, the latest copy is the current version, and a read may be served from a stale copy.)

Consistency Model

In general, reads are served using a local copy.

(Figure: an ordinary read may return a stale version of the record; a "read up-to-date" request returns the current version.)

Consistency Model

But the application can request and get the current version, or any version at least as new as a given one (e.g. "read ≥ v.6").

Consistency Model

Or variations such as "read forward": while copies may lag the master record, every copy goes through the same sequence of changes.

Consistency Model

Achieved via a per-record primary copy protocol. (To maximize availability, record mastership is automatically transferred if a site fails.)

Can be selectively weakened to eventual consistency (local writes that are reconciled using version vectors).

(Figure: a conditional write "write if = v.7" returns an ERROR when the record has already advanced past v.7.)

Consistency Model

Test-and-set writes facilitate per-record transactions.
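A minimal sketch of a test-and-set write against a toy versioned store: the update applies only if the record is still at the version the writer read, which is the mechanism behind per-record transactions. The store and its API are purely illustrative:

# Toy versioned store with a conditional (test-and-set) write.
class VersionMismatch(Exception):
    pass

class ToyTimelineStore:
    def __init__(self):
        self.data = {}      # key -> (version, value)

    def get(self, key):
        return self.data.get(key, (0, None))

    def test_and_set(self, key, expected_version, new_value):
        version, _ = self.get(key)
        if version != expected_version:
            raise VersionMismatch(f"expected v.{expected_version}, found v.{version}")
        self.data[key] = (version + 1, new_value)
        return version + 1

store = ToyTimelineStore()
v, _ = store.get("alice")
store.test_and_set("alice", v, "Busy")          # succeeds, writes v.1
try:
    store.test_and_set("alice", v, "Free")      # stale version -> rejected
except VersionMismatch as err:
    print("write rejected:", err)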

Consistency Techniques

Per-record mastering: each record is assigned a "master region" (which may differ between records); updates to the record are forwarded to the master region, ensuring a consistent ordering of updates.

Tablet-level mastering: each tablet is assigned a "master region"; inserts and deletes of records are forwarded to the master region, which also decides tablet splits.

These details are hidden from the application, except for the latency impact!

Mastering

(Figure: the example table replicated in several regions, with one replica designated the tablet master.)

Bulk Insert/Update/Replace

1. The client feeds records from the source data to the bulk manager.

2. The bulk loader transfers records to storage units in batches, bypassing the routers and message brokers for efficient import into the storage units.

Bulk Load in YDOT

YDOT bulk inserts can cause performance hotspots

Solution: preallocate tablets

Index Maintenance

How to have lots of interesting indexes and views without killing performance?

Solution: asynchrony! Indexes and views are updated asynchronously when the base table is updated.

SHERPA IN CONTEXT

Types of Record Stores

Query expressiveness, from simple to feature-rich: object retrieval (S3) → retrieval from a single table of objects/records (PNUTS) → SQL (Oracle)

Types of Record Stores

Consistency model, from best effort to strong guarantees: eventual consistency (S3) → timeline, object-centric consistency (PNUTS) → ACID, program-centric consistency (Oracle)

Types of Record Stores

Data model, from flexibility and schema evolution (CouchDB) through PNUTS to optimized-for-fixed-schemas (Oracle); the systems also differ in whether consistency is object-centric or spans objects.

Types of Record Stores

Elasticity (ability to add resources on demand), from inelastic to elastic: Oracle (limited, via data distribution) → PNUTS and S3 (VLSD: very large scale distribution/replication)

Data Stores Comparison (versus PNUTS)

User-partitioned SQL stores (Microsoft Azure SDS, Amazon SimpleDB): more expressive queries, but users must control partitioning and elasticity is limited.

Multi-tenant application databases (Salesforce.com, Oracle on Demand): highly optimized for complex workloads, but limited flexibility for evolving applications, and they inherit the limitations of the underlying data management system.

Mutable object stores (Amazon S3): object storage versus record management.

Application Design Space

(Figure: a 2x2 design space — records vs. files on one axis, "get a few things" vs. "scan everything" on the other — positioning Sherpa, MObStor, Everest, Hadoop, YMDB, MySQL, Filer, Oracle and BigTable.)

Alternatives Matrix

(Figure: a comparison of Sherpa, Y! UDB, MySQL, Oracle, HDFS, BigTable, Dynamo and Cassandra along the dimensions: elastic, operability, global low latency, availability, structured access, updates, consistency model, and SQL/ACID.)

QUESTIONS?


Hadoop

Problem

How do you scale up applications? Jobs process hundreds of terabytes of data, which takes 11 days to read on one computer.

You need lots of cheap computers. That fixes the speed problem (15 minutes on 1000 computers), but creates reliability problems: in large clusters, computers fail every day, and the cluster size is not fixed.

A common infrastructure is needed, and it must be efficient and reliable.

Solution

Open-source Apache project. Hadoop Core includes:

Distributed File System: distributes data
Map/Reduce: distributes applications

Written in Java. Runs on Linux, Mac OS X, Windows, and Solaris, on commodity hardware.

Hardware Cluster of Hadoop

Typically a 2-level architecture: nodes are commodity PCs; 40 nodes per rack; the uplink from each rack is 8 gigabit; rack-internal bandwidth is 1 gigabit.

Distributed File System

A single namespace for the entire cluster, managed by a single namenode. Files are single-writer and append-only, optimized for streaming reads of large files.

Files are broken into large blocks, typically 128 MB, replicated to several datanodes for reliability.

Access from Java, C, or the command line.

Block Placement

The default is 3 replicas, but this is settable. Blocks are placed (writes are pipelined): one replica on the same node, one on a different rack, and one more on that other rack.

Clients read from the closest replica. If the replication for a block drops below target, it is automatically re-replicated, as sketched below.
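A minimal sketch of that placement rule over a made-up two-rack cluster map (the real HDFS policy has more checks, e.g. for node capacity and load):

# Illustrative replica placement: same node, a different rack, then that other rack.
import random

CLUSTER = {
    "rack1": ["r1n1", "r1n2", "r1n3"],
    "rack2": ["r2n1", "r2n2", "r2n3"],
}

def rack_of(node):
    return next(rack for rack, nodes in CLUSTER.items() if node in nodes)

def place_replicas(writer_node, replication=3):
    first = writer_node                                   # replica 1: writer's node
    other_rack = next(r for r in CLUSTER if r != rack_of(writer_node))
    second = random.choice(CLUSTER[other_rack])           # replica 2: different rack
    third = random.choice([n for n in CLUSTER[other_rack] if n != second])
    return [first, second, third][:replication]           # replica 3: same other rack

print(place_replicas("r1n2"))   # e.g. ['r1n2', 'r2n3', 'r2n1']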

How is Yahoo using Hadoop?

Started with building better applications: scale up web-scale batch applications (search, ads, …); factor out common code from existing systems, so new applications are easier to write; manage the many clusters.

Running Production WebMap

Search needs a graph of the "known" web: invert edges, compute link text, whole-graph heuristics.

A periodic batch job using Map/Reduce: a chain of ~100 map/reduce jobs.

Scale: 1 trillion edges in the graph; the largest shuffle is 450 TB; the final output is 300 TB compressed; runs on 10,000 cores; 5 PB of raw disk used.

Terabyte Sort Benchmark

Started by Jim Gray at Microsoft in 1998: sorting 10 billion 100-byte records.

Hadoop won the general category in 209 seconds, using 910 nodes with 2 quad-core Xeons @ 2.0 GHz, 4 SATA disks and 8 GB RAM per node, 1 gigabit Ethernet per node, 40 nodes per rack and an 8 gigabit Ethernet uplink per rack.

The previous record was 297 seconds.

Hadoop clusters

We have ~20,000 machines running Hadoop. Our largest clusters are currently 2000 nodes. Several petabytes of user data (compressed, unreplicated). We run hundreds of thousands of jobs every month.

Research Cluster Usage

Who Uses Hadoop? Amazon/A9, AOL, Facebook, Fox Interactive Media, Google/IBM, New York Times, Powerset (now Microsoft), Quantcast, Rackspace/Mailtrust, Veoh, Yahoo!, and more at http://wiki.apache.org/hadoop/PoweredBy

Q&A

For more information:

Website: http://hadoop.apache.org/core

Mailing lists: core-dev@hadoop.apache, core-user@hadoop.apache

Outline

Overview of cloud computing

Google cloud computing technologies: GFS, Bigtable and MapReduce

Yahoo cloud computing technologies and Hadoop

Challenges of cloud data management

Summary of Applications

Application classes: data analysis, Internet services, private clouds, and web applications with operations that can tolerate relaxed consistency.

Representative systems: BigTable, HBase, HyperTable, Hive, HadoopDB, … and PNUTS.

Architecture

MapReduce-based (BigTable, HBase, Hypertable, Hive): scalability; fault tolerance; ability to run in a heterogeneous environment.

DBMS-based (SQL Azure, PNUTS, Voldemort): easy to support SQL; easy to utilize indexes and optimization methods; data storage can become a bottleneck.

HadoopDB

A hybrid of MapReduce and DBMS: sounds good, but what about performance?

A lot of work to do to support SQL.

Data replication in the file system; data replication on top of the DBMS.

Consistency

Two kinds of consistency:

Strong consistency: ACID (Atomicity, Consistency, Isolation, Durability)

Weak consistency: BASE (Basically Available, Soft-state, Eventual consistency)

(Figure: BigTable, HBase, Hive, Hypertable, HadoopDB, PNUTS and SQL Azure positioned among the CAP properties of consistency (C), availability (A) and partition tolerance (P).)

(Figure: a traditional RDBMS characterized by 3NF, transactions, locks, ACID and safety.)

Further Reading

Efficient Bulk Insertion into a Distributed Ordered Table. Adam Silberstein, Brian Cooper, Utkarsh Srivastava, Erik Vee, Ramana Yerneni, Raghu Ramakrishnan. SIGMOD 2008.

PNUTS: Yahoo!'s Hosted Data Serving Platform. Brian Cooper, Raghu Ramakrishnan, Utkarsh Srivastava, Adam Silberstein, Phil Bohannon, Hans-Arno Jacobsen, Nick Puz, Daniel Weaver, Ramana Yerneni. VLDB 2008.

Asynchronous View Maintenance for VLSD Databases. Parag Agrawal, Adam Silberstein, Brian F. Cooper, Utkarsh Srivastava, Raghu Ramakrishnan. SIGMOD 2009.

Cloud Storage Design in a PNUTShell. Brian F. Cooper, Raghu Ramakrishnan, Utkarsh Srivastava. Beautiful Data, O'Reilly Media, 2009.

Further Reading

F. Chang et al. Bigtable: A distributed storage system for structured data. In OSDI, 2006.

J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. In OSDI, 2004.

G. DeCandia et al. Dynamo: Amazon's highly available key-value store. In SOSP, 2007.

S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google File System. In Proc. SOSP, 2003.

D. Kossmann. The state of the art in distributed query processing. ACM Computing Surveys, 32(4):422-469, 2000.

Forthcoming cloud computing textbook, Tsinghua University Press, June 2010

Distributed Systems and Cloud Computing

Three major parts: distributed systems; an overview of cloud computing technologies; cloud computing platforms and programming guidance

Chapters of Introduction to Distributed Systems and Cloud Computing

Chapter 1: Introduction. 1.1 Overview of distributed systems; 1.2 The rise of distributed cloud computing; 1.3 Main services and applications of distributed cloud computing; 1.4 Summary

Part I: Distributed systems

Chapter 2: Getting started with distributed systems. 2.1 Definition of a distributed system; 2.2 Hardware and software in distributed systems; 2.3 Main characteristics of distributed systems (e.g. security, fault tolerance, etc.); 2.4 Summary

Chapter 3: Client-server architecture. 3.1 Client-server architecture and structure; 3.2 Client-server communication protocols; 3.3 Variants of the client-server model; 3.4 Summary

Chapter 4: Distributed objects. 4.1 The basic model of distributed objects; 4.2 Remote procedure call; 4.3 Remote method invocation; 4.4 Summary

Chapter 5: Common Object Request Broker Architecture (CORBA). 5.1 CORBA overview; 5.2 Basic CORBA services; 5.3 Fault tolerance and security; 5.4 Java IDL; 5.5 Summary

Part II: Distributed cloud computing technologies

Chapter 6: Overview of distributed cloud computing. 6.1 Getting started with cloud computing; 6.2 Cloud services; 6.3 Cloud computing compared with other technologies; 6.4 Summary

Chapter 7: The three core technologies of the Google cloud platform. 7.1 The Google File System; 7.2 Bigtable; 7.3 MapReduce; 7.4 Summary

Chapter 8: Yahoo cloud platform technologies. 8.1 PNUTS: a flexible, general-purpose table storage platform; 8.2 Pig: a platform for analyzing large data sets; 8.3 ZooKeeper: a centralized service platform providing group services; 8.4 Summary

Chapter 9: Aneka cloud platform technologies. 9.1 The Aneka cloud platform; 9.2 Market-oriented cloud architecture; 9.3 Aneka: from enterprise grids to market-oriented cloud computing; 9.4 Summary

Chapter 10: Greenplum cloud platform technologies. 10.1 Overview of the Greenplum system; 10.2 The Greenplum analytic database; 10.3 Architecture and characteristics of the Greenplum database; 10.4 Key features and advantages of Greenplum; 10.5 Summary

Chapter 11: Amazon Dynamo cloud platform technologies. 11.1 Overview of Amazon Dynamo; 11.2 Background of Amazon Dynamo; 11.3 The Amazon Dynamo system architecture; 11.4 Summary

Chapter 12: IBM technologies. 12.1 Overview of IBM cloud computing; 12.2 IBM CloudBurst (云风暴); 12.3 IBM smart business services; 12.4 The IBM Smarter Planet initiative; 12.5 IBM Z systems; 12.6 IBM dynamic infrastructure with virtualization; 12.7 Summary

Part III: Programming for distributed cloud computing

Chapter 13: Development with Hadoop. 13.1 Overview of the Hadoop system; 13.2 The Map/Reduce user interface; 13.3 Task execution and the execution environment; 13.4 Practical programming examples; 13.5 Summary

Chapter 14: Development with HBase. 14.1 What is HBase; 14.2 The HBase data model; 14.3 HBase structure and functionality; 14.4 How to use HBase; 14.5 Summary

Chapter 15: Development with Google Apps. 15.1 Introduction to Google App Engine; 15.2 How to use Google App Engine; 15.3 Example applications built on Google Apps; 15.4 Summary

Chapter 16: Development with MS Azure. 16.1 Introduction to MS Azure; 16.2 Using Windows Azure services; 16.3 Summary

Chapter 17: Development examples with Amazon EC2. 17.1 Introduction to Amazon Elastic Compute Cloud; 17.2 How to use Amazon EC2; 17.3 Summary