Page 1

Big Data Programming: an Introduction

Spring 2015, X. Zhang

Fordham Univ.

Page 2

Outline

• What the course is about • scope

• Introduction to big data programming
• Opportunity and challenge of big data
• Origin of Hadoop
• High-level overview: HDFS, MapReduce, YARN

Page 3

Learning Goal

• Understand concepts in distributed computing for big data

• Able to develop MapReduce programs to crunch big data

• Able to perform basic management/administration/troubleshooting of Hadoop cluster

• Able to understand and use tools in the Hadoop ecosystem through self-learning • final projects/presentations

Page 4

Prerequisite

• Proficiency in C++, Java or Python
  – And being able to pick up a new language quickly

• Familiarity with Unix/Linux systems
  – Understanding of Unix file systems, users and permissions…
  – Basic Unix commands
  – Shell scripting: to automate running your programs and collecting results…

Page 5

What is Big Data

• Data sets that grow so large that they become awkward to work with using on-hand database management tools. (Wikipedia)

Page 6

Where do they come from?

• New York Stock Exchange: one terabyte of new trade data per day

• Facebook: 10 billion photos, one petabyte of storage

• Data generated by machines: logs, sensor networks, GPS traces, electronic transactions, …

• Have you collected data?
  – Network trace projects: Internet measurements…

Page 7

Multiples of Bytes: decimal prefixes

• 1000¹ kB kilobyte
• 1000² MB megabyte
• 1000³ GB gigabyte
• 1000⁴ TB terabyte
• 1000⁵ PB petabyte
• 1000⁶ EB exabyte
• 1000⁷ ZB zettabyte
• 1000⁸ YB yottabyte

Page 8

Cost of Storage

• 1991: consumer grade, 1 gigabyte (1/1000 TB) disk drives, US$2699
• 1995: 1 GB drives, US$849
• 2007: 1 terabyte hard disk, US$375
• 2010: 2 terabyte hard disk, US$200
• 2012: 4 terabyte hard disk US$450, 1 terabyte hard disk US$100
• 2013: 4 terabyte hard disk US$179, 3 terabyte hard disk US$129, 2 terabyte hard disk US$100, 1 terabyte hard disk US$80
• 2014: 4 terabyte hard disk US$150, 3 terabyte hard disk US$129, 2 terabyte hard disk US$90, 1 terabyte hard disk US$60

Page 9

Challenges

• General problem in the Big Data era: how to process a very big volume of data in a reasonable amount of time?
  • It turns out that disk bandwidth has become the bottleneck, i.e., a hard disk cannot read data fast enough…
  • Solution: parallel processing

• Google’s problem: to crawl, analyze and rank web pages into a giant inverted index (to support the search engine)
• Google engineers went ahead and built their own systems:
  • Google File System, “exabyte-scale data management using commodity hardware”
  • Google MapReduce (GMR), “implementation of a design pattern applied to massively parallel processing”

Page 10

Background: Inverted Index

• Goal: to support search queries, where we need to locate documents containing some given words, and then rank these documents by relevance

• Means: create an inverted index, which stores, for each word, a list of the documents containing it. Example:

  Word   Documents where the word appears
  the    Document 1, Document 3, Document 4, Document 5
  cow    Document 2, Document 3, Document 4
  says   Document 5
  moo    Document 7
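To make the idea concrete, here is a minimal in-memory sketch in Java (the class name InvertedIndex and the integer document ids are illustrative only; at web scale the index is built with MapReduce, as discussed later):

  import java.util.*;

  // Build an inverted index: word -> sorted set of ids of documents containing it.
  public class InvertedIndex {

      public static Map<String, Set<Integer>> build(Map<Integer, String> docs) {
          Map<String, Set<Integer>> index = new TreeMap<>();
          for (Map.Entry<Integer, String> doc : docs.entrySet()) {
              // Split each document into lowercase words; record the document id under each word.
              for (String word : doc.getValue().toLowerCase().split("\\W+")) {
                  if (!word.isEmpty()) {
                      index.computeIfAbsent(word, w -> new TreeSet<>()).add(doc.getKey());
                  }
              }
          }
          return index;
      }

      public static void main(String[] args) {
          Map<Integer, String> docs = new HashMap<>();
          docs.put(5, "the cow says moo");  // hypothetical document contents
          docs.put(7, "moo");
          // Prints: {cow=[5], moo=[5, 7], says=[5], the=[5]}
          System.out.println(build(docs));
      }
  }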


Page 11

Hadoop History

• Origin: the Nutch project (later developed at Yahoo!), built to crawl and index a large number of web pages

• Idea: the program is distributed, and each part processes the portion of the data stored with it

• Two Google papers => Hadoop project (an open source implementation of a distributed file system and the MapReduce framework)

• Hadoop: scheduling and resource management framework for executing map and reduce jobs in a cluster environment

• Now an open source project, Apache Hadoop

• Hadoop ecosystem: various tools to make it easier to use

• Hive, Pig: tools that can translate a more abstract description of a workload into map-reduce pipelines.

Page 12

High-level View: HDFS, MapReduce


Page 13

HDFS (Hadoop Distributed File System)

• A file system running on clusters of commodity hardware

• Capable of storing very large files

• Optimized for streaming data access (i.e., sequential reads)
  • fits the initial intent of Hadoop: large, parallel batch-processing jobs

• Resilient to node failures, via replication

Page 14

HDFS as a file system

• Command line operations:
  • hadoop fs -ls (-mkdir, -cat, …)
  • hadoop fs -copyFromLocal …
  • hadoop fs -copyToLocal …

• Java programming API:
  • open, close, read and write files… from programs
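As a taste of the Java API, a minimal sketch using the standard org.apache.hadoop.fs classes to print an HDFS file to standard output (the class name HdfsCat is illustrative; pass an HDFS path as the argument):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IOUtils;

  // Read a file from HDFS and copy its bytes to stdout, like "hadoop fs -cat".
  public class HdfsCat {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();  // picks up core-site.xml etc. from the classpath
          FileSystem fs = FileSystem.get(conf);      // handle to the configured (distributed) file system
          try (FSDataInputStream in = fs.open(new Path(args[0]))) {
              IOUtils.copyBytes(in, System.out, 4096, false);  // 4 KB buffer; don't close System.out
          }
      }
  }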


Page 15

MapReduce

• End-user MapReduce API for programming MapReduce applications.

• MapReduce framework: the runtime implementation of the various phases, such as the map phase, the sort/shuffle/merge aggregation, and the reduce phase.

• MapReduce system: the backend infrastructure required to run the user’s MapReduce application, manage cluster resources, schedule thousands of concurrent jobs, etc.


Page 16

MapReduce Programming Model

[Diagram: input, a set of [key, value] pairs → split → map → intermediate [key, value] pairs, grouped by key into [k1: v11, v12, …], [k2: v21, v22, …] → shuffle → reduce → output, a set of [key, value] pairs]

Page 17

Word Count Example

• Example: counting the number of occurrences of each word in a large collection of documents.
• Pseudo-code:

  map(String key, String value):
      // key: document name
      // value: document contents
      for each word w in value:
          EmitIntermediate(w, "1");

  reduce(String key, Iterator values):
      // key: a word
      // values: a list of counts
      int result = 0;
      for each v in values:
          result += ParseInt(v);
      Emit(AsString(result));
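The same algorithm as a complete Hadoop job in Java, closely following the canonical WordCount from the Apache Hadoop documentation (run with e.g. hadoop jar wc.jar WordCount <input dir> <output dir>; the paths are examples):

  import java.io.IOException;
  import java.util.StringTokenizer;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class WordCount {

      // map: (offset, line) -> (word, 1) for every word in the line
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
          private final static IntWritable one = new IntWritable(1);
          private Text word = new Text();

          public void map(Object key, Text value, Context context)
                  throws IOException, InterruptedException {
              StringTokenizer itr = new StringTokenizer(value.toString());
              while (itr.hasMoreTokens()) {
                  word.set(itr.nextToken());
                  context.write(word, one);
              }
          }
      }

      // reduce: (word, [1, 1, ...]) -> (word, total count)
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
          private IntWritable result = new IntWritable();

          public void reduce(Text key, Iterable<IntWritable> values, Context context)
                  throws IOException, InterruptedException {
              int sum = 0;
              for (IntWritable val : values) {
                  sum += val.get();
              }
              result.set(sum);
              context.write(key, result);
          }
      }

      public static void main(String[] args) throws Exception {
          Job job = Job.getInstance(new Configuration(), "word count");
          job.setJarByClass(WordCount.class);
          job.setMapperClass(TokenizerMapper.class);
          job.setCombinerClass(IntSumReducer.class);   // combiner: local pre-aggregation on each mapper
          job.setReducerClass(IntSumReducer.class);
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(IntWritable.class);
          FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
          FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS
          System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
  }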


Page 18

WeatherData Example

• Problem: find the highest temperature for each year
• Input: a single file containing multiple years of weather data
• Output: [year, highest_temp] pairs (a mapper/reducer sketch follows the diagram below)


[Diagram: input [k, v] pairs → map → intermediate [k, v] pairs → reduce → output [k, v] pairs]
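A minimal sketch of the mapper and reducer for this job, assuming (for illustration only) that each input line has the simplified form "year<TAB>temperature"; a real weather data set will use a different record format, which only changes the parsing inside map():

  import java.io.IOException;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;

  public class MaxTemperature {

      // map: (byte offset, "year<TAB>temp") -> (year, temp)
      public static class MaxTempMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
          @Override
          public void map(LongWritable key, Text value, Context context)
                  throws IOException, InterruptedException {
              String[] fields = value.toString().split("\t");
              context.write(new Text(fields[0]), new IntWritable(Integer.parseInt(fields[1])));
          }
      }

      // reduce: (year, [temp, temp, ...]) -> (year, highest temp)
      public static class MaxTempReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
          @Override
          public void reduce(Text year, Iterable<IntWritable> temps, Context context)
                  throws IOException, InterruptedException {
              int max = Integer.MIN_VALUE;
              for (IntWritable t : temps) {
                  max = Math.max(max, t.get());
              }
              context.write(year, new IntWritable(max));
          }
      }

      // The driver is configured exactly like the WordCount example above,
      // substituting MaxTempMapper/MaxTempReducer and Text/IntWritable output types.
  }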

Page 19

Parallel Execution: Scaling Out

A MapReduce job is a unit of work that the client/user wants performed. It consists of:

• input data

• a MapReduce program

• configuration information

The Hadoop system:

• divides the job into map and reduce tasks

• divides the input into fixed-size pieces called input splits, or simply splits

• creates one map task for each split, which runs the user-defined map function for each record in the split


Page 20

MapReduce and HDFS

The parallelism of MapReduce, combined with the very high aggregate I/O bandwidth that HDFS provides across a large cluster, makes the economics of the system extremely compelling – a key factor in the popularity of Hadoop.

Key: lack of data motion, i.e., move the computation to the data; do not move data to compute nodes over the network.

Specifically, MapReduce tasks can be scheduled on the same physical nodes on which the data resides in HDFS, which exposes the underlying storage layout across the cluster.

Benefit: this reduces network I/O and keeps most of the I/O on the local disk or within the same rack.


Page 21

Hadoop 1.x

There are two types of nodes that control the job execution process: a jobtracker and a number of tasktrackers.

• jobtracker: coordinates all jobs run on the system by scheduling tasks to run on tasktrackers.

• Tasktrackers: run tasks and send progress reports to the jobtracker, which keeps a record of the overall progress of each job. If a task fails, the jobtracker can reschedule it on a different tasktracker.


Page 22

YARN: Yet Another Resource Negotiator

• Resource management => a global ResourceManager

• Per-node resource monitor => NodeManager

• Job scheduling/monitoring => a per-application ApplicationMaster (AM)


Hadoop daemons are Java processes that run in the background and talk to each other via RPC (the start-up scripts use SSH to launch the daemons on each node).

Page 23

YARN:

• Master-slave system: the ResourceManager and the per-node slave, the NodeManager (NM), form the new, generic system for managing applications in a distributed manner.

• ResourceManager: the ultimate authority that arbitrates resources among all applications in the system.

• Pluggable Scheduler: allocates resources to the various running applications
  • based on the resource requirements of the applications
  • based on the abstract notion of a Resource Container, which incorporates resource elements such as memory, CPU, disk, network, etc.

• Per-application ApplicationMaster: negotiates resources from the ResourceManager and works with the NodeManager(s) to execute and monitor the component tasks.
