Hadoop Technology


Transcript of Hadoop Technology

Page 1: Hadoop Technology

TURKCELL DAHİLİ

Hadoop

Kıvanç Urgancıoğlu

Page 2: Hadoop Technology

Big Data

Velocity: Often time-sensitive, big data must be used as it streams into the enterprise in order to maximize its value to the business. (Batch, near-time, real-time, streams)

Volume: Big data comes in one size: large. Enterprises are awash with data, easily amassing terabytes and even petabytes of information. (TB, records, transactions, tables, files)

Variety: Big data extends beyond structured data to include semi-structured and unstructured data of all varieties: text, audio, video, click streams, log files, and more. (Structured, unstructured, semi-structured)

Verification: With all the big data there will be bad data, and with diverse data there will be more diverse quality and security levels of users. (Good, undefined, bad, inconsistency, incompleteness, ambiguity)

Value

Page 3: Hadoop Technology

Big Data – Data Sources

Page 4: Hadoop Technology

Big Data – Data Growth

Page 5: Hadoop Technology

Hadoop Characteristics

• Open source
• Distributed data replication
• Commodity hardware
• Data and analysis co-location
• Scalability
• Reliable error handling

Page 6: Hadoop Technology

Hadoop Storyline

• 2003: Google publishes the GFS and MapReduce papers
• 2006: Apache Hadoop project started for Yahoo requirements
• 2008: Cloudera founded
• 2009: First commercial Hadoop distribution released; enterprise support becomes available
• 2011: Hortonworks founded
• 2012: Ecosystem reaches 300 companies

Page 7: Hadoop Technology

Hadoop for Enterprise

Page 8: Hadoop Technology

RDBMS vs. Hadoop

Page 9: Hadoop Technology

RDBMS vs. Hadoop

                RDBMS                        Hadoop
Data Size       Terabytes                    Petabytes
Schema          Required on write            Required on read
Speed           Reads are fast               Writes are fast
Access          Interactive and batch        Batch
Updates         Write and read many times    Write once, read many times
Scaling         Scale up                     Scale out
Data Types      Structured                   Multi- and unstructured
Integrity       High                         Low
Best Use        Interactive OLAP analytics,  Data discovery,
                complex ACID transactions,   processing unstructured data,
                operational data store       massive storage/processing

Page 10: Hadoop Technology

Benefits of Analysing with Hadoop

• Analysis that was previously impossible or impractical to do
• Analysis conducted at lower cost
• Greater flexibility

Page 11: Hadoop Technology

Big Data & Hadoop in Turkcell

• Processing "Big Data" since 2009 with Cirrus
• Hadoop has been in production since December 2012
• ~4.5B records / ~3.5 TB of data is processed with Cirrus
• Data is not stored for future analysis
• Cloudera Distribution for Hadoop (non-supported)
• 5 x 24-core machines with SAN storage (not the reference architecture)

Page 12: Hadoop Technology

Common Hadoop-able Problems

• Modeling true risk
• Customer churn analysis
• Recommendation engine
• Ad targeting
• Point-of-sale transaction analysis
• Analyzing network data to predict failure
• Threat analysis
• Search quality
• Data 'sandbox'

Page 13: Hadoop Technology

Modeling True Risk

Page 14: Hadoop Technology

Modeling True Risk

• Source, parse, and aggregate disparate data sources to build a comprehensive data picture
  • E.g. credit card records, call recordings, chat sessions, emails, banking activity
• Structure and analyze
  • Sentiment analysis, graph creation, pattern recognition
• Typical industry
  • Financial services (banks, insurance)

Page 15: Hadoop Technology

Customer Churn Analysis

Page 16: Hadoop Technology

Customer Churn Analysis

• Rapidly test and build a behavioral model of the customer from disparate sources
• Structure and analyze with Hadoop
  • Traversing
  • Graph creation
  • Pattern recognition
• Typical industry
  • Telecommunications, financial services

Page 17: Hadoop Technology

Recommendation Engine

Page 18: Hadoop Technology

Recommendation Engine

• Batch processing framework
  • Allows execution in parallel over large datasets
• Collaborative filtering
  • Collecting 'taste' information from many users
  • Using that information to predict what similar users like
• Typical industry
  • E-commerce, manufacturing, retail

Page 19: Hadoop Technology

Ad Targeting

Page 20: Hadoop Technology

Ad Targeting

• Data analysis can be conducted in parallel, reducing processing times from days to hours
• With Hadoop, as data volumes grow the only expansion cost is hardware
  • Add more nodes without degradation in performance
• Typical industry
  • Advertising

Page 21: Hadoop Technology

Point of Sale Transaction Analysis

Page 22: Hadoop Technology

Point of Sale Transaction Analysis

• Batch processing framework
  • Allows execution in parallel over large datasets
• Pattern recognition
  • Optimizing over multiple data sources
  • Using that information to predict demand
• Typical industry
  • Retail

Page 23: Hadoop Technology

Analyzing Network Data to Predict Failure

Page 24: Hadoop Technology

Analyzing Network Data to Predict Failure

• Take the computation to the data
  • Extending the range of indexing techniques from simple scans to more complex data mining
• Better understand how the network reacts to fluctuations
  • How anomalies previously thought discrete may, in fact, be interconnected
• Identify leading indicators of component failure
• Typical industry
  • Utilities, telecommunications, datacenters

Page 25: Hadoop Technology

Threat Analysis

Page 26: Hadoop Technology

Threat Analysis

• Parallel processing over huge datasets
• Pattern recognition to identify anomalies, i.e. threats
• Typical industry
  • Security, financial services, click fraud

Page 27: Hadoop Technology

Search Quality

Page 28: Hadoop Technology

Search Quality

• Analysing search attempts in conjunction with structured data
• Pattern recognition
  • Browsing patterns of users performing searches in different categories
• Typical industry
  • Web
  • E-commerce

Page 29: Hadoop Technology

Data 'Sandbox'

Page 30: Hadoop Technology

Data 'Sandbox'

• With Hadoop, an organization can dump all of this data into an HDFS cluster
• Then use Hadoop to start trying out different analyses on the data
• See patterns or relationships that allow the organization to derive additional value from the data
• Typical industry
  • Common across all industries

Page 31: Hadoop Technology


Hadoop Core

Page 32: Hadoop Technology

Apache Hadoop Core

• Hadoop is a distributed storage and processing technology for large scale applications

• HDFS: Self healing, distributed file system for multi-structured data; breaks files into blocks & stores redundantly across cluster.

• MapReduce: Framework for running large data processing jobs in parallel across many nodes & combining results.

Page 33: Hadoop Technology

Master/Slave Model

Page 34: Hadoop Technology

Hadoop Distributed File System

• The Hadoop Distributed File System (HDFS) stores files across all of the nodes in a Hadoop cluster.
• It handles breaking the files into large blocks and distributing them across different machines.
• It also makes multiple copies of each block so that if any one machine fails, no data is lost or unavailable.
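The block-splitting and replication idea can be sketched as follows. The block size, node names, and round-robin placement here are toy assumptions for illustration; real HDFS uses 64-128 MB blocks and rack-aware placement decided by the NameNode.

```python
# Conceptual sketch: split a file into fixed-size blocks and copy each
# block onto several DataNodes (toy sizes and placement policy).
BLOCK_SIZE = 4       # toy block size in bytes (HDFS uses 64-128 MB)
REPLICATION = 3      # HDFS default replication factor
datanodes = ["dn1", "dn2", "dn3", "dn4", "dn5"]

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Chop the byte string into block_size chunks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(blocks, nodes, replication=REPLICATION):
    """Round-robin placement: each block index maps to `replication` nodes."""
    return {
        idx: [nodes[(idx + r) % len(nodes)] for r in range(replication)]
        for idx in range(len(blocks))
    }

blocks = split_into_blocks(b"hello hdfs world")
placement = place_blocks(blocks, datanodes)
print(len(blocks), placement[0])  # 4 ['dn1', 'dn2', 'dn3']
```

Losing any single node still leaves two live copies of every block, which is the fault-tolerance property described above.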

Page 35: Hadoop Technology

HDFS – Features

• Highly fault-tolerant
• High throughput
• Suitable for applications with large data sets
• Streaming access to file system data
• Can be built out of commodity hardware

Page 36: Hadoop Technology

Hadoop Distributed File System

• The brain of HDFS is the NameNode.
  • Maintains the master list of files in HDFS
  • Handles mapping of filenames to blocks
  • Knows where each block is stored
  • Ensures each block is replicated the appropriate number of times
• DataNodes are machines that store HDFS data.
  • Each DataNode is colocated with a TaskTracker to allow moving the computation to the data.

Page 37: Hadoop Technology

HDFS – Design

• Very large files
• Streaming data access
  • Time to read the whole file is more important than the time to read the first record
• Commodity hardware
• Optimized for high throughput
• Not a fit for:
  • Low-latency data access
  • Lots of small files
  • Multiple writers, arbitrary file modifications

Page 38: Hadoop Technology

HDFS Architecture

Page 39: Hadoop Technology

MapReduce

• MapReduce is the framework for running jobs in Hadoop. It provides a simple and powerful paradigm for parallelizing data processing.
• The JobTracker is the central coordinator of jobs in MapReduce. It controls which jobs are being run, which resources they are assigned, etc.
• On each node in the cluster there is a TaskTracker that is responsible for running the map or reduce tasks assigned to it by the JobTracker.
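The classic word count example shows the paradigm end to end. This is a single-process Python sketch of the data flow only; on a cluster the same three steps run as Java map and reduce tasks scheduled by the JobTracker across many TaskTrackers.

```python
# Word count in the MapReduce style: map emits (word, 1) pairs, the
# shuffle groups pairs by key, and reduce sums each group.
from collections import defaultdict

def map_phase(line):
    """Map: emit one (word, 1) pair per word in the input record."""
    for word in line.split():
        yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all values under their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: aggregate the grouped values for one key."""
    return (key, sum(values))

lines = ["big data big hadoop", "hadoop big"]
pairs = [kv for line in lines for kv in map_phase(line)]
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
print(counts)  # {'big': 3, 'data': 1, 'hadoop': 2}
```

Because each map call sees one record and each reduce call sees one key, both phases parallelize naturally across nodes.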

Page 40: Hadoop Technology


Hadoop Ecosystem

Page 41: Hadoop Technology

Hadoop Ecosystem

Page 42: Hadoop Technology

YARN

Page 43: Hadoop Technology

YARN

• The YARN resource manager, which coordinates the allocation of compute resources on the cluster.
• The YARN node managers, which launch and monitor the compute containers on machines in the cluster.
• The MapReduce application master, which coordinates the tasks running the MapReduce job. The application master and the MapReduce tasks run in containers that are scheduled by the resource manager and managed by the node managers.

Page 44: Hadoop Technology

Pig

• Pig provides an engine for executing data flows in parallel on Hadoop.
• Pig Latin is a simple-to-understand data flow language used in the analysis of large data sets.
• Pig scripts are automatically converted into MapReduce jobs by the Pig interpreter.
• Pig has an optimizer that rearranges some operations in Pig Latin scripts to give better performance and combines MapReduce jobs together.

Page 45: Hadoop Technology

Hive

• Is a data warehouse system layer built on Hadoop
• Allows you to define a structure for your unstructured big data
• Simplifies analysis and queries with an SQL-like scripting language called HiveQL
• Produces MapReduce jobs in the background
• Extensible (UDFs, UDAFs, UDTFs)
• Supports uses such as:
  • Ad hoc queries
  • Summarization
  • Data analysis

Page 46: Hadoop Technology

Hive Is Not

• … a relational database
• … designed for online transaction processing
• … suited for real-time queries and row-level updates

Page 47: Hadoop Technology

Stinger for Hive

Page 48: Hadoop Technology

Ambari

• Ambari for Hadoop clusters:
  • Provision
  • Manage
  • Monitor

Page 49: Hadoop Technology

Ambari

• Provides a step-by-step wizard for installing Hadoop services across any number of hosts
• Handles configuration of Hadoop services for the cluster

Page 50: Hadoop Technology

Sqoop and Flume

• Apache Sqoop™ is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases.
• Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large quantities of streaming data (e.g. logs) into HDFS. It has a simple and flexible architecture based on streaming data flows.

Page 51: Hadoop Technology

Schemas – HCatalog

• A table and storage management service for data created using Apache Hadoop
• Provides a shared schema and data type mechanism
• Provides a table abstraction so that users need not be concerned with where or how their data is stored
• Provides interoperability across data processing tools such as Pig, MapReduce, and Hive
• Example (Pig Latin):
  stocks_daily = load 'nyse_daily' using HCatLoader();
  cleansed = filter stocks_daily by symbol is not null;

Page 52: Hadoop Technology

Mahout

• The Apache Mahout™ machine learning library's goal is to build scalable machine learning libraries.
• Core algorithms for clustering, classification, and batch-based collaborative filtering are implemented on top of Apache Hadoop using the map/reduce paradigm.
• The core libraries are highly optimized to allow for good performance also for non-distributed algorithms.

Page 53: Hadoop Technology


Hadoop Core in Detail

Page 54: Hadoop Technology

Map Phase

• In the map phase, MapReduce gives the user an opportunity to operate on every record in the data set individually. This phase is commonly used to project out unwanted fields, transform fields, or apply filters.
• Certain types of joins and grouping can also be done in the map (e.g., joins where the data is already sorted, or hash-based aggregation).
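A map task that projects and filters can be sketched as below. The record layout and field names are invented for illustration; a real Hadoop mapper would be a Java class receiving one record per call.

```python
# Map-phase sketch: drop unwanted fields (projection) and skip records
# that fail a predicate (filter). Records and fields are hypothetical.
records = [
    {"user": "u1", "url": "/home", "ms": 120, "internal": True},
    {"user": "u2", "url": "/buy",  "ms": 340, "internal": False},
    {"user": "u3", "url": "/buy",  "ms": 90,  "internal": False},
]

def mapper(record):
    """Emit (url, latency) for external traffic only; other fields are dropped."""
    if not record["internal"]:               # filter
        yield (record["url"], record["ms"])  # projection

mapped = [kv for r in records for kv in mapper(r)]
print(mapped)  # [('/buy', 340), ('/buy', 90)]
```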

Page 55: Hadoop Technology

Data Locality

Page 56: Hadoop Technology

Combiner Phase

• Minimizes the data transferred between map and reduce tasks.
• The combiner gives applications a chance to apply their reducer logic early on.
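The effect of a combiner can be sketched in a few lines: the same summing logic the reducer would apply runs first on one mapper's local output, so fewer records cross the network. The sample pairs are invented.

```python
# Combiner sketch: pre-aggregate one mapper's (key, 1) pairs locally.
map_output = [("big", 1), ("data", 1), ("big", 1), ("big", 1), ("data", 1)]

def combine(pairs):
    """Apply the reducer's summing logic early, on local map output."""
    totals = {}
    for key, value in pairs:
        totals[key] = totals.get(key, 0) + value
    return sorted(totals.items())

combined = combine(map_output)
print(len(map_output), "records ->", len(combined), "partial sums:", combined)
# 5 records -> 2 partial sums: [('big', 3), ('data', 2)]
```

The reducers then sum these partial sums, which works because addition is associative; a combiner is only safe for such operations.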

Page 57: Hadoop Technology

Shuffle Phase

• Data arriving at the reducer has been partitioned and sorted by the map, combine, and shuffle phases.
• By default, the data is sorted by the partition key. For example, if a user has a data set partitioned on user ID, in the reducer it will be sorted by user ID as well. Thus, MapReduce uses sorting to group like keys together.
• It is possible to specify additional sort keys beyond the partition key.
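The ordering guarantee can be sketched with a plain sort. The records below are invented; the point is that sorting by (partition key, secondary key) delivers each user's records to the reducer together and in timestamp order, which is how MapReduce implements grouping and secondary sort.

```python
# Shuffle-ordering sketch: sort records by the partition key (user_id)
# and an additional secondary key (timestamp).
records = [
    ("u2", 30, "click"),
    ("u1", 20, "view"),
    ("u2", 10, "view"),
    ("u1", 5,  "login"),
]

# Sort exactly as the shuffle would: primary key first, then secondary.
shuffled = sorted(records, key=lambda r: (r[0], r[1]))
print(shuffled)
# [('u1', 5, 'login'), ('u1', 20, 'view'), ('u2', 10, 'view'), ('u2', 30, 'click')]
```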

Page 58: Hadoop Technology

Shuffle

Page 59: Hadoop Technology

Reduce Phase

• The input to the reduce phase is each key from the shuffle plus all of the records associated with that key.
• Because all records with the same value for the key are now collected together, it is possible to do joins and aggregation operations such as counting.
• The MapReduce user explicitly controls parallelism in the reduce.
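A reduce-side join illustrates why this grouping matters. The sketch below uses invented user and order tables; mappers would tag each record with its source, the shuffle groups by the join key, and each reducer call pairs up the records it receives for one key.

```python
# Reduce-side join sketch: group tagged records by key, then join per key.
from collections import defaultdict

users  = [("u1", "Ali"), ("u2", "Ayse")]
orders = [("u1", "book"), ("u1", "pen"), ("u2", "lamp")]

# Map: tag each record with its source table. Shuffle: group by join key.
grouped = defaultdict(list)
for key, name in users:
    grouped[key].append(("user", name))
for key, item in orders:
    grouped[key].append(("order", item))

def reducer(key, tagged_values):
    """Join: cross this key's user record with each of its order records."""
    names = [v for tag, v in tagged_values if tag == "user"]
    items = [v for tag, v in tagged_values if tag == "order"]
    return [(key, n, i) for n in names for i in items]

joined = [row for k, vs in sorted(grouped.items()) for row in reducer(k, vs)]
print(joined)
# [('u1', 'Ali', 'book'), ('u1', 'Ali', 'pen'), ('u2', 'Ayse', 'lamp')]
```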

Page 60: Hadoop Technology

Reduce Phase

Page 61: Hadoop Technology

Output Phase

• The reducer (or the map in a map-only job) writes its output via an OutputFormat.
• The OutputFormat is responsible for providing a RecordWriter, which takes the key-value pairs produced by the task and stores them.
• This includes serializing, possibly compressing, and writing them to HDFS, HBase, etc.

Page 62: Hadoop Technology

MapReduce Logical Flow

Page 63: Hadoop Technology

MapReduce Logical Flow

Page 64: Hadoop Technology

MapReduce Processing Model

Page 65: Hadoop Technology

Speculative Execution

• If a mapper runs slower than the others, a new instance of the mapper will be started on another machine, operating on the same data.
• The result of the first mapper to finish will be used.
• Hadoop will kill off the mapper which is still running.

Page 66: Hadoop Technology

Distributed Cache

• Sometimes all or many of the tasks in a MapReduce job will need to access a single file or a set of files.
• When thousands of map or reduce tasks attempt to open the same HDFS file simultaneously, this puts a large strain on the NameNode and the DataNodes storing that file.
• To avoid this situation, MapReduce provides the distributed cache.
• The distributed cache allows users to specify, as part of their MapReduce job, any HDFS files they want every task to have access to.
• These files are then copied onto the local disk of the task nodes as part of task initiation. Map or reduce tasks can then read them as local files.

Page 67: Hadoop Technology

Setting up Environment

• Hortonworks Sandbox: http://hortonworks.com/products/sandbox-instructions/
• VMware: http://www.vmware.com/products/player/overview.html
• Setup guide: http://hortonworks.com/wp-content/uploads/2013/03/InstallingHortonworksSandboxonWindowsUsingVMwarePlayerv2.pdf

Page 68: Hadoop Technology

Hortonworks Sandbox

Page 69: Hadoop Technology

Hortonworks Sandbox

Page 70: Hadoop Technology

MapReduce Demo

• Eclipse plugin:
  • HDFS operations
  • Running WordCount, TopK
  • Generating jars for the HDP Sandbox
• Sandbox:
  • HDFS operations
  • Loading and running jar files
  • Oozie and Ambari

Page 71: Hadoop Technology

Hive Demo

• Create table with HCatalog
• Load data into Hive
• Query data
• Output to table/HDFS/local
• JOIN

Page 72: Hadoop Technology

Pig Demo

• Load data
• Transform
• Grouping
• JOIN

Page 73: Hadoop Technology

Thank You