Intro to Apache Apex (Next Gen Hadoop) & Comparison to Spark Streaming
Transcript of Intro to Apache Apex (Next Gen Hadoop) & Comparison to Spark Streaming
Devendra Tagare <[email protected]>
Software Engineer @DataTorrent Inc
@devtagare
July 6th, 2016
Introduction to Apache Apex
The next generation native Hadoop platform
What is Apex
2
• Platform and runtime engine that enables development of scalable and fault-tolerant distributed applications
• Hadoop native
• Process streaming or batch big data
• High throughput and low latency
• Library of commonly needed business logic
• Write any custom business logic in your application
Applications on Apex
3
• Distributed processing
ᵒ Application logic broken into components called operators that run in a distributed fashion across your cluster
• Scalable
ᵒ Operators can be scaled up or down at runtime according to the load and SLA
• Fault tolerant
ᵒ Automatically recover from node outages without having to reprocess from the beginning
ᵒ State is preserved
ᵒ Long running applications
• Operators
ᵒ Use the library to build applications quickly
ᵒ Write your own in Java using the API
• Operational insight – DataTorrent RTS
ᵒ See how each operator is performing and even record data
Apex Stack Overview
4
Apex Operator Library - Malhar
5
Native Hadoop Integration
6
• YARN is the resource manager
• HDFS used for storing any persistent state
Application Development Model
7
A Stream is a sequence of data tuples.
A typical Operator takes one or more input streams, performs computations & emits one or more output streams.
• Each Operator is YOUR custom business logic in Java, or a built-in operator from our open source library
• An Operator has many instances that run in parallel, and each instance is single-threaded
Directed Acyclic Graph (DAG) is made up of operators and streams
[Diagram: a Directed Acyclic Graph (DAG) of operators connected by streams. Tuples flow from operator to operator over an output stream, a filtered stream, and an enriched stream.]
Advanced Windowing Support
8
• Application window
• Sliding window and tumbling window
• Checkpoint window
• No artificial latency
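The window types above can be made concrete with a small sketch: a tumbling window cuts the stream into fixed-size, non-overlapping chunks and aggregates each one. Plain Java for illustration, not the Apex windowing API; the window size of 2 is an arbitrary choice:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of a tumbling window: the stream is cut into
// fixed-size, non-overlapping windows and each window is aggregated
// independently. (Plain Java sketch, not the Apex API.)
public class TumblingWindowSum {
    public static List<Integer> sums(int[] stream, int windowSize) {
        List<Integer> result = new ArrayList<>();
        int sum = 0;
        for (int i = 0; i < stream.length; i++) {
            sum += stream[i];
            // End-of-window boundary: emit the aggregate and reset state.
            if ((i + 1) % windowSize == 0) {
                result.add(sum);
                sum = 0;
            }
        }
        if (stream.length % windowSize != 0) {
            result.add(sum); // partial final window
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(sums(new int[]{1, 2, 1, 4, 3, 1}, 2)); // [3, 5, 4]
    }
}
```

A sliding window differs only in that consecutive windows overlap; a checkpoint window is the boundary at which operator state is persisted.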
Application in Java
9
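In Apex, an application class wires operators and streams into a DAG (in the real API, inside StreamingApplication.populateDAG). The sketch below is NOT the real com.datatorrent.api classes; it is a self-contained toy that mirrors only the shape of that model, with operators exposing ports and streams connecting an output port to input ports:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy mirror of the Apex programming model (not the real
// com.datatorrent.api): an application wires operators together by
// connecting output ports to input ports with streams.
public class MiniDag {
    interface Operator { }

    // An output port pushes each emitted tuple to every wired sink.
    static class OutputPort<T> {
        final List<Consumer<T>> sinks = new ArrayList<>();
        void emit(T tuple) { for (Consumer<T> s : sinks) s.accept(tuple); }
    }

    static class NumberGenerator implements Operator {
        final OutputPort<Integer> out = new OutputPort<>();
        void emitUpTo(int n) { for (int i = 1; i <= n; i++) out.emit(i); }
    }

    static class SumOperator implements Operator {
        int sum = 0;
        final Consumer<Integer> in = tuple -> sum += tuple;
    }

    // Analogue of populateDAG: addStream("numbers", gen.out, sum.in).
    static SumOperator populate(NumberGenerator gen) {
        SumOperator sum = new SumOperator();
        gen.out.sinks.add(sum.in);
        return sum;
    }

    public static void main(String[] args) {
        NumberGenerator gen = new NumberGenerator();
        SumOperator sum = populate(gen);
        gen.emitUpTo(4);
        System.out.println(sum.sum); // 10
    }
}
```

In a real Apex application the engine, not the application, drives the ports, and each operator instance runs single-threaded as described above.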
Operators
10
Operators (contd)
11
Partitioning and unification
12
[Diagrams: a logical DAG 0 → 1 → 2 → 3; the physical diagram with operator 1 running as 3 partitions (1a, 1b, 1c) merged by a unifier; a physical DAG with NxM partitions (1a, 1b, 1c) and (2a, 2b) using a unifier per downstream partition, with no bottleneck; and a physical DAG with (1a, 1b, 1c) and (2a, 2b) where a single intermediate unifier becomes a bottleneck.]
Advanced Partitioning
13
[Diagrams: parallel partitioning and cascading unifiers. In the parallel-partition physical DAG, partitions of consecutive operators are paired into independent pipelines (1a → 2a → 3a and 1b → 2b → 3b) so no unifier is needed between them. The cascading-unifier diagrams show a logical DAG 0 → 1 → 2 → 3 → 4 and its execution plans for N = 4 upstream partitions and M = 1 downstream partition, with and without K = 2 cascading unifiers spreading the merge across containers and NICs.]
Dynamic Partitioning
14
• Partitioning change while application is running
ᵒ Change number of partitions at runtime based on stats
ᵒ Determine initial number of partitions dynamically
• Kafka operators scale according to number of Kafka partitions
ᵒ Supports re-distribution of state when number of partitions changes
ᵒ API for custom scaler or partitioner
[Diagram: a logical DAG 1 → 2 → 3 repartitioned at runtime, e.g. operator 2 growing from two partitions (2a, 2b) to four (2a, 2b, 2c, 2d) while operator 1 keeps partitions (1a, 1b); unifiers not shown.]
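A stats-driven repartitioning decision like the one described can be sketched as a simple heuristic that picks a partition count from observed load; the per-partition capacity figure and the min/max bounds are assumptions for illustration, not Apex defaults:

```java
// Toy version of a stats-based scaling decision: choose how many
// partitions are needed for the observed throughput, clamped to
// configured bounds. (Illustrative heuristic, not Apex's actual logic.)
public class PartitionScaler {
    public static int desiredPartitions(long tuplesPerSecond,
                                        long capacityPerPartition,
                                        int min, int max) {
        int needed = (int) Math.ceil((double) tuplesPerSecond / capacityPerPartition);
        return Math.max(min, Math.min(max, needed));
    }

    public static void main(String[] args) {
        // e.g. 25k tuples/s at an assumed 10k/s per partition -> 3 partitions
        System.out.println(desiredPartitions(25_000, 10_000, 1, 16)); // 3
    }
}
```

In Apex the engine feeds such a decision with per-operator stats and then redistributes operator state across the new partitions.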
How tuples are partitioned
15
• Tuple hashcode and mask used to determine destination partition
ᵒ Mask picks the last n bits of the hashcode of the tuple
ᵒ hashcode method can be overridden
• StreamCodec can be used to specify custom hashcode for tuples
ᵒ Can also be used for specifying custom serialization
tuple: {Name, 24204842, San Jose}
Hashcode: 001010100010101

| Mask (0x11) | Partition |
| 00          | 1         |
| 01          | 2         |
| 10          | 3         |
| 11          | 4         |
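The table above can be reproduced in a few lines: the destination partition is just hashcode & mask. Integers hash to themselves in Java, which keeps the arithmetic easy to verify; the values below are illustrative, not from the slide:

```java
// The masking scheme described above: AND the tuple's hashcode with a
// mask that keeps its last n bits, and use the result as the
// destination partition.
public class MaskPartitioner {
    public static int partitionFor(Object tuple, int mask) {
        return tuple.hashCode() & mask;
    }

    public static void main(String[] args) {
        int mask = 0b11; // keep the last 2 bits -> 4 possible partitions
        System.out.println(partitionFor(5, mask)); // 5 = 0b101 -> 0b01 = 1
        System.out.println(partitionFor(6, mask)); // 6 = 0b110 -> 0b10 = 2
    }
}
```

Overriding hashCode (or supplying a StreamCodec) changes which bits the mask sees, and therefore how keys spread across partitions.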
Custom partitioning
16
• Custom distribution of tuples
ᵒ E.g. broadcast
tuple: {Name, 24204842, San Jose}
Hashcode: 001010100010101

| Mask (0x00) | Partition |
| 00          | 1         |
| 00          | 2         |
| 00          | 3         |
| 00          | 4         |
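Broadcast can be sketched the same way: with a mask of 0x00 every tuple falls into bucket 0, and every physical partition is configured to accept that bucket, so all partitions receive every tuple. A toy illustration, not Apex's actual Partitioner/StreamCodec API:

```java
import java.util.ArrayList;
import java.util.List;

// Broadcast distribution as in the table above: a zero mask sends every
// tuple to bucket 0, and all partitions subscribe to bucket 0.
public class BroadcastPartitioner {
    public static List<Integer> destinations(Object tuple, int numPartitions) {
        int bucket = tuple.hashCode() & 0x00; // always 0, regardless of tuple
        List<Integer> dests = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) {
            if (bucket == 0) {
                dests.add(p); // every partition accepts bucket 0
            }
        }
        return dests;
    }
}
```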
Fault Tolerance
17
• Operator state is checkpointed to a persistent store
ᵒ Automatically performed by engine, no additional work needed by operator
ᵒ In case of failure operators are restarted from checkpoint state
ᵒ Frequency configurable per operator
ᵒ Asynchronous and distributed by default
ᵒ Default store is HDFS
• Automatic detection and recovery of failed operators
ᵒ Heartbeat mechanism
• Buffering mechanism to ensure replay of data from recovered point so that there is no loss of data
• Application master state checkpointed
Buffer Server
18
• In-memory PubSub
• Stores results emitted by operator until committed
• Handles backpressure / spillover to local disk
• Ordering, idempotency

[Diagram: Operator 1 in Container 1 on Node 1 publishes tuples to a BufferServer; Operator 2 in Container 2 on Node 2 subscribes and reads them.]
Recovery Scenario
• Stream with window markers, read right to left: … EW2, 1, 3, BW2, EW1, 4, 2, 1, BW1 (BW = begin window, EW = end window)
• The sum operator's state: 0 before window 1, 7 after processing window 1 (4 + 2 + 1), 10 partway through window 2
• On failure, the operator restarts from the checkpointed state (sum = 7) and the buffer server replays window 2, so no data is lost
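The recovery scenario above can be simulated in a few lines: the operator checkpoints its sum at each window boundary, a failure mid-window discards the partial state, and replaying the window from the buffer reproduces the failure-free result. A toy sketch, not the Apex engine:

```java
import java.util.Arrays;
import java.util.List;

// Toy replay of the recovery scenario: state is checkpointed at window
// boundaries; on failure the operator restarts from the checkpoint and
// the buffered window is replayed, so no data is lost.
public class RecoverySim {
    int sum = 0;
    int checkpoint = 0;

    void process(List<Integer> window) {
        for (int t : window) sum += t;
        checkpoint = sum; // end-of-window checkpoint
    }

    void crashAndRecover() {
        sum = checkpoint; // discard partial-window state
    }

    public static void main(String[] args) {
        RecoverySim op = new RecoverySim();
        op.process(Arrays.asList(4, 2, 1)); // window 1: sum = 7, checkpointed
        op.sum += 3;                        // partway into window 2 (sum = 10)...
        op.crashAndRecover();               // ...failure: restored to 7
        op.process(Arrays.asList(3, 1));    // buffer server replays window 2
        System.out.println(op.sum);         // 11, same as a failure-free run
    }
}
```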
19
Processing Guarantees - Recovery
20
At least once
• On recovery data will be replayed from a previous checkpoint
ᵒ Messages will not be lost
ᵒ Default mechanism and is suitable for most applications
• Can be used in conjunction with the following mechanisms to achieve exactly-once behavior in fault recovery scenarios
ᵒ Transactions with meta information, rewinding output, feedback from external entity, idempotent operations

At most once
• On recovery the latest data is made available to the operator
ᵒ Useful in use cases where some data loss is acceptable and the latest data is sufficient

Exactly once
• At least once + state recovery + operator logic to achieve end-to-end exactly once
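One of the idempotence mechanisms listed above can be sketched as a sink that keys its writes by window id, so that a window replayed after recovery overwrites the same entry instead of double-counting. A hypothetical sink, for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

// Idempotent output: results are written keyed by window id, so a
// replayed window after recovery overwrites the same entry rather
// than producing a duplicate. (Illustrative, not an Apex class.)
public class IdempotentSink {
    final Map<Long, Integer> store = new HashMap<>();

    void write(long windowId, int windowSum) {
        store.put(windowId, windowSum); // replay-safe: same key, same value
    }

    public static void main(String[] args) {
        IdempotentSink sink = new IdempotentSink();
        sink.write(1L, 7);
        sink.write(1L, 7); // replay after recovery: no double count
        System.out.println(sink.store.size()); // 1
    }
}
```

This is how at-least-once delivery plus idempotent writes composes into end-to-end exactly-once results.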
Stream Locality
21
• By default operators are deployed in containers (processes) randomly on different nodes across the Hadoop cluster
• Custom locality for streams
ᵒ Rack local: data does not traverse network switches
ᵒ Node local: data is passed via loopback interface and frees up network bandwidth
ᵒ Container local: messages are passed via in-memory queues between operators and do not require serialization
ᵒ Thread local: messages are passed between operators in the same thread, equivalent to calling a subsequent function on the message
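These locality choices are typically set per stream in the application configuration. A sketch, assuming a hypothetical application named MyFirstApplication with a stream named randomData; valid values include THREAD_LOCAL, CONTAINER_LOCAL, NODE_LOCAL, and RACK_LOCAL:

```xml
<!-- Hypothetical application and stream names; the property-key
     pattern follows Apex's attribute configuration convention. -->
<property>
  <name>dt.application.MyFirstApplication.stream.randomData.locality</name>
  <value>CONTAINER_LOCAL</value>
</property>
```

Leaving the locality unset gives the default behavior described above: the engine places containers anywhere in the cluster.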
Next Gen Stream Data Processing
• Data from a variety of sources (IoT, Kafka, files, social media, etc.)
• Unbounded, continuous data streams
ᵒ Batch can be processed as a stream (but a stream is not a batch)
• (In-memory) processing with temporal boundaries (windows)
• Stateful operations: aggregation, rules, … -> analytics
• Results stored to a variety of sinks or destinations
ᵒ Streaming application can also serve data with very low latency
22
[Diagram: example pipeline. Browser → Web Server → Kafka (logs) → Kafka Input (logs) → Decompress, Parse, Filter → Dimensions Aggregate → Kafka.]
Batch vs. Streaming
Credit: Gyula Fóra & Márton Balassi: Large-Scale Stream Processing in the Hadoop Ecosystem
23
Architecture and Features

|                   | Spark Streaming                                | Apex                                                             |
| Model             | micro-batch                                    | native streaming/data-in-motion                                  |
| Language          | Java, Scala, client bindings                   | Java (Scala)                                                     |
| API               | declarative                                    | compositional (DAG), declarative*                                |
| Locality          | data locality                                  | advanced processing locality                                     |
| Latency           | high                                           | very low (millis)                                                |
| Throughput        | very high                                      | very high                                                        |
| Scalability       | scheduler limit                                | horizontal                                                       |
| Partitioning      | standard                                       | advanced (parallel pipes, unifiers)                              |
| Connector Library | limited (certification), externally maintained | rich library of connectors and processors, part of Apex (Malhar) |
24
Operability

|                  | Spark Streaming            | Apex                                    |
| State Management | RDD, user code             | checkpointing                           |
| Recovery         | RDD lineage                | incremental (buffer server)             |
| Processing Sem.  | exactly-once*              | end-to-end exactly-once                 |
| Backpressure     | user configuration         | automatic (buffer server memory + disk) |
| Elasticity       | yes w/ limited control     | yes w/ full user control                |
| Dynamic topology | no                         | yes                                     |
| Security         | Kerberos                   | Kerberos, RBAC*, LDAP*                  |
| Multi-tenancy    | depends on cluster manager | YARN, full isolation                    |
| DevOps tools     | basic                      | REST API, DataTorrent RTS               |

25
Data Processing Pipeline Example: App Builder
26
Monitoring Console: Logical View
27
Monitoring Console: Physical View
28
Real-Time Dashboards: Real-Time Visualization
29
Resources
30
• Apache Apex - http://apex.apache.org/
• Subscribe - http://apex.apache.org/community.html
• Download - https://www.datatorrent.com/download/
• Twitter
ᵒ @ApacheApex; Follow - https://twitter.com/apacheapex
ᵒ @DataTorrent; Follow - https://twitter.com/datatorrent
• Meetups - http://www.meetup.com/topics/apache-apex
• Webinars - https://www.datatorrent.com/webinars/
• Videos - https://www.youtube.com/user/DataTorrent
• Slides - http://www.slideshare.net/DataTorrent/presentations
• Startup Accelerator Program - Full featured enterprise product
ᵒ https://www.datatorrent.com/product/startup-accelerator/
Maximize Revenue w/ real-time insights
31
PubMatic is the leading marketing automation software company for publishers. Through real-time analytics, yield management, and workflow automation, PubMatic enables publishers to make smarter inventory decisions and improve revenue performance.
Business Need
• Ingest and analyze high volume clicks & views in real-time to help customers improve revenue
ᵒ 200K events/second data flow
• Report critical metrics for campaign monetization from auction and client logs
ᵒ 22 TB/day data generated
• Handle ever increasing traffic with efficient resource utilization
• Always-on ad network

Apex based Solution
• DataTorrent Enterprise platform, powered by Apache Apex
• In-memory stream processing
• Comprehensive library of pre-built operators including connectors
• Built-in fault tolerance
• Dynamically scalable
• Management UI & Data Visualization console

Client Outcome
• Helps PubMatic deliver ad performance insights to publishers and advertisers in real-time instead of 5+ hours
• Helps publishers visualize campaign performance and adjust ad inventory in real-time to maximize their revenue
• Enables PubMatic to reduce OPEX with efficient compute resource utilization
• Built-in fault tolerance ensures customers can always access the ad network
Industrial IoT applications
32
GE is dedicated to providing advanced IoT analytics solutions to thousands of customers who are using their devices and sensors across different verticals. GE has built a sophisticated analytics platform, Predix, to help its customers develop and execute Industrial IoT applications and gain real-time insights as well as actions.
Business Need
• Ingest and analyze high-volume, high-speed data from thousands of devices and sensors per customer in real-time without data loss
• Predictive analytics to reduce costly maintenance and improve customer service
• Unified monitoring of all connected sensors and devices to minimize disruptions
• Fast application development cycle
• High scalability to meet changing business and application workloads

Apex based Solution
• Ingestion application using DataTorrent Enterprise platform
• Powered by Apache Apex
• In-memory stream processing
• Built-in fault tolerance
• Dynamic scalability
• Comprehensive library of pre-built operators
• Management UI console

Client Outcome
• Helps GE improve performance and lower cost by enabling real-time Big Data analytics
• Helps GE detect possible failures and minimize unplanned downtimes with centralized management & monitoring of devices
• Enables faster innovation with a short application development cycle
• No data loss and 24x7 availability of applications
• Helps GE adjust to scalability needs with auto-scaling
We Are Hiring
33
• [email protected]
• Developers/Architects
• QA Automation Developers
• Information Developers
• Build and Release
• Community Leaders
End
34