OLAP with Spark and Cassandra

Posted on 15-Jan-2015


Transcript of OLAP with Spark and Cassandra

OLAP WITH SPARK AND CASSANDRA

EVAN CHAN, JULY 2014

WHO AM I?

Principal Engineer, Socrata, Inc.
@evanfchan
http://github.com/velvia
Creator of Spark Job Server

WE BUILD SOFTWARE TO MAKE DATA USEFUL TO MORE PEOPLE.

data.edmonton.ca finances.worldbank.org data.cityofchicago.org data.seattle.gov data.oregon.gov data.wa.gov www.metrochicagodata.org data.cityofboston.gov info.samhsa.gov explore.data.gov data.cms.gov data.ok.gov data.nola.gov data.illinois.gov data.colorado.gov data.austintexas.gov data.undp.org www.opendatanyc.com data.mo.gov data.nfpa.org data.raleighnc.gov dati.lombardia.it data.montgomerycountymd.gov data.cityofnewyork.us data.acgov.org data.baltimorecity.gov data.energystar.gov data.somervillema.gov data.maryland.gov data.taxpayer.net bronx.lehman.cuny.edu data.hawaii.gov data.sfgov.org

WE ARE SWIMMING IN DATA!

BIG DATA AT OOYALA

2.5 billion analytics pings a day = almost a trillion events a year.
Roll-up tables: 30 million rows per day

BIG DATA AT SOCRATA

Hundreds of datasets, each one up to 30 million rows
Customer demand for billion-row datasets

HOW CAN WE ALLOW CUSTOMERS TO QUERY A YEAR'S WORTH OF DATA?

Flexible - complex queries included. Sometimes you can't denormalize your data enough.

Fast - interactive speeds

RDBMS? POSTGRES?

Start hitting latency limits at ~10 million rows
No robust and inexpensive solution for querying across shards
No robust way to scale horizontally
Complex and expensive to improve performance (e.g. rollup tables)

OLAP CUBES?

Materialize a summary for every possible combination
Too complicated and brittle
Takes forever to compute
Explodes storage and memory
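The "explodes storage" point is easy to quantify: a full cube pre-aggregates one cell per combination of dimension values, so the cell count grows with the product of the cardinalities. A minimal sketch, with hypothetical dimension sizes:

```scala
// Hypothetical dimension cardinalities: state, month, category, customer.
val cardinalities = Seq(50, 12, 100, 1000)

// A full OLAP cube materializes one summary cell per combination,
// so the cell count is the product of the cardinalities.
val cells = cardinalities.map(_.toLong).product  // 60,000,000 cells

// Add one more 1000-value dimension and it becomes 60 billion.
```

Every new dimension multiplies the cube size, which is why pre-materializing all combinations quickly becomes untenable.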

"When in doubt, use brute force." - Ken Thompson

CASSANDRA

Horizontally scalable
Very flexible data modelling (lists, sets, custom data types)
Easy to operate
No fear of number of rows or documents
Best-of-breed storage technology, huge community
BUT: simple queries only

APACHE SPARK

Horizontally scalable, in-memory queries
Functional Scala transforms - map, filter, groupBy, sort, etc.
SQL, machine learning, streaming, graph, R, and many more plugins, all on ONE platform - feed your SQL results to a logistic regression, easy!
THE hottest big data platform, huge community, leaving Hadoop in the dust
Developers love it

SPARK PROVIDES THE MISSING FAST, DEEP ANALYTICS PIECE OF CASSANDRA!

INTEGRATING SPARK AND CASSANDRA

Scala solutions:

Datastax integration (CQL-based): https://github.com/datastax/cassandra-driver-spark
Calliope

A bit more work:

Use a traditional Cassandra client with RDDs
Use an existing InputFormat, like CqlPagedInputFormat

EXAMPLE CUSTOM INTEGRATION USING ASTYANAX

val cassRDD = sc.parallelize(rowkeys)
                .flatMap { rowkey =>
                  columnFamily.get(rowkey).execute().asScala
                }

A SPARK AND CASSANDRA OLAP ARCHITECTURE

SEPARATE STORAGE AND QUERY LAYERS

Combine best-of-breed storage and query platforms
Take full advantage of the evolution of each
Storage handles replication for availability
Query layer can replicate data for scaling read concurrency - independently!

SCALE NODES, NOT DEVELOPER TIME!!

KEEPING IT SIMPLE

Maximize row scan speed
Columnar representation for efficiency
Compressed bitmap indexes for fast algebra
Functional transforms for easy memoization, testing, concurrency, composition
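The "compressed bitmap indexes for fast algebra" idea can be sketched with plain Scala BitSets. This is a toy illustration, not the talk's actual index format (real engines use compressed representations, and the column names here are made up):

```scala
import scala.collection.immutable.BitSet

// One bitmap per distinct value: the set bits are the row positions
// where the column holds that value.
val offenseIndex = Map(
  "Burglary" -> BitSet(0, 2, 5),
  "Theft"    -> BitSet(1, 3, 4)
)
val yearIndex = Map(
  2014 -> BitSet(0, 1, 2),
  2015 -> BitSet(3, 4, 5)
)

// WHERE offense = 'Theft' AND year = 2014 becomes one bitwise AND:
val matching = offenseIndex("Theft") & yearIndex(2014)  // BitSet(1)
```

Predicates compose as pure set algebra (AND = intersection, OR = union), which is why bitmap indexes make filtering so cheap before any row data is touched.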

SPARK AS CASSANDRA'S CACHE

EVEN BETTER: TACHYON OFF-HEAP CACHING

INITIAL ATTEMPTS

val rows = Seq(
  Seq("Burglary", "19xx Hurston", 10),
  Seq("Theft", "55xx Floatilla Ave", 5)
)

sc.parallelize(rows)
  .map { values => (values(0), values) }
  .groupByKey
  .map { case (key, groups) =>
    (key, groups.map(_(2).asInstanceOf[Int]).sum)
  }

No existing generic query engine for Spark when we started (Shark was in its infancy, had no indexes, etc.), so we built our own
For every row, we need to extract the needed columns
The ability to select arbitrary columns means using Seq[Any] - no type safety
Boxing makes integer aggregation very expensive and memory-inefficient

COLUMNAR STORAGE AND QUERYING

"The traditional row-based data storage approach is dead." - Michael Stonebraker

TRADITIONAL ROW-BASED STORAGE

Same layout in memory and on disk:

Name     Age
Barak    46
Hillary  66

Each row is stored contiguously. All columns in row 2 come after row 1.

COLUMNAR STORAGE (MEMORY)

Name column

Row #:  0  1
Value:  0  1

Dictionary: {0: "Barak", 1: "Hillary"}

Age column

Row #:   0   1
Value:  46  66
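The dictionary-encoded layout above can be sketched in a few lines of plain Scala (no Spark needed); the data matches the slide's example:

```scala
// Dictionary encoding: store each distinct string once, and the
// column itself as small integer codes.
val names = Seq("Barak", "Hillary", "Barak", "Barak")

val dict: Map[String, Int]    = names.distinct.zipWithIndex.toMap
val codes: Seq[Int]           = names.map(dict)   // the stored column
val reverse: Map[Int, String] = dict.map(_.swap)

// Decode only when a row has to be materialized:
val decoded = codes.map(reverse)  // back to the original strings
```

Low-cardinality string columns collapse to a tiny dictionary plus an array of small ints, which is where the "HUGE savings" on the next slide come from.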

COLUMNAR STORAGE (CASSANDRA)

Review: each physical row in Cassandra (i.e. a partition key) stores its columns together on disk.

Schema CF

Rowkey  Type
Name    StringDict
Age     Int

Data CF

Rowkey   0   1
Name     0   1
Age     46  66

ADVANTAGES OF COLUMNAR STORAGE

Compression
  Dictionary compression - HUGE savings for low-cardinality string columns
  RLE (run-length encoding)

Reduced I/O
  Only the columns needed for the query are loaded from disk

Can keep strong types in memory, avoiding boxing
Batch multiple rows in one cell for efficiency
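The RLE bullet above is equally compact to sketch: a clustered or sorted column collapses into (value, run length) pairs. A minimal, assumption-free version in plain Scala:

```scala
// Run-length encoding: collapse consecutive repeats into (value, count).
def rle[A](xs: Seq[A]): Seq[(A, Int)] =
  if (xs.isEmpty) Seq.empty
  else {
    val (run, rest) = xs.span(_ == xs.head)
    (xs.head, run.length) +: rle(rest)
  }

val encoded = rle(Seq("WA", "WA", "WA", "OR", "OR"))
// encoded == Seq(("WA", 3), ("OR", 2))
```

The win depends entirely on data layout: a column sorted by its own value compresses superbly, an unsorted one barely at all, which is one reason columnar stores care so much about sort order.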

ADVANTAGES OF COLUMNAR QUERYING

Cache locality for aggregating a column of data
Take advantage of CPU/GPU vector instructions for ints/doubles
Avoid row-ifying until the last possible moment
Easy to derive computed columns
Use vector data / linear math libraries
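The cache-locality and boxing points can be seen in miniature: aggregating a primitive column array is a tight unboxed loop, while the row-based Seq[Any] version boxes and casts every value. A sketch with made-up data:

```scala
// Columnar: the Age column is one contiguous Array[Int];
// summing it is a tight loop over unboxed primitives.
val ages: Array[Int] = Array(46, 66, 52, 39)
var total = 0
var i = 0
while (i < ages.length) { total += ages(i); i += 1 }

// Row-based: each row is a Seq[Any], so every age is a boxed
// java.lang.Integer that must be cast before it can be added.
val rows = Seq(Seq("Barak", 46), Seq("Hillary", 66),
               Seq("Ann", 52), Seq("Bob", 39))
val rowTotal = rows.map(_(1).asInstanceOf[Int]).sum

// Both compute 203, but the columnar loop streams through one small
// contiguous array and allocates nothing.
```

This gap, repeated over millions of rows, is where the heap and group-by speedups quoted on the next slide come from.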

COLUMNAR QUERY ENGINE VS ROW-BASED, IN SCALA

Custom RDD of column-oriented blocks of data
Uses ~10x less heap
10-100x faster for group-bys on a single node
Scan speed in excess of 150M rows/sec/core for integer aggregations

SO, GREAT, OLAP WITH CASSANDRA ANDSPARK. NOW WHAT?

DATASTAX: CASSANDRA SPARK INTEGRATION

Datastax Enterprise now comes with HA Spark (HA master, that is)
cassandra-driver-spark

SPARK SQL

Appeared with Spark 1.0
In-memory columnar store
Can read from Parquet now; Cassandra integration coming
Querying is not column-based (yet)
No indexes
Write custom functions in Scala... take that, Hive UDFs!!
Integrates well with MLBase, Scala/Java/Python

WORK STILL NEEDED

Indexes
Columnar querying for fast aggregation
Efficient reading from columnar storage formats

GETTING TO A BILLION ROWS / SEC

Benchmarked at 20 million rows/sec - GROUP BY on two columns, aggregating two more columns - per core.
50 cores needed for parallel localized grouping throughput of 1 billion rows
Budget ~5-10 additional cores for distributed exchange and grouping of locally aggregated groups, depending on result size and network topology
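The core budget above is straightforward arithmetic; as a sanity check, using the slide's own 20M rows/sec/core figure:

```scala
// Measured per-core GROUP BY throughput and the target rate.
val perCoreRowsPerSec = 20e6
val targetRowsPerSec  = 1e9

// Cores needed just for the local scan/group phase:
val scanCores = math.ceil(targetRowsPerSec / perCoreRowsPerSec).toInt  // 50

// Plus roughly 5-10 more for the distributed exchange/merge phase
// (taking the upper end of the slide's budget here):
val totalCores = scanCores + 10  // ~60 cores for 1B rows/sec
```

So a billion rows per second is roughly a 60-core problem at this per-core rate, before accounting for result size and network topology.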

The above is a custom solution, NOT Spark SQL.

Look for integration with Spark SQL for a proper solution.

LESSONS

Extremely fast distributed querying for these use cases:

Data doesn't change much (and only in bulk)
Analytical queries over a subset of columns
Focused on numerical aggregations
Small numbers of group-bys, limited network interchange of data

Spark is a bit rough around the edges, but evolving fast
Concurrent queries are a frontier with Spark. Use additional Spark contexts.

THANK YOU!

SOME COLUMNAR ALTERNATIVES

MonetDB and Infobright - true columnar stores (storage + querying)
cstore_fdw for Postgres - columnar storage only
VoltDB - in-memory distributed columnar database (but you need to recompile for DDL changes)
Google BigQuery - columnar cloud database, Dremel-based
Amazon Redshift