Cassandra Core Concepts - Cassandra Day Toronto


Transcript of Cassandra Core Concepts - Cassandra Day Toronto

Page 1: Cassandra Core Concepts - Cassandra Day Toronto

©2013 DataStax Confidential. Do not distribute without consent.

Jon Haddad Technical Evangelist, DataStax @rustyrazorblade

Cassandra Core Concepts


Page 2: Cassandra Core Concepts - Cassandra Day Toronto

Small Data
• 100's of MB to low GB, single user
• sed, awk, grep are great
• sqlite
• Limitations:
  • bad for multiple concurrent users (file sharing!)

Page 3: Cassandra Core Concepts - Cassandra Day Toronto

Medium Data
• Fits on 1 machine
• RDBMS is fine
• postgres
• mysql
• Supports hundreds of concurrent users
• ACID makes us feel good
• Scales vertically

Page 4: Cassandra Core Concepts - Cassandra Day Toronto

Can RDBMS work for big data?

Pages 5-12: Cassandra Core Concepts - Cassandra Day Toronto

Replication: ACID is a lie

[Diagram, built up across these slides: a Client writes to the Master; the Master replicates to a Slave, with replication lag in between.]

Consistent results? Nope!

Page 13: Cassandra Core Concepts - Cassandra Day Toronto

Third Normal Form Doesn't Scale
• Queries are unpredictable
• Users are impatient
• Data must be denormalized
• If data > memory, you = history
• Disk seeks are the worst

(SELECT CONCAT(city_name, ', ', region) value, latitude, longitude, id, population,
    ( 3959 * acos( cos( radians($latitude) ) * cos( radians( latitude ) )
        * cos( radians( longitude ) - radians($longitude) )
        + sin( radians($latitude) ) * sin( radians( latitude ) ) ) ) AS distance,
    CASE region WHEN '$region' THEN 1 ELSE 0 END AS region_match
 FROM `cities` $where and foo_count > 5
 ORDER BY region_match desc, foo_count desc limit 0, 11)
UNION
(SELECT CONCAT(city_name, ', ', region) value, latitude, longitude, id, population,
    ( 3959 * acos( cos( radians($latitude) ) * cos( radians( latitude ) )
        * cos( radians( longitude ) - radians($longitude) )
        + sin( radians($latitude) ) * sin( radians( latitude ) ) ) ) AS distance,
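
For contrast, a minimal CQL sketch of the denormalized, query-first approach (table and column names here are hypothetical, not from the deck): the table is modeled around one known query so a read hits a single partition instead of running a scatter/gather geo query.

-- Hypothetical table modeled for the query "cities in a region",
-- so the partition key (region) is known up front and a read hits one partition.
CREATE TABLE cities_by_region (
    region     text,
    city_id    uuid,
    city_name  text,
    latitude   double,
    longitude  double,
    population int,
    PRIMARY KEY (region, city_id)
);

-- Single-partition read: no joins, no fan-out across shards.
SELECT city_name, latitude, longitude, population
FROM cities_by_region
WHERE region = 'Ontario';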

Pages 14-17: Cassandra Core Concepts - Cassandra Day Toronto

Sharding is a Nightmare
• Data is all over the place
• No more joins
• No more aggregations
• Denormalize all the things
• Querying secondary indexes requires hitting every shard
• Adding shards requires manually moving data
• Schema changes

Page 18: Cassandra Core Concepts - Cassandra Day Toronto

High Availability.. not really
• Master failover… who's responsible?
• Another moving part…
• Bolted on hack
• Multi-DC is a mess
• Downtime is frequent
• Change database settings (innodb buffer pool, etc)
• Drive, power supply failures
• OS updates

Pages 19-20: Cassandra Core Concepts - Cassandra Day Toronto

Summary of Failure
• Scaling is a pain
• ACID is naive at best
• You aren't consistent
• Re-sharding is a manual process
• We're going to denormalize for performance
• High availability is complicated, requires additional operational overhead

Pages 21-32: Cassandra Core Concepts - Cassandra Day Toronto

Lessons Learned
• Consistency is not practical
  • So we give it up
• Manual sharding & rebalancing is hard
  • So let's build it in
• Every moving part makes systems more complex
  • So let's simplify our architecture - no more master / slave
• Scaling up is expensive
  • We want commodity hardware
• Scatter / gather is no good
  • We denormalize for real-time query performance
  • Goal is to always hit 1 machine

Page 33: Cassandra Core Concepts - Cassandra Day Toronto

What is Apache Cassandra?
• Fast Distributed Database
• High Availability
• Linear Scalability
• Predictable Performance
• No SPOF
• Multi-DC
• Commodity Hardware
• Easy to manage operationally
• Not a drop-in replacement for RDBMS

Page 34: Cassandra Core Concepts - Cassandra Day Toronto

Hash Ring
• No master / slave / replica sets
• No config servers, zookeeper
• Data is partitioned around the ring
• Data is replicated to RF=N servers
• All nodes hold data and can answer queries (both reads & writes)
• Location of data on the ring is determined by the partition key
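
A minimal CQL sketch of how the partition key drives placement (table and column names are hypothetical): the key is hashed into a token, and the token's position on the ring determines which replicas own the row.

-- user_id is the partition key; its hash (token) determines
-- which nodes on the ring own this row.
CREATE TABLE users (
    user_id uuid,
    name    text,
    email   text,
    PRIMARY KEY (user_id)
);

-- The token a row maps to can be inspected directly:
SELECT token(user_id), name FROM users;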

Page 35: Cassandra Core Concepts - Cassandra Day Toronto

CAP Tradeoffs
• Impossible to be both consistent and highly available during a network partition
• Latency between data centers also makes consistency impractical
• Cassandra chooses Availability & Partition Tolerance over Consistency

Page 36: Cassandra Core Concepts - Cassandra Day Toronto

Replication
• Data is replicated automatically
• You pick the number of servers
• Called "replication factor" or RF
• Data is ALWAYS replicated to each replica
• If a machine is down, missing data is replayed via hinted handoff
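
The replication factor is declared when the keyspace is created; a minimal sketch with a hypothetical keyspace name:

-- RF=3: every partition is stored on 3 replicas.
CREATE KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};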

Page 37: Cassandra Core Concepts - Cassandra Day Toronto

Consistency Levels
• Per-query consistency
• ALL, QUORUM, ONE
• How many replicas must respond for the query to succeed
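
In cqlsh, for example, the consistency level can be set for the queries that follow; this sketch reuses the hypothetical users table from earlier.

-- With RF=3, QUORUM means 2 of the 3 replicas must acknowledge.
CONSISTENCY QUORUM;

SELECT * FROM users
WHERE user_id = 00000000-0000-0000-0000-000000000001;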

Page 38: Cassandra Core Concepts - Cassandra Day Toronto

Multi DC
• Typical usage: clients write to the local DC, which replicates async to other DCs
• Replication factor per keyspace per datacenter
• Datacenters can be physical or logical
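
Per-datacenter replication factors are also declared on the keyspace; a sketch with hypothetical DC names (they must match what the cluster's snitch reports):

-- 3 replicas in each datacenter.
CREATE KEYSPACE demo_multidc
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};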

Page 39: Cassandra Core Concepts - Cassandra Day Toronto

Reads & Writes

Page 40: Cassandra Core Concepts - Cassandra Day Toronto

The Write Path

• Writes are accepted by any node in the cluster (the coordinator)
• Writes go to the commit log, then to the memtable
• Every write includes a timestamp
• The memtable is flushed to disk periodically (as an SSTable)
• A new memtable is created in memory
• Deletes are a special write case, called a "tombstone"
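
The per-write timestamp is normally assigned by the coordinator, but it is visible at the CQL level; a sketch against the hypothetical users table from earlier:

-- A client may supply the timestamp explicitly (microseconds since epoch).
INSERT INTO users (user_id, name, email)
VALUES (00000000-0000-0000-0000-000000000001, 'Jon', 'jon@example.com')
USING TIMESTAMP 1396300800000000;

-- The timestamp stored with a column can be read back:
SELECT name, WRITETIME(name) FROM users
WHERE user_id = 00000000-0000-0000-0000-000000000001;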

Page 41: Cassandra Core Concepts - Cassandra Day Toronto

Compaction
• SSTables are immutable
• Updates are written to new SSTables
• Eventually we have too many files on disk
• Partitions are spread across multiple SSTables
• The same column can be in multiple SSTables
• Merged through compaction; only the latest timestamp is kept

[Diagram: multiple SSTables being merged by compaction]
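
The compaction strategy is a per-table option; a sketch against the hypothetical table from earlier (size-tiered is the default strategy, and min_threshold controls how many similarly sized SSTables trigger a merge):

-- Start compacting once 4 similarly sized SSTables exist.
ALTER TABLE users
  WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': 4};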

Page 42: Cassandra Core Concepts - Cassandra Day Toronto

The Read Path
• Any server may be queried; it acts as the coordinator
• The coordinator contacts the nodes with the requested key
• On each node, data is pulled from SSTables and merged
• Consistency < ALL performs read repair in the background (read_repair_chance)
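
In Cassandra versions of this era, the background read repair probability mentioned above is a per-table option; a sketch against the hypothetical users table:

-- 10% of reads also repair the other replicas in the background.
ALTER TABLE users WITH read_repair_chance = 0.1;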

Page 43: Cassandra Core Concepts - Cassandra Day Toronto

What are your options?

Page 44: Cassandra Core Concepts - Cassandra Day Toronto

Open Source
• Latest, bleeding edge features
• File JIRAs
• Support via mailing list & IRC
• Fix bugs
• cassandra.apache.org
• Perfect for hacking

Page 45: Cassandra Core Concepts - Cassandra Day Toronto

DataStax Enterprise
• Integrated Multi-DC Search
• Integrated Spark for Analytics
• Free Startup Program
  • <$3MM rev & <$30M funding
• Extended support
• Additional QA
• Focused on stable releases for enterprise
• Included on USB

Page 46: Cassandra Core Concepts - Cassandra Day Toronto
Page 47: Cassandra Core Concepts - Cassandra Day Toronto

©2013 DataStax Confidential. Do not distribute without consent.