Virtual Nodes: Rethinking Topology in Cassandra


Transcript of Virtual Nodes: Rethinking Topology in Cassandra

Page 1: Virtual Nodes: Rethinking Topology in Cassandra

Eric Evans
[email protected]

@jericevans

ApacheCon Europe, November 7, 2012


Page 2: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101


Page 3: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: partitioning

[Diagram: the keyspace, drawn as a ring]

The keyspace, a namespace encompassing all possible keys

Page 4: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: partitioning

[Ring diagram: nodes A, B, C, …, Y, Z placed evenly around the ring]

The namespace is divided into N partitions (where N is the number of nodes). Partitions are mapped to nodes and placed evenly throughout the namespace.

Page 5: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: partitioning

[Ring diagram: key "Aaa" shown at its position on the ring among nodes A, B, C, …, Y, Z]

A record, stored by key, is positioned on the next node (working clockwise) from where it sorts in the namespace
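To make the clockwise rule concrete, here is a minimal sketch of ring placement in Python, assuming one token per node and a toy 0-99 token space; the node names and tokens are made up for illustration, and this is not Cassandra's implementation:

import bisect
import hashlib

# Hypothetical ring: six nodes with hand-picked tokens in a toy 0-99 token space.
RING = [(10, "A"), (30, "B"), (50, "C"), (70, "M"), (85, "Y"), (95, "Z")]
TOKENS = [t for t, _ in RING]

def key_token(key, modulus=100):
    # Hash the key onto the same toy token space the ring uses.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % modulus

def primary_node(key):
    # First node clockwise whose token is >= the key's token, wrapping past Z back to A.
    i = bisect.bisect_left(TOKENS, key_token(key)) % len(RING)
    return RING[i][1]

print(primary_node("Aaa"))  # whichever node owns the slice of the ring "Aaa" hashes into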

Page 6: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: replica placement

[Ring diagram: key "Aaa" and the nodes holding its replicas]

Additional copies (replicas) are stored on other nodes: commonly the next RF-1 nodes clockwise (where RF is the replication factor), but any deterministic placement will work.
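A small sketch of that "next RF-1 nodes clockwise" rule, with the ring given simply as an ordered list of node names (illustrative only, not Cassandra's replication strategy code):

def replicas(ring_nodes, primary_index, rf=3):
    # The primary plus the next rf-1 nodes clockwise; any deterministic rule would do.
    return [ring_nodes[(primary_index + k) % len(ring_nodes)] for k in range(rf)]

print(replicas(["A", "B", "C", "M", "Y", "Z"], primary_index=0))  # ['A', 'B', 'C']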

Page 7: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: consistency

Consistency

Availability

Partition tolerance


With multiple copies comes a set of trade-offs, commonly articulated using the CAP theorem: at any given point we can only guarantee two of Consistency, Availability, and Partition tolerance.

Page 8: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: scenario, consistency level = ONE

[Ring diagram: write W acknowledged by replica A; the other two replicas are unreachable (?)]

Writing at consistency level ONE provides very high availability; only one of the 3 replica nodes needs to be up for the write to succeed.

Page 9: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: scenario, consistency level = ALL

[Ring diagram: read R must reach all three replicas, but two are unreachable (?)]

If strong consistency is required, reads at consistency ALL can be used when writes are performed at ONE. The trade-off is availability: all 3 replica nodes must be up, or the read fails.

Page 10: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: scenario, quorum write

[Ring diagram: write W acknowledged by replicas A and B; one replica unreachable (?); R + W > N]

Using QUORUM consistency, we only require floor((N/2)+1) nodes.

Page 11: Virtual Nodes: Rethinking Topology in Cassandra

DHT 101: scenario, quorum read

[Ring diagram: read R answered by replicas B and C; one replica unreachable (?); R + W > N]

Using QUORUM consistency, we only require floor((N/2)+1) nodes.
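To make the R + W > N overlap concrete, a quick check in Python, assuming a replication factor of 3 and the floor((N/2)+1) quorum quoted above (a toy calculation, not Cassandra code):

def quorum(n):
    # floor(N/2) + 1 replicas, per the formula above.
    return n // 2 + 1

N = 3                  # replication factor
R = W = quorum(N)      # 2 replicas for both reads and writes
print(R + W > N)       # True: every quorum read overlaps every quorum write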

Page 12: Virtual Nodes: Rethinking Topology in Cassandra

Awesome, yes?


Page 13: Virtual Nodes: Rethinking Topology in Cassandra

Well...


Page 14: Virtual Nodes: Rethinking Topology in Cassandra

Problem: Poor load distribution


Page 15: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Load

[Ring diagram: nodes A, B, C, M, Y, Z around the ring]

B and C hold replicas of A

Page 16: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Load

[Ring diagram: nodes A, B, C, M, Y, Z around the ring]

A and B hold replicas of Z

Page 17: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Load

[Ring diagram: nodes A, B, C, M, Y, Z around the ring]

Z and A hold replicas of Y

Page 18: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Load

[Ring diagram: nodes A, B, C, M, Y, Z around the ring]

Disaster strikes!

Page 19: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Load

[Ring diagram: node A has failed; B, C, M, Y, Z remain]

Replica sets [Y,Z,A], [Z,A,B], and [A,B,C] all suffer the loss of A; this results in extra load on the neighboring nodes.

Page 20: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Load

[Ring diagram: replacement node A1 joins at A's old position]

Solution: Replace/repair down node

Page 21: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Load

[Ring diagram: replacement node A1 joins at A's old position]

Solution: Replace/repair down node

Page 22: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Load

[Ring diagram: neighboring nodes stream data to the replacement node A1]

Neighboring nodes are needed to stream the missing data to A1; this results in even more load on those neighbors.

Page 23: Virtual Nodes: Rethinking Topology in Cassandra

Problem: Poor data distribution


Page 24: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Data

[Ring diagram: four nodes A, B, C, D dividing the ring evenly]

Ideal distribution of keyspace

Page 25: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Data

[Ring diagram: node E bootstraps, bisecting one of the four ranges]

Bootstrapping a node bisects one partition; the distribution is no longer ideal.

Page 26: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Data

[Ring diagram: existing nodes moved around the ring to restore balance after E joins]

Moving existing nodes means moving the corresponding data; not ideal.

Page 27: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Data

[Ring diagram: existing nodes moved around the ring to restore balance after E joins]

Moving existing nodes means moving the corresponding data; not ideal.

Page 28: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Data

[Ring diagram: the cluster doubled to eight nodes A through H, bisecting every range]

Frequently cited alternative: Double the size of your cluster, bisecting all ranges

Page 29: Virtual Nodes: Rethinking Topology in Cassandra

Distributing Data

[Ring diagram: the cluster doubled to eight nodes A through H, bisecting every range]

Frequently cited alternative: Double the size of your cluster, bisecting all ranges

Page 30: Virtual Nodes: Rethinking Topology in Cassandra

Virtual Nodes


Page 31: Virtual Nodes: Rethinking Topology in Cassandra

In a nutshell...

[Diagram: many virtual nodes on the ring, mapped in groups to each of three physical hosts]

Basically: “nodes” on the ring are virtual, and many of them are mapped to each “real” node (host)
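A toy sketch of that mapping, assuming random token assignment with a fixed number of tokens per host; the host names and token space are illustrative, not Cassandra's internals:

import random

def build_vnode_ring(hosts, num_tokens=256, token_space=2**64):
    # Each physical host is given num_tokens virtual nodes at random ring positions.
    ring = {}
    for host in hosts:
        for _ in range(num_tokens):
            ring[random.randrange(token_space)] = host
    return sorted(ring.items())  # (token, host) pairs in ring order

ring = build_vnode_ring(["host1", "host2", "host3"])
print(len(ring))  # ~768 virtual nodes interleaved across 3 hosts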

Page 32: Virtual Nodes: Rethinking Topology in Cassandra

Benefits

• Operationally simpler (no token management)

• Better distribution of load

• Concurrent streaming involving all hosts

• Smaller partitions mean greater reliability

• Supports heterogeneous hardware


Page 33: Virtual Nodes: Rethinking Topology in Cassandra

Strategies

• Automatic sharding

• Fixed partition assignment

• Random token assignment


Page 34: Virtual Nodes: Rethinking Topology in Cassandra

Strategy: Automatic Sharding

• Partitions are split when data exceeds a threshold

• Newly created partitions are relocated to a host with lower data load (see the sketch after this list)

• Similar to sharding performed by Bigtable, or Mongo auto-sharding
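A minimal sketch of the split-and-relocate idea, using a made-up per-partition row threshold and host names (illustrative only; Bigtable and Mongo do considerably more bookkeeping):

from collections import defaultdict

SPLIT_THRESHOLD = 100  # hypothetical per-partition row limit, for the sketch only

def maybe_split(partitions, hosts):
    # Split any partition whose data exceeds the threshold, then place the new
    # half on whichever host currently carries the least data.
    for pid, (rows, owner) in list(partitions.items()):
        if rows > SPLIT_THRESHOLD:
            partitions[pid] = (rows // 2, owner)
            load = defaultdict(int)
            for r, o in partitions.values():
                load[o] += r
            coldest = min(hosts, key=lambda h: load[h])
            partitions[pid + "-split"] = (rows - rows // 2, coldest)

parts = {"p0": (250, "h1"), "p1": (40, "h2")}
maybe_split(parts, ["h1", "h2", "h3"])
print(parts)  # p0 is split and the new half lands on the least-loaded host, h3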


Page 35: Virtual Nodes: Rethinking Topology in Cassandra

Strategy: Fixed Partition Assignment

• Namespace divided into Q evenly-sized partitions

• Q/N partitions assigned per host (where N is the number of hosts)

• Joining hosts “steal” partitions evenly from existing hosts (see the sketch after this list).

• Used by Dynamo and Voldemort (described in Dynamo paper as “strategy 3”)
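A minimal sketch of the bookkeeping, assuming Q = 16 partitions and a simple steal-from-the-most-loaded rule (illustrative only; the Dynamo paper's strategy 3 is more involved):

from collections import Counter

def assign_fixed(q, hosts):
    # Q evenly sized partitions dealt out round-robin: Q/N per host.
    return {p: hosts[p % len(hosts)] for p in range(q)}

def join(assignment, new_host):
    # The joining host repeatedly steals a partition from the most-loaded host
    # until it owns its fair share (Q divided by the new host count).
    share = len(assignment) // (len(set(assignment.values())) + 1)
    for _ in range(share):
        counts = Counter(o for o in assignment.values() if o != new_host)
        victim = counts.most_common(1)[0][0]
        p = next(p for p, owner in assignment.items() if owner == victim)
        assignment[p] = new_host

assignment = assign_fixed(16, ["h1", "h2", "h3", "h4"])
join(assignment, "h5")
print(Counter(assignment.values()))  # h5 now owns 3 of the 16 partitions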


Page 36: Virtual Nodes: Rethinking Topology in Cassandra

Strategy: Random Token Assignment

• Each host assigned T random tokens

• T random tokens generated for joining hosts; new tokens divide existing ranges (see the sketch after this list)

• Similar to libketama; identical to classic Cassandra when T=1
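A toy illustration of how the new tokens divide existing ranges, assuming Murmur3-style signed 64-bit tokens and T = 256; all names and numbers are illustrative:

import bisect
import random

MIN_TOKEN, MAX_TOKEN = -2**63, 2**63 - 1  # Murmur3Partitioner-style token range

def random_tokens(t=256):
    # T random tokens for a joining host; each one bisects whatever range it lands in.
    return sorted(random.randint(MIN_TOKEN, MAX_TOKEN) for _ in range(t))

# Two existing hosts' worth of tokens, then one joining host.
existing = sorted(random.randint(MIN_TOKEN, MAX_TOKEN) for _ in range(2 * 256))
joining = random_tokens()

# Each new token lands inside some existing range; counting the distinct ranges hit shows
# the joining host takes small slices from many owners rather than half of one range.
split = {bisect.bisect_left(existing, tok) for tok in joining}
print(len(joining), "new tokens split", len(split), "of", len(existing), "existing ranges")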


Page 37: Virtual Nodes: Rethinking Topology in Cassandra

Considerations

1. Number of partitions

2. Partition size

3. How 1 changes with more nodes and data

4. How 2 changes with more nodes and data


Page 38: Virtual Nodes: Rethinking Topology in Cassandra

Evaluating

Strategy        No. Partitions   Partition size
Random          O(N)             O(B/N)
Fixed           O(1)             O(B)
Auto-sharding   O(B)             O(1)

B ~ total data size, N ~ number of hosts


Page 39: Virtual Nodes: Rethinking Topology in Cassandra

Evaluating

• Automatic sharding

• partition size constant (great)

• number of partitions scales linearly with data size (bad)

• Fixed partition assignment

• Random token assignment


Page 40: Virtual Nodes: Rethinking Topology in Cassandra

Evaluating

• Automatic sharding

• Fixed partition assignment

• Number of partitions is constant (good)

• Partition size scales linearly with data size (bad)

• Higher operational complexity (bad)

• Random token assignment


Page 41: Virtual Nodes: Rethinking Topology in Cassandra

Evaluating

• Automatic sharding

• Fixed partition assignment

• Random token assignment

• Number of partitions scales linearly with number of hosts (ok)

• Partition size increases with more data; decreases with more hosts (good)


Page 42: Virtual Nodes: Rethinking Topology in Cassandra

Evaluating

• Automatic sharding

• Fixed partition assignment

• Random token assignment


Page 43: Virtual Nodes: Rethinking Topology in Cassandra

Cassandra


Page 44: Virtual Nodes: Rethinking Topology in Cassandra

Configuration: conf/cassandra.yaml

# Comma separated list of tokens,
# (new installs only).
initial_token: <token>,<token>,<token>

or

# Number of tokens to generate.
num_tokens: 256


Two parameters control how tokens are assigned: initial_token now optionally accepts a comma-separated list, or (preferably) you can assign a numeric value to num_tokens.

Page 45: Virtual Nodes: Rethinking Topology in Cassandra

Configuration: nodetool info

Token            : (invoke with -T/--tokens to see all 256 tokens)
ID               : 64090651-6034-41d5-bfc6-ddd24957f164
Gossip active    : true
Thrift active    : true
Load             : 92.69 KB
Generation No    : 1351030018
Uptime (seconds) : 45
Heap Memory (MB) : 95.16 / 1956.00
Data Center      : datacenter1
Rack             : rack1
Exceptions       : 0
Key Cache        : size 240 (bytes), capacity 101711872 (bytes ...
Row Cache        : size 0 (bytes), capacity 0 (bytes), 0 hits, ...


To keep the output readable, nodetool info no longer displays tokens (if there are more than one), unless the -T/--tokens argument is passed

Page 46: Virtual Nodes: Rethinking Topology in Cassandra

Configuration: nodetool ring

Datacenter: datacenter1
==========
Replicas: 2

Address    Rack   Status  State   Load      Owns    Token
                                                    9022770486425350384
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -9182469192098976078
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -9054823614314102214
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8970752544645156769
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8927190060345427739
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8880475677109843259
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8817876497520861779
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8810512134942064901
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8661764562509480261
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8641550925069186492
127.0.0.1  rack1  Up      Normal  97.24 KB  66.03%  -8636224350654790732
...


nodetool ring is still there, but the output is significantly more verbose, and it is less useful as the go-to command.

Page 47: Virtual Nodes: Rethinking Topology in Cassandra

Configuration: nodetool status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load     Tokens  Owns   Host ID                               Rack
UN  10.0.0.1  97.2 KB  256     66.0%  64090651-6034-41d5-bfc6-ddd24957f164  rack1
UN  10.0.0.2  92.7 KB  256     66.2%  b3c3b03c-9202-4e7b-811a-9de89656ec4c  rack1
UN  10.0.0.3  92.6 KB  256     67.7%  e4eef159-cb77-4627-84c4-14efbc868082  rack1


The new go-to command is nodetool status.

Page 48: Virtual Nodes: Rethinking Topology in Cassandra

Configuration: nodetool status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load     Tokens  Owns   Host ID                               Rack
UN  10.0.0.1  97.2 KB  256     66.0%  64090651-6034-41d5-bfc6-ddd24957f164  rack1
UN  10.0.0.2  92.7 KB  256     66.2%  b3c3b03c-9202-4e7b-811a-9de89656ec4c  rack1
UN  10.0.0.3  92.6 KB  256     67.7%  e4eef159-cb77-4627-84c4-14efbc868082  rack1


Of note: since it is no longer practical to name a host by its token (because it can have many), each host has a unique ID.

Page 49: Virtual Nodes: Rethinking Topology in Cassandra

Configuration: nodetool status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load     Tokens  Owns   Host ID                               Rack
UN  10.0.0.1  97.2 KB  256     66.0%  64090651-6034-41d5-bfc6-ddd24957f164  rack1
UN  10.0.0.2  92.7 KB  256     66.2%  b3c3b03c-9202-4e7b-811a-9de89656ec4c  rack1
UN  10.0.0.3  92.6 KB  256     67.7%  e4eef159-cb77-4627-84c4-14efbc868082  rack1


Note the per-node token count.

Page 50: Virtual Nodes: Rethinking Topology in Cassandra

Migration

[Ring diagram: a three-node cluster, A, B, C, before migration]

Page 51: Virtual Nodes: Rethinking Topology in Cassandra

Migration: edit conf/cassandra.yaml and restart

# Number of tokens to generate.
num_tokens: 256


Step 1: Set num_tokens in cassandra.yaml, and restart the node.

Page 52: Virtual Nodes: Rethinking Topology in Cassandra

Migration: convert to T contiguous tokens in existing ranges

[Ring diagram: each node's range split into T contiguous tokens, still owned by the same node]

This will cause the existing range to be split into T contiguous tokens. This results in no change to placement
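A sketch of that conversion for one node, assuming a single old token and plain integer token arithmetic (illustrative, not the actual migration code):

def split_range(prev_token, my_token, t=256):
    # Break (prev_token, my_token] into t contiguous slices; every new token is
    # still owned by the same host, so data placement does not change.
    width = (my_token - prev_token) // t
    return [prev_token + width * i for i in range(1, t)] + [my_token]

tokens_a = split_range(prev_token=0, my_token=3_000_000)  # hypothetical old tokens for node A
print(len(tokens_a), tokens_a[-1])  # 256 tokens, the last one being A's original token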

Page 53: Virtual Nodes: Rethinking Topology in Cassandra

Migration: shuffle

[Ring diagram: the same contiguous tokens, prior to being randomly exchanged by the shuffle]

Step 2: Initialize a shuffle operation. Nodes randomly exchange ranges.

Page 54: Virtual Nodes: Rethinking Topology in Cassandra

Shuffle

• Range transfers are queued on each host

• Hosts initiate transfer of ranges to self

• Pay attention to the logs!


Page 55: Virtual Nodes: Rethinking Topology in Cassandra

Shuffle: bin/shuffle

Usage: shuffle [options] <sub-command>

Sub-commands:
    create      Initialize a new shuffle operation
    ls          List pending relocations
    clear       Clear pending relocations
    en[able]    Enable shuffling
    dis[able]   Disable shuffling

Options:
    -dc,  --only-dc        Apply only to named DC (create only)
    -tp,  --thrift-port    Thrift port number (Default: 9160)
    -p,   --port           JMX port number (Default: 7199)
    -tf,  --thrift-framed  Enable framed transport for Thrift (Default: false)
    -en,  --and-enable     Immediately enable shuffling (create only)
    -H,   --help           Print help information
    -h,   --host           JMX hostname or IP address (Default: localhost)
    -th,  --thrift-host    Thrift hostname or IP address (Default: JMX host)


Page 56: Virtual Nodes: Rethinking Topology in Cassandra

Performance


Page 57: Virtual Nodes: Rethinking Topology in Cassandra

removenode

[Bar chart: removenode time, Acunu Reflex / Cassandra 1.2 vs. Cassandra 1.1; vertical axis 0-400]

17-node cluster of EC2 m1.large instances, 460M rows

Page 58: Virtual Nodes: Rethinking Topology in Cassandra

bootstrap

[Bar chart: bootstrap time, Acunu Reflex / Cassandra 1.2 vs. Cassandra 1.1; vertical axis 0-500]

17-node cluster of EC2 m1.large instances, 460M rows

Page 59: Virtual Nodes: Rethinking Topology in Cassandra

The End

• DeCandia, Giuseppe, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels. “Dynamo: Amazon’s Highly Available Key-value Store.” Web.

• Low, Richard. “Improving Cassandra’s uptime with virtual nodes.” Web.

• Overton, Sam. “Virtual Nodes Strategies.” Web.

• Overton, Sam. “Virtual Nodes: Performance Results.” Web.

• Jones, Richard. “libketama - a consistent hashing algo for memcache clients.” Web.
