
Cassandra.Link

The best knowledge base on Apache Cassandra®

Helping platform leaders, architects, engineers, and operators build scalable, real-time data platforms.

6/28/2018

Reading time: 14 mins

How to size up an Apache Cassandra cluster (Training)

by DataStax Academy

Speaker Notes

Discuss the main features and highlights of the Cassandra database. The features that are important to you will influence how you design, size, and configure your Cassandra cluster. For rack and datacenter awareness, mention that this includes deploying Cassandra in the cloud, such as Amazon EC2, Rackspace, or Google Compute Engine.

Since Cassandra is linearly scalable, your cluster can be scaled as large as needed. The focus of this presentation is on the minimum number of nodes you would want or need, along with the replication setting. The linear-scalability claim is based on data from a University of Toronto study: http://vldb.org/pvldb/vol5/p1724_tilmannrabl_vldb2012.pdf

Replication is needed to achieve high availability when designing for failure. You cannot have a replication factor larger than the number of nodes in the cluster.

For anyone unfamiliar with the terms rack and datacenter, backtrack and define them, starting with the smallest unit: the node. By "process" we mean a Java virtual machine; Cassandra is written in Java, and its code runs in a JVM. A node can also be a cloud or virtual server instance. Datacenters can be geographically separated, but also logically separated; additional use cases for datacenters include disaster recovery and workload separation.

With single-node databases, when you write data you can expect to read back the same data. It is not so easy with distributed systems: the node one client writes data to may not be the node another client reads it from. A distributed system must therefore implement a consistency model that determines when written data is saved to the relevant nodes. You may want to mention BASE, and clarify that eventual consistency usually resolves within milliseconds (thanks, Netflix). Tunable consistency is a key feature of Cassandra, and the type of consistency you want to use may affect your cluster design. Be sure to explain what happens if the data returned from each node does not match, and discuss cross-datacenter latency versus local consistency and consistency across datacenters.

It is important to understand how the storage engine works, since that directly impacts data size. There are no reads before a write. Writes go to the commit log for durability, and to a memtable. Periodically, memtable data is flushed to disk as an SSTable (sorted strings table); this discards the memtable so the memory can be reused, and the relevant commit log entries are marked as cleared.

With that background, we can talk about the primary factors for sizing the database, along with some considerations for the replication factor. If RF = 1, we are not making use of Cassandra's availability advantages: one node is a single point of failure. If using Cassandra only for durability, RF = 2 may be used just to ensure two copies of the data on separate nodes. For high-availability use cases, some clusters are configured with a replication factor as high as 5, though that is not very common.

Each Cassandra node has a certain data capacity, and only part of that capacity can be used for data. Your data set must be accounted for, plus overhead for writing the data in Cassandra and for the structures used to optimize reads (partition index, partition summary, Bloom filter). If RF > 1, account for the additional copies: at RF = 3, a 5 TB data set means Cassandra will store 15 TB. One consequence of log-structured storage is that data that is no longer needed remains on disk until a compaction cleans it up, so that space stays in use until compaction runs. Free disk space must also be reserved so compaction can merge data into a new file.

Backing up your data is very easy with Cassandra. Since the data files are immutable, a snapshot can be taken that creates hard links to (or copies of) the relevant SSTables; hard links in particular are nearly zero cost, taking negligible disk space and time to create. Be careful, though: snapshots, whether created manually or automatically by Cassandra, will also consume disk space unless someone does the housekeeping.

DataStax publishes a recommended disk capacity; size your cluster so that your data fits. Why can't we just add more disks? Each node is limited by how much data it can handle (contention from reads and writes, flushing, compaction, and the limit on JVM heap memory allocation). For cluster sizing, you want enough nodes that read and write performance meets any SLAs, or is otherwise acceptable to users. Failure conditions must also be taken into account: if a node fails, its workload must be absorbed by the other nodes in the cluster, and recovering the node can impact performance further. Do not size the cluster to fully utilize each node; leave room so the cluster still performs acceptably during failure. Rule of thumb: some Cassandra MVPs recommend no fewer than six nodes in a cluster, because with fewer, losing one node removes a sizable share of the cluster's throughput capability (at least 20%).

  1. How To Size Up A Cassandra Cluster Joe Chu, Technical Trainer jchu@datastax.com April 2014 ©2014 DataStax Confidential. Do not distribute without consent.
  2. What is Apache Cassandra? • Distributed NoSQL database • Linearly scalable • Highly available with no single point of failure • Fast writes and reads • Tunable data consistency • Rack and datacenter awareness
  3. Peer-to-peer architecture • All nodes are the same • No master/slave architecture • Less operational overhead for better scalability • Eliminates single points of failure, increasing availability
  4. Linear Scalability • Operation throughput increases linearly with the number of nodes added.
  5. Data Replication • Cassandra can write copies of data on different nodes (RF = 3). • The replication factor setting determines the number of copies. • The replication strategy can replicate data to different racks and different datacenters. INSERT INTO user_table (id, first_name, last_name) VALUES (1, 'John', 'Smith');
  6. Node • An instance of a running Cassandra process. • Usually represents a single machine or server.
  7. Rack • Logical grouping of nodes. • Allows data to be replicated across different racks.
  8. Datacenter • Grouping of nodes and racks. • Each datacenter can have separate replication settings. • May be in different geographical locations, but not always.
  9. Cluster • Grouping of datacenters, racks, and nodes that communicate with each other and replicate data. • Clusters are not aware of other clusters.
  10. Consistency Models • Immediate consistency: when a write is successful, subsequent reads are guaranteed to return that latest value. • Eventual consistency: when a write is successful, stale data may still be read, but reads will eventually return the latest value.
  11. Tunable Consistency • Cassandra offers the ability to choose between immediate and eventual consistency by setting a consistency level. • The consistency level is set per read or write operation. • Common consistency levels are ONE, QUORUM, and ALL. • For multiple datacenters, additional levels such as LOCAL_QUORUM and EACH_QUORUM control cross-datacenter traffic.
  12. CL ONE • Write: succeeds when at least one replica node has acknowledged the write. • Read: only one replica node is given the read request. (Diagram: client, coordinator, replicas R1-R3, RF = 3.)
  13. CL QUORUM • Write: succeeds when a majority of the replica nodes have acknowledged the write. • Read: a majority of the replica nodes are given the read request. • Majority = (RF / 2) + 1, using integer division.
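The majority formula on slide 13 uses integer division, which is easy to check in a few lines (a quick sketch, not taken from the deck):

```python
def quorum(rf: int) -> int:
    """Replicas a QUORUM read or write must reach: (RF / 2) + 1, integer division."""
    return rf // 2 + 1

# RF = 3: a quorum is 2 of 3 replicas, so one replica can be down.
print(quorum(3))  # 2
# RF = 2: a quorum is 2 of 2 replicas, so QUORUM behaves like ALL (slide 22).
print(quorum(2))  # 2
# RF = 5: a quorum is 3 of 5 replicas, tolerating two failed replicas.
print(quorum(5))  # 3
```

This is also why RF = 3 is the sweet spot: it is the smallest replication factor where a quorum leaves a replica to spare.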
  14. CL ALL • Write: succeeds when all of the replica nodes have acknowledged the write. • Read: all replica nodes are given the read request.
  15. Log-Structured Storage Engine • Cassandra's storage engine is inspired by Google BigTable. • Key to Cassandra's fast write performance. (Diagram: commit log, memtable, SSTables.)
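The write path on slide 15 can be sketched as a toy model (illustrative only; real Cassandra memtables, commit log segments, and SSTable formats are far more involved):

```python
class ToyStorageEngine:
    """Toy model of Cassandra's write path: append to a commit log for
    durability, write to an in-memory memtable, flush to immutable SSTables."""

    def __init__(self, flush_threshold: int = 2):
        self.commit_log = []            # append-only durability log
        self.memtable = {}              # in-memory writes, sorted on flush
        self.sstables = []              # immutable, sorted tables on "disk"
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        self.commit_log.append((key, value))   # durability first
        self.memtable[key] = value             # no read-before-write
        if len(self.memtable) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # The memtable becomes an immutable SSTable ("sorted strings table").
        self.sstables.append(dict(sorted(self.memtable.items())))
        self.memtable = {}              # memory reused
        self.commit_log = []            # corresponding entries cleared

engine = ToyStorageEngine()
engine.write("a", 1)
engine.write("b", 2)                    # reaches the threshold, triggers a flush
print(len(engine.sstables))             # 1
```

The key property, matching the slide, is that writes never touch existing on-disk files; they only append and flush.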
  16. Updates and Deletes • SSTable files are immutable and cannot be changed. • Updates are written as new data. • Deletes write a tombstone, which marks a row or column(s) as deleted. • Updates and deletes are just as fast as inserts. (Example: id:1 first:John last:Smith at timestamp …405; id:1 first:John last:Williams at timestamp …621; id:1 deleted at timestamp …999.)
  17. Compaction • Periodically an operation is triggered that merges the data in several SSTables into a single SSTable. • Helps to limit the number of SSTables to read. • Removes old data and tombstones. • SSTables are deleted after compaction. (Example: the three versions of row id:1 compact down to just the tombstone at timestamp …999.)
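Compaction's merge rule, newest timestamp wins and a tombstone suppresses older versions, can be sketched as follows. The row representation here is hypothetical, not Cassandra's actual on-disk format:

```python
def compact(sstables):
    """Merge SSTables: for each key keep only the newest version by timestamp.
    A tombstone (value None) with the newest timestamp marks the row deleted."""
    merged = {}
    for table in sstables:
        for key, (value, ts) in table.items():
            if key not in merged or ts > merged[key][1]:
                merged[key] = (value, ts)
    return merged

# The example from slides 16-17: two versions of row 1, then a tombstone.
sstables = [
    {1: (("John", "Smith"), 405)},
    {1: (("John", "Williams"), 621)},
    {1: (None, 999)},                  # tombstone: row 1 deleted
]
print(compact(sstables))               # {1: (None, 999)}
```

Only the tombstone survives the merge, which is exactly why old data keeps consuming disk space until compaction runs.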
  18. Cluster Sizing
  19. Cluster Sizing Considerations • Replication factor • Data size: "How many nodes would I need to store my data set?" • Data velocity (performance): "How many nodes would I need to achieve my desired throughput?"
  20. Choosing a Replication Factor
  21. What Are You Using Replication For? • Durability or availability? • Each node has local durability (commit log), but replication can be used for distributed durability. • For availability, a recommended setting is RF = 3. • RF = 3 is the minimum necessary to achieve both consistency and availability using QUORUM and LOCAL_QUORUM.
  22. How Replication Can Affect Consistency Level • When RF < 3, you do not have as much flexibility when choosing between consistency and availability. • With RF = 2, QUORUM = ALL.
  23. Using a Larger Replication Factor • When RF > 3, there is more data usage and higher latency for operations requiring immediate consistency. • If using eventual consistency, a consistency level of ONE will have consistent performance regardless of the replication factor. • High-availability clusters may use a replication factor as high as 5.
  24. Data Size
  25. Disk Usage Factors • Data size • Replication setting • Old data • Compaction • Snapshots
  26. Data Sizing • Row and column data • Row and column overhead • Indices and other structures
  27. Replication Overhead • A replication factor > 1 will effectively multiply your data size by that amount.
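The multiplication on slide 27 is worth making explicit; the speaker notes use the same example of a 5 TB data set at RF = 3 occupying 15 TB across the cluster:

```python
def replicated_size_tb(raw_tb: float, rf: int) -> float:
    """Total on-disk data across the cluster, before compaction,
    snapshot, and per-row overhead are accounted for."""
    return raw_tb * rf

print(replicated_size_tb(5, 3))  # 15
```

Remember this is a lower bound: old versions awaiting compaction, snapshots, and index structures all add to it.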
  28. Old Data • Updates and deletes do not actually overwrite or delete data. • Older versions of data and tombstones remain in the SSTable files until they are compacted. • This becomes more important for update- and delete-heavy workloads.
  29. Compaction • Compaction needs free disk space to write the new SSTable before the SSTables being compacted are removed. • Leave enough free disk space on each node to allow compactions to run. • The worst case for the Size-Tiered Compaction Strategy is 50% of the node's total data capacity. • For the Leveled Compaction Strategy, it is about 10% of the total data capacity.
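Those headroom figures translate directly into usable capacity per node. A rough sketch, where the 50% and 10% worst-case fractions come from slide 29 and everything else is an assumption:

```python
def usable_capacity_tb(disk_tb: float, strategy: str) -> float:
    """Disk you can actually fill with data, leaving worst-case
    compaction headroom free (stcs = size-tiered, lcs = leveled)."""
    headroom = {"stcs": 0.50, "lcs": 0.10}[strategy]
    return disk_tb * (1 - headroom)

print(usable_capacity_tb(1.0, "stcs"))  # 0.5 -- half of a 1 TB disk kept free
print(usable_capacity_tb(1.0, "lcs"))   # 0.9
```

This is one reason the compaction strategy you choose feeds back into how many nodes you need.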
  30. Snapshots • Snapshots are hard links or copies of SSTable data files. • After SSTables are compacted, the disk space may not be reclaimed if a snapshot of those SSTables was created. • Snapshots are created when: executing the nodetool snapshot command; dropping a keyspace or table; taking incremental backups; during compaction.
  31. Recommended Disk Capacity • For current Cassandra versions, the ideal disk capacity is approximately 1 TB per node using spinning disks and 3-5 TB per node using SSDs. • A larger disk capacity may be limited by the resulting performance. • What works for you still depends on your data model design and desired data velocity.
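Putting data size, replication, and per-node capacity together gives a first-cut node count. A sketch with assumed inputs; your real numbers will differ, and performance needs may push the count higher:

```python
import math

def min_nodes_for_data(raw_tb: float, rf: int,
                       per_node_tb: float, headroom_fraction: float) -> int:
    """First-cut node count from storage alone."""
    total = raw_tb * rf                              # replication multiplies data
    usable = per_node_tb * (1 - headroom_fraction)   # leave compaction headroom
    return max(rf, math.ceil(total / usable))        # need at least RF nodes

# 5 TB raw data, RF = 3, 1 TB spinning disks, size-tiered worst case (50% free):
print(min_nodes_for_data(5, 3, 1.0, 0.50))  # 30
```

Note how quickly the count grows: the same 5 TB on 3 TB SSDs with leveled compaction's ~10% headroom needs far fewer nodes, which is why the slide ties disk capacity back to data model and velocity.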
  32. Data Velocity (Performance)
  33. How to Measure Performance • I/O throughput: "How many reads and writes can be completed per second?" • Read and write latency: "How fast should I be able to get a response for my read and write requests?"
  34. Sizing for Failure • The cluster must be sized taking into account the performance impact caused by failure. • When a node fails, the corresponding workload must be absorbed by the other replica nodes in the cluster. • Performance is further impacted when recovering a node: data must be streamed or repaired using the other replica nodes.
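The speaker notes' rule of thumb, no fewer than six nodes because losing one of a small cluster removes a large share of its throughput, is simple arithmetic (assuming load is spread evenly across nodes):

```python
def capacity_lost_pct(nodes: int) -> float:
    """Share of cluster throughput lost when one node fails, even load assumed."""
    return 100.0 / nodes

def survivor_extra_load_pct(nodes: int) -> float:
    """Extra load each surviving node absorbs, relative to its normal load,
    if the failed node's work spreads evenly over the survivors."""
    return 100.0 / (nodes - 1)

print(capacity_lost_pct(5))        # 20.0 -- losing 1 of 5 nodes drops 20% of capacity
print(survivor_extra_load_pct(6))  # 20.0 -- in a 6-node cluster, each survivor does 20% more work
```

Either way you look at it, sizing each node to near-full utilization leaves no slack for exactly the failure scenario slide 34 describes.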
  35. Hardware Considerations for Performance • CPU: operations are often CPU-intensive; more cores are better. • Memory: Cassandra uses JVM heap memory; additional memory is used as off-heap memory by Cassandra, or as the OS page cache. • Disk: Cassandra is optimized for spinning disks, but SSDs will perform better; attached storage (SAN) is strongly discouraged.
  36. Some Final Words…
  37. Summary • Cassandra allows flexibility when sizing your cluster, from a single node to thousands of nodes. • Your use case will dictate how you size and configure your Cassandra cluster. Do you need availability? Immediate consistency? • The minimum number of nodes needed is determined by your data size, desired performance, and replication factor.
  38. Additional Resources • DataStax documentation: http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAbout_c.html • Planet Cassandra: http://planetcassandra.org/nosql-cassandra-education/ • Cassandra users mailing list: user-subscribe@cassandra.apache.org, http://mail-archives.apache.org/mod_mbox/cassandra-user/
  39. Questions?
  40. Thank You • We power the big data apps that transform business.
