
Cassandra.Link

The best knowledge base on Apache Cassandra®

Helping platform leaders, architects, engineers, and operators build scalable, real-time data platforms.

1/25/2019

Reading time: 18 mins

Lessons Learned From Running 1800 Clusters (Brooke Jensen, Instaclustr) | Cassandra Summit 2016

by DataStax

Speaker notes (as posted with the slides):

• NetworkTopologyStrategy places replicas within a data center by walking the ring clockwise until it reaches the first node in another rack. It also leaves open the possibility of DC migrations later on.
• DataStax docs suggest not using logical racks, mostly because users tend to ignore or forget the rack requirements (racks should be defined in alternating order). Same number of racks as nodes? Use racks = RF instead. "Expanding is difficult" – not if you are using vnodes. Racks make repairs and cluster operations easier and minimise downtime: you can lose a whole rack of nodes without downtime.
• Rack and data center information for the local node is defined in cassandra-rackdc.properties. prefer_local=true tells Cassandra to use the local IP address when communication is not across different data centers.
• RF2 with QUORUM causes downtime when adding nodes. Check the driver configuration and consistency (QUORUM, DefaultRetryPolicy); consider changing to DowngradingConsistencyRetryPolicy.
• 2.1 handles compactions of large partitions better. QUORUM queries do read repairs, so one slower (compacting) node will take much longer to return its digest to the coordinator, making the whole operation slower.
• Watch SSTables per read and tombstones per read: high values indicate compactions are not keeping up or the compaction strategy is not appropriate.
• nodetool stop only stops the current compaction; it does not prevent more compactions from occurring, so the same (problematic) compaction will be kicked off again later. setcompactionthroughput on 2.1 applies only to new compactions; on 2.2.5+ it applies instantly. Stuck compactions are a problem particularly on 2.0. (The -H flag makes nodetool output human-readable.)
• Keep disk usage under 50% for STCS. If you fill up a disk it can corrupt SSTables and leave compactions failed halfway through, and C* will not restart. Snapshots can consume considerable space on disk.
• I like to look at the data files on disk – easier than cfstats. Look for large CFs. Can you remove data? Note: it might not just be your data; space is commonly consumed by snapshots or even system keyspaces. We have had nodes nearly fill up because of stored hinted handoffs.
• Be wary of changing gc_grace – make sure no other nodes are down, or that they come back up within 3 hours, or the tombstone will not be passed on via hinted handoff.
• Recovery from a full disk: add EBS, start C*, compact, and so on. "~25G remaining but only 4G free" – unlikely to complete. (That example used SizeTieredCompactionStrategy.) If you are watching the node closely you can let the available disk space get VERY low during compactions (MB free), but be prepared to stop Cassandra on the node if it gets too low.
• The three 800s had been added earlier – we didn't think they would need the storage, but they needed the compute. QUORUM: a write must be written to the commit log and memtable on a quorum of replicas. I guess they were having a busy day.
• When Cassandra attempts a read at a CL higher than ONE, it requests the data from the likely fastest node and a digest from the remaining nodes, then compares the digest values to make sure they agree. If they do not agree, Cassandra applies the "most recent data wins" rule and also repairs the inconsistency across nodes; to do this it issues a query at CL=ALL. CASSANDRA-7947 (https://issues.apache.org/jira/browse/CASSANDRA-7947) changed this behaviour so that a failure at the original CL is reported rather than a failure at CL=ALL. Here, two nodes were up but one was missing updates/data, which triggered the CL=ALL reads and failures.
• In our experience it is faster overall to take a slow-and-steady approach than to overload the cluster and have to recover it, e.g. do not execute many rebuilds in parallel.

Slide transcript:

  1. 1. Brooke Jensen VP Technical Operations & Customer Services Instaclustr Lessons learned from running over 1800 2000 clusters
  2. 2. Instaclustr • Launched at 2014 Summit. • Now 25+ staff in 4 countries. • Engineering (dev & ops) from Canberra, AU • Cassandra as a service (CaaS) • AWS, Azure, Softlayer, GCP in progress. • Automated provisioning – running within minutes. • 24/7 monitoring and response. • Repairs, backups, migrations, etc. • Expert Cassandra support. • Spark and Zeppelin add-ons. • Enterprise support • For customers who cannot use a managed service or require greater level of control of their cluster. • Gain 24/7 access to our Engineers for “third level” Cassandra support • Troubleshooting, advice, emergency response. • Consulting solutions • Data model design and/or review • Cluster design, sizing, performance testing and tuning • Training for developers and operational engineers • Find out more or start a free trial © DataStax, All Rights Reserved. 2
  3. 3. “Globally unique perspective of Cassandra.” • Our customer base: • Diverse. From early stage start-ups to large well- known global enterprises. • Education, Retail, Marketing, Advertising, Finance, Insurance, Health, Social, Research • All use cases: Messaging, IoT, eCommerce, Analytics, Recommendations, Security. • Small development clusters to large scale production deployments requiring 100% uptime. • Nodes under management: • 700+ active nodes under management, • All versions from Cassandra 2.0.11 - Cassandra 3.7 © DataStax, All Rights Reserved. 3
  4. 4. About Me Brooke Jensen VP Technical Operations, Customer Services / Cassandra MVP • Previous: Senior Software Engineer, Instaclustr • Education: Bachelor Software Engineering • Life before Instaclustr: • 11+ years Software Engineering. • Specialized in performance optimization of large enterprise systems (e.g. Australian Customs, Taxation Office, Department of Finance, Deutsche Bank) • Extensive experience managing and resolving major system incidents and outages. • Lives: Canberra, Au © DataStax, All Rights Reserved. 4
  5. 5. Talk Overview • Collection of common problems we see and manage on a daily basis. • Examples and war stories from the field. • HOWTOs, tips and tricks. • Covering: • Cluster Design • Managing compactions • Large partitions • Disk usage and management • Tombstones and Deletes • Common sense advice © DataStax, All Rights Reserved. 5
  6. 6. Cluster Design Basics – Racks & RF • For production we recommend (minimum): 3 nodes in 3 racks with RF3.  Make racks a multiple of RF. • Use logical racks and map to physical racks. • Each rack will contain a full copy of the data. • Can survive the loss of nodes without losing QUORUM (strong consistency) • Use NetworkTopologyStrategy. It’s not just for multi-DC, but is also “rack aware” ALTER KEYSPACE <keyspace> WITH replication = {'class': 'NetworkTopologyStrategy','DC1': '3'} © DataStax, All Rights Reserved. 6 Getting this right upfront will make management of the cluster much easier in the future. R2 R2 R2 R1 R1 R1 R3 R3 R3
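
A minimal sketch of the keyspace change described above, assuming an illustrative keyspace name "my_keyspace" and the data centre name "DC1" from the slide; after changing replication settings, a full repair is normally needed so existing data is redistributed.

$ cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'};"
# With racks = RF, each rack should end up holding a full copy of the data;
# nodetool status shows each node's rack and its effective ownership for the keyspace.
$ nodetool status my_keyspace
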
  7. 7. The case for single racks • Datastax docs suggest not to use racks? • “It’s hard to set up” • “Expanding is difficult” – not if using vnodes (default from 2.0.9) • Spending the time to set up is WORTH IT! • Minimizes downtime during upgrades and maintenance • Can perform upgrades/restarts rack-by-rack • Can (technically) lose a whole rack without downtime • We go one further and map racks to AWS AZ: © DataStax, All Rights Reserved. 7
  8. 8. Setting it up yaml: endpoint_snitch: GossipingPropertyFileSnitch cassandra-rackdc.properties: Executing 'cat /etc/cassandra/cassandra-rackdc.properties' on 52.37.XXX.XXX Host 52.37.XXX.XXX response: #Generated by Instaclustr #Mon Mar 28 19:22:21 UTC 2016 dc=US_WEST_2 prefer_local=true rack=us-west-2b © DataStax, All Rights Reserved. 8
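
To confirm the snitch settings took effect, a quick check like the following can help (a sketch; nodetool info output wording varies slightly across versions):

$ cat /etc/cassandra/cassandra-rackdc.properties
# The DC and rack the running node is actually advertising to the cluster:
$ nodetool info | grep -E 'Data Center|Rack'
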
  9. 9. Compactions – the basics © DataStax, All Rights Reserved. 10 • Regular compactions are an integral part of any healthy Cassandra cluster. • Occur periodically to purge tombstones, merge disparate row data into new SSTables to reclaim disk space and keep read operations optimized. • Can have a significant disk, memory (GC), cpu, IO overhead. • Are often the cause of “unexplained” latency or IO issues in the cluster • Ideally, get the compaction strategy right at table creation time. You can change it later, but that may force a re-write of all of the data in that CF using the new Compaction Strategy • STCS – Insert heavy and general workloads • LCS – Read heavy workloads, or more updates than inserts • DTCS – Not where there are updates to old data or inserts that are out of order.
  10. 10. Monitoring Compactions $ nodetool compactionstats -H pending tasks: 130 compaction type keyspace table completed total unit progress Compaction instametrics events_raw 1.35 GB 1.6 GB bytes 84.77% Compaction instametrics events_raw 1.28 GB 1.6 GB bytes 80.21% Active compaction remaining time : 0h00m33s • Not uncommon for large compactions to get “stuck” or fall behind. • On 2.0 in particular. Significantly improved in 2.1, even better in 3 • A single node doing compactions can cause latency issues across the whole cluster, as it will become slow to respond to queries. • Heap pressure will cause frequent flushing of Memtables to disk. => many small SSTables => many compactions © DataStax, All Rights Reserved. 11
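
A simple way to keep watching the compaction backlog, sketched below; the 30-second interval is arbitrary.

$ watch -n 30 'nodetool compactionstats -H'
# The CompactionExecutor pool in tpstats also shows active/pending compaction tasks:
$ nodetool tpstats | grep -i -E 'Pool|Compaction'
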
  11. 11. Compactions: other things to check © DataStax, All Rights Reserved. 12
  12. 12. Managing Compactions Few things you can do if compactions are causing issues (e.g. latency) Throttle: nodetool setcompactionthroughput 16 Stop and disable : nodetool stop COMPACTION Take the node out (and unthrottle): nodetool disablebinary && nodetool disablegossip && nodetool disablethrift && nodetool setcompactionthroughput 0 © DataStax, All Rights Reserved. 13 Set until C* is restarted. On 2.1 applies to NEW compactions, on 2.2.5+ applies instantly Other nodes will mark this node as down, So need to complete within HH window (3h) Case is important! Stops currently active compactions only. Compaction starts Node taken out
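
Once the backlog clears, the node has to be brought back in and the throttle restored; roughly the reverse of the commands above (16 MB/s is simply the value used in the throttling example):

$ nodetool enablegossip && nodetool enablethrift && nodetool enablebinary
$ nodetool setcompactionthroughput 16
# Remember the 3h hinted-handoff window: if the node was out longer, run a repair.
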
  13. 13. Large Partitions • One of the biggest problems we deal with. Root cause of many other issues, and a PITA to manage. • We recommend to keep them 100MB or less. Creates issues with: Compactions In 2.0, compactions of partitions > 64MB were considerably slower. Partitions >2GB often getting stuck. Improved in 2.1 and confirmed we observe less of these problems in upgraded clusters. Adding, replacing nodes – streaming will often fail. Querying large partitions is considerably slower. The whole partition is stored on every replica node, leading to hotspots. Can be hard to get rid of. © DataStax, All Rights Reserved. 14
  14. 14. Checking partition sizes ~ $ nodetool cfstats -H keyspace.columnfamily … Compacted partition minimum bytes: 125 bytes Compacted partition maximum bytes: 11.51 GB Compacted partition mean bytes: 844 bytes $ nodetool cfhistograms keyspace columnfamily Percentile SSTables Write Latency Read Latency Partition Size Cell Count (micros) (micros) (bytes) 50% 1.00 14.00 124.00 372 2 75% 1.00 14.00 1916.00 372 2 95% 3.00 24.00 17084.00 1597 12 98% 4.00 35.00 17084.00 3311 24 99% 5.00 50.00 20501.00 4768 42 Min 0.00 4.00 51.00 125 0 Max 5.00 446.00 20501.00 12359319162 129557750 © DataStax, All Rights Reserved. 15 Huge delta between 99th percentile and Max indicates most data (bytes) is in one partition.
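
Besides cfstats and cfhistograms, compaction itself warns about oversized partitions in the system log; the exact message wording and the warning threshold are version-dependent, and the log path below is only a common default.

$ grep -iE 'large (row|partition)' /var/log/cassandra/system.log | tail -n 20
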
  15. 15. Disk Usage • As a guide, maintain nodes under 70% (50% for STCS). • At 80% take action. • Why so much headroom? • Compactions will cause a temporary increase in disk usage while both sets of SSTables exist, but once complete will free up space that was occupied by old SSTables. • FYI, repair requests a snapshot before execution. • Recovering from a filled disk can be a pain, and you CAN LOSE DATA. • C* won’t start, for a start. • Nodes out of the cluster during recovery >3 hours will require repair. © DataStax, All Rights Reserved. 16 Sep 08 05:38:15 cassandra[17118]: at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_91] Sep 08 05:38:15 cassandra[17118]: Caused by: java.io.IOException: No configured data directory contains enough space to write 99 bytes Sep 08 05:38:16 systemd[1]: cassandra.service: Main process exited, code=exited, Sep 08 05:38:16 systemd[1]: cassandra.service: Unit entered failed state.
  16. 16. Try this first: stop writing data. © DataStax, All Rights Reserved. 17
  17. 17. Can’t stop? Won’t stop? Quick win: clearing snapshots. nodetool cfstats or nodetool listsnapshots will show if you have any snapshots to clear: © DataStax, All Rights Reserved. 18 nodetool clearsnapshot
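
In command form (listsnapshots is available from Cassandra 2.1 onwards):

$ nodetool listsnapshots
# Clears all snapshots on the node; on recent versions, -t <tag> removes just one snapshot.
$ nodetool clearsnapshot
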
  18. 18. Finding data to remove © DataStax, All Rights Reserved. 19 I like to look at the data folders on disk – easier to identify than with cfstats. Note also: might not just be your data. Space can commonly be consumed by snapshots or even system keyspaces. • We’ve had nodes nearly fill up because of stored hints.
  19. 19. Tip: Removing data © DataStax, All Rights Reserved. 20 DELETE - creates tombstones which will not be purged by compactions until after gc_grace_seconds • Default is 10 days, but you can ALTER it and it is effective immediately. • Make sure all nodes are UP before changing gc_grace. TRUNCATE or DROP – only creates a snapshot as a backup before removing all the data. • The disk space is released as soon as the snapshot is cleared • Preferred where possible.
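
A sketch of the two options above, with made-up keyspace and table names:

# Option 1: DELETE-style cleanup – lower gc_grace_seconds so tombstones can be purged sooner.
$ cqlsh -e "ALTER TABLE my_ks.events_raw WITH gc_grace_seconds = 3600;"
# Option 2: TRUNCATE, then clear the automatic snapshot to actually free the disk space.
$ cqlsh -e "TRUNCATE my_ks.events_raw;"
$ nodetool clearsnapshot
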
  20. 20. Disk Usage – Other Actions to try • Add Nodes + run cleanups • After all new nodes are running, run nodetool cleanup on each of the previously existing nodes to remove the keys that no longer belong to those nodes. • If on AWS, add EBS (requires restart). • Disable autocompactions (will negatively affect read latency so not recommended) © DataStax, All Rights Reserved. 21 Tip: JOINING (adding) nodes • When you add nodes to a cluster, they will typically overstream data initially, using more disk space than you expect. Duplicates will be compacted away eventually. • Disable compaction throttling while the node is JOINING. • If streaming/joining fails and you have to restart it, the node will restream ALL SSTables again from the beginning, potentially filling up the disks. ‘rm’ the Cassandra data folder before restarting.
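
The corresponding commands, roughly (run cleanup on one pre-existing node at a time, since it is IO-heavy):

$ nodetool cleanup
# On a node that is JOINING: remove the compaction throttle and watch streaming progress.
$ nodetool setcompactionthroughput 0
$ nodetool netstats | grep -v '100%'
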
  21. 21. Compaction spikes • Compactions, particularly large ones, will cause spikes in disk usage while both sets of SSTables exist. • Ideally, you want the compaction(s) to complete and free up space, but how can you assess whether that is possible? Unlikely. © DataStax, All Rights Reserved. 22
  22. 22. Compaction spikes 1. Find the tmp SSTable associated with the current compaction. From this, together with % complete in compactionstats, you can get a feel for how much more space you need: $ find /var/lib/cassandra/data/ -name "*tmp*Data.db" | xargs ls -lh -rw-r--r-- 1 root root 4.5G Sep 1 14:56 keyspace1/posts/keyspace1-posts-tmp-ka-118955-Data.db 2. Keep a very close eye on the disk, the compaction and the size of the tmp file: watch -n30 'df -h; ls -lh keyspace1-posts-tmp-ka-118955-Data.db; nodetool compactionstats -H' Filesystem Size Used Avail Use% Mounted on /dev/md127 787G 746G 506M 100% /var/lib/cassandra © DataStax, All Rights Reserved. 23
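
A rough space estimate can also be read straight from compactionstats: the extra space the running compaction still needs is roughly total minus completed. A sketch matching the 2.1-style output shown earlier (column positions differ in other versions):

$ nodetool compactionstats | awk '/Compaction/ {printf "%s.%s remaining ~%.1f GB\n", $2, $3, ($5-$4)/1024/1024/1024}'
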
  23. 23. Case study: Yesterday’s drama Scene: • 15 node production cluster, 12 * m4xl-1600 nodes + 3 * m4xl-800 nodes (ie 3 with half storage) • Keyspace is RF 2 and application requires QUORUM • (sum_of_replication_factors / 2) + 1 = 2 (ie both replicas) • Therefore can’t take nodes out (or let them die) as it will cause application outage. • Peak processing time is 8am-6pm. • Need to keep the node up until the end of the day. • Write heavy workload © DataStax, All Rights Reserved. 24
  24. 24. 09:33: ~ $ df -h Filesystem Size Used Avail Use% Mounted on /dev/md127 787G 777G 10G 99% /var/lib/cassandra 11:03: Filesystem Size Used Avail Use% Mounted on /dev/md127 787G 781G 5.9G 100% /var/lib/cassandra 12:37: ~ $ nodetool disableautocompaction ~ $ df -h Filesystem Size Used Avail Use% Mounted on /dev/md127 787G 769G 18G 98% /var/lib/cassandra © DataStax, All Rights Reserved. 25
  25. 25. © DataStax, All Rights Reserved. 26 13:40: Filesystem Size Used Avail Use% Mounted on /dev/md127 787G 785G 2G 100% /var/lib/cassandra Crap.
  26. 26. Solution was to move one CF to EBS in the background before the disk fills up. ~ $ du -hs /var/lib/cassandra/data/prod/* 89G /var/lib/cassandra/data/prod/cf-39153090119811e693793df4078eeb99 38G /var/lib/cassandra/data/prod/cf_one_min-e17256f091a011e5a5c327b05b4cd3f4 ~ $ rsync -aOHh /var/lib/cassandra/data/prod/cf_one_min-e17256f091a011e5a5c327b05b4cd3f4 /mnt/ebs/ Meanwhile: Filesystem Size Used Avail Use% Mounted on /dev/md127 787G 746G 906M 100% /var/lib/cassandra /dev/xvdp 79G 37G 39G 49% /mnt/ebs Now just mount bind it, and restart Cassandra: /dev/xvdp on /lib/cassandra/data/prod/cf_one_min-e17256f091a011e5a5c327b05b4cd3f4 © DataStax, All Rights Reserved. 27
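
The "mount bind" step spelled out, using the paths from the example above (assuming systemd manages the service, as in the earlier log excerpt); note that files hidden underneath a bind mount still occupy space, so the original copy has to be removed for the disk to actually be freed.

$ sudo mount --bind /mnt/ebs/cf_one_min-e17256f091a011e5a5c327b05b4cd3f4 \
      /var/lib/cassandra/data/prod/cf_one_min-e17256f091a011e5a5c327b05b4cd3f4
$ sudo systemctl start cassandra
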
  27. 27. Monitoring – how we detect problems • Client read and write latency • Local CF read and write latency • Number of reads or writes deviating from average • Outlier nodes • Down nodes • Disk usage • Pending compactions • Check for large partitions (data model issues) • In the logs: • Large batch warnings • Tombstone warnings • Excessive GC and/or long pauses © DataStax, All Rights Reserved. 28
  28. 28. Case study: Don’t break your cluster. WARNING! It is possible to get your cluster into a state from which you are unable to recover without significant downtime or data loss. © DataStax, All Rights Reserved. 29
  29. 29. “This happened during normal operations at night, so I don't think any of us were doing anything abnormal. We've been doing some processing that creates pretty heavy load over the last few weeks...” Orly? © DataStax, All Rights Reserved. 30
  30. 30. Unthrottled data load • Load average of 56, on 8-core machines. • Nodes were saturated and exhausted heap space. • Regular GC pauses of 12000ms - 17000ms • Memtables were frequently flushed to disk. • This resulted in over 120,000 small SSTables being created on some nodes. • Data was spread across thousands of SSTables, so read latency skyrocketed. • Was using paxos writes (LWT), which require a read before every write. This caused writes to fail because reads were timing out. • Compactions could not keep up, and added additional load to the already overloaded nodes. • C* eventually crashed on most nodes, leaving some corrupt SSTables. © DataStax, All Rights Reserved. 31
  31. 31. 17 second GC pauses. Nice. Aug 16 15:51:58 INFO o.a.cassandra.service.GCInspector ConcurrentMarkSweep GC in 12416ms. CMS Old Gen: 6442450872 -> 6442450912; Par Eden Space: 1718091776 -> 297543768; Par Survivor Space: 214695856 -> 0 Aug 16 15:52:20 INFO o.a.cassandra.service.GCInspector ConcurrentMarkSweep GC in 17732ms. CMS Old Gen: 6442450912 -> 6442450864; Par Eden Space: 1718091776 -> 416111040; Par Survivor Space: 214671752 -> 0 Heap pressure causes C* to flush memtables to disk. This created >120,000 SSTables on some nodes. It took 3+ days just to catch up on compactions, which were continually failing because of: Aug 18 22:11:43 java.io.FileNotFoundException: /var/lib/cassandra/data/keyspace/cf-f4683d90f88111e586b7e962b0d85be3/keyspace-cf-ka-1243722-Data.db (Too many open files) java.lang.RuntimeException: java.io.FileNotFoundException: /var/lib/cassandra/data/keyspace/cf-f4683d90f88111e586b7e962b0d85be3/keyspace-cf-ka-1106806-Data.db (No such file or directory) © DataStax, All Rights Reserved. 32
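
"Too many open files" usually means the process file-descriptor limit has been exhausted by the huge SSTable count; a quick way to check what the running daemon is allowed versus what it is using (a sketch, matching the process by its main class name):

$ CASS_PID=$(pgrep -f CassandraDaemon)
$ grep 'open files' /proc/$CASS_PID/limits
$ ls /proc/$CASS_PID/fd | wc -l
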
  32. 32. 1. Once we got C* stable and caught up on compactions, there were still corrupt SSTables present and nodes were in an inconsistent state. 2. Couldn’t fix with repairs: ERROR o.apache.cassandra.repair.Validator Failed creating a merkle tree for [repair #21be1ac0-6809-11e6-a098-b377cb035d78 on keyspace/cf, (-227556542627198517,-225096881583623998]], /52.XXX.XXX.XXX (see log for details) ERROR o.a.c.service.CassandraDaemon Exception in thread Thread[ValidationExecutor:708,1,main] java.lang.NullPointerException: null 3. Have deleted corrupt SSTables on some nodes. This is OK; presume there are other copies of the data in the cluster. We’ll have to repair later. 4. Run online scrubs on each node to identify corrupt SSTables, and fix (rewrite) where possible. 5. For nodes where online scrub does not complete, take the node offline and attempt an offline scrub of the identified corrupt SSTables. 6. If offline scrub fails to rewrite any SSTables on a node, delete those remaining corrupt SSTables. 7. Run a repair across the cluster to make data consistent across all nodes. @ 8th September, 3 weeks after the initial data load, the cluster is STILL in an inconsistent state with corrupt SSTables and queries occasionally failing. © DataStax, All Rights Reserved. 33 Long road to recovery
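
Steps 4-6 of the recovery plan above in command form, with illustrative keyspace/table names (sstablescrub must be run with Cassandra stopped on that node):

$ nodetool scrub my_ks my_cf      # online scrub: rewrites SSTables, skipping unreadable rows
$ sstablescrub my_ks my_cf        # offline scrub for SSTables the online scrub could not fix
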
  33. 33. Some final tips • When making major changes to the cluster (expanding, migrating, decommissioning), GO SLOW. • It takes longer to recover from errors than just doing it right the first time. • Things I’ve seen customers do: • Rebuild 16 nodes in a new DC concurrently • Decommission multiple nodes at once • Unthrottled data loads • Keep C* up to date, but not too up to date. • 2.0 has trouble with large compactions • Currently investigating segfaults with MV in 3.7 • Read the source code. • It is the most thorough and up-to-date documentation. © DataStax, All Rights Reserved. 34
