DataStax: 7 Deadly Sins for Cassandra Ops
- 1. 7 Deadly Sins for Cassandra Ops Rachel Pedreschi, DSE Evangelist, DataStax
- 2. 2© 2015. All Rights Reserved.
- 3. Lust 3© 2015. All Rights Reserved. #1 USE THE SIMPLE SNITCH
- 4. So many snitches… 4© 2015. All Rights Reserved. • SimpleSnitch • RackInferringSnitch • PropertyFileSnitch • GossipingPropertyFileSnitch • Ec2Snitch • Ec2MultiRegionSnitch • GoogleCloudSnitch • CloudstackSnitch
- 5. Switching snitches 5© 2015. All Rights Reserved. If the topology of the network has changed: • Shut down all the nodes, then restart them. • Run a sequential repair and nodetool cleanup on each node. DOWNTIME ALERT!!!!
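The switch itself is a couple of config edits per node. A minimal sketch, assuming a package-style install with config under /etc/cassandra and a move to GossipingPropertyFileSnitch; the datacenter and rack names are placeholders:

# Point cassandra.yaml at a topology-aware snitch
sed -i 's/^endpoint_snitch:.*/endpoint_snitch: GossipingPropertyFileSnitch/' /etc/cassandra/cassandra.yaml

# GossipingPropertyFileSnitch reads this node's location from cassandra-rackdc.properties
cat > /etc/cassandra/cassandra-rackdc.properties <<'EOF'
dc=DC1
rack=RAC1
EOF

# Restart the node, then repair and clean up as the slide describes
nodetool repair     # sequential repair; exact flags depend on your version
nodetool cleanup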
- 6. Greed 6© 2015. All Rights Reserved. #2 NOT UNDERSTANDING REPAIR
- 7. Repair options 7© 2015. All Rights Reserved. • repair (default, check your version!) • repair -pr (only repair the primary range) • repair -inc (only new data that has not previously been repaired) • sequential repair (creates snapshots) • parallel repair (uses replicas not being repaired)
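As command lines, those options map roughly onto the following nodetool invocations; the keyspace name is a placeholder, and flag spellings vary between Cassandra releases, so check nodetool help repair on yours:

nodetool repair my_ks            # default behaviour for your version
nodetool repair -pr my_ks        # primary range only; run it on every node
nodetool repair -inc my_ks       # incremental: only data not previously repaired
nodetool repair -par my_ks       # parallel: all replicas repaired at once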
- 8. Envy 8© 2015. All Rights Reserved. #3 CHOOSE THE WRONG COMPACTION STRATEGY FOR YOUR WORKLOAD
- 9. Size Tiered 9© 2015. All Rights Reserved. [Diagram: memtable flushes accumulate SSTables (SST1, SST2, ...) until a set number triggers a compaction into one larger SSTable, and the cycle repeats.] Compacts a set number of SSTables into a single, larger SSTable.
- 10. Leveled 10© 2015. All Rights Reserved. [Diagram: SSTables arranged in levels, each level roughly 10x the size of the one before it: Level 1 holds 10, Level 2 100, Level 3 1,000, Level 4 10,000, Level 5 100,000, and so on for Level 6, 7, etc.]
- 11. Date Tiered 11© 2015. All Rights Reserved. https://labs.spotify.com/2014/12/18/date-tiered-compaction/
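Whichever strategy wins, it is set per table through CQL. A hedged sketch with invented keyspace and table names; the option values shown are common 2.1-era defaults, not recommendations:

# Leveled: read-heavy tables that can afford the extra compaction I/O
cqlsh -e "ALTER TABLE app.users WITH compaction =
          {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};"

# Size tiered (the default): write-heavy or append-mostly tables
cqlsh -e "ALTER TABLE app.audit WITH compaction =
          {'class': 'SizeTieredCompactionStrategy', 'min_threshold': 4};"

# Date tiered: time-series data written in rough time order (see the Spotify post above)
cqlsh -e "ALTER TABLE app.metrics WITH compaction =
          {'class': 'DateTieredCompactionStrategy'};"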
- 12. Gluttony 12© 2015. All Rights Reserved. #4 CHOOSING THE WRONG HARDWARE
- 13. 13© 2015. All Rights Reserved. CPU RAM DISK
- 14. 14© 2015. All Rights Reserved. • 2-socket servers with ECC memory • 16GiB RAM minimum, prefer 32-64GiB; over 128GiB and Linux will need serious tuning • SSD where possible; the Samsung 840 Pro is a good choice, and any Intel SSD is fine • NO SAN/NAS (20ms latency tops); if you MUST (and please, don't), dedicate spindles to C* nodes and use a separate network • Avoid disk configurations targeted at Hadoop; those disks are too slow
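A few quick Linux checks against that list; the device name is an example:

grep MemTotal /proc/meminfo               # enough RAM? aim for 32-64GiB
nproc                                     # core count across the two sockets
cat /sys/block/sda/queue/rotational       # 0 = SSD, 1 = spinning disk
mount | grep -iE 'nfs|cifs'               # network filesystems in the data path are a red flag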
- 15. Sloth 15© 2015. All Rights Reserved. #5 NOT TUNING YOUR OS
- 16. /etc/rc.local 16© 2015. All Rights Reserved.
ra=$((2**14))  # 16k
ss=$(blockdev --getss /dev/sda)
blockdev --setra $(($ra / $ss)) /dev/sda
echo 128 > /sys/block/sda/queue/nr_requests
echo deadline > /sys/block/sda/queue/scheduler
echo 16384 > /sys/block/md7/md/stripe_cache_size
- 17. /etc/sysctl.conf 17© 2015. All Rights Reserved.
fs.file-max = 1048576
vm.max_map_count = 1048576
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 65536 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
vm.swappiness = 1
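To pick these up without a reboot, and to spot-check that they stuck:

sysctl -p /etc/sysctl.conf     # load the settings above
sysctl vm.swappiness           # should print: vm.swappiness = 1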
- 18. Wrath 18© 2015. All Rights Reserved. #6 OVER OR UNDERSIZING YOUR JVM
- 19. G1 or CMS? 19© 2015. All Rights Reserved. • CASSANDRA-8150 vs CASSANDRA-7486 (the JIRA tickets on Cassandra's default JVM/GC settings) • Larger heap? Look into using G1 • Read the docs • Test, test, and did I mention, test?
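For reference, a hedged sketch of what "look into G1" can look like in cassandra-env.sh (2.1-era; newer releases use jvm.options instead). The heap numbers are examples only, not the deck's recommendation; size them to your RAM and workload, then test:

MAX_HEAP_SIZE="16G"       # fixed heap, -Xms = -Xmx (example value)
HEAP_NEWSIZE="1600M"      # cassandra-env.sh expects this alongside MAX_HEAP_SIZE; it only matters for CMS
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"                 # switch from CMS to G1
JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500"     # G1 pause-time target
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -Xloggc:/var/log/cassandra/gc.log"   # keep GC logs for the testing step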
- 20. Pride 20© 2015. All Rights Reserved. #7 NOT STRESS TESTING
- 21. cassandra-stress .yaml (>= 2.1) 21© 2015. All Rights Reserved. 1. DDL – for defining your schema 2. Column Distributions – for defining the shape and size of each column globally and within each partition 3. Insert Distributions – for defining how the data is written during the stress test 4. DML – for defining how the data is queried during the stress test
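A minimal user profile sketch covering those four parts; the keyspace, table, and distributions are invented for illustration, so shape them to your real data model:

cat > stress-profile.yaml <<'EOF'
keyspace: stress_ks
keyspace_definition: |
  CREATE KEYSPACE stress_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
table: readings
table_definition: |
  CREATE TABLE readings (
    sensor_id uuid,
    ts timestamp,
    value double,
    PRIMARY KEY (sensor_id, ts)
  );
columnspec:
  - name: sensor_id
    population: uniform(1..10000)     # roughly 10k distinct partitions
  - name: ts
    cluster: fixed(1000)              # roughly 1000 rows per partition
insert:
  partitions: fixed(1)
  batchtype: UNLOGGED
queries:
  latest:
    cql: SELECT * FROM readings WHERE sensor_id = ? LIMIT 1
    fields: samerow
EOF

# three writes for every read, against one node of a test cluster (the address is a placeholder)
cassandra-stress user profile=stress-profile.yaml "ops(insert=3,latest=1)" -node 10.0.0.1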
- 22. Recommended Sessions 22© 2015. All Rights Reserved. • DataStax: Making Cassandra Fail (for effective testing), 3:30 Thursday, Ballroom H • Pythian: Manage your compactions before they manage you, 4:20, Ballroom H • The Last Pickle: Steady State Data Size with Compaction, Tombstones, and TTL, 4:20 Wednesday, Great America #2 • Crowdstrike, Inc.: Real World DTCS For Operators
- 23. Thank you @RachelPedreschi