
Upgrading a Large Cassandra Cluster with cstar | Official Pythian® Blog

by Valerie Parham-Thompson

1/27/2021 · Reading time: 3 min


I recently did an upgrade of 200+ Cassandra nodes across multiple environments, sitting behind multiple applications, using the cstar tool. We chose the cstar tool because, out of all the automation options, it has topology awareness specific to Cassandra. Here are some things I noticed.

The cstar tool is used to run commands on servers in a distributed way. The alternate cstarpar is used if you need to run the commands on the originating server instead. The Last Pickle detailed a fine example in their 3-part series on cstarpar [https://thelastpickle.com/blog/2018/12/11/cstar-reboots.html]. In our case, the individual nodes didn’t have the same access to a configuration management server that the jump host did. The cstarpar script was used to issue a command to the configuration management server, and then send ssh commands to the individual nodes (to change files, restart, etc.).
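For reference, a typical distributed run looks roughly like the sketch below. The flags are as I recall them from the cstar README, and the seed host and restart command are placeholders; verify against cstar run --help for the version you have installed.

    # Run a command across the cluster; cstar schedules nodes using its
    # knowledge of the ring topology. Hostname and command are placeholders.
    cstar run --command='sudo service cassandra restart' \
              --seed-host=cassandra-seed-01.example.com \
              --strategy=topology

    # cstarpar takes a similar set of options but executes the given command
    # on the originating (jump) host once per node, which is what let us call
    # the configuration management server and then ssh to each node.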

The jobs folder is on the originating server under ~/.cstar/jobs, with a UUID-labeled directory for each job, and server hostname directories underneath. The output is in a file named “out” under each hostname directory. Grepping through ~/.cstar/jobs/[UUID]/server*/out is a handy way to view desired info in the output.
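Concretely, that looks something like this (the job UUID is a placeholder, and the server* glob matches the hostname directories, as in the post):

    # One UUID-named directory per job on the originating host
    ls ~/.cstar/jobs/

    # Each node's output lives in <hostname>/out under the job directory;
    # grep across all of them for whatever you are checking
    grep -i 'error' ~/.cstar/jobs/<job-uuid>/server*/out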

The cstar output can be a little too quiet, and we know that sometimes means trouble. Tack on a -v flag so you have lots of output to grep through, as above.
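In other words, something like the following; the verbose flag is the one the paragraph above refers to, and nodetool version is just an example command.

    # Verbose run so each node's "out" file has enough content to grep
    cstar run --command='nodetool version' \
              --seed-host=cassandra-seed-01.example.com \
              -v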

Relatedly, you also have to ask for some output. One of the pre-checks was to verify that specifically named files didn’t exist. Long story short, the most efficient way to do this particular check was to grep through directories. In test, the command worked; in staging, the command worked. In production, cstar was marking each node as failed. Much troubleshooting later, we realized that the files existed in test and staging, but not in production, so the script wasn’t finding anything and therefore “failing.” Piping the output into ‘wc -l’ gave each check some kind of response, and the script succeeded.
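A sketch of the difference, with a hypothetical data directory and filename pattern: a bare grep exits non-zero when it matches nothing, which cstar reads as a failed command, while adding wc -l always exits zero and prints a count (ideally 0).

    # Original form of the pre-check: "fails" on clean nodes because grep
    # finds no matching files and exits non-zero (path/pattern hypothetical)
    ls /var/lib/cassandra/data/some_keyspace/ | grep 'leftover_file'

    # Adjusted form: wc -l turns "no matches" into the answer 0, so every
    # node produces output and the check succeeds
    ls /var/lib/cassandra/data/some_keyspace/ | grep 'leftover_file' | wc -l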

It’s documented that all of the nodes in a cluster have to register as up, or cstar will fail. The automated process we used was to shut down Cassandra, pull the new config and binary, and restart Cassandra, node by node. With a lot of Cassandra nodes, even with a brief sleep time between nodes, I was hitting the permissions server too often and too quickly for its comfort, and about 75% of the way through, it started blocking me after Cassandra was shut down on every 10th node or so. The only way I detected this was that cstar paused for long enough that I noticed; there was no error message. I had to wait for the permissions server to stop limiting me, and then manually issue the commands on the node. On the plus side, cstar didn’t fail while waiting for me and continued on with the list of nodes automatically after I took care of the individual node.
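The per-node step looked roughly like the sketch below. This is a hypothetical reconstruction rather than the actual script: the service manager, the drain step, and the pull-new-release helper are all assumptions.

    #!/bin/bash
    # Hypothetical per-node upgrade step of the kind cstar drove for us
    set -euo pipefail

    nodetool drain                       # flush and stop accepting writes
    sudo systemctl stop cassandra        # shut down Cassandra on this node

    # Fetch the new configuration and binary (hypothetical helper name)
    sudo /usr/local/bin/pull-cassandra-release.sh

    sudo systemctl start cassandra       # restart on the new version
    nodetool statusbinary                # confirm the node is serving again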

I saved the best for last. It’s tricky to make other automation tools aware of Cassandra topology. In this upgrade environment, we had multiple data centers with varying numbers of nodes in each, and cstar was smart about distributing the work so that roughly the same percentage of nodes was completed in each data center at any point in time. That meant that at the end, the largest data center wasn’t being hit repeatedly with the remaining upgrades.
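One rough way to watch that proportional progress during a run is to compare the hosts cstar has worked through against the data center layout; this assumes the per-host output directories appear under the job as nodes are handled, and nodetool status is only used here to map hosts to data centers.

    # Hostname directories under the job directory (placeholder UUID)
    ls ~/.cstar/jobs/<job-uuid>/

    # Data centers and per-node state as Cassandra reports them
    nodetool status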

Overall, the gotchas were minor, and I’m happy we used the cstar tool on this upgrade. It allowed flexibility to run custom scripts in a unique environment and certainly shortened the amount of time required to upgrade a large cluster.

Check out the cstar tool here: https://github.com/spotify/cstar.

Interested in working with Valerie? Schedule a tech call.
