
Cassandra.Link

The best knowledge base on Apache Cassandra®

Helping platform leaders, architects, engineers, and operators build scalable real time data platforms.

11/15/2023

Reading time: 10 min

How to Migrate Your Cassandra Database to Kubernetes with Zero Downtime

by DataStax



Author: Alexander Dejanovski

DataStax
Building Real-World, Real-Time AI
8 min read · Feb 23, 2022


K8ssandra is a cloud-native distribution of the Apache Cassandra® database that runs on Kubernetes, with a suite of tools to ease and automate operational tasks. In this post, we’ll walk you through a database migration from a Cassandra cluster running on Amazon Elastic Compute Cloud (EC2) to a K8ssandra cluster running in Kubernetes on Amazon Elastic Kubernetes Service (EKS), with zero downtime.

As a Cassandra user, you should expect a migration to K8ssandra to happen without downtime. For “classic” clusters running on virtual machines or bare-metal instances, zero downtime is achieved with the data center (DC) switch technique, commonly used in the Cassandra community to move clusters to different hardware or environments. The good news is that it’s not very different for clusters running in Kubernetes, because most Container Network Interface (CNI) plugins provide routable pod IPs.

Routable pod IPs in Kubernetes

A common misconception about Kubernetes networking is that services are the only way to expose pods outside the cluster and that pods themselves are only reachable directly from within the cluster.

Looking at the Calico documentation, we can read the following:

“If the pod IP addresses are routable outside of the cluster then pods can connect to the outside world without Source Network Address Translation (SNAT), and the outside world can connect directly to pods without going via a Kubernetes service or Kubernetes ingress.”

The same documentation tells us that the default CNIs used in AWS EKS, Azure AKS, and GCP GKE all provide routable pod IPs within a virtual private cloud (VPC).

A VPC is necessary because Cassandra nodes in both data centers will need to communicate without going through services. Each Cassandra node stores the list of all the other nodes in the cluster in the system.peers(_v2) table and communicates with them using the IP addresses stored there. If pod IPs aren’t routable, there’s no (easy) way to create a hybrid Cassandra cluster that would span outside a Kubernetes cluster’s boundaries.
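You can see exactly what a node has stored by querying this table from cqlsh; a quick check (system.peers_v2 is the Cassandra 4.0 table, with system.peers on earlier versions):

```shell
# List the peer addresses and topology a node knows about.
# system.peers_v2 exists in Cassandra 4.0; use system.peers on 3.x.
QUERY="SELECT peer, data_center, rack FROM system.peers_v2;"
echo "$QUERY"
# Run it on any node, e.g.:
# cqlsh <node-ip> -e "$QUERY"
```

If any of the addresses returned are unreachable from the other environment, gossip and streaming between the two data centers will fail.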

Database migration using Cassandra data center switch

The traditional technique to migrate a cluster to a different set of hardware or a different environment is to:

  • Provision nodes in the target infrastructure and add them to the cluster as a new data center
  • Configure keyspaces so that Cassandra replicates data to the new data center
  • Switch traffic to the new data center once it’s up to date
  • Decommission the old infrastructure

While this procedure was brilliantly documented by my co-worker Alain Rodriguez on the TLP blog, there are some subtleties related to running our new data center in Kubernetes, and more precisely, using K8ssandra, which we’ll cover in detail here.

Here are the steps we’ll go through to perform the migration:

  1. Restrict traffic to the existing data center
  2. Expand the Cassandra cluster by adding a new data center in a Kubernetes cluster using K8ssandra
  3. Rebuild the newly created data center
  4. Switch traffic over to the K8ssandra data center
  5. Decommission the original Cassandra data center

Performing the migration

Initial state

Our starting point is a Cassandra 4.0-rc1 cluster running on AWS EC2 instances.

In the AWS console, we can access the details of a node in the EC2 service and locate its VPC id, which we’ll need later to create a peering connection with the EKS cluster VPC:

Figure 1. Finding the VPC id.

The next step is to create an EKS cluster with the correct settings so that pod IPs will be reachable from the existing EC2 instances.

Creating the EKS cluster

We’ll use the k8ssandra-terraform project to spin up an EKS cluster with three nodes (see the guide on how to install K8ssandra on EKS for help).

After cloning the project locally, we initialize a few env variables to get started:
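For example (the variable names and values here are assumptions; the project’s README has the authoritative list):

```shell
# Hypothetical values; TF_VAR_* is Terraform's standard convention for
# passing input variables through the environment.
export TF_VAR_environment=dev
export TF_VAR_name=k8ssandra
export TF_VAR_region=us-east-2
```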

Then, we go to the env directory and run terraform init to initialize our Terraform workspace.

We can then update the variables.tf file and adjust it to our needs.

Ensure the private classless inter-domain routing (CIDR) blocks are different from those used in the EC2 cluster’s VPC; otherwise, you may end up with IP address conflicts.

Now create the EKS cluster and the three worker nodes by running terraform apply.

The operation will take a few minutes to complete and will print a set of outputs when it finishes.

Note the connect_cluster output, which gives the command that creates the kubeconfig context entry for interacting with the cluster using kubectl.

We can now check the list of worker nodes in our Kubernetes cluster with kubectl get nodes.

VPC peering and security groups

Our Terraform scripts will create a specific VPC for the EKS cluster. For our Cassandra nodes to communicate with the K8ssandra nodes, we’ll need to create a peering connection between both VPCs. Follow the documentation provided by AWS on this topic to create the peering connection: VPC Peering Connection.

Once the VPC peering connection is created, and the route tables are updated in both VPCs, update the inbound rules of the security groups for both the EC2 Cassandra nodes and the EKS worker nodes. You’ll want to accept all TCP traffic on ports 7000 and 7001, which Cassandra nodes use to communicate with each other (unless configured otherwise).
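If you’d rather script this than click through the console, the same rules can be added with the AWS CLI; the security group IDs below are placeholders:

```shell
# Placeholder IDs; substitute the real security groups on each side.
EC2_SG="sg-0123456789abcdef0"   # EC2 Cassandra nodes
EKS_SG="sg-0fedcba9876543210"   # EKS worker nodes

# Allow internode traffic (7000 plaintext, 7001 TLS) in both directions.
# Run against your account, e.g.:
# aws ec2 authorize-security-group-ingress --group-id "$EC2_SG" \
#   --protocol tcp --port 7000-7001 --source-group "$EKS_SG"
# aws ec2 authorize-security-group-ingress --group-id "$EKS_SG" \
#   --protocol tcp --port 7000-7001 --source-group "$EC2_SG"
echo "opening TCP 7000-7001 between $EC2_SG and $EKS_SG"
```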

Preparing the Cassandra cluster for the expansion

When expanding a Cassandra cluster to another data center, you need to make sure your keyspaces use the NetworkTopologyStrategy (NTS), assuming you haven’t created your cluster with the SimpleSnitch (otherwise, you’ll have to switch snitches first). This replication strategy is the only one that is DC and rack aware. The default SimpleStrategy does not consider DCs and behaves as if all nodes were colocated in the same DC and rack.

Figure 2. Original Cassandra cluster.

We’ll use cqlsh on one of the EC2 Cassandra nodes to list the existing keyspaces and update their replication strategy.
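The current replication settings are easy to review from cqlsh by querying the system_schema.keyspaces table; for example:

```shell
# Show each keyspace and its current replication strategy.
QUERY="SELECT keyspace_name, replication FROM system_schema.keyspaces;"
echo "$QUERY"
# Run on one of the EC2 nodes:
# cqlsh <node-ip> -e "$QUERY"
```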

Several system keyspaces use the special LocalStrategy and are not replicated across nodes. They contain only node-specific information and cannot be altered in any way.

We’ll alter the following keyspaces to make them use NTS and only put replicas on the existing data center:

  • system_auth (contains user credentials for authentication purposes)
  • system_distributed (contains repair history and materialized view build status)
  • system_traces (contains probabilistic tracing data)
  • tlp_stress (user-created keyspace)

Add any other user-created keyspace to the list. Here we only have the tlp_stress keyspace, created by the tlp-stress tool to generate some data for this migration.

We’ll now run the following command on all the above keyspaces using the existing data center name, in our case us-west-2:
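A sketch of that command, looped over the keyspaces listed above (the replication factor of 3 is an assumption; keep whatever RF your cluster already uses):

```shell
DC="us-west-2"   # existing data center name
RF=3             # assumption: match your current replication factor
for KS in system_auth system_distributed system_traces tlp_stress; do
  STMT="ALTER KEYSPACE ${KS} WITH replication = {'class': 'NetworkTopologyStrategy', '${DC}': ${RF}};"
  echo "$STMT"
  # Run on one of the EC2 nodes:
  # cqlsh <node-ip> -e "$STMT"
done
```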

You should make sure to pin client traffic to the us-west-2 data center by specifying it as the local data center. You can do this with the DCAwareRoundRobinPolicy in older versions of the DataStax drivers, or by specifying the local data center when creating a CqlSession object with the 4.x branch of the Java driver.
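With the 4.x Java driver, the local data center can also be set declaratively in the driver’s application.conf instead of in code; a minimal sketch (the contact point address is a placeholder):

```shell
# Driver 4.x picks this file up from the client application's classpath.
cat > application.conf <<'EOF'
datastax-java-driver {
  basic {
    contact-points = [ "10.0.1.10:9042" ]              # placeholder address
    load-balancing-policy.local-datacenter = "us-west-2"
  }
}
EOF
```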

You can find more information in the driver’s documentation.

Deploying K8ssandra as a new data center

K8ssandra ships with cass-operator, which orchestrates the Cassandra nodes and handles their configuration. Cass-operator exposes an additionalSeeds setting, which allows us to add seed nodes that are not managed by the local instance of cass-operator and, by doing so, create a new data center that expands an existing cluster.

Figure 3. Creating a K8ssandra deployment for the new data center.

We’ll put all our existing Cassandra nodes as additional seeds, and you shouldn’t need more than three nodes in this list, even if your original cluster is larger. The following migration.yaml values file will be used for our K8ssandra Helm chart:
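A sketch of what migration.yaml can look like for the K8ssandra 1.x chart, written here as a heredoc; the seed IPs and cluster name are placeholders, and the key names should be verified against the chart’s values reference:

```shell
cat > migration.yaml <<'EOF'
cassandra:
  version: "4.0.0"            # match the version run by the existing cluster
  clusterName: "my-cluster"   # placeholder: must match the EC2 cluster name
  additionalSeeds:            # placeholder IPs of existing Cassandra nodes
    - 10.0.1.10
    - 10.0.1.11
    - 10.0.1.12
  datacenters:
    - name: k8s-1             # must differ from the existing DC name(s)
      size: 3
stargate:
  enabled: false              # Cassandra only for the migration
reaper:
  enabled: false
medusa:
  enabled: false
EOF
```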

Note that the cluster name must match the value used for the EC2 Cassandra nodes, and the data center should be named differently than the existing one(s). We’ll only install Cassandra in our K8ssandra data center, but you could deploy other components during this phase.

Let’s deploy K8ssandra and have it join the Cassandra cluster:
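A typical invocation, assuming the K8ssandra Helm repo hasn’t been added yet (the release name is arbitrary):

```shell
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update
helm install k8ssandra k8ssandra/k8ssandra -f migration.yaml
```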

You can monitor the logs of the Cassandra pods with kubectl logs to check that they’re joining properly.

Cass-operator will only start one node at a time, so if a pod’s logs show that it’s waiting for its turn to start, check the logs of another pod instead.

If VPC peering is configured properly, the nodes should join the cluster one by one, and after a while, nodetool status should report all nodes in both data centers as Up/Normal (UN).

Rebuilding the new data center

Now that our K8ssandra data center has joined the cluster, we’ll alter the replication strategies to create replicas in the k8s-1 DC for the keyspaces we previously altered:
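This is the same ALTER KEYSPACE statement as before, now listing both data centers (RF 3 assumed); repeat it for system_auth, system_distributed, system_traces, and any other user keyspace:

```shell
STMT="ALTER KEYSPACE tlp_stress WITH replication = {'class': 'NetworkTopologyStrategy', 'us-west-2': 3, 'k8s-1': 3};"
echo "$STMT"
# cqlsh <node-ip> -e "$STMT"
```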

Figure 4. Replicating data to the new data center by rebuilding.

After altering all required keyspaces, rebuild the newly added nodes by running the following command for each Cassandra pod:
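For example, using kubectl exec; the pod name below follows cass-operator’s <cluster>-<dc>-<rack>-sts-<n> naming pattern and the credentials are placeholders, since K8ssandra typically requires superuser credentials for nodetool:

```shell
# Stream data for the replicated keyspaces from us-west-2 into this node.
# Repeat for ...-sts-1 and ...-sts-2.
kubectl exec -it my-cluster-k8s-1-default-sts-0 -c cassandra -- \
  nodetool -u my-superuser -pw my-password rebuild -- us-west-2
```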

Once all three nodes are rebuilt, the load reported by nodetool status should be similar on all nodes.

Note that K8ssandra will create a new superuser and that the existing users in the cluster will be retained as well after the migration. You can forcefully recreate the existing superuser credentials in the K8ssandra data center by adding the following block in the “cassandra” section of the Helm values file:
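The block in question names the existing superuser and the Kubernetes Secret that holds its password; the exact keys below are an assumption about the K8ssandra 1.x chart, so verify them against the chart’s values reference:

```shell
# Assumed key names; check the K8ssandra chart's values documentation.
cat > superuser-values.yaml <<'EOF'
cassandra:
  auth:
    superuser:
      username: cassandra-admin        # placeholder: your existing superuser
      secret: cassandra-admin-secret   # placeholder: Secret with its password
EOF
```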

Switching traffic to the new data center

Client traffic can now be directed at the k8s-1 data center, the same way we previously restricted it to us-west-2. If your clients are running from within the Kubernetes cluster, use the Cassandra service exposed by K8ssandra as a contact point for the driver.

If the clients are running outside of the Kubernetes cluster, you’ll need to enable Ingress and configure it appropriately (which is outside the scope of this blog post).

Figure 5. Redirecting client traffic to the new data center.

Decommissioning the old data center and finishing the migration

Once all the client apps/services have been restarted, we can alter our keyspaces once more, using the same ALTER KEYSPACE statements as before with a replication map that contains only the k8s-1 data center.

Figure 6. Decommission the original data center.

Then ssh into each of the Cassandra nodes in us-west-2 and run nodetool decommission.

They will appear as Up/Leaving (UL) in nodetool status while the decommission is running.

The operation should be fairly fast: no streaming takes place, since we no longer have keyspaces replicated on us-west-2.

Once all three nodes are decommissioned, we are left with the k8s-1 data center only.

As a final step, we can now delete the VPC peering connection as it is no longer necessary.

Note that the cluster can run in hybrid mode for as long as necessary. There’s no requirement to delete the us-west-2 data center if it makes sense to keep it alive.

Conclusion

In this post, I’ve illustrated that it’s indeed possible to migrate existing Cassandra clusters to K8ssandra without downtime by leveraging flat networking. This allows Cassandra nodes running in VMs to connect to Cassandra pods running in Kubernetes directly. If you haven’t explored K8ssandra yet, I strongly encourage you to check it out!

Follow the DataStax Tech Blog for more developer stories. Check out our YouTube channel for tutorials and DataStax Developers on Twitter for the latest news about our dev community.

Resources

  1. K8ssandra
  2. Calico — Determining Best Networking Option
  3. Apache Cassandra — Data Center Switch
  4. K8ssandra Terraform
  5. TLP-Stress
  6. Cass Operator
  7. K8ssandra Discord Community
  8. K8ssandra Forum
