

10/29/2020

Reading time: 5 min

Deploy a multi-cloud Elassandra cluster running on AKS and GKE

by John Doe



Once your AKS and GKE clusters are running, we need to deploy and configure additional services in these two clusters.

Elassandra Operator

Install the Elassandra operator in the default namespace:
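helm install --namespace default --name elassop --wait strapdata/elassandra-operator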

Configure CoreDNS

The Kubernetes CoreDNS service is used for two purposes:

  • Resolve DNS names of your DNS zone from inside the Kubernetes cluster, using DNS forwarders to your DNS zone.
  • Reverse-resolve the broadcast Elassandra public IP addresses to the Kubernetes nodes' private IPs.

You can deploy the CoreDNS custom configuration with the strapdata coredns-forwarder Helm chart, which installs (or replaces) the coredns-custom configmap; the CoreDNS pods then need to be restarted.

If your Kubernetes nodes do not have an ExternalIP set (as on AKS), the public node IP address should be made available through the custom label elassandra.strapdata.com/public-ip.

Then configure the CoreDNS custom configmap with your DNS name servers and host aliases. The following example uses the Azure DNS name servers:
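kubectl delete configmap --namespace kube-system coredns-custom
helm install --name coredns-forwarder --namespace kube-system \
  --set forwarders.domain="${DNS_DOMAIN}" \
  --set forwarders.hosts[0]="40.90.4.8" \
  --set forwarders.hosts[1]="64.4.48.8" \
  --set forwarders.hosts[2]="13.107.24.8" \
  --set forwarders.hosts[3]="13.107.160.8" \
  --set nodes.domain=internal.strapdata.com \
  --set $HOST_ALIASES \
  strapdata/coredns-forwarder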

Then restart the CoreDNS pods to reload the configuration; the label to match depends on the CoreDNS deployment:

On AKS:
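kubectl delete pod --namespace kube-system -l k8s-app=kube-dns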

On GKE:
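kubectl delete pod --namespace kube-system -l k8s-app=coredns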

Check the CoreDNS custom configuration:
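kubectl get configmap -n kube-system coredns-custom -o yaml

apiVersion: v1
data:
  dns.server: |
    test.strapkube.com:53 {
        errors
        cache 30
        forward $DNS_DOMAIN 40.90.4.8 64.4.48.8 13.107.24.8 13.107.160.8
    }
  hosts.override: |
    hosts nodes.hosts internal.strapdata.com {
        10.132.0.57 146-148-117-125.internal.strapdata.com 146-148-117-125
        10.132.0.58 35-240-56-87.internal.strapdata.com 35-240-56-87
        10.132.0.56 34-76-40-251.internal.strapdata.com 34-76-40-251
        fallthrough
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2020-06-26T16:45:52Z"
  name: coredns-custom
  namespace: kube-system
  resourceVersion: "6632"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns-custom
  uid: dca59c7d-6503-48c1-864f-28ae46319725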

Deploy a dnsutils pod:
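cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF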

Test resolution of public IP names to internal Kubernetes node IP address:
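kubectl exec -ti dnsutils -- nslookup 146-148-117-125.internal.strapdata.com
Server:    10.19.240.10
Address:   10.19.240.10#53

Name:    146-148-117-125.internal.strapdata.com
Address: 10.132.0.57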

ExternalDNS

ExternalDNS is used to automatically update your DNS zone and create A records for the Cassandra broadcast IP addresses. You can use it with a public or a private DNS zone, and with any DNS provider supported by ExternalDNS. In the following setup, we will use a DNS zone hosted on Azure.
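
The article's exact ExternalDNS deployment is not reproduced here. As a minimal sketch only, you could deploy it with the Bitnami external-dns chart, assuming an Azure DNS zone and a managed identity with access to the DNS resource group (the chart values and the resource group name below are assumptions):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install --name external-dns --namespace default \
  --set provider=azure \
  --set "domainFilters[0]=${DNS_DOMAIN}" \
  --set azure.resourceGroup=my-dns-resource-group \
  --set azure.useManagedIdentityExtension=true \
  --set txtOwnerId=kube1 \
  bitnami/external-dns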

Deploy Elassandra DC1

Deploy the first datacenter dc1 of the Elassandra cluster cl1 in the Kubernetes cluster kube1, with Kibana and Cassandra Reaper available through the Traefik ingress controller.
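
The datacenter itself is deployed with the strapdata/elassandra-datacenter Helm chart. As a minimal sketch only, the release name and value names below are assumptions, not the article's exact command:

helm install --namespace default --name default-cl1-dc1 --wait \
  --set replicas=3 \
  --set elasticsearch.enabled=true \
  --set kibana.enabled=true \
  --set reaper.enabled=true \
  strapdata/elassandra-datacenter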

Once the Elassandra datacenter is deployed, you get 3 Elassandra pods from 3 StatefulSets:
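
For example, using the pod labels shown later in this article:

kubectl get statefulsets,pods --context kube1 -n default -l app=elassandra,elassandra.strapdata.com/datacenter=dc1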

Once the datacenter is ready, check the cluster status:
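
For example, by running nodetool in one of the pods (the pod name below follows the operator's naming convention and is an assumption):

kubectl exec -it elassandra-cl1-dc1-0-0 --context kube1 -- nodetool status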

Then get the generated TLS certificates and the Cassandra admin password (because using the default cassandra user is not recommended, the Elassandra operator automatically creates an admin superuser role):
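
As a sketch of extracting them: the secret names come from the copy loop shown later in this article, but the data keys cacert.pem and cassandra.admin_password are assumptions:

kubectl get secret elassandra-cl1-ca-pub --context kube1 -n default -o jsonpath='{.data.cacert\.pem}' | base64 -d > cl1-cacert.pem
CASSANDRA_ADMIN_PASSWORD=$(kubectl get secret elassandra-cl1 --context kube1 -n default -o jsonpath='{.data.cassandra\.admin_password}' | base64 -d)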

Connect to the Elassandra/Cassandra node from the internet:
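SSL_CERTFILE=cl1-cacert.pem bin/cqlsh --ssl -u admin -p $CASSANDRA_ADMIN_PASSWORD cassandra-cl1-dc1-0-0.$DNS_DOMAIN 39001
Connected to cl1 at cassandra-cl1-dc1-0-0.test.strapkube.com:39001.
[cqlsh 5.0.1 | Cassandra 3.11.6.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
admin@cqlsh>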

Finally, you can check the Elassandra datacenter status (the CRD managed by the Elassandra Operator):
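
For example (the resource kind and object name below are assumptions based on the operator's naming):

kubectl get elassandradatacenter elassandra-cl1-dc1 --context kube1 -n default -o yaml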

Deploy Elassandra DC2

First, we need to copy the cluster secrets from the Elassandra datacenter dc1 into the Kubernetes cluster kube2 running on GKE:
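for s in elassandra-cl1 elassandra-cl1-ca-pub elassandra-cl1-ca-key elassandra-cl1-kibana; do
  kubectl get secret $s --context kube1 --export -n default -o yaml | kubectl apply --context kube2 -n default -f -
done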

Then deploy the Elassandra datacenter dc2 into the GKE cluster kube2, using the same ports.

  • The TRAEFIK_FQDN should be something like traefik-cluster2.$DNS_DOMAIN.
  • The cassandra.remoteSeeds must include the DNS names of the dc1 seed nodes, i.e. the first node (index 0) of each rack StatefulSet.

Once dc2 Elassandra pods are started, you get a running Elassandra cluster in AKS and GKE.

The datacenter dc2 started without streaming any data, so we now set up keyspace replication before rebuilding the datacenter from dc1 using an Elassandra task CRD. This task automatically includes the Cassandra system keyspaces (system_auth, system_distributed, system_traces, and elastic_admin if Elasticsearch is enabled).
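
For application keyspaces, replication to dc2 can be extended with standard CQL before the rebuild, for example (my_keyspace is a placeholder):

ALTER KEYSPACE my_keyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};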

The edctl utility allows waiting on conditions of Elassandra datacenters or tasks. We now rebuild dc2 from dc1 by streaming the data:
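
The rebuild is triggered by creating a task resource. As a sketch only, the API group and field names below are assumptions, not the article's exact manifest:

cat <<EOF | kubectl apply --context kube2 -f -
apiVersion: elassandra.strapdata.com/v1
kind: ElassandraTask
metadata:
  name: rebuild-dc2
  namespace: default
spec:
  cluster: cl1
  datacenter: dc2
  rebuild:
    srcDcName: dc1
EOF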

If Elasticsearch is enabled in dc2, you need to restart the Elassandra pods to update the Elasticsearch cluster state, since the data was populated by streaming from dc1:
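kubectl delete pod --namespace default -l app=elassandra,elassandra.strapdata.com/datacenter=dc2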

Finally, check that you can connect to dc2:
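SSL_CERTFILE=cl1-cacert.pem bin/cqlsh --ssl -u admin -p $CASSANDRA_ADMIN_PASSWORD cassandra-cl1-dc2-0-0.$DNS_DOMAIN 39001
Connected to cl1 at cassandra-cl1-dc2-0-0.test.strapkube.com:39001.
[cqlsh 5.0.1 | Cassandra 3.11.6.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
admin@cqlsh>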

Check the Elasticsearch cluster status on dc2. The Kibana index was automatically created by the Kibana pod deployed in the Kubernetes cluster kube2:
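
For example, assuming the Elasticsearch HTTP API is reachable on port 9200 inside the pod without TLS (the pod name and port are assumptions):

kubectl exec -it elassandra-cl1-dc2-0-0 --context kube2 -- curl -s "http://localhost:9200/_cat/indices?v"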

Here you get a multi-cloud Elassandra cluster running in multiple Kubernetes clusters. The Elassandra Operator gives you the flexibility to deploy in the cloud or on premises, in a public or private network. You can scale up or down and park or unpark your datacenters; you can lose a Kubernetes node, a persistent volume, or even a zone, and the Elassandra datacenter remains up and running; and you don't have to manage any sync issues between your database and your Elasticsearch cluster.

In the next articles, we'll see how the Elassandra Operator deploys Kibana for data visualisation and Cassandra Reaper to manage continuous Cassandra repairs. We'll also see how to set up the Prometheus Operator with Grafana dashboards to monitor the Elassandra Operator, the Elassandra nodes, and Kubernetes resources.

Have fun with the Elassandra Operator, and thanks in advance for your feedback!

References
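
  • http://operator.elassandra.io/
  • https://github.com/strapdata/elassandra-operator
  • https://elassandra.readthedocs.io/en/latest/
  • https://github.com/strapdata/elassandra
  • https://medium.com/@john.sanda/deploying-cassandra-in-kubernetes-with-casskop-1f721586825b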
