Cassandra.Link

The best knowledge base on Apache Cassandra®

Helping platform leaders, architects, engineers, and operators build scalable real-time data platforms.

6/3/2021

Reading time: 7 min

Apache Cassandra Lunch #43: DSBulk with sed and awk - Business Platform Team

by Arpan Patel

In Apache Cassandra Lunch #43: DSBulk with sed and awk, we discuss how we can use the DataStax Bulk Loader with sed and awk for Cassandra data operations. The live recording of Cassandra Lunch, which includes a more in-depth discussion and a demo, is embedded below in case you were not able to attend live. If you would like to attend Apache Cassandra Lunch live, it is hosted every Wednesday at 12 PM EST. Register here now!

We have a walkthrough showing how you can do ETL between different Cassandra instances using DSBulk (E + L) with sed and awk (T). The live recording embedded below contains the demo, so be sure to watch it!

Also, check out this blog for a series outlining general approaches to data operations in business-critical environments that leverage Cassandra and must maintain high availability in an agile development and continuous delivery environment.

If you are not familiar with the DataStax Bulk Loader, it is an open-source tool that can be used to load and unload CSV or JSON data into and out of supported databases. The tool can be used with DataStax Astra, DSE, and open-source Apache Cassandra. More on the tool can be found here, and you can also check out the GitHub repository for additional information.
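
Before the walkthrough, here is a minimal sketch of the basic syntax (my_keyspace, my_table, and ./export_dir are placeholder names; when -url is omitted on unload, dsbulk streams CSV to stdout, which is what the pipes later in this post rely on):

# Unload a table's contents as CSV files into ./export_dir (omit -url to stream to stdout)
dsbulk unload -k my_keyspace -t my_table -url ./export_dir

# Load that CSV back into a table
dsbulk load -url ./export_dir -k my_keyspace -t my_table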

We can now move on to the walkthrough portion of the blog. First, you will need to go to this GitHub repository. You can follow the instructions on the repository or the instructions in this blog; they are the same. In this walkthrough, we will use dsbulk to unload data from a DataStax Astra instance, transform the data using awk and sed, and then load it into a Dockerized Apache Cassandra instance. You can do this walkthrough with any two of the Cassandra distributions mentioned above. If you are working with deployed instances, you can use their contact points; more on that can be found here.

We will introduce the scenario that this example follows: someone on our team wants us to take the previous_employees_by_title table and create a new table that stores the time worked in days instead of a start time and an end time. This could be done within the same instance, but for the purposes of showing more of what dsbulk can do, we opted to move data between DataStax Astra and a Dockerized instance of Apache Cassandra.

Prerequisites

  • DataStax Astra
  • DataStax Bulk Loader (make sure to add it to your PATH)
  • Docker
  • GNU awk
  • GNU sed

1. Clone this repo and cd into it

git clone https://github.com/Anant/example-cassandra-dsbulk-with-sed-and-awk.git
cd example-cassandra-dsbulk-with-sed-and-awk

2. Set up DataStax Astra

2.1 – Sign up for a free DataStax Astra account if you do not have one already

2.2 – Hit the Create Database button on the dashboard

2.3 – Hit the Get Started button on the dashboard

This is a pay-as-you-go plan, but they won’t ask for a payment method until you exceed $25 worth of operations on your account. We won’t be using nearly that amount, so it’s essentially a free Cassandra database in the cloud.

2.4 – Define your database

  • Database name: whatever you want
  • Keyspace name: test
  • Cloud: whichever GCP region applies to you.
  • Hit Create Database and wait a couple of minutes for it to spin up and become active.

2.5 – Generate application token

  • Once your database is active, connect to it.
  • Once on dashboard/<your-db-name>, click the Settings menu tab.
  • Select Admin User for role and hit generate token.
  • COPY DOWN YOUR CLIENT ID AND CLIENT SECRET, as dsbulk will use them.

2.6 – Download Secure Bundle

  • Hit the Connect tab in the menu
  • Click on Node.js (it doesn’t matter which option you pick under Connect using a driver)
  • Download Secure Bundle
  • Move Secure Bundle into the cloned directory.

2.7 – Load CSV data

  • Hit the Upload Data button
  • Drag-n-drop the previous_employees_by_title.csv file into the section.
  • Once uploaded successfully, hit the Next button
  • Make sure the table is named previous_employees_by_title
  • Change employee_id from text to uuid
  • Change first_day and last_day to timestamp
  • Select job_title as the Partition Key
  • Select employee_id as the Clustering Column
  • Hit the Next button
  • Select test as the target keyspace
  • Hit the Next button to begin loading the CSV.
  • If the upload fails, just try it again; it should work on the second try.
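
For reference, the wizard settings above imply a source schema roughly like the following (a sketch; employee_name as text is an assumption, since the wizard steps only change the types of employee_id, first_day, and last_day):

-- Approximate source table implied by the upload wizard settings above
CREATE TABLE test.previous_employees_by_title (
    job_title text,        -- partition key
    employee_id uuid,      -- clustering column
    employee_name text,    -- type assumed
    first_day timestamp,
    last_day timestamp,
    PRIMARY KEY ((job_title), employee_id)
);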

3. Set up Dockerized Apache Cassandra

3.1 – Start the Docker container, expose port 9042, and mount this directory into the root of the container

docker run --name cassandra -p 9042:9042 -d -v "$(pwd)":/example-cassandra-dsbulk-with-sed-and-awk cassandra:latest

3.2 – Create Destination Table

docker exec -it cassandra cqlsh
source '/example-cassandra-dsbulk-with-sed-and-awk/days_worked_by_previous_employees_by_job_title.cql'
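
The sourced .cql file lives in the repository; a hedged sketch of the destination table it creates, assuming it mirrors the source table's primary key and stores the duration as an int, would be:

-- Sketch of the destination table; column names match the sed header rewrite below
CREATE TABLE test.days_worked_by_previous_employees_by_job_title (
    job_title text,                -- partition key (assumed, mirroring the source table)
    employee_id uuid,              -- clustering column (assumed)
    employee_name text,
    number_of_days_worked int,     -- int is an assumption; awk emits whole days
    PRIMARY KEY ((job_title), employee_id)
);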

4. Extract, Transform, and Load from DataStax Astra to Dockerized Apache Cassandra with dsbulk, sed, and awk

We will cover 2 methods for loading data into Dockerized Apache Cassandra after unloading it from DataStax Astra. The first method breaks the unloading and loading into 2 separate steps. The second method does everything in one pipe.

4.1 – Unload to a CSV file after extracting and transforming

In this command, we unload from DataStax Astra, run an awk script, apply a sed transformation, and write the output to a CSV. The awk script cleans up the timestamp format of the first_day and last_day columns so that we can use the mktime() function to calculate the time worked in seconds. The script then prints the first 3 columns followed by the calculated duration in days for each row. We also have a conditional that flips negative values to positive; negatives occur because we used a CSV generator that produced random datetimes for those 2 columns across ~100,000 rows. Finally, we use sed to rewrite the header that awk emits into the column names of the destination table, and write the output to a new CSV called days_worked_by_previous_employees_by_job_title.csv.
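
The actual duration_calc.awk ships with the repository; a minimal sketch of the logic described above, assuming dsbulk emits unquoted ISO-8601 timestamps such as 2021-06-03T12:00:00Z, might look like this:

# duration_calc.awk (sketch) -- run as: gawk -F, -f duration_calc.awk
NR == 1 {
    # Header row: keep the first three column names plus a placeholder
    # fourth field, which the sed substitution below rewrites.
    print $1 "," $2 "," $3 ",0"
    next
}
{
    start = $4; end = $5
    # Turn "YYYY-MM-DDTHH:MM:SSZ" into "YYYY MM DD HH MM SS" for gawk's mktime()
    gsub(/[-:TZ]/, " ", start)
    gsub(/[-:TZ]/, " ", end)
    days = (mktime(end) - mktime(start)) / 86400
    if (days < 0) days = -days   # randomly generated dates can be reversed
    printf "%s,%s,%s,%d\n", $1, $2, $3, days
}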

NOTE: Substitute your own values for the placeholders in the command.

dsbulk unload -k test -t previous_employees_by_title -b "secure-connect-<db>.zip" -u <Client ID> -p <Client Secret> | gawk -F, -f duration_calc.awk | sed 's/job_title,employee_id,employee_name,0/job_title,employee_id,employee_name,number_of_days_worked/' > days_worked_by_previous_employees_by_job_title.csv

4.2 – Load the CSV file into the Dockerized Apache Cassandra instance

dsbulk load -url days_worked_by_previous_employees_by_job_title.csv -k test -t days_worked_by_previous_employees_by_job_title

4.3 – Confirm via CQLSH

select count(*) from test.days_worked_by_previous_employees_by_job_title;

4.4 – Truncate table

truncate table test.days_worked_by_previous_employees_by_job_title;

4.5 – Do everything in one command

dsbulk unload -k test -t previous_employees_by_title -b "secure-connect-<db>.zip" -u <Client ID> -p <Client Secret> | gawk -F, -f duration_calc.awk | sed 's/job_title,employee_id,employee_name,0/job_title,employee_id,employee_name,number_of_days_worked/' | dsbulk load -k test -t days_worked_by_previous_employees_by_job_title

4.6 – Confirm via CQLSH

select count(*) from test.days_worked_by_previous_employees_by_job_title;

And that wraps up how we can quickly do some Cassandra data operations using dsbulk with sed and awk. Again, if you want to watch this walkthrough, we have a demonstration in the live recording of Apache Cassandra Lunch #43: DSBulk with sed and awk embedded below.

If you missed last week’s Apache Cassandra Lunch #42: SSTable Files with SSTableloader, be sure to check that out as well! If you want to attend Cassandra Lunch live every Wednesday at 12 PM EST, then you can register here now! Additionally, the playlist with all the previously recorded Cassandra Lunches is available here.

Additional Resources

  • https://github.com/datastax/dsbulk
  • https://docs.datastax.com/en/dsbulk/doc/dsbulk/reference/dsbulkCmd.html
  • https://docs.datastax.com/en/dsbulk/doc/dsbulk/install/dsbulkInstall.html
  • https://www.datastax.com/blog/introducing-datastax-bulk-loader
  • https://www.datastax.com/blog/datastax-bulk-loader-more-loading
  • https://www.datastax.com/blog/datastax-bulk-loader-examples-loading-other-locations
  • https://docs.datastax.com/en/astra/docs/loading-and-unloading-data-with-datastax-bulk-loader.html

Cassandra.Link

Cassandra.Link is a knowledge base that we created for all things Apache Cassandra. Our goal with Cassandra.Link was not only to fill the gap left by Planet Cassandra, but also to bring the Cassandra community together. Feel free to reach out if you wish to collaborate with us on this project in any capacity.

We are a technology company that specializes in building business platforms. If you have any questions about the tools discussed in this post or about any of our services, feel free to send us an email!
