

1/30/2019

Reading time: 16 mins

Data Analytics using Cassandra and Spark - OpenCredo

by John Doe



This blog is the first part of a series “Spark – the Pragmatic bits”. Get the full overview here.

Typically, one cannot rely on Cassandra alone when performing aggregations, data analysis and the like. We can use it to capture all the data we need, but on its own Cassandra’s capabilities are insufficient to use that data to its full extent. In addition to this, moving large amounts of data in and out of Cassandra is often problematic.

In this blog post I intend to show how adopting Spark alongside Cassandra can help solve these problems. DataStax ships its solution (DataStax Enterprise) with a pre-packaged, integrated Cassandra and Spark cluster, but the actual libraries (spark-cassandra-connector) that make this happen are open source. For now, I will not write about the details of deploying a mixed Cassandra/Spark cluster (this will be covered in a follow-up post), but rather focus on the functionality it can provide.

Spark from ten thousand feet

Spark’s own documentation describes it as “a fast and general-purpose cluster computing system”. Its roots go back to UC Berkeley’s AMPLab research project, but it has been a full-blown Apache project for several years now, currently at version 2.1.0. Spark is written in Scala, but has APIs for Java, Python and R. Having used the Scala and Java interfaces, my experience is that the Scala API provides a smoother experience than the Java one.

In some ways you can say that Spark is the spiritual successor to Hadoop MapReduce – it’s younger, faster, nicer to use, and generally just better, but it solves the same problem in a similar fashion. Most importantly, however, Spark is agnostic to the underlying storage layer – you can use HDFS, the local filesystem, an RDBMS, or, in our case, Cassandra.

The core abstraction of Spark is called an RDD: a Resilient Distributed Dataset. There are very few assumptions made about it – it is a set of data that:

  • Is distributed/partitioned across the Spark cluster (and potentially in the datastore used)
  • Is replicated across the Spark nodes for fault tolerance

Note that Spark’s definition of an RDD also holds for Cassandra’s storage model, as partitions in Cassandra:

  • Are distributed/partitioned across the Cassandra nodes
  • Are replicated across the Cassandra nodes for fault tolerance

As a result, it is possible to “teach” Spark to understand and leverage Cassandra’s partitioning model – thus resulting in a very efficient combination of storage and computing.

How does it work?

The fundamental idea is quite simple: Spark and Cassandra clusters are deployed to the same set of machines. Cassandra stores the data; Spark worker nodes are co-located with Cassandra and do the data processing.

Spark is a batch-processing system, designed to deal with large amounts of data. When a job arrives, the Spark workers load data into memory, spilling to disk if necessary. The important aspect of this is that, ideally, there is no network traffic for this initial load: the Spark worker understands how Cassandra distributes the data and reads only from the local node.

Afterwards the “usual” map-reduce style behaviour takes place: certain operations can and will be executed on local nodes, others require data to be exchanged between the Spark workers (in Spark terminology this is called a “shuffle”).

Finally, the results are written back somewhere – again there are multiple storage options ranging from dumping a set of CSV files to local disk, HDFS, or databases such as Cassandra or an RDBMS.

In the second half of this post, I’m going to show some examples of how to use Spark and Cassandra together. The use-cases will be written around a scenario of an imaginary bank that captures credit card transactions of its customers and wants to get more out of this data.

Laying the foundations

To begin with, let’s explore what is required from the Cassandra perspective to store the data. At a basic level, we only need a very simple schema. Let’s start with two tables: one to capture basic customer information and another to log card transactions:
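A minimal CQL sketch of such a schema might look like the following; the exact column names (in particular the month-level bucket column) are assumptions made for illustration:

    CREATE TABLE cc.cc_customers (
        customerid text PRIMARY KEY,
        name       text,
        county     text
    );

    CREATE TABLE cc.cc_transactions (
        customerid text,
        bucket     text,       -- assumed month-level time bucket, e.g. '2017-01'
        id         timeuuid,   -- unique transaction identifier, clusters rows in time order
        card       text,
        status     text,       -- PENDING, COMPLETED, FAILED or REPAID
        amount     double,     -- negative for charges, positive for repayments
        PRIMARY KEY ((customerid, bucket), id)
    );

A double is used for the amount purely to keep the later Spark examples short; a real system would more likely use decimal.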

Table cc_customers is fairly simple: I’ve added the customer’s home county to allow for some rollup-processing demonstration.

The other table, cc_transactions, is the primary store for transaction data. It stores transactions per customer, bucketed by time, so it is directly suitable for querying transaction history based on time criteria.

In this example a transaction status can be PENDING, COMPLETED, FAILED or REPAID; any charges will have a negative amount while repayments will appear as positive ones. Also, a single customer can have multiple cards.

Transactions have a unique TimeUUID identifier that is a good clustering key as Cassandra will store them in their natural “wall clock time” order.

This is a decent data model: it scales appropriately and allows us to capture transaction information, as well as providing some limited querying ability. Additionally, only a handful of mutations need to be applied to each transaction (to change its status), which fits Cassandra’s sweet spot well [1].

However, it has several shortcomings: it is quite difficult to query the transactions by any criteria other than time. Customers would rightly expect our application to offer such features, but they are difficult to support. Additionally, it would be hard for the management of our bank to analyse customer behaviour and get aggregated statistics.

Cassandra on its own does not give you the tools to deal with these problems. Let’s see how adding Spark into the mix can address some of these challenges.

Use Case 1: Calculating current account balances

A reasonable request from any customer is to know their current balance on each of their cards. When asked “given my customer id and card, how much money do I have?”, our application needs to answer with a single number: the sum of all completed transactions and repayments on that particular card.

Option 1: Calculate on the fly

One possible way of implementing this requirement would be to read the full transaction history of that particular user and calculate the balance on the fly. It is easy to see that this would not be a good idea – we would need to query a potentially large amount of data from multiple partitions, which could be quite slow. Even worse, the number of these partitions would keep increasing over time, resulting in slower and slower queries.

The only advantage of this approach is that we would always be guaranteed a correct balance at any given time. The read complexity and latency, however, make it a bad choice.

Option 2: Store the latest account balance

An alternative approach would be to maintain a balance for each card at the time of capturing the card transactions. This might seem a better way, but it also has several serious problems:

  • It would be necessary to read the current balance before every transaction in order to be able to adjust it [2].
  • Given that Cassandra does not support ACID transactions, it’s difficult to ensure that the balance stays correct:
    • No card transactions are missed
    • None of them are added up twice
    • Concurrent updates don’t corrupt the result
  • Status changes in transactions need to be dealt with
  • It impacts write latency, which is likely to be important if we want to capture a large volume of transactions

Option 3: Leverage Spark for periodic rollups

The third option is that we periodically (say at the end of every day) calculate the balance based on the transaction log for each card and store it for quick reads. This is effectively Option #1 done for all customers, except that we use Spark to do the aggregations, and then simply store the result in another Cassandra table.

The following table will be used to store the balances:
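Again as a sketch (column names are assumed), the balance table could be defined as:

    CREATE TABLE cc.cc_balance (
        customerid text,
        card       text,
        balance    double,
        updated_at timestamp,   -- when the rollup that produced this balance ran
        PRIMARY KEY (customerid, card)
    );

Partitioning by customerid keeps all of a customer’s card balances in a single partition, which matters for the read path discussed below.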

This table definition is rather unsurprising: we store the single aggregated balance for each customer and card, and a timestamp of when this was calculated (we’ll get back to why in a minute).

All we need now is a way to efficiently calculate this value for all customers, which can be done with the following Spark fragment [3]:
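A sketch of such a rollup job, using the spark-cassandra-connector RDD API against the schema assumed earlier; an existing SparkContext sc is assumed, and the numbered comments match the walkthrough below:

    import java.util.Date
    import com.datastax.spark.connector._
    import com.datastax.driver.core.utils.UUIDs

    val includedStatuses = Set("COMPLETED", "REPAID")                                 // (1)
    val now = new Date()                                                              // (2)
    val cutoff = UUIDs.startOf(now.getTime)  // TimeUUID lower bound corresponding to "now"

    sc.cassandraTable("cc", "cc_transactions")                                        // (3)
      .select("customerid", "card", "status", "amount", "id")                         // (4)
      .where("id < ?", cutoff)                                                        // (5)
      .filter(row => includedStatuses.contains(row.getString("status")))              // (6)
      .keyBy(row => (row.getString("customerid"), row.getString("card")))             // (7)
      .map { case (key, row) => (key, row.getDouble("amount")) }                      // (8)
      .reduceByKey(_ + _)                                                             // (9)
      .map { case ((customerid, card), balance) => (customerid, card, balance, now) } // (10)
      .saveToCassandra("cc", "cc_balance",
        SomeColumns("customerid", "card", "balance", "updated_at"))                   // (11)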

In this particular case, I’m using Spark’s original RDD API, which is the most flexible way of interacting with Spark. An RDD is a high-level abstraction (in fact not too dissimilar to Java 8 Streams in the way you interact with it). It assumes nothing about the structure of the data besides that it’s distributed across the Spark cluster and we have resilience in the system.

As you can see, I’m writing reasonably simple Scala code to do the processing, which gives quite a lot of freedom in what that processing can actually be.

Let’s take a look line-by-line:

  1. A simple helper variable definition that I’ll use later in the code – declaring the statuses that I want to include in the calculations. PENDING or FAILED transactions will be excluded.
  2. Capture the time when this code started running; we store this and only count transactions that happened before it.
  3. Create an RDD based on the Cassandra table cc_transactions in keyspace cc so that we can use it in Spark. This is a method supplied by the spark-cassandra-connector.
  4. Project the Cassandra columns that we are interested in.
  5. A criterion to include only rows whose ID was captured before the Spark script started running. This initial where selection will actually be pushed down to Cassandra and executed as a CQL clause. It’s worth pointing out that, unlike usual Cassandra querying, there is no partition key specified – the clause is applied to all partitions across the whole cluster.
  6. Further filtering of the rows we are interested in. Unlike the previous (where) operation, this is Spark-level filtering, hence I can use a code-level Set-contains operation.
  7. keyBy results in a dataset structured in a way that is of interest to us: tuples in the format (K, V), where K is a composition of customerid and card. This is the first step towards achieving our goal – to get the balance per customer and card. Note that this operation results in Spark reshuffling the data, which is unavoidable in our case.
  8. The object that we are operating on at this point is still of type CassandraRow – which is awkward to use directly in calculations. We therefore use map to extract the interesting bit: the amount.
  9. Sum up all the amounts per key to get the results.
  10. A technical mapping: all the operations so far have produced a nested tuple structure (K, amount), where the key K is itself a tuple of (customerid, card). This is not something we can directly save back to Cassandra, so we just flatten it and add the current timestamp.
  11. We write the results back to Cassandra into the cc_balance table so that our application can query it directly.

The application serving customers their desired information can now simply query the cc_balance table for the sum of all transactions, accurate as of the rollup date. If required, it is then trivial to also query the most recent transactions (those logged after the batch rollup) and fold them in on the fly. This would be a single-partition query, and therefore efficient and scalable, unlike Option #1.
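As an illustration (the customer id, bucket and timestamp values are made up), the serving application could combine two queries, each hitting a single partition:

    -- rolled-up balances for one customer: a single-partition read
    SELECT card, balance, updated_at
    FROM cc.cc_balance
    WHERE customerid = 'customer-42';

    -- transactions captured since that rollup, to fold in on the fly
    SELECT card, status, amount
    FROM cc.cc_transactions
    WHERE customerid = 'customer-42'
      AND bucket = '2017-01'
      AND id > maxTimeuuid('2017-01-30 00:00+0000');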

Spark is reasonably quick at executing batch jobs such as this. While it’s definitely not fast enough to serve direct end-user requests, there would be no problem running the aggregation several times a day if required.

Use Case 2: Aggregating based on a Customer property

Another use-case might be to try to gain an understanding of how much people in different counties spend per month. In a relational database this would be a trivial operation, although it would more than likely require a table join. Cassandra does not support joins, and aggregations against the full dataset are not efficient.

Again, we can use Spark to generate the data we are interested in. For this particular use-case I will demonstrate a different Spark API: SparkSQL.

About Spark’s APIs

Spark, since version 1.6, has had two main APIs to interact with. The first, and original, one is the RDD abstraction that I used previously to calculate account balances. It is a very flexible model that makes few or no assumptions about the data and allows almost arbitrary code to be executed during the data processing.

This is great sometimes, but in many cases that level of flexibility is not required and could be traded for a more expressive description of the data processing.

The Dataframe / SparkSQL API leverages the fact that many commonly used data sources are tabular in nature – data is often organised in tuples (rows) of well-defined value types (columns) addressable by position.

A Dataframe is just such a tabular dataset, and it can be underpinned by RDBMS tables, Cassandra structures or simple CSV files. As we have the abstract notion of a table, we can use SQL to query and transform it – SQL, as a language, was designed to do exactly that.

Choosing the best way of interacting with Spark and Cassandra can come down to many factors, but the SparkSQL route is compelling since we already have tables and it is easier to get started with due to the familiarity of SQL.

SparkSQL, as the name suggests, uses a variation of the SQL language to perform queries across multiple Dataframes. The actual requirement is to sum up all COMPLETED transactions per county, per month. To do this, we are going to need to join the cc_transactions table with cc_customers and do a group by / sum operation.

To access Cassandra tables in SparkSQL, we first have to define them in Spark’s Session:
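One way of doing this, assuming a SparkSession called spark and the keyspace used earlier, is to register the Cassandra tables as temporary views:

    spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "cc", "table" -> "cc_transactions"))
      .load()
      .createOrReplaceTempView("spark_cc_transactions")

    spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "cc", "table" -> "cc_customers"))
      .load()
      .createOrReplaceTempView("spark_cc_customers")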

These declarations define the two tables in the Spark session with the aliases spark_cc_transactions and spark_cc_customers, based on the respective Cassandra tables [4].

Having defined these initial structures, we can now execute the following SparkSQL query:
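A sketch of what that query and the write-back could look like; the month is taken from the assumed bucket column, and the aggregation itself is the single SQL statement:

    val countyStats = spark.sql(
      """SELECT c.county, t.bucket AS month, SUM(t.amount) AS total
        |FROM spark_cc_transactions t
        |JOIN spark_cc_customers c ON t.customerid = c.customerid
        |WHERE t.status = 'COMPLETED'
        |GROUP BY c.county, t.bucket""".stripMargin)

    countyStats.write
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "cc", "table" -> "cc_county_statistics"))
      .mode("append")
      .save()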

Note that the bulk of the logic is encapsulated in the first statement – it’s a single SQL query that could just as easily have been executed on an RDBMS. This is actually sufficient to get the answers we were interested in.

The subsequent lines persist the results back to Cassandra. The results do not necessarily need to go to Cassandra; other options are available – they could be saved into a relational database, dumped as a CSV, or loaded directly into some application that can do visualisation. For simplicity I chose to write them back to Cassandra as it’s already available. The target table is cc_county_statistics, and the query below shows the type of data it would contain after the aggregation:
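A sketch of the target table and a query over it (the county value is illustrative):

    CREATE TABLE cc.cc_county_statistics (
        county text,
        month  text,
        total  double,
        PRIMARY KEY (county, month)
    );

    SELECT county, month, total
    FROM cc.cc_county_statistics
    WHERE county = 'Surrey';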

Use Case 3: Flagging suspicious transactions

In the third example, I want to show the possibility of joining a Cassandra table with an additional, external data source.

Let’s suppose that our bank periodically receives reports of lost credit cards. We want to identify those transactions that happened after a card was reported lost. The information about the lost cards is supplied to us in a CSV file:
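The exact format isn’t important for the example; assume something along these lines, with one row per reported card (values are made up):

    card,reported_date
    1111-2222-3333-4444,2017-01-12
    5555-6666-7777-8888,2017-01-25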

Rather than loading this file into its own Cassandra table and querying that, we can instead use the CSV file directly as a Spark data source to find the suspicious transactions.

Spark Dataframes also allow us to use a more programmatic (RDD-style) approach instead of writing SQL. This example demonstrates that particular style of code.

First, let’s define the data sources again in the Spark session:
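Assuming the CSV format above and a SparkSession called spark, the two sources could be defined as Dataframes like this (the file path is hypothetical):

    val lostCards = spark.read
      .option("header", "true")
      .csv("/data/lost_cards.csv")

    val transactions = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "cc", "table" -> "cc_transactions"))
      .load()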

Now we can do the actual processing:
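A sketch of that processing; the two helper functions are modelled here as UDFs, the timeuuid column is assumed to arrive as a string in the Dataframe, and the target table cc_suspicious_transactions is an assumption:

    import java.util.UUID
    import java.time.{LocalDate, ZoneOffset}
    import org.apache.spark.sql.functions.{col, udf}
    import com.datastax.driver.core.utils.UUIDs

    // helper: transaction TimeUUID -> Unix epoch millis
    val timeuuidToMillis = udf((id: String) => UUIDs.unixTimestamp(UUID.fromString(id)))
    // helper: 'yyyy-MM-dd' report date -> Unix epoch millis at midnight UTC
    val reportDateToMillis = udf((d: String) =>
      LocalDate.parse(d).atStartOfDay(ZoneOffset.UTC).toInstant.toEpochMilli)

    val suspicious = transactions
      .join(lostCards, "card")                                                   // join on the card identifier
      .filter(timeuuidToMillis(col("id")) >= reportDateToMillis(col("reported_date")))

    suspicious
      .select("customerid", "card", "id", "amount")
      .write
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "cc", "table" -> "cc_suspicious_transactions")) // assumed target table
      .mode("append")
      .save()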

The code above should hopefully be straightforward to understand – we join the transactions with the lost cards based on the card’s identifier, and find those transactions that happened after 12:00am on the day they were reported lost. The two helper functions generate the appropriate comparable Unix epoch timestamps.

The final lines write the results back to Cassandra.

With this post I have aimed to show some of the ways you can combine Cassandra and Spark to address some common challenges, which, if you were using Cassandra alone, would more than likely prove very problematic. Cassandra and Spark are technologies that fit together extremely well, both having a similar abstract notion of how data should be distributed and replicated across a set of computers.

Cassandra lacks advanced querying and data processing capabilities, while Spark on its own does not have a persistent data store. While the examples in this post are deliberately simple, adding Spark into the mix allows for much more complex processing.

Combining these technologies can be a great match for your processing needs. You can use Cassandra as a primary data store that real-time applications interact with, whilst at the same time running analytical processing or data rollups over the same dataset using Spark.

Whilst the recommended Cassandra deployment model is generally to physically isolate the workloads in a multi-datacenter deployment, the Spark processing is still able to run over the same set of data – replicated across the analytics side of the cluster, without any need to ETL it between an OLTP and an OLAP store.

For anybody already using Cassandra who has more advanced data processing requirements, we would highly recommend considering adopting Spark too.

So far, all the code examples shown assume that a SparkContext or Spark session already exists and is set up to connect to Cassandra.

There is a little bit more setup work needed in the Spark driver applications to make this actually happen.

First off, in Scala, any driver application needs the following sbt dependencies:
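For the Spark 2.1 era this post targets, with Scala 2.11, the dependencies would look roughly like this (versions are indicative):

    libraryDependencies ++= Seq(
      "org.apache.spark"   %% "spark-core"                % "2.1.0" % "provided",
      "org.apache.spark"   %% "spark-sql"                 % "2.1.0" % "provided",
      "com.datastax.spark" %% "spark-cassandra-connector" % "2.0.0"
    )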

SparkContext is the entry point to using Spark and is relied upon by the RDD API. It needs to be initialised in the following manner:
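A minimal initialisation, with the Cassandra contact point given as a configuration property (the host and application name are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("cc-analytics")
      .set("spark.cassandra.connection.host", "127.0.0.1")   // can also be supplied at submit time

    val sc = new SparkContext(conf)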

This is the minimum required code. Additionally, it is mandatory to supply a Spark master URI and settings for the Cassandra connection; however, these can be specified at execution time. For this blog post’s examples, I used the spark-submit script.

To use spark-submit, the driver application needs to be packaged up in a jar file and submitted to Spark in the following way:
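For example (the master URL, Cassandra host, class and jar names are placeholders):

    spark-submit \
      --master spark://spark-master:7077 \
      --conf spark.cassandra.connection.host=cassandra-host \
      --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.0 \
      --class com.example.CardAnalyticsJob \
      card-analytics-assembly.jar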

Please note the parameters passed in the command line: the Spark master’s URI and the Cassandra host specification. It is also required to specify the spark-cassandra-connector Spark package.

[1]: Many mutations on the same partition result in fragmentation at Cassandra’s SSTable level, which impacts read latency.
[2]: This is a well-known anti-pattern in Cassandra. Because of the lack of transactional guarantees and the optimisation for write latency, executing a read before mutating data may lead to incorrect results and increases latency.
[3]: I believe that Spark’s Scala API is easier to interact with than the Java API, therefore I’m using Scala examples to demonstrate the solution. The same could be written in Java or Python as well.
[4]: I intentionally chose different names to avoid confusion, but the tables in the Spark session can be named anything – in fact, calling them the same as their Cassandra counterparts is quite expressive.

References

  • DataStax Enterprise: https://www.datastax.com/products/datastax-enterprise
  • Spark: https://spark.apache.org/
  • Spark RDD whitepaper: https://people.csail.mit.edu/matei/papers/2012/nsdi_spark.pdf
  • Other Spark whitepapers: https://spark.apache.org/research.html
  • spark-cassandra-connector: https://github.com/datastax/spark-cassandra-connector
  • Patterns of Successful Cassandra Data Modeling: https://opencredo.com/cassandra-data-modelling-patterns/
  • How Not To Use Cassandra Like An RDBMS (and what will happen if you do): https://opencredo.com/how-not-to-use-cassandra-like-an-rdbms-and-what-will-happen-if-you-do/

Watch the “Detecting stolen AWS credentials with Spark” webinar here!
