Overview
This is the first in a series of blog posts I’m planning to create as part of my prep for my Cassandra Summit talk ‘Load Testing Cassandra Applications’.
cassandra-stress is a great utility for stress testing Cassandra. However, the available documentation is a little sparse, and it is not always entirely clear what load cassandra-stress will generate in a given situation. In this series of blog posts, I plan to walk through a number of cassandra-stress scenarios, examining exactly how cassandra-stress behaves.
For this series, I will be using the latest 3.x version of cassandra-stress. If I notice any differences related to a particular point version of Cassandra I will call them out.
In this first post, I will look at just about the most basic cassandra-stress command you can run:
    cassandra-stress write n=10 -node x.x.x.x
I chose this command for two reasons: firstly, using a simple command will allow us to look at some of the basic functions that apply across any cassandra-stress command and secondly, in almost all scenarios, you will want to execute a write to populate a cluster with data before running a read or mixed scenario.
cassandra-stress
Let’s start by looking at the components of the command itself:
- cassandra-stress: invokes a shell script which in turn invokes the main function of the Java class org.apache.cassandra.stress.Stress
- write: execute write operations (other options being read, mixed, user, counter_write and counter_read)
- n=10: execute 10 operations
- -node x.x.x.x: the address of a node in the cluster to establish the initial connection
Filling in with defaults, here are all the settings cassandra-stress will actually use for the run (from my in-progress implementation of CASSANDRA-11914):
Step by Step
As you can see, there is a lot going on behind the scenes. So, let’s walk through, step by step, what cassandra-stress actually does when you execute this command:
- The options provided through the command line are parsed and filled in with a whole range of default options as necessary (I will touch on the important defaults below and more in future articles). Interestingly, at this stage a valid cassandra.yaml needs to be loaded. However, as far as I could tell, this is just a side effect of cassandra-stress using some core Cassandra classes, and the contents of the cassandra.yaml have no effect on the actual cassandra-stress operations.
- The Cassandra Java driver is used to connect to the node specified in the command line. From this node, the driver retrieves the node list and token map for the cluster and then initiates a connection to each node in the cluster.
- A create keyspace command is executed (if the keyspace doesn’t already exist) with the following definition:
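(The statement below is a sketch from memory rather than the verbatim definition – by default cassandra-stress creates a keyspace named keyspace1 using SimpleStrategy with a replication factor of 1, along the lines of the following.)
    CREATE KEYSPACE keyspace1
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};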
While this definition is a reasonable choice for a simple default, it’s important to note that this is unlikely to be representative of a keyspace you would want to run in production. By far the most common production scenario would be to use NetworkTopologyStrategy and a replication factor of 3. To have cassandra-stress create a keyspace with this strategy you would need to drop any existing keyspace and add the following parameters to the cassandra-stress command line:
    -schema replication(strategy=NetworkTopologyStrategy,DC_NAME=3)
Replace DC_NAME with the actual name of your Cassandra data center. On some systems you may also need to escape the brackets, i.e. replication\(...\).
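For example, assuming a data center named DC1 (substitute your own data center name), the full command might look something like this – quoting the option is another way to avoid escaping the brackets:
    cassandra-stress write n=10 -schema "replication(strategy=NetworkTopologyStrategy,DC1=3)" -node x.x.x.x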
- Once the keyspace is created, cassandra-stress creates two tables in the keyspace: standard1 and counter1. We’ll ignore counter1 for now as it’s not used in this test. The definition of the standard1 table created is as follows:
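(As with the keyspace, the statement below is a sketch from memory rather than verbatim output – the important characteristics are a single blob partition key named key, five blob value columns C0 to C4, compact storage enabled and compression disabled.)
    CREATE TABLE keyspace1.standard1 (
        key blob PRIMARY KEY,
        "C0" blob,
        "C1" blob,
        "C2" blob,
        "C3" blob,
        "C4" blob
    ) WITH COMPACT STORAGE
      AND compression = {'enabled': 'false'};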
While, again, this is a reasonable choice for a simple default, there are a few characteristics to keep in mind if you are trying to draw conclusions about performance using this table definition:
- There is no clustering key, so there is one row per partition – potentially very different performance to a scenario with many rows per partition.
- Compression is disabled – the overhead of compression is typically not huge but could be significant.
- Compact Storage is enabled – this is not enabled by default and will result in a smaller representation of the data on disk (although it makes minimal difference with Cassandra 3.x).
I’ll cover options for using different schemas in a later installment of this series.
- cassandra-stress will attempt to make a JMX connection to the nodes in the cluster to collect garbage collection stats. If the attempt fails, the run will proceed without collecting garbage collection stats.
- Next, cassandra-stress will run a warm-up. This is standard practice in load testing to reduce variation from start-up effects, such as code being loaded into memory and JVM hotspot compilation. The number of warm-up iterations is the lesser of 25% of the target number of operations or 50,000 – 2 operations in this trivial example. The warm-up operations are basically the same as the test operations, except they are not timed, so I won’t go into them in detail.
- We’ve entered the actual load test phase. cassandra-stress creates 200 client threads and begins executing the target number of operations. In a real test, using the -rate option to control the number of client threads is a good way to control load on the cluster.
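For example (the numbers here are purely illustrative – choose values appropriate to your own test), the thread count can be fixed like this:
    cassandra-stress write n=1000000 -rate threads=50 -node x.x.x.x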
The first attempted operation will create a CQL prepared statement as follows:
    UPDATE "standard1" SET "C0" = ?,"C1" = ?,"C2" = ?,"C3" = ?,"C4" = ? WHERE KEY=?
Although we were probably expecting an INSERT statement, updates and inserts are identical in terms of the Cassandra implementation, so we can expect performance to be the same. This prepared statement will then be executed 10 times with different, random data generated for each execution. The statement will be executed with consistency level LOCAL_ONE.
cassandra-stress seeds the random generation with a static string plus the column name and a seed number which, for the write command, defaults to sequential numbers from 1 to the number of operations. That means that each column will get different values, but the set of values generated will be the same over multiple runs. Generating a static set of values is necessary for read tests, but it does have the side effect that if you were to run our sample operation (write n=10) 1000 times, the end result would still be just 10 rows of data in the table.
- Finally, cassandra-stress prints its results. Here’s an example from a run of this command:
Many of these results are self-explanatory, but some bear further explanation:
Op rate is the rate at which commands were executed. Partition rate is the rate at which partitions were visited (updated or read) by those commands, and Row rate is the rate at which rows were visited. For simple, single-row commands all three rates will be equal. The rates will vary in more complex scenarios where a single operation might visit multiple partitions and rows.
Similarly, Total partitions is the total number of partitions visited during the test. It’s worth noting that this is not a count of unique partitions, so even in some write-only scenarios it may not reflect the total number of partitions created by the test.
The GC statistics report on garbage collection and are zero in this case as the JMX ports on the test cluster were blocked.
Conclusion
Well, that’s a lot to write about a simple test that inserts 10 rows into a table. Putting it together has helped improve my understanding of cassandra-stress, and I hope it’s useful for you too. In future installments, I’ll look in more detail at the different data generation operations, mixed tests and custom schemas using the YAML configuration file. Let me know in the comments if there are any particular areas of interest for future articles.
Click here for Part Two: Mixed Command
Click here for Part Three: Using YAML Profiles