This simple form lets you try out different values for your Apache Cassandra cluster and see their impact on your application.
"Consistent" means that for this particular read/write level combo, all nodes will "see" the same data. "Eventually consistent" means that you might get old data from some nodes and new data from others until the data has been replicated across all replicas. The idea is that by trading consistency this way you can increase read/write speeds and improve tolerance against dead nodes.
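The rule behind this is the standard quorum-overlap condition: a combo is consistent when the read count plus the write count exceeds the replication factor, so every read is guaranteed to touch at least one replica holding the latest write. A minimal sketch (the parameter names `rf`, `r`, and `w` are illustrative, not from the form itself):

```python
def is_consistent(rf: int, r: int, w: int) -> bool:
    """A read/write level combo is consistent when every read set
    overlaps every write set, i.e. R + W > RF."""
    return r + w > rf

# QUORUM reads and QUORUM writes with replication factor 3: consistent.
print(is_consistent(3, 2, 2))   # True
# ONE/ONE with replication factor 3: only eventually consistent.
print(is_consistent(3, 1, 1))   # False
```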
How many nodes can go down without the application noticing? This is a lower bound: in large clusters you could lose more nodes without noticing, as long as they happen to handle different parts of the keyspace.
How many nodes can go down without physically losing data? This is a lower bound: in large clusters you could lose more nodes without losing data, as long as they happen to handle different parts of the keyspace.
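Both lower bounds fall out of the same arithmetic over the replicas of a single row: reads and writes keep succeeding while enough of the RF replicas are alive, and data survives while at least one replica does. A hedged sketch of those two bounds (function and parameter names are illustrative):

```python
def survivable_for_operations(rf: int, r: int, w: int) -> int:
    """Lower bound on replica nodes that can fail while both reads
    and writes on a row still reach enough live replicas: RF - max(R, W)."""
    return rf - max(r, w)

def survivable_without_data_loss(rf: int) -> int:
    """Lower bound on replica nodes that can fail without losing data:
    one surviving replica of each row is enough, so RF - 1."""
    return rf - 1

# RF=3 with QUORUM (2) reads and writes: one replica may die unnoticed,
# and two may die before any row's data is physically gone.
print(survivable_for_operations(3, 2, 2))   # 1
print(survivable_without_data_loss(3))      # 2
```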
The more nodes you read from, the more network traffic ensues, and the bigger the latencies involved. A Cassandra read operation won't return until at least this many nodes have responded with some data value.
The more nodes you write to, the more network traffic ensues, and the bigger the latencies involved. A Cassandra write operation won't return until at least this many nodes have acknowledged receiving the data.
The bigger your cluster is, the more the data gets distributed across your nodes. If you are using the RandomPartitioner, or are very good at distributing your keys with the OrderedPartitioner, this is how much data each of your nodes has to handle. It is also how much of your keyspace becomes inaccessible for each node you lose beyond the safe limit above.
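Assuming keys are spread evenly (as with the RandomPartitioner), each of the N nodes stores a copy of RF/N of the total data, since every row lives on RF nodes. A small illustrative sketch (names are assumptions, not from the form):

```python
def data_fraction_per_node(cluster_size: int, rf: int) -> float:
    """Fraction of the total data set each node stores, assuming keys
    are evenly distributed across the cluster: RF / N."""
    return rf / cluster_size

# A 6-node cluster with replication factor 3: each node holds
# half of the data (its primary ranges plus replicas of others).
print(data_fraction_per_node(6, 3))   # 0.5
```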