Cassandra: Lessons Learned

After using Cassandra for three years, starting with version 0.8.5, I thought I’d put together a blurb on lessons learned. Here goes!

Use Cases

What works

Anything that involves high-speed collection of data for analysis in the background or via batch processing. For example:

  • Logging and data collection
    • Web servers
    • Mobile devices
    • Internet of things
    • Sensors
    • Finance
      • Market data logging
      • Transaction logging
      • Trading activity
      • Record keeping for compliance
  • Telecommunications
    • Call log
  • Application servers
    • Sharing session data
    • Shopping carts
    • User profiles and preferences
    • Metrics, metering and monitoring
  • Lucene-style document indexing
  • Expandable, redundant media storage

What doesn’t work

  • Anything that requires real-time analytics and aggregation
  • Relational queries
  • Reliable counters

Data model

If you are a Java developer, the Cassandra data model is best described by the following pseudocode:

public class Row extends TreeMap&lt;ColumnName, ColumnValue&gt; { }    // columns, sorted by name
public class ColumnFamily extends HashMap&lt;RowKey, Row&gt; { }       // rows, hashed by key
public class Keyspace extends HashMap&lt;String, ColumnFamily&gt; { }  // column families by name
public class Cassandra extends HashMap&lt;String, Keyspace&gt; { }     // keyspaces by name

A keyspace is made up of column families. A column family is made up of rows. Rows are referred to by keys, and each key is unique within a column family. Rows are made up of columns.

Columns within a row are sorted by column name. The sort order is configured at the time the column family is created and may not be changed. Column names can be composite, made up of multiple parts.

Column values can be just about anything, including binary data. Values can be distributed counters, an important and useful feature. Columns can have a TTL and expire automatically – a very useful feature for managing data retention.
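To make the nesting above concrete, here is a runnable toy model (class names, column names, and values are made up for illustration – this is not Cassandra's actual API). It shows the one property that matters most in practice: columns within a row come back sorted by name, regardless of insertion order:

```java
import java.util.HashMap;
import java.util.TreeMap;

// Toy model of the data hierarchy described above; illustrative only.
public class ToyCassandra {
    // A row maps column names to values; TreeMap keeps columns sorted by name.
    static class Row extends TreeMap<String, String> { }

    // A column family maps row keys to rows; HashMap implies no key ordering.
    static class ColumnFamily extends HashMap<String, Row> { }

    public static void main(String[] args) {
        ColumnFamily callLog = new ColumnFamily();
        Row row = new Row();
        // Composite-style column names sort lexicographically within the row.
        row.put("2014-06-02:call", "555-0199");
        row.put("2014-06-01:call", "555-0123");
        callLog.put("user42", row);

        // Columns come back sorted by name, not by insertion order.
        System.out.println(callLog.get("user42").firstKey()); // 2014-06-01:call
    }
}
```

TTL-based expiry is not modeled here; in real Cassandra it is tracked per column alongside the value.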

Client API libraries

Thrift

Thrift is a low-level RPC protocol that Cassandra uses to expose its API. There is a multitude of client libraries built on it, such as Pelops, Hector, Astyanax, etc. I have been using Thrift directly on my projects. Note that the Cassandra team considers Thrift feature-complete, and therefore it has not seen a single new feature in at least two years.

CQL

Cassandra supports an SQL-like language called CQL. If you are looking for an equivalent of SQL, you are going to be disappointed.

In some cases it is simpler and easier to use than the lower-level Thrift API, and certainly many people swear by it. My humble opinion is that if you are looking for SQL, save yourself the hassle and use an SQL database. However, at least evaluate it if starting a new Cassandra implementation from scratch.
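For a flavor of the language, here is a small CQL sketch (the table and column names are made up for this example). Note how the clustering column in the primary key corresponds to the column sort order discussed earlier, and how TTL maps to the column expiry feature:

```sql
-- Illustrative CQL: a table for time-ordered sensor readings.
CREATE TABLE sensor_log (
    sensor_id text,
    logged_at timestamp,
    reading   blob,
    PRIMARY KEY (sensor_id, logged_at)  -- logged_at is the clustering (sort) column
);

-- TTL support: this row expires automatically after one day.
INSERT INTO sensor_log (sensor_id, logged_at, reading)
VALUES ('s1', '2014-06-01 12:00:00', 0x00)
USING TTL 86400;
```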

Hardware and infrastructure requirements

One major mistake that those new to Cassandra make is spending a lot of money on expensive hardware. In fact, Cassandra can run on a reasonably configured modern machine.

Commodity hardware with smaller SSD storage

In my experience the optimal configuration is a minimum of 16–32 GB of RAM, a 256–512 GB SSD, and at least four CPU cores. It is OK to virtualize, but make sure that each VM is on separate physical hardware using separate physical storage.

It is best to start off with no more than a 512 GB SSD per node and expand by adding more nodes, rather than adding more storage to the same hardware.

For example, if I were to configure Cassandra on Amazon I would pick either c3.2xlarge or c3.4xlarge instance types and combine the two drives using RAID0. As my needs grow I would add more nodes rather than move to larger nodes.

Network

The faster the better. Slow connections between nodes will result in replication delays.

Staffing and operations

Do not attempt to hire a traditional DBA to support Cassandra, as knowledge of both Linux and Java is required.

While reasonably performant out of the box with default settings, Cassandra is not an easy system to tune for optimal performance. Doing so requires a thorough understanding of core Java and Java memory management parameters. Outside of the Java ecosystem this can be a turn-off for some.

Storage, redundancy and performance are expanded by adding more nodes. This can happen during normal business hours as long as consistency parameters are met. The same applies to node replacements.
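The "consistency parameters" arithmetic can be sketched quickly (the floor(RF/2) + 1 majority formula is the standard definition of Cassandra's QUORUM level; the class below is just an illustration):

```java
// Quorum arithmetic for Cassandra consistency levels.
public class Quorum {
    // QUORUM requires a majority of replicas: floor(RF / 2) + 1.
    static int quorum(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    public static void main(String[] args) {
        // With RF=3, QUORUM needs 2 replicas, so one can be down
        // (e.g. being replaced) while reads and writes continue.
        System.out.println(quorum(3)); // 2
        // With RF=2, QUORUM needs both replicas: no tolerance for downtime.
        System.out.println(quorum(2)); // 2
    }
}
```

This is why RF=3 is the practical minimum if you want to take nodes out during business hours at QUORUM.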

As the number of servers grows, be prepared to hire a devops army or look for a managed solution. The DataStax offering helps but is still not enough. Even in the cloud we have not found a good managed solution: one requires you to give up Thrift and CQL, and Instaclustr as of this moment does not use third-generation SSD-backed instance types.

Technically speaking, backups are not strictly needed because data is replicated. In fact, backup mechanisms in Cassandra are limited, and you need to come up with your own. Point-in-time backups are possible but require creative scripting.

Pros and Cons

Pros

  • Powerful and flexible data model
  • Perfect for use cases where you refer to stored data directly by primary key, need a fast data collection mechanism, and analyze the data with a background or batch process
  • Replication is trivial to configure
  • Once set up, it can run unattended for long periods of time
  • Fixed cost of a Cassandra cluster in Amazon AWS can be an advantage vs. variable cost of DynamoDB

Cons

  • Point-in-time backups aren’t possible without clever scripting
  • Can’t utilize common DBA skills for operations
  • Can be a devops nightmare
    • A regular repair process is required but is very taxing on the system, requires babysitting, and may leave the node in an inconsistent state
  • Some advertised features are impractical to use in real life
    • Distributed counters can become inaccurate under heavy load
    • Wide rows are supported but not handled gracefully

Lessons learned

  • Do not spend money to make your life difficult. Use off-the-shelf hardware rather than spending on enterprise-grade iron
  • Use smaller SSDs on each node and expand capacity by adding nodes
  • Keep all nodes hot by spreading client connections across all nodes. This reduces the need for regular repairs.
  • Cassandra is not necessarily your Big Data solution
    • Is your data really Big?
    • Does your use case fit Cassandra’s strengths?
    • Modern SQL databases can handle millions of records
    • If you are in the Amazon environment, RDS supports dual redundancy
    • What constitutes Big Data anyway?
    • Consider your redundancy needs. Do you feel the probability of losing a server warrants the devops hassle of having more of them?
  • In the Amazon AWS cloud I would seriously consider alternatives
    • DynamoDB is much more cost-effective to use and operate if your workload is predictable. Since DynamoDB charges per use, costs can be variable; Cassandra, on the other hand, results in a fixed cost.
    • RDS offers dual redundancy with MySQL and PostgreSQL. Postgres support for JSON documents makes it a good alternative to Cassandra and MongoDB.
  • Some data structures are antithetical to Cassandra. Queues are problematic because Cassandra can’t handle frequently updated data gracefully. Read-before-write workloads are very taxing on the system, and writes followed immediately by reads are unpredictable, especially when the replication factor is higher than 2.
  • Wide rows can be a challenge even though Cassandra does support up to 2 billion columns. Wide rows can create a load imbalance and present a challenge for compactions and slice queries.
  • If you need to do complex joins or real-time aggregations, save yourself the trouble and use SQL, while reserving Cassandra for what it is really good at.
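The queue anti-pattern above can be sketched with a toy model (a deliberate simplification, not Cassandra internals): deleting a column does not remove it immediately but writes a tombstone, so a "read the head of the queue" slice must step over every tombstone left by earlier dequeues:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of why queues hurt: tombstones pile up ahead of live data.
public class QueueAntiPattern {
    static class Cell {
        final String name;
        final String value;
        final boolean tombstone; // true if the cell was "deleted"

        Cell(String name, String value, boolean tombstone) {
            this.name = name;
            this.value = value;
            this.tombstone = tombstone;
        }
    }

    // How many cells a "first live column" slice must examine.
    static int cellsScannedForHead(List<Cell> row) {
        int scanned = 0;
        for (Cell c : row) {
            scanned++;
            if (!c.tombstone) {
                return scanned; // found the live head of the queue
            }
        }
        return scanned;
    }

    public static void main(String[] args) {
        List<Cell> row = new ArrayList<>();
        // Enqueue 1000 items, then dequeue the first 999:
        // each dequeue leaves a tombstone rather than freeing the slot.
        for (int i = 0; i < 1000; i++) {
            row.add(new Cell("item" + i, "payload", i < 999));
        }
        // Reading the one remaining item still scans all 1000 cells.
        System.out.println(cellsScannedForHead(row)); // 1000
    }
}
```

Tombstones are eventually purged by compaction, but until then every read of the queue head pays this cost.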
