Why I am Tempted to Replace Cassandra With DynamoDB

I have written about Cassandra in the past. I have been using it actively for the past three years, and I am one of the bigger advocates of the technology out there. However, as I have pointed out on this blog and on my Twitter page, if you plan on scaling Cassandra out, be prepared to recruit an army of Java developers to do devops. Cassandra becomes a devops nightmare beyond 3-4 nodes. In this post I am going to try to explain why.

I started seriously considering DynamoDB for my project when I began looking into seemingly excessive inter-zone network charges. We traced them down to our Cassandra cluster of 3 nodes with a replication factor of 3, which essentially tripled our network charges on a regular basis. As we started thinking through optimization scenarios and whether we needed Cassandra at all for some parts of our application, DynamoDB began to make sense. We had already replaced a custom ActiveMQ cluster with Amazon SQS, resulting in over $1,000 in monthly savings in AWS charges, and even more savings in terms of devops. Could we do the same with Cassandra?

Cassandra devops revolves around the following areas: capacity and replication planning, consistency, scaling up and down, software upgrades, node replacements, and regular repairs.

Capacity and Replication Planning

In order to plan capacity with Cassandra one must understand the performance of a single node, the performance impact of replicating across more than one, and consistency behavior when more than one node is involved. There is no document that says “If you provision this instance type on AWS and configure it in this way, you will get this many operations per second.”

There is a multitude of settings in the configuration files that require a graduate degree in computer science to comprehend and that are best left alone at their defaults. In other words, there is no sure way for me to say that if I want to support this many concurrent users doing this many concurrent operations, I need this type of cluster.

Contrast that with DynamoDB: as far as capacity planning goes, all I need to care about is the minimum IOPS my application requires for a particular table, the maximum I am willing to pay for, and how often and when I should auto-scale it. Period. End of story.
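
To make that concrete, here is a minimal sketch of what that planning boils down to, using boto3; the table name, key schema, and the capacity numbers are all hypothetical:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # The entire capacity plan is two numbers on the table definition.
    dynamodb.create_table(
        TableName="events",  # hypothetical table name
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        ProvisionedThroughput={
            "ReadCapacityUnits": 100,   # read throughput floor I pay for
            "WriteCapacityUnits": 50,   # write throughput floor I pay for
        },
    )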

Consistency

In the Cassandra world, consistency revolves around two factors: consistency level and replication factor. You can have fast performance and eventual consistency, or you can have slower performance and strong consistency. While the consistency level is specified per call, the replication factor is specified at keyspace initialization. If you ever want to change the replication factor, be prepared for hours of maintenance work, which becomes all but impossible on a live cluster once the number of nodes grows.
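
To illustrate the asymmetry, here is a minimal sketch using the DataStax Python driver; the contact points, keyspace, table, and datacenter name are hypothetical:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"]).connect()

    # Replication factor is fixed when the keyspace is created; changing it
    # later means ALTER KEYSPACE followed by a repair on every node.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS app
        WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3}
    """)

    # Consistency level, by contrast, is chosen per request.
    query = SimpleStatement("SELECT * FROM app.users WHERE id = %s",
                            consistency_level=ConsistencyLevel.QUORUM)
    session.execute(query, ("some-user-id",))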

Again, this is an area where the DynamoDB model makes much more sense. If I want strongly consistent reads, I pay twice the IOPS. That’s it. It becomes a purely financial decision.
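
A minimal sketch of that decision with boto3 (the table name and key are hypothetical):

    import boto3

    table = boto3.resource("dynamodb").Table("events")  # hypothetical table

    # Eventually consistent read: half the read capacity cost.
    table.get_item(Key={"id": "42"})

    # Strongly consistent read: same call, twice the consumed read capacity.
    table.get_item(Key={"id": "42"}, ConsistentRead=True)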

Scaling up and down

Scaling a Cassandra cluster involves adding new nodes. Each additional node requires hours of babysitting. The process of adding a node takes a few minutes, but bootstrapping can take hours. If you are using manually assigned tokens you are in a bigger pickle, since you have to compute just the right balance, move tokens around, and clean up (we are using tokens because this is a legacy production cluster, and there is no safe and easy way to migrate to vnodes). Once you have added a node, it becomes a fixed cost plus extra network charges. If you ever want to scale down, you have to work backwards and decommission the extra nodes, which takes hours, and then rebalance the cluster again if you are still using tokens.
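
For a sense of the token arithmetic involved, here is a sketch assuming the RandomPartitioner (token range 0 to 2^127); the node counts are just examples:

    # Evenly spaced tokens for a ring of the given size. Every time the node
    # count changes, these targets change, and existing nodes must be shifted
    # one at a time with `nodetool move`, followed by cleanup.
    def balanced_tokens(node_count):
        return [i * (2 ** 127) // node_count for i in range(node_count)]

    print(balanced_tokens(3))  # tokens for the current 3-node ring
    print(balanced_tokens(4))  # every existing token moves when a 4th node joins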

The tokens vs. vnodes situation is of particular annoyance to me. Cassandra has left many of us excluded from this feature because it does not offer a clean, safe, and seamless upgrade mechanism.

Going back to DynamoDB, the only thing I need to care about is IOPS. What is my minimum? What is my maximum? How much am I willing to pay? Period. End of story.
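
Here is a sketch of what scaling up or down looks like on the DynamoDB side, again with boto3 and the same hypothetical table:

    import boto3

    # One API call; no bootstrapping, token moves, or decommissioning.
    boto3.client("dynamodb").update_table(
        TableName="events",  # hypothetical table name
        ProvisionedThroughput={"ReadCapacityUnits": 400, "WriteCapacityUnits": 200},
    )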

Software upgrades

Each time I had to upgrade Cassandra the process was the same and tedious: go to each node, upgrade the software, verify that the settings have migrated (Cassandra does not offer tools to cleanly port settings from older versions), start the new binaries, and run the upgradesstables process. It is a process that is bound to ruin a weekend for me. I am simply no longer interested.

One of the pet annoyances I have with Cassandra is how they deprecated the Thrift API. Many of us have used the software for years and now have to either use a deprecated API or port code to the new CQL. Some of us have chosen, wisely or not, to use a Thrift library that is no longer kept up to date. So to use the new API we have to port the code, and an obvious question comes up: if I have to port my code to a new library, do I still want to use Cassandra?
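
For anyone weighing that port, here is a rough sketch of what the CQL side of one rewritten call might look like with the DataStax Python driver; the contact point, keyspace, table, and columns are hypothetical:

    from cassandra.cluster import Cluster

    session = Cluster(["10.0.0.1"]).connect("app")  # hypothetical node / keyspace

    # Every Thrift-era get/insert in the old code has to be rewritten as a
    # CQL statement against the new driver, one call site at a time.
    insert = session.prepare("INSERT INTO users (id, email) VALUES (?, ?)")
    session.execute(insert, ("some-user-id", "user@example.com"))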

I do not need to concern myself with software upgrades with DynamoDB. Period. End of story.

Node replacements

This is similar to scaling, as I described above. Node replacement in the Cassandra world is an hours-long process. No such thing with DynamoDB.

Regular repairs

As a cluster grows larger, especially in multi-data-center scenarios, Cassandra recommends running a regular repair process on each node. Again, this is a long-running process that imposes a significant IO workload on every node in the cluster. It can run for days on end, results in extra disk utilization, and requires babysitting. On more than one occasion it has ruined a weekend for me.

DynamoDB does not require me to do anything of the sort.

So what is the moral of this story?

From the data model perspective, DynamoDB and Cassandra are very similar. Cassandra offers more flexibility for sure, and I would much prefer Cassandra over DynamoDB. However, with no managed offering that is as simple as DynamoDB, I really don’t have the patience anymore.

Yes, there is Instaclustr. But that too misses the point. I have done the math: it is simply not cost effective, and it requires me to do the same capacity planning exercises I am trying to avoid.

What I am really looking for is a fully managed Cassandra service that works just like DynamoDB, where I pay only for the capacity I actually use and scale up and down with simple API calls. Until that happens, I see DynamoDB on my horizon.

15 thoughts on “Why I am Tempted to Replace Cassandra With DynamoDB”

  1. I find it quite confusing. As you recognize at the very end, you are not comparing Cassandra to DynamoDB, but “manage everything yourself” with “get someone else to do it”.

  2. DynamoDB is way too expensive. I can do (and currently am doing) 50,000 writes per second with 2 Cassandra nodes that cost only $130.00/month each, for a total of $260.00/month. This same throughput in AWS costs $24,000 a month. So $24,000 vs. $260.00… I’d say that’s worth a few headaches. Even if you go with beefy machines, which I personally haven’t found necessary, you might spend $1,400 – $2,000 a month for the Linux boxes.

    1. Yes, and if you look at my other posts on the topic (look for the trackbacks above) you will see I arrived at the same conclusion.

      I am still working on reducing our reliance on Cassandra but DynamoDB is not necessarily it.

      1. Just curious as we are faced with a similar conundrum. What eventually did you go with?

    2. Levi, another point you are not considering in your Cassandra cost estimate is network transfer costs, if you are using a multi-AZ cluster…

  3. If I recall correctly, Netflix has a small devops team (5 or so people) to operate 2000+ Cassandra nodes.
    This doesn’t seem like “an army of Java developers to do devops”.

    When they upgrade a node they surely don’t “go to each node, upgrade the software, verify the settings”. They run a program to upgrade their clusters, and whether the number of nodes is 20, 2,000, or 20,000, the work should be the same. The degree of automation and the size of the infrastructure are certainly not the same, though.

    When comparing Cassandra and DynamoDB, it’s actually hard to say which one is easier to operate. No one can download DynamoDB and operate it themselves, so you are basically comparing a self-managed cluster to an externally managed cluster – the work is obviously not the same, and it’s hard for me to understand the point of comparing them the way you do.

    Hope this makes sense to you.
