DataStax cassandra netty OutOfMemoryError

Recently I encountered this issue in an application that connects to an Apache Cassandra NoSQL database. The application uses the DataStax Java driver to connect to Cassandra, and the driver in turn depends on the netty library. Specifically, the application uses the following jars:

  • cassandra-driver-core-2.0.1.jar
  • netty-3.9.0.Final.jar
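
For reference, these jars correspond to the following Maven coordinates (a sketch, assuming the application builds with Maven; in practice the netty 3.x artifact is pulled in transitively by the driver rather than declared directly):

```xml
<!-- DataStax Java driver for Cassandra -->
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>2.0.1</version>
</dependency>
<!-- netty 3.x, the driver's transitive dependency -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.9.0.Final</version>
</dependency>
```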

This application all of a sudden ran into 'java.lang.OutOfMemoryError: unable to create new native thread'. When a thread dump was taken on the application, around 1650 threads were in the 'runnable' state with the following stack trace:

"New I/O worker #211" prio=10 tid=0x00007fa06424d000 nid=0x1a58 runnable [0x00007f9f832f6000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
	at sun.nio.ch.EPollArrayWrapper.poll(...)
	at sun.nio.ch.EPollSelectorImpl.doSelect(...)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(...)
	- locked <...> (a sun.nio.ch.Util$2)
	- locked <...> (a java.util.Collections$UnmodifiableSet)
	- locked <...> (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(...)
	at org.jboss.netty.channel.socket.nio.SelectorUtil.select(...)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.select(...)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(...)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(...)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(...)
	at org.jboss.netty.util.ThreadRenamingRunnable.run(...)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(...)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(...)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(...)
	at java.lang.Thread.run(...)
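
A quick way to quantify this, assuming `jstack` from the JDK is on the PATH, is to grep the thread dump for the netty worker thread name (the PID and the stand-in dump below are illustrative):

```shell
# On a live JVM you would capture the dump with (12345 is a placeholder PID):
#   jstack 12345 > threads.txt
# Here a tiny stand-in dump is written so the commands below are runnable:
printf '%s\n' \
  '"New I/O worker #211" prio=10 runnable' \
  '"New I/O worker #212" prio=10 runnable' > threads.txt

# Count the netty I/O worker threads in the dump
grep -c '"New I/O worker' threads.txt
```

On the real thread dump this count was around 1650.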

Oh boy! That is way too many threads, and all of them are netty library threads. The root cause turned out to be that the Apache Cassandra NoSQL DB had run out of disk space, and that failure cascaded into the application as an OutOfMemoryError. Once more disk space was allocated to the Cassandra nodes, the problem went away.
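
Checking free space on the Cassandra nodes would have surfaced this much earlier. A minimal sketch, assuming the default data directory `/var/lib/cassandra` (the actual location is whatever `data_file_directories` in cassandra.yaml points to; the fallback to `/` just keeps the command usable on machines without Cassandra installed):

```shell
# Show disk usage for the filesystem holding the Cassandra data directory.
# /var/lib/cassandra is the default; see data_file_directories in cassandra.yaml.
DATA_DIR=/var/lib/cassandra
[ -d "$DATA_DIR" ] || DATA_DIR=/
df -h "$DATA_DIR"
```

On a live cluster, `nodetool status` also reports the per-node load, which helps spot nodes filling up.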

However, the point here is: even though Cassandra ran out of disk space, client applications should be resilient to it. It should not result in an OutOfMemoryError; that is unacceptable behavior.
