I am working on a Mule ESB project that processes a significant number of inbound HTTP requests. We have been using MongoDB to log a ton of data because, well, it's just awesome. So I dropped the Mule MongoDB Connector (version 2.0.1) into my Maven pom.xml and had everything logging to Mongo in minutes. “What a piece of cake!” I thought. Everything was working great at first…until I threw a heavy load at it. It was then that I realized that the Mule Mongo Connector Fails Under Heavy Load. Keep reading to find out what caused the problem and see how I fixed it.

Before I describe the cause of the problem, let me describe my Mule flow a little. It is very straightforward, so I have no reason to think anything about my flow would cause the Mongo Connector to fail. Here is a simplified version of my Mule config.
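A sketch of what that flow looks like (the host, port, path, database, and class names here are placeholders rather than my real values, and the element names are reconstructions of the Mule 3-era schemas):

    <mongo:config name="mongoConfig" host="localhost" port="27017" database="mydb"/>

    <flow name="logRequestFlow">
        <!-- receive the incoming HTTP request -->
        <http:inbound-endpoint host="localhost" port="8081" path="log"
                               exchange-pattern="request-response"/>
        <!-- convert the POST params to a Map -->
        <http:body-to-parameter-map-transformer/>
        <!-- add the collectionName outbound header property -->
        <message-properties-transformer scope="outbound">
            <add-message-property key="collectionName" value="requests"/>
        </message-properties-transformer>
        <!-- call the business object -->
        <component class="com.example.RequestProcessor"/>
        <!-- convert the payload to JSON -->
        <json:object-to-json-transformer/>
        <!-- log it to Mongo -->
        <mongo:insert-object config-ref="mongoConfig"
                             collection="#[header:OUTBOUND:collectionName]"/>
    </flow>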

This flow does the following: receive the incoming request, convert the HTTP POST params to a Map, add a collectionName outbound header property, call a business object Java class, convert the payload to JSON, and log it to Mongo.

I tested this flow using ApacheBench. When testing with a concurrency level of 1, everything works great.
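The command was along these lines (the URL and the POST-data file are placeholders):

    ab -n 1000 -c 1 -p post.txt -T application/x-www-form-urlencoded http://localhost:8081/log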

Looking at the mongo client, everything was logged just fine.
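For example, a quick count in the mongo shell (database and collection names are placeholders) matched the number of requests ApacheBench had sent:

    > use mydb
    > db.requests.count()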

If I raise the concurrency level, however, everything blows up!
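Same test, but with a concurrency of 50:

    ab -n 1000 -c 50 -p post.txt -T application/x-www-form-urlencoded http://localhost:8081/log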

It gets through about 130 requests before it just hangs. Eventually, I get the following exceptions in my Mule log.

Checking the mongo client, I can see that only about 130 requests got logged (I wiped the collection before running my high-concurrency test).

From this point forward, nothing else gets logged to Mongo. The Mongo Connector is totally hosed.

So what caused this problem? Digging through the MongoDB Connector source code, I learned that it uses the super awesome Apache Commons GenericKeyedObjectPool to create a connection pool to Mongo. For a typical Cloud Connector that would probably be a great idea, but it is definitely a big NO NO for the Java Mongo driver. The Mongo documentation specifically states:

The Java MongoDB driver is thread safe. If you are using in a web serving environment, for example, you should create a single Mongo instance, and you can use it in every request.

The GenericKeyedObjectPool will return an idle object from its pool if one is available. If no idle objects exist (and under a load with a concurrency of 50 they certainly did not), it attempts to create a new one. When a new object gets created, the following code is called in the org.mule.module.mongo.MongoCloudConnector class.
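Paraphrased, the pooled-object factory does something like this (this is the shape of the code, not the verbatim connector source; the parameter and session types are stand-ins):

    // Called by the GenericKeyedObjectPool when no idle connection is available.
    public Object makeObject(Object key) throws Exception
    {
        ConnectionParameters params = (ConnectionParameters) key;
        // The problem line (819 in the real source): a brand-new Mongo
        // driver instance is constructed for every pooled object.
        Mongo mongo = new Mongo(params.getHost(), params.getPort());
        return new MongoSession(mongo.getDB(params.getDatabase()));
    }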

Line 819 is our culprit: it creates a NEW instance of the Mongo driver, which, according to the Mongo docs, is exactly what you should not do. Because multiple Mongo instances end up talking to MongoDB from different threads, a race condition occurs and the driver eventually locks up.

So, how did I fix this issue? Easy! I got rid of the Mule MongoDB Connector and created my own instance of the Mongo driver as a Spring bean. I also wrote two very simple POJO classes that use the Spring-injected Mongo driver to insert and update a collection in Mongo. Here is my Spring config.
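Reconstructed as a sketch (bean ids, host, port, database, and collection are placeholders):

    <spring:beans>
        <!-- one shared, thread-safe Mongo driver instance for the whole app -->
        <spring:bean id="mongo" class="com.mongodb.Mongo">
            <spring:constructor-arg value="localhost"/>
            <spring:constructor-arg value="27017"/>
        </spring:bean>

        <spring:bean id="mongoInsert" class="com.example.mongo.MongoInsertComponent">
            <spring:property name="mongo" ref="mongo"/>
            <spring:property name="database" value="mydb"/>
            <spring:property name="collection" value="requests"/>
        </spring:bean>

        <spring:bean id="mongoUpdate" class="com.example.mongo.MongoUpdateComponent">
            <spring:property name="mongo" ref="mongo"/>
            <spring:property name="database" value="mydb"/>
            <spring:property name="collection" value="requests"/>
        </spring:bean>
    </spring:beans>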

And here are my very simple Java classes. The AbstractMongoComponent is a base class I wrote that simply stores the injected Spring properties.
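Sketched out (package, class, and method names other than AbstractMongoComponent are placeholders, and in this sketch the collection name is injected as a bean property instead of read from the collectionName header; each class lives in its own file):

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBObject;
    import com.mongodb.Mongo;
    import com.mongodb.util.JSON;

    public abstract class AbstractMongoComponent
    {
        protected Mongo mongo;        // the single shared driver instance
        protected String database;
        protected String collection;

        public void setMongo(Mongo mongo) { this.mongo = mongo; }
        public void setDatabase(String database) { this.database = database; }
        public void setCollection(String collection) { this.collection = collection; }
    }

    public class MongoInsertComponent extends AbstractMongoComponent
    {
        // Mule invokes this with the JSON string produced by the
        // object-to-json transformer earlier in the flow.
        public String insert(String json)
        {
            DBObject doc = (DBObject) JSON.parse(json);
            mongo.getDB(database).getCollection(collection).insert(doc);
            return json;
        }
    }

    public class MongoUpdateComponent extends AbstractMongoComponent
    {
        // Upserts by _id; matching on _id is an assumption for this sketch.
        public String update(String json)
        {
            DBObject doc = (DBObject) JSON.parse(json);
            DBObject query = new BasicDBObject("_id", doc.get("_id"));
            mongo.getDB(database).getCollection(collection).update(query, doc, true, false);
            return json;
        }
    }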

Tying it all together, the change to my Mule flow was VERY simple. In addition to the Spring config (shown above), it now looks like this.
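In sketch form, the tail of the flow simply swaps the Mongo connector call for the Spring-managed component (names are placeholders as before):

    <flow name="logRequestFlow">
        <http:inbound-endpoint host="localhost" port="8081" path="log"
                               exchange-pattern="request-response"/>
        <http:body-to-parameter-map-transformer/>
        <component class="com.example.RequestProcessor"/>
        <json:object-to-json-transformer/>
        <!-- the mongo:insert-object call is replaced by the POJO bean -->
        <component>
            <spring-object bean="mongoInsert"/>
        </component>
    </flow>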

Once I made this change to use the Mongo driver as a global singleton, everything worked great. I repeated my high-concurrency ApacheBench test with the change in place, and Mule and Mongo handled the load with no problems whatsoever.

Hopefully the guys at MuleSoft will address this soon. Then again, it was so easy to use the plain Java MongoDB driver that I don't really know if they should bother.