There are a lot of options for queues out there, so why use MongoDB?
Broadly speaking, databases were not designed to be queues, so of
course they won't match a purpose-built solution. But if you
already have a database (MongoDB in this case) in your infrastructure
and your queue's throughput requirements are currently small, then
you can defer the additional infrastructure complexity of adding a new
queueing system until you actually need something more scalable. You
also get good visibility into your queue and get to work with more familiar tools. We are taking items from the queue in Haskell, but putting them on is simple to do in another language: it is just a database insert with a couple of required fields.
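To illustrate that enqueueing really is just an insert, here is a minimal sketch using the Haskell driver (any other language's MongoDB driver would do the same insert). The database name, collection name, and field names (`"handled"`, `"payload"`) are hypothetical placeholders, not the fields this queue library actually requires; consult its documentation for those.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Database.MongoDB

-- Enqueue a message by inserting a document with the fields the
-- consumer expects. "handled" and "payload" are made-up names here.
enqueue :: Pipe -> Document -> IO Value
enqueue pipe payload = access pipe master "mydb" $
  insert "queue" ["handled" =: False, "payload" =: payload]
```

The equivalent insert from a Ruby or Python driver is a one-liner as well, which is what makes a database-backed queue easy to feed from a polyglot system.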
But just because we are using a database doesn't mean this has to be a dumb polling interface. MongoDB has some features that can be taken advantage of to
make it scale much farther than that.
MongoDB supports tailable cursors on capped collections. A capped collection is a fixed-size collection that simply overwrites its oldest entries when it reaches its maximum size. A single query against one can be left open as a tailable cursor and will continue to receive new documents as they are inserted.
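The tailing loop can be sketched with the Haskell driver roughly like this (the database and collection names are made up, and the collection size is arbitrary):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Database.MongoDB

main :: IO ()
main = do
  pipe <- connect (host "127.0.0.1")
  access pipe master "mydb" $ do
    -- A capped collection has a fixed size and overwrites its
    -- oldest entries once it is full.
    _ <- createCollection [Capped, MaxByteSize (16 * 1024 * 1024)] "queue"
    -- A tailable cursor stays open after the last result and yields
    -- new documents as they arrive; AwaitData asks the server to
    -- block briefly instead of returning an empty batch immediately.
    cursor <- find (select [] "queue") { options = [TailableCursor, AwaitData] }
    let loop = do
          mDoc <- next cursor
          case mDoc of
            Just doc -> liftIO (print doc) >> loop
            Nothing  -> loop  -- no new data yet; ask the cursor again
    loop  -- runs forever, consuming documents as they are inserted
```

This is the mechanism that lets a consumer avoid a tight find-and-sleep polling loop: the server pushes new batches down the open cursor instead.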
Similar tactics are possible in PostgreSQL using its LISTEN/NOTIFY interface. There is a Ruby gem that does exactly this.
```haskell
worker <- liftIO $ createWorker runDB
forever $ do
  messageContents <- liftIO $ nextFromQueue worker
  -- handle messageContents here
```
This package uses mongoDB-haskell, the Haskell MongoDB driver. If you want to use persistent-mongoDB to strongly type your BSON as Haskell records, you can use the entity conversion functions available in persistent-mongoDB.