Removal of Support for Mandatory and Immediate Flags

  • #16
    OK, we need to wrap this up for 1.0. Here's what I propose:

    1. RabbitTemplate.doSend() becomes protected, so you can set the flags on basic.publish if you need to
    2. We add a ChannelListener interface and allow you to register implementations in the ConnectionFactory, then you can add the ReturnListener in a sensible place

    Does that work?
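The second part of the proposal above could look roughly like this. This is a sketch of the proposed design only -- `ChannelListener`, `addChannelListener`, and the factory class are illustrative names, not the final Spring AMQP API:

```java
import java.util.ArrayList;
import java.util.List;

public class ChannelListenerSketch {

    // The proposed callback interface: invoked whenever a new channel is
    // created, giving user code a sensible place to attach a ReturnListener.
    interface ChannelListener {
        void onChannelCreated(Object channel);
    }

    // A connection factory that lets callers register ChannelListeners.
    static class ListeningConnectionFactory {
        private final List<ChannelListener> listeners = new ArrayList<>();

        void addChannelListener(ChannelListener listener) {
            listeners.add(listener);
        }

        // Called internally each time a channel is opened.
        Object createChannel() {
            Object channel = new Object(); // stand-in for a real AMQP channel
            for (ChannelListener l : listeners) {
                l.onChannelCreated(channel);
            }
            return channel;
        }
    }

    public static void main(String[] args) {
        ListeningConnectionFactory factory = new ListeningConnectionFactory();
        List<Object> seen = new ArrayList<>();
        // In real code the listener would do channel.addReturnListener(...)
        factory.addChannelListener(seen::add);
        factory.createChannel();
        System.out.println("listeners notified: " + seen.size());
    }
}
```

The point of the design is that the listener fires for every channel the factory creates, so the `ReturnListener` gets attached consistently instead of on one hand-built channel.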


    • #17
      My apologies for missing your post on the 12th; I only saw it today when I returned to the topic while working through some rabbit refinements. I must have missed the email.

      So, yes, the changes you propose would be helpful. I hope I didn't miss the boat here -- I saw the issue for ChannelListener, so I'm hoping that's a good sign.



      • #18
        I know this thread is a few months old at this point and that you've taken steps to allow the mandatory flag to be set if desired (by inheriting from RabbitTemplate). I think we have a use case for which the mandatory flag would be appropriate, but please feel free to suggest alternatives. Our situation is as follows:

        We have a high-volume queue for which we want to distribute the load across our cluster. The solution we came up with is for each instance of our application to create an anonymous/non-durable/auto-delete queue that is bound to a common exchange. Upon startup, our apps 'register' their anonymous queues, i.e., each sends a notification to a topic exchange to which producers are listening, identifying the exchange and queue name that producers can use to send messages. Producers will then, for this particular exchange, round-robin messages to each queue. The goal was an even distribution of queues across the cluster (since queues are sticky to the node they were created on), and thus an even distribution of messages.

        Since the queues are anonymous/non-durable/auto-delete, they'll disappear when our app shuts down. We have a mechanism for sending an 'unregister' message on shutdown, but that assumes a graceful shutdown. It's possible that the producer could end up sending messages to queues that have been deleted. We'd like some way for the producer to be notified that a consumer has vanished. We could go with a heartbeat, but something like the functionality of the mandatory flag seems simpler.

        I don't want to use a feature that could be deprecated at some point in the future, so I'm open to suggestions. Thoughts?
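The scheme above can be modeled in plain Java to make the moving parts concrete. This is a sketch under the poster's assumptions -- the class and method names are invented, and `onReturned` stands in for what a `basic.return` handler would do when the broker bounces a message published with the mandatory flag:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueRegistrySketch {

    static class QueueRegistry {
        private final List<String> queues = new CopyOnWriteArrayList<>();
        private final AtomicInteger next = new AtomicInteger();

        // Consumer side: announce/retract an anonymous queue.
        void register(String queueName)   { queues.add(queueName); }
        void unregister(String queueName) { queues.remove(queueName); }

        // Producer side: pick the next registered queue in round-robin order.
        String nextQueue() {
            int i = Math.floorMod(next.getAndIncrement(), queues.size());
            return queues.get(i);
        }

        // What a ReturnListener would do when a mandatory publish is bounced:
        // the auto-delete queue is gone, so drop it from the rotation.
        void onReturned(String queueName) { unregister(queueName); }

        int size() { return queues.size(); }
    }

    public static void main(String[] args) {
        QueueRegistry registry = new QueueRegistry();
        registry.register("app1.q");
        registry.register("app2.q");
        registry.register("app3.q");

        System.out.println(registry.nextQueue()); // app1.q
        System.out.println(registry.nextQueue()); // app2.q

        registry.onReturned("app3.q");            // broker bounced a message
        System.out.println(registry.size());      // app3.q removed
    }
}
```

This illustrates why the mandatory flag is attractive here: the broker's return notification doubles as the "consumer vanished" signal, so no separate heartbeat protocol is needed.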



        • #19
          If I understand correctly, I think you are just duplicating the work of the broker, which already balances work across as many consumers as are registered on a particular queue -- i.e., why don't you just use a single queue? Or did I miss something?


          • #20
            The problem we found with a single queue was that, in our clustered environment, all messages effectively pass through the broker on which the queue was created. This led to high memory usage on that broker -- so high that we hit the high memory watermark and never recovered. We had persistent messages turned off and verified that there were no un-acked messages. It seemed to be a problem of scale.


            • #21
              I see. Maybe you could try the rabbitmq-discuss mailing list for more detailed advice about clusters and scale - and also to get the latest on why the mandatory flag is disliked and considered for deprecation. I would be surprised if there wasn't a broker-level fix for your problem, rather than having to code workarounds into your clients.


              • #22
                Yeah, they actually kind of do:


                See the link posted by Alexis. Matthew wrote a custom "hashing exchange" that will evenly distribute messages across all queues bound to the exchange. Seems like a great solution and one we're investigating. Interesting to note that this custom exchange was only written within the last month.
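The idea behind a hashing exchange can be sketched in a few lines: hash the routing key onto a ring of points claimed by the bound queues, so messages spread across queues without any producer-side registry. This is a simplified model of the concept, not the actual exchange's code, and the weights/points scheme here is an assumption:

```java
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class HashingExchangeSketch {

    private final TreeMap<Integer, String> ring = new TreeMap<>();

    // Bind a queue with a weight: more points on the ring, more traffic.
    void bind(String queue, int weight) {
        for (int i = 0; i < weight; i++) {
            ring.put((queue + "#" + i).hashCode(), queue);
        }
    }

    // Route a message: walk clockwise from hash(routingKey) to the next point.
    String route(String routingKey) {
        SortedMap<Integer, String> tail = ring.tailMap(routingKey.hashCode());
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        HashingExchangeSketch exchange = new HashingExchangeSketch();
        exchange.bind("q1", 10);
        exchange.bind("q2", 10);
        exchange.bind("q3", 10);

        // Count how many distinct queues receive traffic for 30,000 keys.
        Map<String, Integer> counts = new TreeMap<>();
        for (int i = 0; i < 30_000; i++) {
            counts.merge(exchange.route("key-" + i), 1, Integer::sum);
        }
        System.out.println(counts.size()); // all three queues get messages
    }
}
```

Because routing depends only on the key's hash, the producers stay stateless; the exchange, not the clients, is responsible for spreading load across whichever queues are currently bound.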