  • How to have only one message consumed at a time in a WebSphere cluster

    Hi,

    Our application receives messages on a queue in a WebSphere cluster. Processing requires that only one message be processed at a time. We don't want to implement a locking mechanism.

    Is there any way to configure the messaging system so that only one message is processed at a time across the whole cluster?

    Thanks for any help.

    Regards,
    Mickaël

  • #2
    A queue is the implementation of the P2P messaging paradigm, so each message can have only one consumer. The fact that you have a cluster simply means you have several consumers, and your messaging system (WAS in this case) should load-balance messages between them (typically round-robin).
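
    To make the competing-consumers behaviour concrete, here is a minimal plain-JMS sketch, not taken from the thread: the JNDI names jms/ConnectionFactory and jms/RequestQueue are placeholders. Two consumers are opened on the same queue, and the broker delivers each message to exactly one of them, typically alternating between the two.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class CompetingConsumers {
        public static void main(String[] args) throws Exception {
            // Placeholder JNDI names -- in WebSphere these would be the
            // administratively defined connection factory and queue.
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/RequestQueue");

            Connection connection = cf.createConnection();

            // Two sessions on the same queue = two competing consumers.
            // The broker gives each message to exactly one of them.
            for (int i = 1; i <= 2; i++) {
                final int consumerId = i;
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                consumer.setMessageListener(message ->
                        System.out.println("consumer " + consumerId + " received " + message));
            }

            connection.start();
            Thread.sleep(60_000); // keep the JVM alive long enough to observe deliveries
            connection.close();
        }
    }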

    • #3
      Thanks for your answer Oleg,

      But I'm not sure I understand what you mean. A consumer can be deployed in a cluster or not, and messages from a queue can be processed in parallel; this is often a reason why JMS is used. The cluster can even increase the number of messages processed in parallel. My objective is to limit processing to one message at a time, for database concurrency reasons. Limiting to one message seems easy with a single server, since limiting the number of threads dedicated to the queue solves the problem (see the sketch at the end of this post). But how can I limit the number of threads across the whole cluster? I don't want two nodes of the cluster each processing a message at the same time.

      As you said, messages are load-balanced between the nodes of the cluster. But does the load balancer wait until the first message has been completely processed before dispatching the second one? I think this is not the case, but I may be wrong.
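
      For reference, this is roughly what "limiting the number of threads dedicated to the queue" looks like on a single server with Spring's DefaultMessageListenerContainer; the queue name and the connectionFactory/messageListener beans are placeholders I'm assuming, not something from our actual configuration. It only serialises processing inside one JVM, which is exactly the limitation I run into with the cluster.

      import javax.jms.ConnectionFactory;
      import javax.jms.MessageListener;
      import org.springframework.context.annotation.Bean;
      import org.springframework.context.annotation.Configuration;
      import org.springframework.jms.listener.DefaultMessageListenerContainer;

      @Configuration
      public class SingleThreadedJmsConfig {

          // One consumer thread only: with concurrentConsumers =
          // maxConcurrentConsumers = 1, at most one message from the queue is
          // being processed at any time -- but only inside this JVM. Every
          // cluster member running the same container still processes its own
          // message in parallel with the others.
          @Bean
          public DefaultMessageListenerContainer singleThreadedContainer(
                  ConnectionFactory connectionFactory, MessageListener messageListener) {
              DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
              container.setConnectionFactory(connectionFactory);
              container.setDestinationName("jms/RequestQueue"); // placeholder queue name
              container.setMessageListener(messageListener);
              container.setConcurrentConsumers(1);
              container.setMaxConcurrentConsumers(1);
              return container;
          }
      }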

      • #4
        WMQ provides broker-level settings for "strict message ordering". You can find more information in their resources, e.g.:
        http://www-01.ibm.com/support/docvie...id=swg21446463

        HTH,
        Mark

        • #5
          There are two types of destinations: Queues and Topics. They both have strict contracts (Queue is P2P, Topic is pub/sub), so cluster or no cluster it would be a violation to allow more than one consumer to receive a single message from a queue. If you need that, use a Topic.

          • #6
            I think I haven't been clear.

            I agree that there is only one consumer for a P2P queue and that each message should be processed only once. My consumer is my cluster. This doesn't mean there is only one MessageListener at a time: there can be several MessageListeners working in parallel on the same queue, and thus several messages from the same queue being processed at the same time, as written in JMS spec section 8.2. This is what I want to avoid.

            I don't have message ordering problems. My requirement is: I don't want messages A and B from the same queue to be processed at the same time. This is easy in a non-clustered environment by limiting the number of MessageListeners (equivalent to threads), as sketched at the end of this post, but I don't know how to do it in a cluster.

            I guess neither JMS nor Spring gives a solution to this problem. I'll look at WebSphere specifics, as I found a special sequential queue that should work the way I want.
            http://pic.dhe.ibm.com/infocenter/ti...iguration.html
            Thanks for your help.
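
            For completeness, the single-JVM behaviour I'm relying on looks like this in plain JMS (the JNDI names are placeholders): one session with one listener, and the session delivers its messages serially, so onMessage() calls never overlap inside that JVM even when processing is slow. A second cluster member has its own session, so its calls can overlap with these, which is the behaviour only the broker can prevent.

            import javax.jms.*;
            import javax.naming.InitialContext;

            public class SerialWithinOneJvm {
                public static void main(String[] args) throws Exception {
                    // Placeholder JNDI names -- adjust to the real WebSphere resources.
                    InitialContext ctx = new InitialContext();
                    ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
                    Queue queue = (Queue) ctx.lookup("jms/RequestQueue");

                    Connection connection = cf.createConnection();

                    // One session, one MessageListener: the session delivers its
                    // messages serially, so the slow work below is never executed
                    // for two messages at once in this JVM.
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageConsumer consumer = session.createConsumer(queue);
                    consumer.setMessageListener(message -> {
                        try {
                            System.out.println("start " + message.getJMSMessageID());
                            Thread.sleep(5_000); // simulate slow, DB-sensitive processing
                            System.out.println("done  " + message.getJMSMessageID());
                        }
                        catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    });

                    connection.start();
                    Thread.sleep(60_000); // keep the JVM alive to observe deliveries
                    connection.close();
                }
            }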

            • #7
              Hopefully you saw my reply above the last one from Oleg.

              There is not really anything Spring can do for this, since it's a broker-level feature (the broker has to manage whether or not multiple sessions can be active at the same time).

              • #8
                Thanks Mark, as you said the solution is at the broker level. Thanks for the link.
