Beginner Question: Load balancing across application servers

  • Beginner Question: Load balancing across application servers

    I would like to know how Spring Integration works when deployed on multiple application servers. Can someone point me to the correct documentation?
    Ashish Jamthe

  • #2
    You can use a transport of your choosing to get messages from one node to the other. If your application servers are loosely coupled, this should be a contract-first web service (over HTTP, JMS, or another transport); if the systems are tightly coupled and you just want to scale out, you might want to use remoting (RMI, HttpInvoker, JMS).
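    A rough sketch of the loosely coupled option over JMS (the channel ids, the queue name, and the namespace wiring are placeholders, not from this thread):

    	<!-- node A: messages sent to "outbound" leave via a shared queue -->
    	<channel id="outbound"/>
    	<jms:outbound-channel-adapter channel="outbound" destination-name="shared.queue"/>

    	<!-- node B: messages arriving on the shared queue enter the local flow -->
    	<jms:inbound-channel-adapter channel="inbound" destination-name="shared.queue"/>
    	<channel id="inbound"/>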

    The load-balancing part you can do with Competing Consumers: you can subscribe multiple endpoints to the same queue channel and let them poll concurrently. At the moment this will not work with a DirectChannel (see INT-567).
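    For example (a sketch only; the ids are invented and the poller/trigger syntax is from memory of the 1.0 schema, so check it against the reference guide):

    	<!-- a buffering channel that several consumers poll concurrently -->
    	<channel id="work"/>

    	<!-- two competing consumers subscribed to the same channel -->
    	<service-activator input-channel="work" ref="workerOne" method="handle">
    		<poller><interval-trigger interval="100"/></poller>
    	</service-activator>
    	<service-activator input-channel="work" ref="workerTwo" method="handle">
    		<poller><interval-trigger interval="100"/></poller>
    	</service-activator>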

    To get back to your question, Spring Integration doesn't automagically weave multiple application servers together, it just gives you the building blocks to do that yourself.


    • #3

      If the polling interval is the same on all consumers, would competing consumers give round-robin balancing?

      If more control is required over the balancing, is it possible to integrate Camel?

      Many thanks,



      • #4
        If the intervals are the same it would give you something very close to round-robin.


        • #5
          In 1.0.2 the issue Iwein mentioned (INT-567) will provide load-balancing from the dispatcher itself (as a "dispatch strategy"). That means that a direct channel can be used (synchronous, in the same thread), and no intermediate "load-balancer" component is necessary. It will simply be part of the channel->subscriber invocation process.

          If, on the other hand, you want to buffer messages in the channel and use polling consumers, then those consumers will balance the load in a highly configurable way. For example, you can provide different polling intervals, and you can also provide different task-executors. That means that one consumer may poll very fast with only a couple threads while another consumer polls more slowly but with more threads, etc.
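          For example (the ids and executor beans here are invented; the poller syntax is from memory and may need adjusting):

          	<!-- a fast poller backed by a small thread pool -->
          	<service-activator input-channel="work" ref="fastWorker" method="handle">
          		<poller task-executor="smallPool"><interval-trigger interval="10"/></poller>
          	</service-activator>

          	<!-- a slower poller backed by a bigger thread pool -->
          	<service-activator input-channel="work" ref="slowWorker" method="handle">
          		<poller task-executor="bigPool"><interval-trigger interval="1000"/></poller>
          	</service-activator>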


          • #6
            I just committed the changes for INT-567. You can use:

            	<channel id="defaultChannel"/>
            	<channel id="failOverChannel" dispatcher="fail-over"/>
            The first option will do round-robin load balancing; the second will use fail-over (always use the same handler unless it fails). I don't see the need for other strategies, but we could add them later, of course.

            Please give it a go and let me know how you like the syntax (we can change it before 1.0.2, provided you give your input in time).


            • #7
              In my situation my application is a consumer of a JMS queue (using jms:inbound-channel-adapter) and will be deployed to a WAS ND cluster of 4 application servers. At the risk of sounding stupid, how can I expect SI to behave in this scenario? I know the idea of clustering application servers is that they behave as a single server, but will I effectively have competing consumers?


              • #8
                I don't know WAS, but I think you will indeed have 4 consumers, which will be competing because of the nature of a JMS queue.
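                In config terms (names invented; this assumes the jms namespace is set up), each node deploys the same adapter, and the queue hands each message to exactly one of them:

                	<!-- identical on all 4 nodes; the queue semantics make them competing consumers -->
                	<jms:inbound-channel-adapter channel="fromJms" destination-name="orders.queue"/>
                	<channel id="fromJms"/>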