
  • TCP outbound gateway

    We have some requirements around connecting to a proprietary backend via sockets. I would naturally assume that the TCP outbound gateway is suited for that. Another requirement that has come forward is that the socket has to be long-lived, as creating a new connection is very costly and the server only handles one request at a time. It would be great if you could give some insight on the following:

    1. How do we configure the socket to be long-lived or never expire?

    2. The problem with the above is that the socket might go stale for some unforeseen reason (a firewall, etc.). If it does, is there an event-driven way of re-establishing the connection automatically?

    3. The back end could expose anywhere between 2 and 60 ports, each of which could handle one connection. Is there an elegant way of load balancing among them without having to create 2-60 outbound gateways?

    4. The load balancing can be round robin based on whether the connection is available. The catch here is that the physical and logical connection might be valid but the back end might be unresponsive and hence time out. The ideal strategy would be to check the connection status by sending a handshake message; if the back end does not respond, the connection needs to be blacklisted. The load balancer, while handling the message, should be aware of the bad connection and avoid it altogether. This whole process needs to be asynchronous, because if the load balancer is responsible for checking the connection, it needs to wait for a timeout and then fail over, and by then the validity of the message might have been exceeded.

    5. How can I create my own load balancing strategy to do selective round robin based on the whitelisted connections?

    Hope the info I have given is good enough. Thanks a ton!

  • #2
    1. That is the default; the connection factory serves up a single socket with an infinite timeout. An attribute 'single-use' (default false), if true, indicates the socket should only be used for one request/response.
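    As a minimal sketch of the default long-lived setup (assuming the `int-ip` XML namespace; the host, port, and channel names here are hypothetical):

```xml
<!-- 'single-use' defaults to false, so the same socket is
     reused across requests; host/port are placeholders -->
<int-ip:tcp-connection-factory id="client"
    type="client"
    host="backend.example.com"
    port="5678"
    single-use="false"/>

<int-ip:tcp-outbound-gateway id="gateway"
    request-channel="requests"
    reply-channel="replies"
    connection-factory="client"/>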

    2. A failed socket will automatically be reconnected on the next message. You can also set 'client-mode' to "true", which will cause the framework to attempt to reconnect on a schedule, regardless of whether a new message arrives; if a message does arrive, an immediate reconnect attempt will be made, if necessary, regardless of the reconnect schedule.

    3. In 2.2.M1 (out soon), we have added a caching connection factory that will do exactly as you describe. A pool of connections is created on demand (up to a limit); the set of open connections is used in a round-robin fashion.
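    A rough sketch of how the caching factory wraps a plain client factory (bean names and host/port are placeholders; the two-argument constructor takes the delegate factory and a pool limit):

```xml
<int-ip:tcp-connection-factory id="client"
    type="client"
    host="backend.example.com"
    port="5678"/>

<bean id="cachedClient"
      class="org.springframework.integration.ip.tcp.connection.CachingClientConnectionFactory">
    <!-- delegate factory that actually opens sockets -->
    <constructor-arg ref="client"/>
    <!-- pool limit: at most 10 open connections -->
    <constructor-arg value="10"/>
</bean>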

    4. Setting the so-timeout on the connection should do what you want. When a connection is pulled from the pool (a FIFO queue), we send the request, then wait for the reply; if the socket times out, it will be closed. There is currently no automated retry here (to try another connection); you would have to do that with an error flow. However, we do have an open JIRA issue to implement a retry interceptor; it is scheduled for 2.2 (but it won't make it into the upcoming milestone 1).
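    The timeout mentioned above is set on the connection factory (values here are illustrative; so-timeout is in milliseconds):

```xml
<!-- reply reads time out after 10 seconds; a timed-out
     socket is closed rather than returned to the pool -->
<int-ip:tcp-connection-factory id="client"
    type="client"
    host="backend.example.com"
    port="5678"
    so-timeout="10000"/>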

    5. The caching connection factory uses a strategy interface 'Pool', and the default implementation is 'SimplePool', which uses a BlockingQueue to store available connections (hence round robin). We don't have a notion of an "open but unusable" state for a connection. You could implement your own Pool to do whatever you want; we just delegate to the pool to get a connection, and the pool calls back into the connection factory to get a new one when needed.
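    To illustrate the idea behind a custom pool, here is a self-contained sketch of FIFO-based round robin with a blacklist for connections that failed a health check. This is not the framework's actual Pool/SimplePool API; all class and method names here are illustrative only.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

/**
 * Sketch: available connections live in a FIFO queue; a connection is
 * borrowed from the head and returned to the tail, so borrowers cycle
 * through the pool (round robin). A blacklist lets the pool skip
 * entries known to be "open but unusable".
 */
public class RoundRobinPool {
    private final Deque<String> available = new ArrayDeque<>();
    private final Set<String> blacklist = new HashSet<>();

    public RoundRobinPool(String... connections) {
        for (String c : connections) {
            available.addLast(c);
        }
    }

    /** Borrow the next usable connection, skipping blacklisted ones. */
    public String borrow() {
        for (int i = 0; i < available.size(); i++) {
            String candidate = available.pollFirst();
            if (candidate == null) {
                break;
            }
            if (blacklist.contains(candidate)) {
                available.addLast(candidate); // keep it; it may recover later
                continue;
            }
            return candidate;
        }
        throw new IllegalStateException("no usable connection");
    }

    /** Return a connection to the tail of the queue (FIFO => round robin). */
    public void release(String connection) {
        available.addLast(connection);
    }

    /** Mark a connection unusable, e.g. after a failed handshake probe. */
    public void blacklist(String connection) {
        blacklist.add(connection);
    }

    public static void main(String[] args) {
        RoundRobinPool pool = new RoundRobinPool("c1", "c2", "c3");
        String first = pool.borrow();  // "c1"
        pool.release(first);
        String second = pool.borrow(); // "c2" -- round robin, not "c1" again
        pool.release(second);
        pool.blacklist("c3");
        String third = pool.borrow();  // "c1" -- "c3" is skipped
        System.out.println(first + " " + second + " " + third); // prints "c1 c2 c1"
    }
}
```

    An asynchronous health checker (as described in question 4) could call `blacklist` from its own thread when a handshake probe fails, so the borrower never waits on a dead connection.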

    So, I believe that with the M1 release you should have all you need for this situation, aside from a built-in retry mechanism, but that can be implemented with an error flow.

    Hope that helps.


    • #3
      Regarding #3 - if you mean 60 different ports, then the caching connection factory won't help (that is for multiple connections to the same port).

      You would probably need some custom code to generate the components rather than configure 60 different factories. There are a number of techniques that can be used for that.


      • #4
        Thank you, Gary, for the prompt response! I will digest the information you have given and see how I can best fit it. Currently we are constrained to using the 3.0.5.RELEASE version of Spring. Would Spring Integration 2.2 work with that, or would it require a higher version of Spring?


        • #5
          The current plan is for 2.2 to require Spring 3.1.

          That said, the CachingClientConnectionFactory is simply a wrapper for one of the existing client connection factories, so it shouldn't be hard for you to grab it from the 2.2 release and use it with 2.1.x.