TCP Adapters not processing requests after getting so-timeout

  • TCP Adapters not processing requests after getting so-timeout

    Hi

    We are using the TCP outbound and inbound channel adapters with an aggregator (following the spring-integration tcp-client-server example).

    The issue we are facing is this:

    We send 200 requests to the outbound adapter and listen for the responses on the inbound adapter. If we get a socket read timeout exception partway through, processing does not complete (the application stops processing further messages).

    When we set so-timeout to 0, all the messages are processed.

    The differences from the spring-integration tcp-client-server example are:
    1) we are using single-use="false"
    2) we have not configured an error channel on the inbound adapter
    3) we are hitting an external TCP server (meaning we don't write the server code)

    Any direction to help us understand this issue would be of great help.

    Thanks
    Srikanth
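
    A minimal sketch of the kind of configuration described above (channel names, host, port, and the finite timeout are hypothetical placeholders, not the poster's actual values):

        <int-ip:tcp-connection-factory id="client"
            type="client"
            host="some.external.host"
            port="1234"
            single-use="false"
            so-timeout="10000"/>  <!-- a finite read timeout that produces the SocketTimeoutException described -->

        <int-ip:tcp-outbound-channel-adapter
            channel="toTcp"
            connection-factory="client"/>

        <int-ip:tcp-inbound-channel-adapter
            channel="fromTcp"
            connection-factory="client"/>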

  • #2
    There is a note about this in the section on collaborating channel adapters:

    http://static.springsource.org/sprin...p.html#d4e4466

    Setting it to zero is the correct approach. Prior to 2.1 you couldn't do that, and had to set it to some arbitrary high number.

    I concluded some time ago that this default is incorrect, and it will be changed in 2.2.

    https://jira.springsource.org/browse/INT-2511
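
    On the connection factory, that amounts to something like this (a sketch; the bean id, host, and port are placeholders carried over from the sketch above):

        <int-ip:tcp-connection-factory id="client"
            type="client"
            host="some.external.host"
            port="1234"
            single-use="false"
            so-timeout="0"/>  <!-- 0 = no read timeout; prior to 2.1, use an arbitrarily large value instead -->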



  • #3
    Thanks a lot for the quick response; that helps.

    One more quick question: if we use so-timeout 0 and the server shuts down and comes back, will the spring-integration connection factory work without a restart (reloading the context)?



  • #4
    Yes, the client will reconnect on the next message.

    You can also set client-mode="true" (2.1 and later), and we will attempt to reconnect on a schedule rather than waiting for the first message.



  • #5
    Thanks for the help, Gary.



  • #6
    Sorry Gary, one more question:

    Is there a way to bring the whole TCP client-side connection pool up at startup (i.e., open and establish the connections at application startup)?



  • #7
    Yes (since 2.1); I mentioned this above.

    Set client-mode="true" on the outbound-channel-adapter.

    There is also a 'retry-interval', which determines how often we will attempt to reconnect if we can't connect, or after a failure. A new message will also trigger a reconnection attempt.

    I am not sure what you mean by pool; there is only one connection with single-use="false". 2.2.M1 (which should be out shortly) will have support for a pool of connections.
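
    A sketch of what that might look like on the adapter (the channel, factory reference, and interval value are examples only):

        <int-ip:tcp-outbound-channel-adapter
            channel="toTcp"
            connection-factory="client"
            client-mode="true"
            retry-interval="10000"/>  <!-- how often (ms) to retry the connection when it is down -->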
    Last edited by Gary Russell; Apr 21st, 2012, 08:00 AM.



  • #8
    Sorry, when I say pool I mean the pool-size property of the connection factory.

    We are using single-use="false", so to achieve a connection pool now, do we need to configure multiple connection factories and multiple outbound/inbound adapters, relying on the channel's round-robin load-balancing capability? Please suggest.



  • #9
    What you have will work; in 2.2 we will have a caching connection factory, but that's really intended for gateways, where a shared socket is reserved until the response is received. Having a pool in that situation will be a big help.

    Given that you are using collaborating channel adapters, which means requests and replies are completely asynchronous, it's not clear to me what a pool of connections to the same host will buy you.

    Can you explain your use case, specifically why you need a pool in this scenario?
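
    A sketch of the multi-factory approach described in #8 (bean ids and channel names are hypothetical; a direct channel with several subscribers load-balances round-robin by default):

        <int:channel id="toTcp"/>

        <int-ip:tcp-connection-factory id="client1" type="client"
            host="some.external.host" port="1234" single-use="false" so-timeout="0"/>

        <int-ip:tcp-connection-factory id="client2" type="client"
            host="some.external.host" port="1234" single-use="false" so-timeout="0"/>

        <!-- two subscribers on one channel; outbound messages alternate between the factories -->
        <int-ip:tcp-outbound-channel-adapter channel="toTcp" connection-factory="client1"/>
        <int-ip:tcp-outbound-channel-adapter channel="toTcp" connection-factory="client2"/>

        <!-- each factory gets its own inbound adapter so replies arriving on either socket are received -->
        <int-ip:tcp-inbound-channel-adapter channel="fromTcp" connection-factory="client1"/>
        <int-ip:tcp-inbound-channel-adapter channel="fromTcp" connection-factory="client2"/>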



  • #10
    Thanks for the helpful replies, Gary.

    The reason we are looking for a pool of connections is that we need to handle nearly 50 to 100 transactions per second. We thought that, even though we are using adapters, we might need a connection pool to handle this volume; please correct us if that's wrong.



  • #11
    Unless you have multiple routes between client and server (i.e. multiple network interfaces on each box and a separate switching topology), I doubt you will see much, if any, benefit from having multiple sockets (but see below). If you don't have multiple paths, all the bits will still travel serially on the same piece of wire. You may see a little buffer contention, but I doubt it will be measurable compared to network contention.

    This is all based on using channel adapters; it's a different story with the outbound gateway, which is why we're adding pools in 2.2. With adapters, once the message is put on the wire, the socket is available for the next message.

    How big are the messages?

    Even 100BASE-TX ethernet can nominally support 10 megabytes per second, so if your messages average 1k it can nominally support 9765 messages per second; 10k per message means 976 messages per second, etc. (These are nominal figures; in practice it will be less than this, depending on your infrastructure.)

    With gigabit ethernet, the numbers are 10x this.

    However, it also depends on how the server handles requests. If the server handles each request from a client connection on a single thread, then you will see a benefit from multiple connections. That is not a good design for an application like this (one that uses a shared socket), because throughput becomes bound by the time it takes the server to process a request. Of course, you may not have control over the server. If it's a Spring Integration application, with using-nio="true" on the server side each request is handled on a separate thread from a pool, and we immediately read the next request. With using-nio="false" you can get the same effect by making the channel an asynchronous channel.

    Hope this helps.
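
    If the server side were also a Spring Integration application, the relevant setting would be on the server connection factory (a sketch; the port is a placeholder):

        <int-ip:tcp-connection-factory id="server"
            type="server"
            port="1234"
            using-nio="true"/>  <!-- each request is handed to a pooled thread and the next request is read immediately -->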
    Last edited by Gary Russell; Apr 23rd, 2012, 08:12 AM.



  • #12
    Gary, our messages are small; the path you suggested will work well for us.

    The explanation is very clear and to the point, and it is very helpful.

    Thanks for all the clarifications.

