Spring Integration 2.1 request-reply benchmark tests showed very poor performance

  • Spring Integration 2.1 request-reply benchmark tests showed very poor performance

    hello,
    we did benchmark tests on Camel, SI and Mule. The performance figures for the request-reply over JMS scenario with Spring Integration were surprisingly poor (while the one-way messaging results were very good): 55 average transactions per second at best, with response time degrading to 1600 ms over a 10-minute test run. The graphs show overall performance degrading over time during the sustained load test (producers, consumers and the broker are deployed on different nodes on the network).

    When I looked at the ActiveMQ JMX, it appeared that SI creates a consumer and a connection for every reply, and these connections never go away unless the client application is completely shut down. There is no difference in the results when we scaled producers to 1, 10, 50 and 100. The metrics obtained for Mule and Camel for the same scenario are incomparably better: around 3-20 ms response time and much, much larger throughput.

    I wonder if anyone could suggest improvements to the SI configuration to speed up request-reply performance (and avoid creating a consumer for every reply, or at least make sure the reply handlers close their connection to the broker and exit once the reply is received).

    here is our original configuration:

    producer:
    Code:
    <si:gateway id="htmlProcessorGateway"
            service-interface="com.mycompany.si.demo.HtmlProcessorGateway"
            default-reply-channel="htmlProcessorReplyChannel"
            default-request-channel="htmlProcessorRequestChannel" />

    <si:channel id="htmlProcessorRequestChannel" />
    <si:channel id="htmlProcessorReplyChannel" />

    <int-jms:outbound-gateway id="htmlProcessorOutboundGateway"
            request-channel="htmlProcessorRequestChannel"
            request-destination-name="htmlProcessorRequest"
            reply-channel="htmlProcessorReplyChannel"
            connection-factory="connectionFactory"
            reply-destination-name="htmlProcessorReply"
            delivery-persistent="true" />
    consumer:

    Code:
    <bean id="htmlProcessorServiceActivator" class="com.mycompany.si.demo.HtmlProcessorServiceActivator" />

    <si:channel id="htmlProcessorRequestChannel" />

    <int-jms:inbound-gateway
            request-channel="htmlProcessorRequestChannel"
            request-destination-name="htmlProcessorRequest"
            connection-factory="connectionFactory"
            explicit-qos-enabled-for-replies="true"
            max-concurrent-consumers="10" concurrent-consumers="10" />
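    (The service-activator wiring that connects this bean to the request channel is not shown in the snippet above; presumably something along these lines is also present — sketched here for completeness, the exact element is assumed:)
    Code:
    <si:service-activator input-channel="htmlProcessorRequestChannel"
            ref="htmlProcessorServiceActivator" />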
    Many thanks
    Julia

  • #2
    Can you show your connectionFactory configuration? I assume you were not using the CachingConnectionFactory?

    Thanks,
    Mark



    • #3
      here is the connection factory configuration:
      Code:
      <bean id="mq01-jmsCF" class="org.apache.activemq.ActiveMQConnectionFactory">
          <property name="brokerURL" value="tcp://activemq.dev.mycompany.corp:61616?jms.useAsyncSend=true" />
      </bean>

      <bean id="mq01-pCF" class="org.apache.activemq.pool.PooledConnectionFactory">
          <property name="maxConnections" value="10" />
          <property name="maximumActive" value="10" />
          <property name="connectionFactory" ref="mq01-jmsCF" />
      </bean>

      <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
          <constructor-arg ref="mq01-pCF"/>
          <property name="cacheProducers" value="true"/>
          <property name="cacheConsumers" value="true"/>
          <property name="sessionCacheSize" value="5"/>
      </bean>
      It looks as if the caching connection factory has no effect on caching the consumers of the reply.

      I attach screenshots of the JMX console on the broker for the test: 10 producers, 6 consumers, connection pool 10, session cache 5. (Attachments: broker JMX screenshots.)

      With 10 producers, the list of reply connections grows to some 33,000 over a 10-minute test run, at a throughput of 56 transactions per second.

      I also attach JMeter graphs for this test. (Attachments: transactions per second; average response time.)



      • #4
        Can you try passing "mq01-jmsCF" directly to the CachingConnectionFactory and leaving out the PooledConnectionFactory that you currently have in the middle?



        • #5
          SI request-reply throughput still degrades after configuration adjustments

          Thank you for replying. We have in fact tried plugging the ActiveMQConnectionFactory directly into the CachingConnectionFactory, as well as using the PooledConnectionFactory without the CachingConnectionFactory. We have also experimented with:
          1) not having a named reply queue and using an ActiveMQ TemporaryQueue instead. This crashed the broker, as it could not cope with instantiating a TemporaryQueue for every reply message. (sic!)

          2) using a poller on the reply channel, e.g.
          <si:channel id="htmlProcessorReplyChannel" ><si:queue capacity="100" /></si:channel>

          3) plugging in a task executor:
          <si:channel id="htmlProcessorReplyChannel" >
          <si:dispatcher task-executor="executor"/>
          </si:channel>
          <task:executor id="executor" pool-size="250" queue-capacity="100" />

          4) sending request-reply messages synchronously (not using a Future)

          None of this has changed the overall trend, i.e. the degrading throughput.
          If you have any other ideas on how to modify the request-reply configuration, they would be very much appreciated.

          I am planning to run a break test to see how long it takes to bring the broker down. The broker is 16 cores / 8 GB RAM. I will post my findings. Please do not regard these posts as attacking SI; we are really trying to find a solution to improve SI's handling of request-reply messaging. To balance this rather disconcerting finding about Spring Integration, I should mention that Mule failed miserably in one-way messaging, while Camel is holding the load quite steadily in both one-way and two-way scenarios, even if not as fast as we hoped.



          • #6
            Julia,
            First, I wanted to thank you for the effort. Such feedback is very important. Now to the point.
            I've investigated your issue and was actually able to reproduce it and see the same results. What you see is the result of a misconfiguration: the connection factory on the outbound side has caching of consumers set to 'true'
            Code:
             <property name="cacheConsumers" value="true"/>
            Although it has a special meaning for certain scenarios (which we can discuss later), it is actually the root of your problem.
            Consumers are cached per message selector and Session.
            Here is the javadoc:
            Code:
            Specify whether to cache JMS MessageConsumers per JMS Session 
             instance (more specifically: one MessageConsumer per Destination, 
             selector String and Session). Note that durable subscribers will only 
             be cached until logical closing of the Session handle. 
            Default is "true". Switch this to "false" in order to always recreate 
             MessageConsumers on demand.
            The message selector is a String corresponding to the message ID, and since the message ID is different for each message sent, you never get a chance to reuse a cached consumer; a new consumer is created and cached every time you send a message. Performance degrades as the cache grows, and it eventually ends with an OutOfMemory error.

            So please set this attribute to 'false' and you should see totally different results.
            Please confirm that you do, as well as how these numbers compare to Camel.
            Also, it would be nice if you could post (e.g., as a ZIP) the relevant Camel configuration so we can compare it on our end as well.
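            For clarity, here is a sketch of the corrected factory definition (same bean names as your original config; only cacheConsumers changes):
            Code:
            <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
                <constructor-arg ref="mq01-pCF"/>
                <property name="cacheProducers" value="true"/>
                <!-- do not cache consumers: each reply consumer uses a unique
                     message-ID selector, so cached consumers are never reused -->
                <property name="cacheConsumers" value="false"/>
                <property name="sessionCacheSize" value="5"/>
            </bean>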

            Cheers and thanks once again.



            • #7
              Julia

              In addition, I've also raised this documentation JIRA https://jira.springsource.org/browse/INT-2667



              • #8
                Julia

                Also, I did some of my own testing of the request/reply differences between Camel and SI and I am getting quite a different picture, so I am attaching my tests so we can compare notes.
                I am actually getting incomparably poor numbers on the Camel side, even though I tried to keep things as simple as I could on both sides.
                For example, my Camel config is:
                Producer:
                Code:
                <route>
                	<from uri="direct:invokeJmsCamelQueue" />
                	<to uri="jms:jmsCamelQueue?replyTo=bar&amp;connectionFactory=connectionFactory" pattern="InOut" />
                </route>
                Consumer:
                Code:
                <route>
                	<from uri="jms:jmsCamelQueue?connectionFactory=connectionFactory" />
                	<to uri="bean:myService" />
                </route>
                ConnectionFactory
                Code:
                <bean id="connectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
                	<property name="maxConnections" value="10" />
                	<property name="maximumActive" value="10" />
                	<property name="connectionFactory">
                		<bean class="org.apache.activemq.ActiveMQConnectionFactory">
                			<property name="brokerURL" value="tcp://localhost:61616" />
                		</bean>
                	</property>
                </bean>
                The SI configuration is exactly as yours, with the exception that cacheConsumers is set to false on the outbound side.
                And here are the performance figures I am getting per 10 messages exchanged.

                Camel:
                Exchanged 10 messages in 10148 millis
                Exchanged 10 messages in 10067 millis
                Exchanged 10 messages in 10064 millis
                Exchanged 10 messages in 10062 millis
                Exchanged 10 messages in 10060 millis
                Exchanged 10 messages in 10065 millis
                Exchanged 10 messages in 10057 millis
                Exchanged 10 messages in 10062 millis


                SI:
                Exchanged 10 messages in 39 millis
                Exchanged 10 messages in 41 millis
                Exchanged 10 messages in 35 millis
                Exchanged 10 messages in 43 millis
                Exchanged 10 messages in 39 millis
                Exchanged 10 messages in 48 millis
                Exchanged 10 messages in 67 millis
                Exchanged 10 messages in 32 millis
                Exchanged 10 messages in 40 millis
                Exchanged 10 messages in 33 millis

                So, in Camel I am exchanging on average 1 message per second, whereas SI is averaging a little under 1000 per second. Another interesting thing is that SI actually improves over time under heavy load.
                Look at these numbers for SI per 1000 messages sent:

                Exchanged 1000 messages in 2462 millis
                Exchanged 1000 messages in 2203 millis
                Exchanged 1000 messages in 1776 millis
                Exchanged 1000 messages in 1558 millis
                Exchanged 1000 messages in 1520 millis
                Exchanged 1000 messages in 1976 millis
                Exchanged 1000 messages in 1243 millis
                Exchanged 1000 messages in 1245 millis
                Exchanged 1000 messages in 1240 millis
                Exchanged 1000 messages in 1274 millis
                Exchanged 1000 messages in 1599 millis
                Exchanged 1000 messages in 1143 millis
                Exchanged 1000 messages in 1137 millis
                Exchanged 1000 messages in 1134 millis
                Exchanged 1000 messages in 1104 millis
                Exchanged 1000 messages in 1071 millis

                Anyway, I am attaching my tests to ensure we are testing the same thing. As I said, I am very shocked about the Camel numbers (to the point where I suspect I am doing something wrong, but can't see what), so maybe you can clarify.



                • #9
                  Hello Oleg!

                  Your Camel route was indeed not configured optimally for high-throughput request/reply. On the producer side, you should configure the following two options (you can read more about the supported options here [1]):
                  receiveTimeout: you can speed up how often the Camel producer polls for reply messages using the receiveTimeout option. By default it's 1000 millis.
                  replyToType: allows explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. This option lets you use exclusive queues instead of shared ones. A shared queue uses a JMS selector to consume only the reply messages it expects; the drawback is that JMS selectors are slower.
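                  Combined on the producer endpoint URI, the two options above look roughly like this (a sketch; the receiveTimeout value is illustrative):
                  Code:
                  <route>
                  	<from uri="direct:invokeJmsCamelQueue" />
                  	<!-- replyToType=Exclusive avoids the per-message JMS selector of a
                  	     shared reply queue; receiveTimeout shortens the reply polling interval -->
                  	<to uri="jms:jmsCamelQueue?replyTo=bar&amp;replyToType=Exclusive&amp;receiveTimeout=5&amp;connectionFactory=connectionFactory" pattern="InOut" />
                  </route>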

                  With the correct options (and a slightly modified example, which I attached) I got the following numbers with Apache Camel:

                  [1] http://camel.apache.org/jms.html

                  Exchanged 10 messages in 35 millis
                  Exchanged 10 messages in 16 millis
                  Exchanged 10 messages in 31 millis
                  Exchanged 10 messages in 21 millis
                  Exchanged 10 messages in 31 millis
                  Exchanged 10 messages in 35 millis
                  Exchanged 10 messages in 18 millis
                  Exchanged 10 messages in 26 millis
                  Exchanged 10 messages in 16 millis
                  Exchanged 10 messages in 17 millis
                  Exchanged 10 messages in 18 millis
                  Exchanged 10 messages in 26 millis
                  Exchanged 10 messages in 54 millis
                  Exchanged 10 messages in 15 millis
                  Exchanged 10 messages in 25 millis
                  Exchanged 10 messages in 16 millis
                  Exchanged 10 messages in 20 millis
                  Exchanged 10 messages in 18 millis
                  Exchanged 10 messages in 20 millis
                  Exchanged 10 messages in 23 millis
                  Exchanged 10 messages in 29 millis
                  Exchanged 10 messages in 65 millis
                  Exchanged 10 messages in 16 millis
                  Exchanged 10 messages in 32 millis
                  Exchanged 10 messages in 20 millis
                  Exchanged 10 messages in 14 millis
                  Exchanged 10 messages in 21 millis
                  Exchanged 10 messages in 17 millis
                  Exchanged 10 messages in 26 millis
                  Exchanged 10 messages in 20 millis
                  Exchanged 10 messages in 21 millis
                  Exchanged 10 messages in 46 millis
                  Exchanged 10 messages in 17 millis
                  Exchanged 10 messages in 32 millis
                  Exchanged 10 messages in 22 millis
                  Exchanged 10 messages in 27 millis
                  Exchanged 10 messages in 22 millis
                  Exchanged 10 messages in 27 millis

                  and sending 1000 messages:

                  Exchanged 1000 messages in 2410 millis
                  Exchanged 1000 messages in 1751 millis
                  Exchanged 1000 messages in 1491 millis
                  Exchanged 1000 messages in 1131 millis
                  Exchanged 1000 messages in 1301 millis
                  Exchanged 1000 messages in 2365 millis
                  Exchanged 1000 messages in 735 millis
                  Exchanged 1000 messages in 665 millis
                  Exchanged 1000 messages in 679 millis
                  Exchanged 1000 messages in 800 millis
                  Exchanged 1000 messages in 936 millis
                  Exchanged 1000 messages in 603 millis
                  Exchanged 1000 messages in 576 millis
                  Exchanged 1000 messages in 593 millis
                  Exchanged 1000 messages in 576 millis
                  Exchanged 1000 messages in 606 millis
                  Exchanged 1000 messages in 580 millis
                  Exchanged 1000 messages in 595 millis
                  Exchanged 1000 messages in 585 millis
                  Exchanged 1000 messages in 575 millis
                  Exchanged 1000 messages in 567 millis
                  Exchanged 1000 messages in 588 millis
                  Exchanged 1000 messages in 578 millis
                  Exchanged 1000 messages in 579 millis
                  Exchanged 1000 messages in 571 millis
                  Exchanged 1000 messages in 584 millis
                  Exchanged 1000 messages in 582 millis
                  Exchanged 1000 messages in 612 millis
                  Exchanged 1000 messages in 601 millis
                  Exchanged 1000 messages in 602 millis
                  Exchanged 1000 messages in 615 millis
                  Exchanged 1000 messages in 592 millis
                  Exchanged 1000 messages in 588 millis
                  Exchanged 1000 messages in 585 millis

                  Now, it's again your turn... ;-)

                  Best,
                  Christian Müller
                  V.P. Apache Camel



                  • #10
                    Thanks Christian, I see now.
                    So what you are saying is that the default request/reply Camel configuration performs incomparably poorly compared to other frameworks (e.g., Spring Integration), but can be improved with some additional configuration. That also implies that if additional attributes were used in frameworks such as Spring Integration, their performance could be improved as well. Obviously at this point that is just speculation, but I'll let you know tomorrow.

                    Once again, thanks for the feedback



                    • #11
                      Also, could you please post the hardware specification of the machine you ran the test on? I just want to make sure we have fair numbers.
                      In any case, I'll run your tests on my machine to make sure they are.



                      • #12
                        http://camel.465427.n5.nabble.com/fyi-SI-td5716049.html



                        • #13
                          Hello Oleg!

                          No, that's not the case and not what I'm saying.
                          The default request/reply configuration in Camel uses temporary queues for the response. In your producer template, you only have to configure the endpoint uri "activemq:jmsCamelQueue" instead of "activemq:jmsCamelQueue?replyTo=bar&receiveTimeout=5&replyToType=Exclusive". With this configuration, I got the numbers below.
                          This means that even in the default configuration, Camel is almost two times faster than the Spring Integration solution.
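                          Concretely, the untuned producer route then reduces to just this (a sketch of the default configuration described above; with no replyTo option, Camel falls back to a temporary reply queue):
                          Code:
                          <route>
                          	<from uri="direct:invokeJmsCamelQueue" />
                          	<to uri="jms:jmsCamelQueue?connectionFactory=connectionFactory" pattern="InOut" />
                          </route>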

                          Exchanged 1000 messages in 2583 millis
                          Exchanged 1000 messages in 1710 millis
                          Exchanged 1000 messages in 1635 millis
                          Exchanged 1000 messages in 1184 millis
                          Exchanged 1000 messages in 1189 millis
                          Exchanged 1000 messages in 3420 millis
                          Exchanged 1000 messages in 712 millis
                          Exchanged 1000 messages in 683 millis
                          Exchanged 1000 messages in 659 millis
                          Exchanged 1000 messages in 824 millis
                          Exchanged 1000 messages in 825 millis
                          Exchanged 1000 messages in 609 millis
                          Exchanged 1000 messages in 577 millis
                          Exchanged 1000 messages in 705 millis
                          Exchanged 1000 messages in 589 millis
                          Exchanged 1000 messages in 597 millis
                          Exchanged 1000 messages in 580 millis
                          Exchanged 1000 messages in 570 millis
                          Exchanged 1000 messages in 587 millis
                          Exchanged 1000 messages in 579 millis
                          Exchanged 1000 messages in 593 millis
                          Exchanged 1000 messages in 600 millis
                          Exchanged 1000 messages in 628 millis
                          Exchanged 1000 messages in 597 millis
                          Exchanged 1000 messages in 577 millis
                          Exchanged 1000 messages in 649 millis
                          Exchanged 1000 messages in 592 millis
                          Exchanged 1000 messages in 587 millis
                          Exchanged 1000 messages in 642 millis
                          Exchanged 1000 messages in 601 millis

                          Best,
                          Christian



                          • #14
                            Checkmate :-)



                            • #15
                              @Christian

                              Sorry, you're right, I did get that wrong. I didn't realize that my URL parameters were overriding the default behavior.

                              I'll follow up with more
                              Cheers
