What's the best practice for using JMS in spring?

  • What's the best practice for using JMS in spring?

    For the SimpleMessageListenerContainer, it doesn't support XA transactions out of the box, which may be required in some cases.

    For the DefaultMessageListenerContainer, it uses a pull approach, which unnecessarily consumes CPU power and puts load on the JMS server; that's definitely not desirable for production usage.

    For the ServerSessionMessageListenerContainer, there is no documentation or sample showing its usage. And in my experimentation, I have encountered a message-loss issue that may be related to multi-threading (messages get lost in normal run mode, but if I debug it in Eclipse and step through one by one, it usually receives all the messages).

    Also, if I have N queues and topics, I have to create N ...Containers, just because each one can only take a single destination (see the sketch after this post).

    So can anybody tell me what the best practice is?

    Thanks!
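
    To illustrate the "N containers" point, here is a minimal sketch that creates one DefaultMessageListenerContainer per destination programmatically; the destination names, connectionFactory, and listener are hypothetical placeholders:

    import java.util.ArrayList;
    import java.util.List;

    import javax.jms.ConnectionFactory;
    import javax.jms.MessageListener;

    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class ContainerPerDestination {

        static List<DefaultMessageListenerContainer> createContainers(
                ConnectionFactory connectionFactory, MessageListener listener) {
            // Each listener container is bound to exactly one destination,
            // so N destinations require N container instances.
            String[] destinations = {"queue.a", "queue.b", "queue.c"}; // hypothetical names
            List<DefaultMessageListenerContainer> containers =
                    new ArrayList<DefaultMessageListenerContainer>();
            for (String destinationName : destinations) {
                DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
                container.setConnectionFactory(connectionFactory);
                container.setDestinationName(destinationName);
                // For topics, also call container.setPubSubDomain(true).
                container.setMessageListener(listener);
                container.afterPropertiesSet(); // initialize the container
                container.start();              // begin consuming
                containers.add(container);
            }
            return containers;
        }
    }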

  • #2
    Very good question.

    Based on what I understand, for stand-alone Java servers (non-J2EE), the DefaultMessageListenerContainer is the choice, as it has support for XA.

    However, as you mentioned, the receive method is basically a blocking call: the thread blocks until it receives a message or times out (if receiveTimeout is set). I haven't noticed any significant CPU usage as you mentioned. The resource (thread) is blocked, and if you have a significant number of JMS destinations this might be an issue; otherwise I don't think it is a big deal.

    I have used this container with TIBCO EMS, where you can set a "prefetch" value on the destination. (If it is set to 5, the client will receive 5 messages at a time instead of doing a round-trip for each message.) The socket-level connection is optimized that way, and you can tweak it.

    I am sure ActiveMQ and WebSphere MQ support this kind of feature (see the ActiveMQ sketch after this post).

    My 2 cents.

    Thanks,
    Murali K
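
    As a concrete illustration of prefetch, here is a minimal sketch for ActiveMQ, which exposes its prefetch policy on the broker URL; the URL and surrounding setup are hypothetical:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PrefetchExample {

        public static void main(String[] args) throws Exception {
            // Ask the broker to push up to 5 messages ahead of acknowledgement,
            // avoiding one network round-trip per message (hypothetical URL).
            ConnectionFactory cf = new ActiveMQConnectionFactory(
                    "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=5");
            Connection connection = cf.createConnection();
            connection.start();
            // ... create Sessions and MessageConsumers as usual ...
            connection.close();
        }
    }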



    • #3
      From my observation, MessageConsumer.receive() is called continuously once the container starts up, whether or not a message is coming. You can test this in an IDE by placing a breakpoint at invokeListener(). Theoretically, if you don't keep pulling messages, how would you know when they arrive?

      SimpleMessageListenerContainer uses a MessageListener, which is really an on-demand approach. So from my point of view it is a more scalable and reasonable production solution than the DefaultMessageListenerContainer, unless you have messages arriving every second.
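
      To make the pull-versus-push distinction concrete, here is a minimal sketch using the plain JMS API; both methods assume an already-created MessageConsumer:

      import javax.jms.JMSException;
      import javax.jms.Message;
      import javax.jms.MessageConsumer;
      import javax.jms.MessageListener;

      public class PullVsPush {

          // Pull: roughly what DefaultMessageListenerContainer does internally,
          // invoked over and over in its receive loop.
          static void pollOnce(MessageConsumer consumer) throws JMSException {
              Message msg = consumer.receive(1000); // wait up to 1 second
              if (msg != null) {
                  // ... dispatch to the listener ...
              }
          }

          // Push: what SimpleMessageListenerContainer registers; the provider's
          // own thread calls onMessage() when a message arrives.
          static void registerListener(MessageConsumer consumer) throws JMSException {
              consumer.setMessageListener(new MessageListener() {
                  public void onMessage(Message message) {
                      // ... handle the message ...
                  }
              });
          }
      }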



      • #4
        I guess you are configuring the container with a "receiveTimeout" value, and that is why you see invokeListener being called many times. Maybe that's the reason it was burning some CPU cycles.

        If you don't set the "receiveTimeout" value, then the equivalent JMS API call is consumer.receive(), which is a blocking call, meaning that the underlying socket connection is waiting for a message over the wire.

        Hope that helps.

        Thanks,
        Murali K



        • #5
          Well, it is actually the other way around:

          protected Message receiveMessage(MessageConsumer consumer) throws JMSException {
              return (this.receiveTimeout < 0 ? consumer.receive() : consumer.receive(this.receiveTimeout));
          }

          public static final long DEFAULT_RECEIVE_TIMEOUT = 1000;
          private long receiveTimeout = DEFAULT_RECEIVE_TIMEOUT;

          If you don't set it, it waits for 1 second by default, and the looping happens. If you set it to a negative value, it is a blocking call. But if you have configured a transactionManager, the receive will be invoked within a transaction:

          TransactionStatus status = this.transactionManager.getTransaction(this.transactionDefinition);
          doReceiveAndExecute(session, consumer, status);
          this.transactionManager.commit(status);

          Eventually, either the transaction times out or a message comes to the rescue. And the transaction will be a long one if messages are infrequent.

          I should say that with such a 1-second blocking time, the hit to the CPU may be minimal if you don't have a lot of queues/topics to listen to. But I am not sure about the impact on transaction-related resources.

          All in all, I am a little hesitant about the pulling approach.
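
          For reference, here is a minimal sketch of switching a container from the default 1-second poll to a fully blocking receive; the connectionFactory, queue name, and listener are hypothetical placeholders:

          import javax.jms.ConnectionFactory;
          import javax.jms.MessageListener;

          import org.springframework.jms.listener.DefaultMessageListenerContainer;

          public class BlockingReceiveConfig {

              static DefaultMessageListenerContainer build(
                      ConnectionFactory connectionFactory, MessageListener listener) {
                  DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
                  container.setConnectionFactory(connectionFactory);
                  container.setDestinationName("some.queue"); // hypothetical name
                  container.setMessageListener(listener);
                  // A negative value makes receiveMessage() fall through to
                  // consumer.receive(), which blocks until a message arrives.
                  container.setReceiveTimeout(-1);
                  return container;
              }
          }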



          • #6
            Yep, that's what I meant: when you set "receiveTimeout" to a negative value, it blocks forever.

            I haven't used any XA features yet, but I would like to hear from anyone who has, and their thoughts.

            Thanks,
            Murali K



            • #7
              While DefaultMessageListenerContainer does use a pull approach behind the scenes, it is still considered a recommendable approach in many environments. After all, any kind of listening approach always boils down to some kind of receive loop on the underlying socket connection; it's only really about the level at which it happens.

              DefaultMessageListenerContainer's advantage is the level of control that it gives, in particular with respect to thread management and transaction demarcation. DMLC is the only listener container that does not impose the thread management onto the JMS provider, that is, does not use/block JMS provider threads. It is also able to gracefully recover from JMS provider failure, such as connection loss. Furthermore, as noted, it is the only variant that supports external transaction managers, in particular for XA transactions.

              DefaultMessageListenerContainer is also the only listener container variant that is compatible with both managed and non-managed JMS, i.e. JMS in a J2EE environment as well as standalone JMS usage, since it sticks with the standard JMS API that is compatible with all of J2EE's JMS restrictions. SimpleMessageListenerContainer, on the other hand, uses "Session.setMessageListener", which is not supported in a J2EE environment.

              The only issue in a J2EE environment is thread management: On WebLogic and WebSphere, we recommend specifying a CommonJ WorkManagerTaskExecutor, which makes DMLC delegate to a server-managed thread pool, resulting in fully integrated thread management. Alternatively, simply stick with the default SimpleAsyncTaskExecutor, which works fine in many cases as well. (It will work just as well as Quartz does...) A sketch of the WorkManager setup follows this post.

              Regarding transaction management: If you specify a "transactionManager" on DefaultMessageListenerContainer, it will wrap its receive loop in a transaction. This is typically used with JtaTransactionManager, where no transactional resources are actually bound: It's only really marking the thread (and the listener's JMS Session) as "XA-active" and waiting for something to happen. This should be pretty efficient with any decent JTA provider.

              As a consequence, DefaultMessageListenerContainer is the variant to use in a J2EE environment, and also a primary choice for native JMS usage. It is often used with native JMS providers such as Tibco, talking to an external broker process (even when running in a J2EE server), typically without XA transactions. Alternatively, consider using SimpleMessageListenerContainer, but only for native JMS usage without XA, and only if your JMS provider gracefully handles thread management and connection recovery.

              FWIW, there is a further approach for JMS listening, for native JMS usage only but with the provider pushing incoming messages - and with full XA support: the JCA 1.5 message endpoint mechanism. It requires a JMS 1.1 provider that ships a JCA 1.5 connector (e.g. ActiveMQ), running in a local JCA container within the application. This is the approach that the Jencks project (http://jencks.org) takes, as a third-party add-on to Spring: worth considering if XA transactions are needed, as an alternative to DMLC.

              Juergen
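
              To tie the thread-management and transaction points together, here is a minimal sketch of a DMLC wired with a CommonJ WorkManagerTaskExecutor and a JtaTransactionManager; the JNDI work-manager name, queue name, connectionFactory, and listener are hypothetical placeholders:

              import javax.jms.ConnectionFactory;
              import javax.jms.MessageListener;

              import org.springframework.jms.listener.DefaultMessageListenerContainer;
              import org.springframework.scheduling.commonj.WorkManagerTaskExecutor;
              import org.springframework.transaction.jta.JtaTransactionManager;

              public class ManagedDmlcConfig {

                  static DefaultMessageListenerContainer build(
                          ConnectionFactory connectionFactory, MessageListener listener) throws Exception {
                      // Delegate the container's threads to a server-managed
                      // CommonJ WorkManager (hypothetical JNDI name).
                      WorkManagerTaskExecutor taskExecutor = new WorkManagerTaskExecutor();
                      taskExecutor.setWorkManagerName("wm/default");
                      taskExecutor.afterPropertiesSet();

                      // Have the container wrap its receive loop in a JTA transaction.
                      JtaTransactionManager transactionManager = new JtaTransactionManager();
                      transactionManager.afterPropertiesSet();

                      DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
                      container.setConnectionFactory(connectionFactory);
                      container.setDestinationName("some.queue"); // hypothetical name
                      container.setMessageListener(listener);
                      container.setTaskExecutor(taskExecutor);
                      container.setTransactionManager(transactionManager);
                      return container;
                  }
              }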



              • #8
                Be careful about caching levels on the DMLC

                It's worth noting that, in some cases, you may have to manually set the cacheLevelName property of a DMLC to CACHE_NONE (see the configuration sketch after this post). I have found this to be necessary when all of the following conditions are met:

                * when using the WebSphere MQ JMS Provider
                * when running inside of WebSphere 6
                * when using the WorkManagerTaskExecutor for thread management
                * when NOT using a transaction manager

                Having discussed this with IBM level-3 support, I have found out that caching handles to Session objects across threads is not supported. Caching at the DMLC level is not necessary in this scenario anyway, as the MQ JMS Provider manages its own pool of Session objects.
                Last edited by fiddlerpianist; Jan 2nd, 2007, 12:02 PM. Reason: clarification
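
                For illustration, here is a minimal sketch of the programmatic equivalent of cacheLevelName="CACHE_NONE"; the connectionFactory, queue name, and listener are hypothetical placeholders:

                import javax.jms.ConnectionFactory;
                import javax.jms.MessageListener;

                import org.springframework.jms.listener.DefaultMessageListenerContainer;

                public class NoCacheDmlcConfig {

                    static DefaultMessageListenerContainer build(
                            ConnectionFactory connectionFactory, MessageListener listener) {
                        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
                        container.setConnectionFactory(connectionFactory);
                        container.setDestinationName("some.queue"); // hypothetical name
                        container.setMessageListener(listener);
                        // Disable Connection/Session caching in the container and let
                        // the provider (here, WebSphere MQ) manage its own Session pool.
                        container.setCacheLevelName("CACHE_NONE");
                        return container;
                    }
                }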



                • #9
                  Hi,

                  Do you have an example listing showing the WebLogic WorkManager setup for the task executor, please? You also mentioned setting the cache level name.

