  • Shutdown problem with Jms listeners after Spring upgrade

    I am upgrading our application from Spring 2.0 to 2.0.7. However, I seem to have run into problems when trying to shut down the application (the context). This worked fine with Spring 2.0.

    We use WMQ SiB/JMS running on a WebSphere 6.1.0.9 instance.

    A race condition appears and two threads become blocked. The web container thread is blocked trying to shut down the context, and one of the listener threads gets blocked after it has been notified to shut down its JMS connection.

    I have been able to create a minimal webapp with a JMS listener bean; the few classes and config files are enclosed in a Maven project zip. (I have a full WAR available as well.)
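
    For reference, the listener wiring in the minimal webapp is just a plain DefaultMessageListenerContainer. A rough Java sketch of the equivalent setup follows; the queue name and the programmatic style are only illustrative, the real project wires this up in the XML config files:

    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class ListenerSetupSketch {

        // Illustrative only: the real app looks up the WMQ SiB/JMS ConnectionFactory
        // via JNDI and defines the container (bean id "myContainer") in XML.
        public static DefaultMessageListenerContainer createContainer(ConnectionFactory connectionFactory) {
            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.setDestinationName("TEST.QUEUE"); // hypothetical queue name
            container.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    System.out.println("received: " + message);
                }
            });
            container.afterPropertiesSet(); // initializes the container; listener threads start (autoStartup defaults to true)
            return container;
        }
    }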

    I did a thread dump of the JVM and got this for these two threads:


    3XMTHREADINFO "WebContainer : 1" (TID:0x33F43600, sys_thread_t:0x348F1988, state:B, native ID:0x00000B84) prio=5
    4XESTACKTRACE at com/ibm/ws/sib/api/jms/impl/JmsMsgConsumerImpl.close(JmsMsgConsumerImpl.java:8 77)
    4XESTACKTRACE at com/ibm/ws/sib/api/jms/impl/JmsSessionImpl.close(JmsSessionImpl.java:808)
    4XESTACKTRACE at com/ibm/ws/sib/api/jms/impl/JmsConnectionImpl.close(JmsConnectionImpl.java:710 )
    4XESTACKTRACE at org/springframework/jms/connection/ConnectionFactoryUtils.releaseConnection(Connectio nFactoryUtils.java:80)
    4XESTACKTRACE at org/springframework/jms/listener/AbstractJmsListeningContainer.shutdown(AbstractJms ListeningContainer.java:302)
    4XESTACKTRACE at org/springframework/jms/listener/AbstractJmsListeningContainer.destroy(AbstractJmsL isteningContainer.java:264)
    4XESTACKTRACE at org/springframework/beans/factory/support/DisposableBeanAdapter.destroy(DisposableBeanAdapte r.java:145)
    4XESTACKTRACE at org/springframework/beans/factory/support/DefaultSingletonBeanRegistry.destroyBean(DefaultSi ngletonBeanRegistry.java:347)
    4XESTACKTRACE at org/springframework/beans/factory/support/DefaultSingletonBeanRegistry.destroySingleton(Defa ultSingletonBeanRegistry.java:320)
    4XESTACKTRACE at org/springframework/beans/factory/support/DefaultSingletonBeanRegistry.destroySingletons(Def aultSingletonBeanRegistry.java:293)
    4XESTACKTRACE at org/springframework/context/support/AbstractApplicationContext.destroyBeans(AbstractAp plicationContext.java:706)
    4XESTACKTRACE at org/springframework/context/support/AbstractApplicationContext.doClose(AbstractApplica tionContext.java:684)
    4XESTACKTRACE at org/springframework/context/support/AbstractApplicationContext.close(AbstractApplicati onContext.java:651)
    4XESTACKTRACE at com/test/util/ContextLoader$1.close(ContextLoader.java:55)
    4XESTACKTRACE at org/springframework/web/context/ContextLoader.closeWebApplicationContext(ContextLo ader.java:336)
    4XESTACKTRACE at org/springframework/web/context/ContextLoaderListener.contextDestroyed(ContextLoad erListener.java:74)
    4XESTACKTRACE at com/ibm/ws/wswebcontainer/webapp/WebApp.notifyServletContextDestroyed(WebApp.java:7 23)

    JMS listener thread:
    3XMTHREADINFO "myContainer-2" (TID:0x36DF5600, sys_thread_t:0x348F128C, state:B, native ID:0x000008DC) prio=5
    4XESTACKTRACE at java/util/Collections$SynchronizedCollection.remove(Collecti ons.java:1599)
    4XESTACKTRACE at com/ibm/ws/sib/api/jms/impl/JmsSessionImpl.removeConsumer(JmsSessionImpl.java: 2483)
    4XESTACKTRACE at com/ibm/ws/sib/api/jms/impl/JmsMsgConsumerImpl.close(JmsMsgConsumerImpl.java:9 14)
    4XESTACKTRACE at org/springframework/jms/support/JmsUtils.closeMessageConsumer(JmsUtils.java:144)
    4XESTACKTRACE at org/springframework/jms/listener/DefaultMessageListenerContainer$AsyncMessageListen erInvoker.clearResources(DefaultMessageListenerCon tainer.java:899)
    4XESTACKTRACE at org/springframework/jms/listener/DefaultMessageListenerContainer$AsyncMessageListen erInvoker.run(DefaultMessageListenerContainer.java :865)
    4XESTACKTRACE at java/lang/Thread.run(Thread.java:801)

    The blocking only appears after a connection has been used, i.e. a message must have been put on the queue. If I step through the shutdown process in a debugger, the problem does NOT appear; it seems the threads then get enough time to complete the shutdown correctly.

    Is this a bug that appeared somewhere after 2.0, or what could it be?

    I'm thankful for any help!
    BR
    /Johannes

  • #2
    Hi,

    I have seen that you are not using a CommonJ WorkManager inside your DMLC. I'm not sure if this is the right solution, but in general I would recommend using the CommonJ WorkManager abstraction, so that the created threads are under the control of the application server.
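
    Something along these lines, sketched in Java (assuming the WorkManager is bound in JNDI at java:comp/env/wm/default, which you may have to adjust for your server; the usual way is of course to declare this in XML):

    import org.springframework.jms.listener.DefaultMessageListenerContainer;
    import org.springframework.scheduling.commonj.WorkManagerTaskExecutor;

    public class WorkManagerWiringSketch {

        // Assumed JNDI name of the CommonJ WorkManager; adjust to your environment.
        private static final String WORK_MANAGER_JNDI_NAME = "java:comp/env/wm/default";

        public static void useWorkManager(DefaultMessageListenerContainer container) throws Exception {
            WorkManagerTaskExecutor executor = new WorkManagerTaskExecutor();
            executor.setWorkManagerName(WORK_MANAGER_JNDI_NAME);
            executor.afterPropertiesSet(); // performs the JNDI lookup of the WorkManager

            // The listener's invoker threads are now scheduled through the application
            // server's WorkManager instead of Spring's own default executor.
            container.setTaskExecutor(executor);
        }
    }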

    Have a look here for more information on how to configure it:

    http://forum.springframework.org/sho...34&postcount=2

    rgds
    agim



    • #3
      Hi, thanks for your comments. I tried using the WorkManager as well, but that, unfortunately, did not solve the problem. The second blocked thread got this stack trace in the thread dump instead:

      3XMTHREADINFO "WorkManager.DefaultWorkManager : 0" (TID:0x36F56D00, sys_thread_t:0x34901734, state:B, native ID:0x000004A8) prio=5
      4XESTACKTRACE at java/util/Collections$SynchronizedCollection.remove(Collections.java:1599)
      4XESTACKTRACE at com/ibm/ws/sib/api/jms/impl/JmsSessionImpl.removeConsumer(JmsSessionImpl.java:2483)
      4XESTACKTRACE at com/ibm/ws/sib/api/jms/impl/JmsMsgConsumerImpl.close(JmsMsgConsumerImpl.java:914)
      4XESTACKTRACE at org/springframework/jms/support/JmsUtils.closeMessageConsumer(JmsUtils.java:144)
      4XESTACKTRACE at org/springframework/jms/listener/DefaultMessageListenerContainer$AsyncMessageListenerInvoker.clearResources(DefaultMessageListenerContainer.java:899)
      4XESTACKTRACE at org/springframework/jms/listener/DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:865)
      4XESTACKTRACE at org/springframework/scheduling/commonj/DelegatingWork.run(DelegatingWork.java:61)
      4XESTACKTRACE at com/ibm/ws/asynchbeans/J2EEContext.run(J2EEContext.java:1112)
      4XESTACKTRACE at com/ibm/ws/asynchbeans/WorkWithExecutionContextImpl.go(WorkWithExecutionContextImpl.java:195)
      4XESTACKTRACE at com/ibm/ws/asynchbeans/CJWorkItemImpl.run(CJWorkItemImpl.java:187)
      4XESTACKTRACE at com/ibm/ws/util/ThreadPool$Worker.run(ThreadPool.java:1469)


      So it seems I'm back to the original problem anyhow. Do you or anyone else have any more suggestions? To me it looks more and more like a Spring problem.



      • #4
        There is also another thread regarding this here: http://forum.springframework.org/showthread.php?t=42494

        So it seems it is not an isolated problem.



        • #5
          Hi, we ran into the same problem. Have a look at this page: http://www-1.ibm.com/support/docview...id=swg1PK43397. It says that the problem lies in WebSphere; IBM has recognized the deadlock too, and they have released a fix for this misbehaviour.

          We haven't tried to install the fix yet, because I only got the JVM thread dumps indicating the deadlock today.

          Hope that helps. Sorry for the bad English.
          Last edited by dakrul; Nov 12th, 2007, 11:27 AM.



          • #6
            As pointed out in that other thread as well: Have you tried this against Spring 2.5 or one of the recent Spring 2.0.8 snapshots? This very much looks like an issue we've fixed there already:

            http://jira.springframework.org/browse/SPR-4124

            This revision uses a shared lock for closing sessions and connections now, in order to avoid a deadlock within the JMS resources. The issue was originally reported against Oracle AQ, but might apply to WebSphere MQ as well...
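
            Roughly the idea (this is only an illustrative sketch of the shared-lock pattern, not the actual Spring source): both the container shutdown and the invoker's resource cleanup take the same monitor before closing any JMS object, so the two close() calls can no longer interleave inside the provider's own locking.

            import javax.jms.Connection;
            import javax.jms.Session;

            import org.springframework.jms.support.JmsUtils;

            // Illustrative sketch of the shared-lock idea from SPR-4124, not the real source.
            public class SharedCloseLockSketch {

                private final Object sharedConnectionMonitor = new Object();

                private Connection sharedConnection;

                // Web container thread: context shutdown path.
                public void shutdown() {
                    synchronized (this.sharedConnectionMonitor) {
                        JmsUtils.closeConnection(this.sharedConnection);
                    }
                }

                // Listener thread: clearing its consumer/session resources.
                public void clearResources(Session session) {
                    synchronized (this.sharedConnectionMonitor) {
                        JmsUtils.closeSession(session);
                    }
                }
            }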

            Juergen

