  • EAR using Spring JMS MDP fails to start on one of two WLS instances

    Hi folks,

    Maybe this is a silly question, but I'm completely stuck :-(
    I have an application that sends and receives JMS messages via a Spring JMS MDP and runs on a WLS 11 cluster (one admin and two managed instances). The application is deployed only on the managed instances, and so are the JMS resources: they are targeted at the WLS cluster.
    Now, when starting up WLS, I get an exception like this on one of the instances:
    2012-12-11 14:44:36,508 [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR org.springframework.web.context.ContextLoader - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'JMSQueue' defined in class path resource [applicationContext-techarch.xml]: Invocation of init method failed; nested exception is javax.naming.NameNotFoundException:
    Unable to resolve 'queue.AKKResponses'. Resolved 'queue'; remaining name 'AKKResponses'

    which causes the startup of the application to fail.
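
    For reference, the failing bean is presumably a plain JNDI lookup along these lines (a sketch only; the bean name and JNDI name are taken from the exception above, and the actual definition in applicationContext-techarch.xml may differ):

        <!-- Eager JNDI lookup of the queue. lookupOnStartup defaults to true,
             so the lookup runs during context initialization, i.e. while the
             server itself is still starting up. -->
        <bean id="JMSQueue" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName" value="queue/AKKResponses"/>
        </bean>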
    On the second instance, the application starts successfully. (That's the puzzling part!)

    When I log on to the Admin Console and stop and start the application again, the EAR starts fine on both instances.

    The application then keeps running fine until I restart WLS again, at which point the same error reappears.

    Does anybody have an idea how to prevent this error?

    The same thing happens on prod with 1 admin and 4 managed servers (distributed over two machines): the EAR starts on one managed server and fails on the other three - and there I don't have permission to restart the EAR :-(

    Best regards
    Christian

  • #2
    After some quick googling, it seems that WLS (maybe only older versions) has some issues with dots in JNDI names.



    • #3
      Hi Mark,

      Thanks for the reply, but I don't think it's an issue with dots in the name, because:
      1. The application starts fine when I restart it in the WLS console - that shouldn't work at all if WLS couldn't resolve the name because of the dots.
      2. I created the JNDI name without "." - I used a slash instead, so the queue is called "jms/AKKResponse". I guess the name in the exception message gets rewritten slightly by either WLS or Spring - WLS, I'd guess.

      I think it's rather a deployment-order issue: our WLS start script first starts the admin instance, then managed instance 2, and finally instance 1.
      The application start fails on instance 2 and succeeds on instance 1.
      Once all servers are up, I can log on to the admin console and restart the application without any problems.
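
      If that's the cause, a possible workaround (an untested sketch; the bean and JNDI names are only illustrative, assuming a JndiObjectFactoryBean as sketched above) would be to defer the JNDI lookup so the queue is resolved on first use rather than during context initialization:

          <!-- Defer the lookup: return a proxy now and resolve the queue from
               JNDI only when it is first used, by which time the JMS module
               should be targeted on this instance. -->
          <bean id="JMSQueue" class="org.springframework.jndi.JndiObjectFactoryBean">
              <property name="jndiName" value="queue/AKKResponses"/>
              <property name="lookupOnStartup" value="false"/>
              <property name="cache" value="false"/>
              <!-- a proxyInterface is required when lookupOnStartup=false -->
              <property name="proxyInterface" value="javax.jms.Queue"/>
          </bean>

      With lookupOnStartup=false the context can finish starting even if the binding isn't there yet; a NameNotFoundException would then only surface (and could be retried) when the proxy is first touched.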

      Best regards
      Christian
