
  • Spring instance of org.quartz.JobDataMap

    G'day,

    I have the following in my spring config:

    Code:
     <bean name="summaryTransferJob" class="org.quartz.JobDetail">
         <property name="name" value="summaryTransferJob"/>
         <property name="jobClass" value="com.bla.dpmr.jobs.SummaryTransferJob"/>
         <property name="durability" value="true"/>
         <property name="volatility" value="false"/>
         <property name="group" value="tips-jobs"/>
         <property name="jobDataMap">
             <bean class="org.quartz.JobDataMap">
                 <constructor-arg>
                     <util:map>
                         <entry key="requestType" value="hmmm"/>
                     </util:map>
                 </constructor-arg>
             </bean>
         </property>
     </bean>
    
    
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.springframework.scheduling.quartz.QuartzJobBean;

    public class SummaryTransferJob extends QuartzJobBean {
        protected void executeInternal(JobExecutionContext context)
                throws JobExecutionException {
            // job logic goes here
        }
    }
    The reason I'm using org.quartz.JobDetail (instead of JobDetailBean) is that I'm using JDBCJobStore, and XmlWebApplicationContext is not serializable.
    JobDetail also takes an org.quartz.JobDataMap instead of a plain java.util.Map, hence the configuration above.
    The real problem is that the JobDataMap constructor receives an empty map (I ran this through the debugger).
    In the end, the constructor of JobExecutionContext sees JobDetail.getJobDataMap() as empty.
    Have I done something silly in the configuration above?

    I have tried the following as well, without success:


    Code:
    <bean class="org.quartz.JobDataMap">
        <constructor-arg>
            <map>
                <entry key="requestType" value="hmmm"/>
            </map>
        </constructor-arg>
    </bean>


    Thanks in advance!

  • #2
    Maybe something else is wrong; I don't see anything wrong with that config (at least as far as the data map is concerned).



    • #3
      Strange....

      After recreating the schema (and redeploying the application), it all works. My question then is: how can this happen?

      My guess is that redeploying the application does not reconfigure the scheduler with the new Spring config, i.e. it picks up whatever is stored in the database tables.
      (I have experienced this several times; each time I had to recreate the schema.)
      Say something goes wrong in production and I redeploy the app with a new Spring config... what should I do? Wipe out the schema completely (start with fresh tables?) and start over with the new config?


      Note: I'm running the scheduler in a clustered environment, hence the JDBC job store.
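
      For context, the scheduler itself is wired up through Spring's SchedulerFactoryBean. A minimal sketch of the clustered JDBC part of that setup (bean names are illustrative, and it assumes a DataSource bean named dataSource defined elsewhere; as far as I understand, once dataSource is set, SchedulerFactoryBean supplies its own JDBC-backed job store, so only the clustering flags go into quartzProperties):

      Code:
      <bean id="scheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
          <!-- points Quartz at the QRTZ_* tables -->
          <property name="dataSource" ref="dataSource"/>
          <property name="quartzProperties">
              <props>
                  <prop key="org.quartz.scheduler.instanceId">AUTO</prop>
                  <prop key="org.quartz.jobStore.isClustered">true</prop>
              </props>
          </property>
      </bean>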



      • #4
        Ahh yes. You didn't say anything about using a DB-backed job store...

        You have to make sure that your scheduler factory bean is set to overwrite existing jobs:

        Code:
        <property name="overwriteExistingJobs" value="true"/>
        That way, any configuration changes you make to your job will take effect when you redeploy.
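
        In context, that flag goes on the same SchedulerFactoryBean that registers your jobs and triggers; a rough sketch (the trigger bean name here is just a placeholder):

        Code:
        <bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
            <!-- dataSource / quartzProperties omitted; overwrite re-registers the jobs from the Spring config on startup -->
            <property name="overwriteExistingJobs" value="true"/>
            <property name="jobDetails">
                <list>
                    <ref bean="summaryTransferJob"/>
                </list>
            </property>
            <property name="triggers">
                <list>
                    <!-- hypothetical trigger bean name -->
                    <ref bean="summaryTransferTrigger"/>
                </list>
            </property>
        </bean>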



        • #5
          Hmm

          I had that actually, and it complains about two triggers pointing to a different job, i.e. it doesn't wipe out the old job. I still had to clean the database, etc.

          Well... it could be something else I did wrong; I'll have a look again. Thanks a lot for your help!



          • #6
            Originally posted by netzone_tech
            I had that actually, and it complains about two triggers pointing to a different job, i.e. it doesn't wipe out the old job. I still had to clean the database, etc.

            Well... it could be something else I did wrong; I'll have a look again. Thanks a lot for your help!
            Make sure that you've named and assigned groups to both your jobs and your triggers, so that when you relaunch the app they appear to be the same jobs and triggers. They will get overwritten if the above flag is set, but they have to pass an 'equality' check that is based on the name/group.
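
            For example, a trigger bean that pins down its name and group (the bean id and cron expression here are placeholders only) could look like:

            Code:
            <bean id="summaryTransferTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
                <!-- name/group must stay stable across redeploys so the 'equality' check matches -->
                <property name="name" value="summaryTransferTrigger"/>
                <property name="group" value="tips-jobs"/>
                <property name="jobDetail" ref="summaryTransferJob"/>
                <property name="cronExpression" value="0 0/15 * * * ?"/>
            </bean>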

