  • Getting OptimisticLockingFailureException when restarting a job!!

    Hi,
    I am using the 1.0.1 release. I am getting an OptimisticLockingFailureException when the job restarts after a forced failure. The complete exception is attached to this post.

    My job configuration is:

    <bean id="smaCalculator" parent="simpleJob" scope="prototype">
        <property name="schedule" value="${batch.job.smaCalculator.schedule}" />
        <property name="steps">
            <list>
                <bean id="calculateSMA" parent="simpleStep" scope="prototype">
                    <property name="itemReader" ref="accountItemProvider" />
                    <property name="itemWriter" ref="itemWriter" />
                    <property name="commitInterval" value="3" />
                </bean>
            </list>
        </property>
    </bean>

    <bean id="serviceMethodInvoker" class="org.springframework.batch.item.adapter.ItemWriterAdapter">
        <property name="targetObject" ref="batchSMABuyingPowerCalculator" />
        <property name="targetMethod" value="calculateStartOfDaySMA" />
    </bean>

    <bean id="itemWriter" class="com.om.dh.batch.core.support.ExceptionThrowingItemWriterProxy">
        <property name="delegate" ref="serviceMethodInvoker" />
        <property name="throwExceptionOnRecordNumber" value="7" />
    </bean>

    <bean id="accountItemProvider" class="com.om.dh.batch.core.BatchDrivingQueryItemReader">
        <property name="saveState" value="true" />
        <property name="keyCollector">
            <bean class="org.springframework.batch.item.database.support.SingleColumnJdbcKeyCollector">
                <property name="jdbcTemplate" ref="jdbcTemplate" />
                <property name="sql">
                    <value><![CDATA[
                        select account_id from am_account where is_active=1
                        order by account_id
                    ]]></value>
                </property>
                <property name="restartSql">
                    <value><![CDATA[
                        select account_id from am_account where is_active=1
                        and account_id > ? order by account_id
                    ]]></value>
                </property>
            </bean>
        </property>
    </bean>
    I don't know what caused the exception.

    I have 165 records in the database. After the first run, the step execution table shows a commit count of 2, which is correct (with a commit interval of 3, two full chunks of three items were committed before the forced failure on record 7), but it shows an item count of 0, which is wrong. It should be 6.

    After restarting, the job processes a few records and then throws the OptimisticLockingFailureException and fails. It adds one more entry to the step execution table, showing a commit count of 40 and an item count of 0.

    Can you please explain what is going wrong here?

    regards,
    Ramkumar

  • #2
    Usually when that happens it's because the repository hasn't been wrapped in a transaction, as described in the user guide:

    http://static.springframework.org/sp...n.html#d0e3277

    • #3
      Lucas,
      Thanks for your quick reply, but I do have the transaction configuration in my context file. Here is what I have:

      <bean id="simpleJobRepository"
            class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean"
            p:databaseType="${batch.database}" p:dataSource-ref="dataSource" />

      <aop:config>
          <aop:advisor pointcut="execution(* org.springframework.batch.core..*Repository+.*(..))"
                       advice-ref="txAdvice" />
      </aop:config>

      <tx:advice id="txAdvice" transaction-manager="transactionManager">
          <tx:attributes>
              <tx:method name="create*" propagation="REQUIRES_NEW" isolation="SERIALIZABLE" />
              <tx:method name="*" />
          </tx:attributes>
      </tx:advice>

      <bean id="jobInstanceDao" class="org.springframework.batch.core.repository.dao.JdbcJobInstanceDao">
          <property name="jdbcTemplate" ref="jdbcTemplate" />
          <property name="jobIncrementer" ref="jobIncrementer" />
      </bean>

      <bean id="stepExecutionDao" class="org.springframework.batch.core.repository.dao.JdbcStepExecutionDao">
          <property name="jdbcTemplate" ref="jdbcTemplate" />
          <property name="stepExecutionIncrementer" ref="stepExecutionIncrementer" />
      </bean>

      <bean id="batchSqlJobExecutionDao" class="com.om.dh.batch.dao.impl.BatchJobDaoImpl">
          <property name="jdbcTemplate" ref="jdbcTemplate" />
          <property name="jobExecutionIncrementer" ref="jobExecutionIncrementer" />
      </bean>

      Am I missing something here?

      regards,
      Ramkumar

      • #4
        Your stack trace shows two transaction interceptors, which isn't normal (and may or may not be causing the problem). Is there something missing from the configuration? Do you have another tx:/aop: config section somewhere else in the context?

        • #5
          My batch application includes hundreds of other beans from other modules. Right now it is running in a JBoss server. I searched the other modules and found one more transaction manager defined with the same id, pointing to the same database I use. Is that the problem?

          regards,
          Ramkumar

          • #6
            I changed the id attribute of the other transaction manager in the context, but I am still getting the same exception.

            • #7
              Using two data sources and/or transaction managers in the same application might lead to this kind of problem. I'm a bit hazy on what you mean by "other modules" and on how your application is configured and deployed. You probably ought to try to isolate the problem by running the job launcher and job in a dedicated ApplicationContext. Can you write a simple standalone application (no other modules) and run the job that way, to see what happens?
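              For example, a minimal standalone context could look like the sketch below. It assumes your existing dataSource, transactionManager, simpleJobRepository and job definitions are combined with it, and only adds a launcher:

              <!-- minimal standalone context (sketch): combine with your existing
                   dataSource, transactionManager, simpleJobRepository and job beans -->
              <bean id="jobLauncher"
                    class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
                  <property name="jobRepository" ref="simpleJobRepository" />
              </bean>

              You could then launch it from the command line with CommandLineJobRunner (in org.springframework.batch.core.launch.support), passing the context location and the job bean name (smaCalculator) as arguments.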

              Also I'm suspicious of your use of prototype scope (probably not the problem here, but I don't think it buys you anything and you might be making assumptions that are incorrect).

              • #8
                Right now our batch application contains only the custom readers and writers. The writers in turn invoke various service classes, which are packaged as jars and included in the batch application. These jars contain both the classes and their corresponding Spring context files. This is what I meant by other modules.

                It looks like my problem is solved. The job that throws the exception processes account ids that are more than 10 digits wide, but the LONG_VAL column in the BATCH_STEP_EXECUTION_CONTEXT table is only 10 characters wide. I increased the size to 20 and it worked! I still don't know why that manifested as an OptimisticLockingFailureException, though.

                I am able to restart my job, but the item count still shows 0, which is wrong. The commit count of 58 is correct: I am processing 173 records with a commit interval of 3, and ceil(173 / 3) = 58. This looks like a problem with the 1.0.1 release. I tried reverting to the 1.0.0 release, but there the reader throws an ArrayIndexOutOfBoundsException when it tries to update the status; that appears to be fixed in 1.0.1.

                regards,
                Ramkumar

                • #9
                  Originally posted by ramkris View Post
                  It looks like my problem is solved. The job that throws the exception processes account ids that are more than 10 digits wide, but the LONG_VAL column in the BATCH_STEP_EXECUTION_CONTEXT table is only 10 characters wide. I increased the size to 20 and it worked! I still don't know why that manifested as an OptimisticLockingFailureException, though.
                  Cool. I'm glad it was that simple (there should have been an exception log somewhere that would have given us a hint about that - did you see it?). What database platform are you using? We can fix the LONG_VAL precision.

                  I am able to restart my job, but the item count still shows 0, which is wrong. The commit count of 58 is correct: I am processing 173 records with a commit interval of 3, and ceil(173 / 3) = 58. This looks like a problem with the 1.0.1 release. I tried reverting to the 1.0.0 release, but there the reader throws an ArrayIndexOutOfBoundsException when it tries to update the status; that appears to be fixed in 1.0.1.
                  It works for me in 1.0.1 and trunk, and I can't see why 1.0.0 would be any different. What is your "simpleStep" parent bean? Did you use the factory beans from Spring Batch?

                  • #10
                    Originally posted by Dave Syer View Post
                    Cool. I'm glad it was that simple (there should have been an exception log somewhere that would have given us a hint on that - did you see it?). What database platform are you using? We can fix the long val precision.
                    I just checked the MySQL scripts inside the 1.0.1 jars. The column has been changed to BIGINT, which solves this problem.



                    Originally posted by Dave Syer View Post
                    It works for me in 1.0.1 and trunk, and I can't see why 1.0.0 would be any different. What is your "simpleStep" parent bean? Did you use the factory beans from Spring Batch?
                    This is my simpleStep bean definition:

                    <bean id="simpleStep"
                          class="org.springframework.batch.core.step.item.StatefulRetryStepFactoryBean"
                          abstract="true">
                        <property name="allowStartIfComplete" value="true" />
                        <property name="transactionManager" ref="transactionManager" />
                        <property name="jobRepository" ref="simpleJobRepository" />
                        <property name="commitInterval" value="1" />
                    </bean>

                    • #11
                      It looks like you don't need the stateful retry factory bean. It might not make a difference, but can you try with a simple step factory bean?
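                      As a sketch, that would just mean swapping the factory class while keeping your property values (assuming the same bean names you posted):

                      <bean id="simpleStep"
                            class="org.springframework.batch.core.step.item.SimpleStepFactoryBean"
                            abstract="true">
                          <property name="allowStartIfComplete" value="true" />
                          <property name="transactionManager" ref="transactionManager" />
                          <property name="jobRepository" ref="simpleJobRepository" />
                          <property name="commitInterval" value="1" />
                      </bean>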

                      • #12
                        Originally posted by Dave Syer View Post
                        It looks like you don't need the stateful retry factory bean. It might not make a difference, but can you try with a simple step factory bean?
                        It shows the item count properly after changing to SimpleStepFactoryBean, so the problem seems to be in StatefulRetryStepFactoryBean. But now I am getting the exception below when it tries to commit.

                        org.springframework.transaction.IllegalTransactionStateException: Pre-bound JDBC Connection found! HibernateTransactionManager does not support running within DataSourceTransactionManager if told to manage the DataSource itself. It is recommended to use a single HibernateTransactionManager for all transactions on a single DataSource, no matter whether Hibernate or JDBC access.
                        at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
                        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:529)
                        Caused by: org.springframework.transaction.IllegalTransactionStateException: Pre-bound JDBC Connection found! HibernateTransactionManager does not support running within DataSourceTransactionManager if told to manage the DataSource itself. It is recommended to use a single HibernateTransactionManager for all transactions on a single DataSource, no matter whether Hibernate or JDBC access.
                        at org.springframework.orm.hibernate3.HibernateTransactionManager.doBegin(HibernateTransactionManager.java:434)
                        at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:377)
                        at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:263)
                        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:101)
                        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
                        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
                        at $Proxy121.saveOrUpdate(Unknown Source)
                        at org.springframework.batch.core.job.SimpleJob.execute(SimpleJob.java:162)
                        at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:86)
                        at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:49)
                        at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:81)
                        at com.om.dh.batch.core.ProxyJobBean.runJob(ProxyJobBean.java:153)
                        at com.om.dh.batch.core.ProxyJobBean.executeInternal(ProxyJobBean.java:89)
                        at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:86)
                        at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
                        ... 1 more
                        And I am doing the following to restart the job:

                        1. Registering a Quartz listener for all the jobs (sketched below).
                        2. Checking the job's status in jobWasExecuted() and rescheduling the job after a specified interval if it failed.
                        3. Checking in the QuartzJobBean whether the job is restarting, fetching the corresponding job parameters from the database, and passing them to the JobLauncher.

                        So restarting the job happens in a new thread. Is that causing the above exception?
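                        For reference, the listener registration looks roughly like this (a sketch only; RestartJobListener is a placeholder name for my class implementing org.quartz.JobListener):

                        <bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
                            <!-- globalJobListeners makes the listener fire for every scheduled job -->
                            <property name="globalJobListeners">
                                <list>
                                    <bean class="com.om.dh.batch.core.RestartJobListener" />
                                </list>
                            </property>
                        </bean>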

                        • #13
                          Starting a job in a new Thread is not normally a problem, as long as you understand where the transaction boundary is. The exception stack trace says what the problem is - you need to use HibernateTransactionManager if you are using Hibernate.
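                          In configuration terms that means one transaction manager along these lines (a sketch assuming a sessionFactory bean), referenced by both the step factory and the repository tx:advice:

                          <bean id="transactionManager"
                                class="org.springframework.orm.hibernate3.HibernateTransactionManager">
                              <!-- one transaction manager for all Hibernate and JDBC work on this DataSource -->
                              <property name="sessionFactory" ref="sessionFactory" />
                          </bean>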

                          The bug in StatefulRetryStepFactoryBean is unrelated (and irrelevant since you weren't using the features of the retry): http://jira.springframework.org/browse/BATCH-623.

                          • #14
                            Originally posted by Dave Syer View Post
                            Starting a job in a new Thread is not normally a problem, as long as you understand where the transaction boundary is. The exception stack trace says what the problem is - you need to use HibernateTransactionManager if you are using Hibernate.
                            I am actually using HibernateTransactionManager. I am confused: if a non-Hibernate transaction manager were involved, why do I not get this exception on the first run, and why only when restarting the job?

                            Originally posted by Dave Syer View Post
                            The bug in StatefulRetryStepFactoryBean is unrelated (and irrelevant since you weren't using the features of the retry): http://jira.springframework.org/browse/BATCH-623.
                            I will be adding the retry features to the StatefulRetryStepFactoryBean configuration, so I hope this will be fixed as part of the next release.

                            thanks,
                            Ramkumar
