
  • Skippable exception thrown caused items to be processed again?

    I have a Spring Batch job built on top of Spring Batch 2.1.8.RELEASE. To keep things simple, I simplified the job configuration as below. I noticed that a skippable exception thrown in the item processor (someProcessor below) caused the step execution to process the good items again. For example, if processing the 7th of 10 items throws SomeSkippableException (skip limit not reached), the step doesn't jump directly to the 8th item; instead, it reads and then processes items 1-6 again before processing the 8th item. Essentially items 1-6 are processed twice. I tried looking at the Spring Batch code, but didn't find any clue to explain this. Is this the expected behavior? Where is the code (Java file) that implements it? How can I avoid re-processing
    those good items? Thanks a lot!

    <job id="simpleJob" restartable="false">
        <step id="someStep">
            <tasklet>
                <chunk reader="someReader" processor="someProcessor" writer="someWriter"
                       commit-interval="10" skip-limit="100">
                    <skippable-exception-classes>
                        <include class="SomeSkippableException"/>
                    </skippable-exception-classes>
                </chunk>
                <listeners>
                    <listener ref="somePreparedStatementSetter"/>
                </listeners>
            </tasklet>
        </step>
    </job>

    <bean id="someReader"
          class="org.springframework.batch.item.database.JdbcCursorItemReader">
        <property name="dataSource" ref="dataSource" />
        <property name="rowMapper" ref="someItemRowMapper" />
        <property name="queryTimeout" value="1800" />
        <property name="preparedStatementSetter" ref="somePreparedStatementSetter" />
        <property name="sql">

  • #2
    That behaviour is mandatory for Spring Batch to isolate the bad item(s): it rolls back the chunk and then re-processes/re-writes each item with an effective commit interval of 1 to find the bad one (whether it failed in the processor or the writer).
    (Paraphrasing an answer on Stack Overflow to a similar question.)
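    To make the rollback-then-scan behaviour concrete, here is a plain-Java sketch (not Spring Batch code; the class, method, and item values are made up for illustration) of what the framework effectively does: a whole-chunk pass that rolls back on the first failure, followed by a one-item-per-transaction "scan" pass that isolates and skips the bad item. Items before the bad one go through the processor twice, which is exactly what the question observed.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkScanDemo {

        /**
         * Simulates one chunk: a whole-chunk pass that rolls back at the first
         * failure, then a one-item-at-a-time "scan" pass that skips the bad
         * item. Returns the list of processing attempts, in order.
         */
        static List<Integer> runChunk(List<Integer> chunk, int badItem) {
            List<Integer> attempts = new ArrayList<>();

            // Pass 1: the whole chunk in a single transaction.
            boolean failed = false;
            for (int item : chunk) {
                attempts.add(item);          // record every processing attempt
                if (item == badItem) {       // skippable exception -> chunk rolls back
                    failed = true;
                    break;
                }
            }

            // Pass 2 ("scan"): every item again, one per transaction, so the
            // framework can pinpoint and skip the bad one.
            if (failed) {
                for (int item : chunk) {
                    attempts.add(item);
                    // badItem fails again here and is skipped, not re-thrown
                }
            }
            return attempts;
        }

        public static void main(String[] args) {
            List<Integer> attempts = runChunk(List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), 7);
            // Items 1-6 appear twice: once in the rolled-back pass, once in the scan.
            System.out.println(attempts);
        }
    }
    ```

    Running this prints items 1-7 (the rolled-back pass), then 1-10 (the scan), matching the "1-6th items are processed twice" observation.
    
    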

    How to avoid re-processing:
    the first pass was rolled back, so in terms of persistence there is no real re-processing within the unit of work — nothing from the failed pass was ever committed.
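    That said, if the processor itself has side effects outside the transaction (e.g. calling an external service), Spring Batch 2.x has a `processor-transactional` attribute on `<chunk>`: setting it to `false` lets the framework cache processor output so items are not pushed through the processor again after a rollback. A sketch against the simplified job above (check the reference documentation for how this interacts with skip before relying on it):

    ```xml
    <job id="simpleJob" restartable="false">
        <step id="someStep">
            <tasklet>
                <!-- processor-transactional="false": cache processor results so
                     items are not re-processed after a rollback; only the
                     write is retried during the scan. -->
                <chunk reader="someReader" processor="someProcessor" writer="someWriter"
                       commit-interval="10" skip-limit="100"
                       processor-transactional="false">
                    <skippable-exception-classes>
                        <include class="SomeSkippableException"/>
                    </skippable-exception-classes>
                </chunk>
            </tasklet>
        </step>
    </job>
    ```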