  • Retries and equality

    What is the default retry behavior when retry is not configured or set in any way?

    I have a demo application that uses both HibernateCursorItemReader and HibernateItemWriter. It is quite similar to the Hibernate sample provided in Spring Batch. It also has a processor that increases an item's amount (see CustomerCreditIncreaseProcessor). The difference is that my application does not create a new item object when processing; it simply sets a new amount on the existing item, like

    Code:
    item.setBilledAmount(new BigDecimal(item.getBilledAmount().intValue() + 1));
    and the item class does not override the equals and hashCode methods.

    Now, when I test the job (say it processes 10 records with starting billed amounts of 0 for all, a commit interval of 3, and a tweak like failOnFlush just as in the sample so that one record will fail), I notice that for that particular batch that failed, the amounts end up like this:

    Code:
    AMOUNT
    0 <-- tweaked to fail, so it is expected not to be updated
    3 <-- should not fail, but why was this increased by 2 more?
    4 <-- should not fail, but why was this increased by 3 more?
    Obviously, the processor was called again on retries, but I'm not sure why, and the number of times it was called varies per item.

    Is retry doing this? Does my item not overriding equals and hashCode have anything to do with it?

    Btw, this does not happen when I remove the skippable setting in my job.

    Code:
    <!--<skippable-exception-classes>
      java.lang.Exception
    </skippable-exception-classes>-->
    I get the desired results, i.e. AMOUNT = 0 for the three items in that failed batch.

    It looks like when an item is skipped in a particular batch, the rest of the items in that batch are re-processed. Is that a correct observation?

    Thanks.

  • #2
    You are correct, and it has nothing to do with equals()/hashCode(). The assumption is that the ItemProcessor has to be idempotent - the "normal" situation is that it is doing work in a transactional setting which is going to be rolled back in case of an exception. It is safest not to mutate the input object but to create a new one and pass that to the writer.
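
    Something like this, for instance (just a sketch - the item class is from your application, so the property names here are assumptions):

    Code:
    import java.math.BigDecimal;

    import org.springframework.batch.item.ItemProcessor;

    public class CustomerCreditIncreaseProcessor implements
            ItemProcessor<CustomerCredit, CustomerCredit> {

        public CustomerCredit process(CustomerCredit item) throws Exception {
            // Build a new object instead of mutating the input, so re-processing
            // the same (unchanged) input after a rollback is harmless.
            CustomerCredit result = new CustomerCredit();
            result.setId(item.getId());
            result.setBilledAmount(new BigDecimal(item.getBilledAmount().intValue() + 1));
            return result;
        }
    }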

    Raise an issue and maybe we can think about adding an explicit flag to the step / tasklet / chunk processor to tell it not to re-process on rollback.

    The best practice by far though is to make sure all exceptions that cause a rollback are encountered before you get to the ItemWriter. That way, the problem item is identified immediately, and it can be removed from the chunk without any need to scan for the error.
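
    For instance, if you know what condition makes the write fail, check for it in the processor (again just a sketch - the null check is only a stand-in for whatever actually breaks your flush):

    Code:
    import org.springframework.batch.item.ItemProcessor;
    import org.springframework.batch.item.validator.ValidationException;

    public class ValidatingCreditProcessor implements
            ItemProcessor<CustomerCredit, CustomerCredit> {

        public CustomerCredit process(CustomerCredit item) throws Exception {
            if (item.getBilledAmount() == null) {
                // Fail here, before the writer, so the bad item is identified
                // immediately and can be skipped without scanning the chunk.
                throw new ValidationException("Missing billed amount: " + item);
            }
            return item;
        }
    }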

    • #3
      Thanks for the reply, Dave.

      Making the processor idempotent worked. However, I don't recall the documentation mentioning (or maybe I missed it) that the processor must be idempotent to avoid unexpected results.

      Regarding the default retry behavior, why does it re-process? Is that the more common use case?

      I think even if we try to handle all the exceptions before the ItemWriter, it could still throw exceptions. In the application I'm testing, the one being skipped is DataAccessException.

      Code:
      <skippable-exception-classes>
        org.springframework.dao.DataAccessException
      </skippable-exception-classes>
      This is only thrown in the ItemWriter part.

      I have raised a JIRA issue here: http://jira.springframework.org/browse/BATCH-1242.

      • #4
        Hi Dave,
        For a complex process implementation, such as one that calls an external system whose idempotency is not guaranteed, or one that simply keeps custom counters inside the process method, this forced re-processing becomes an issue.

        As a temporary solution, I'm planning to use a Map to store the processed items and their corresponding results. The Map is populated in the @AfterProcess listener method. The process() method itself first checks this Map to see whether the item has already been processed; if so, it returns the cached result immediately. The Map is cleared after every write via the @AfterWrite listener.

        I've tested this approach and it looks fine. However, I'm wondering if this is the best way to do it. What do you think?
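
        Roughly like this (just a sketch; the names are made up, and callExternalSystem() is only a stand-in for the real, non-idempotent work):

        Code:
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        import org.springframework.batch.core.annotation.AfterProcess;
        import org.springframework.batch.core.annotation.AfterWrite;
        import org.springframework.batch.item.ItemProcessor;

        public class CachingCreditProcessor implements
                ItemProcessor<CustomerCredit, CustomerCredit> {

            // processed input -> its result, for the current chunk only
            private final Map<CustomerCredit, CustomerCredit> processed =
                    new HashMap<CustomerCredit, CustomerCredit>();

            public CustomerCredit process(CustomerCredit item) throws Exception {
                // If the item was already processed in a rolled-back chunk,
                // return the cached result instead of repeating the work.
                if (processed.containsKey(item)) {
                    return processed.get(item);
                }
                return callExternalSystem(item);
            }

            @AfterProcess
            public void cacheResult(CustomerCredit item, CustomerCredit result) {
                processed.put(item, result);
            }

            @AfterWrite
            public void clearCache(List<? extends CustomerCredit> items) {
                // The chunk was written successfully, so the cache can be dropped.
                processed.clear();
            }

            private CustomerCredit callExternalSystem(CustomerCredit item) {
                // stand-in for the real, non-idempotent processing
                return item;
            }
        }
        The same bean is registered as the step's processor and also as a listener, so that the annotated callback methods are actually invoked.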

        Thanks.

        • #5
          It's fine as long as you recognise the limitations. If you call an external system which cannot deal with duplicates, then you can protect a single ultimately successful execution of the step this way, but a restart will not know about the processed items unless you store their keys in the ExecutionContext.

          By the way, if you can generate a unique key for each item to use as the key in your Map, then you don't need the Map at all, right? A Set will do fine, and it is easier to store in the ExecutionContext.
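
          Something along these lines, perhaps (just a sketch; it assumes the item has a unique Long id via getId() and that the processor mutates its input, so an already-processed item can simply be passed through):

          Code:
          import java.math.BigDecimal;
          import java.util.HashSet;
          import java.util.Set;

          import org.springframework.batch.item.ExecutionContext;
          import org.springframework.batch.item.ItemProcessor;
          import org.springframework.batch.item.ItemStream;

          public class DedupingCreditProcessor implements
                  ItemProcessor<CustomerCredit, CustomerCredit>, ItemStream {

              private static final String KEY = "processed.ids";

              private Set<Long> processedIds = new HashSet<Long>();

              public CustomerCredit process(CustomerCredit item) throws Exception {
                  if (!processedIds.add(item.getId())) {
                      // Already handled in a chunk that was rolled back:
                      // pass it through without repeating the work.
                      return item;
                  }
                  item.setBilledAmount(item.getBilledAmount().add(BigDecimal.ONE));
                  return item;
              }

              @SuppressWarnings("unchecked")
              public void open(ExecutionContext executionContext) {
                  // Restore the keys on a restart.
                  if (executionContext.containsKey(KEY)) {
                      processedIds = (Set<Long>) executionContext.get(KEY);
                  }
              }

              public void update(ExecutionContext executionContext) {
                  // Called before each commit, so the keys survive a restart.
                  executionContext.put(KEY, processedIds);
              }

              public void close() {
              }
          }
          You would also register the processor as a stream on the step (e.g. via the chunk's <streams> element) so that open() and update() are called.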

          You can see why we don't want to do this in the framework - it relies on key generation algorithms, and we had mixed success with that in 1.x, since people didn't really grasp the need.

          • #6
            Restart is not required at the moment, which made the approach much simpler. However, you are right: if it is required, then the ExecutionContext part must be considered, which complicates the solution.

            Will a Set alone be enough? I thought about using a Set at first, but then I realized I have to return a result (processed output) object per item (input). That is required by the process() method, if I'm not mistaken.

            • #7
              That depends on the implementation of your processor I suppose. I thought it was just mutating its input, in which case all you need to do is keep track of which ones to ignore on rollback.

              • #8
                Ah, I removed the mutating input implementation earlier. So yep, a Set will suffice if no new object is created for the result.

                Thanks for clearing things up on this re-processing behavior. I think I have some idea now of why re-processing is done on rollback.
