  • itemprocessor recalled only for first item in chunk when retryable exception

    hi,

    The title is quite indicative: my ItemProcessor's process method gets recalled when a retryable exception is thrown, but it is only recalled for the first element of the current chunk.

    The first part is understandable, but I cannot find an explanation for the second.

    Could you help me?

    Thanks in advance

  • #2
    The behaviour on retry depends on a lot of things, e.g. where the exception is thrown and whether the processor is marked as transactional. I assume from your confusion that the exception was thrown in the writer (in which case I would expect only the first item to be retried initially, but if that succeeds then the rest will be retried as well). You can find more in the books that are just about to be published, neither of which has anything to do with me, but the authors hang out here sometimes, so they might have a link for you.
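
    If you want to see exactly which items reach the writer on each attempt, an ItemWriteListener is a simple way to trace it. A rough sketch (untested; the class name is just a placeholder), which you can register via a <listeners> element on the tasklet:

    Code:
    import java.util.List;

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.springframework.batch.core.ItemWriteListener;

    // Logs every write attempt, so retries and scans show up in the output.
    public class TracingWriteListener implements ItemWriteListener<Object> {

        private static final Log logger = LogFactory.getLog(TracingWriteListener.class);

        public void beforeWrite(List<? extends Object> items) {
            logger.info("about to write " + items);
        }

        public void afterWrite(List<? extends Object> items) {
            logger.info("wrote " + items);
        }

        public void onWriteError(Exception exception, List<? extends Object> items) {
            logger.info("write failed for " + items + ": " + exception.getMessage());
        }
    }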



    • #3
      Yes, the exception is thrown in the writer.

      I have a log entry at the beginning of the ItemProcessor's process method, before committing, and when throwing the exception.

      The chunk size is 2 items.

      I get something like this:

      Executing step: [step1]
      processing item 3
      processing item 4
      Committing chunk with items [3, 4]
      throwing retry error in item 3
      processing item 3
      Committing chunk with items [3, 4]
      updated item 3
      updated item 4


      So, after the retry error is thrown I expected to get

      processing item 3
      processing item 4

      but the ItemProcessor's process method only gets called for item 3.


      As you said, the remaining items should be retried too? Or maybe the framework keeps the output already produced, and process will only be recalled for item 4 if item 4 fails, in which case it would be called again (a third time) for item 3 and a second time for item 4.

      Could it be that?

      I think not, because sometimes item 4 fails instead of item 3, and still only the log entry for item 3 appears again.


      Could I get an explanation?

      Thanks



      • #4
        Like I said, it depends. You didn't show your step configuration or the implementation of your processor and writer, so the expected behaviour is impossible for us to predict. You throw an exception in the writer on item "3", but only the first time it sees it?



        • #5
          I throw a retry exception at random.
          I have a service called "srv" which sets the exception type.
          The service also has a transactional method called "guardar".

          Here is the step configuration:

          <tasklet start-limit="2">
              <chunk reader="jpaItemReader"
                     processor="creditIncreaseProcessor"
                     writer="jpaCreditWriter"
                     skip-limit="20"
                     commit-interval="2"
                     retry-limit="3">
                  <retryable-exception-classes>
                      <include class="com.jee.cam.batchdemo.exceptions.RetryException"/>
                  </retryable-exception-classes>
                  <skippable-exception-classes>
                      <include class="com.jee.cam.batchdemo.exceptions.SkipException"/>
                  </skippable-exception-classes>
              </chunk>
          </tasklet>
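
          (As far as I understand the attributes: commit-interval="2" gives chunks of two items, which matches the log above, and retry-limit="3" allows at most three attempts for a retryable failure.)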

          The reader is the default JPA reader.

          The processor is:

          public IncrementoCliente process(CreditoCliente item) throws Exception {
              logger.info("processing item " + item.getId());
              IncrementoCliente result = new IncrementoCliente();
              result.setCredito(item.getCredito().add(FIXED_AMOUNT));
              result.setNombre(item.getNombre() + "b");
              result.setId(item.getId());
              return result;
          }


          and the writer is:

          public void write(List<? extends IncrementoCliente> items) throws Exception {

              logger.info("Committing chunk with items " + items);

              for (IncrementoCliente credit : items) {

                  Thread.sleep(2000);

                  Random generator = new Random();
                  int randomIndex = generator.nextInt(2);
                  if (randomIndex == 0) {

                      switch (srv.getException()) {

                      case CreditService.EXCEPTION_RETRY:
                          logger.error("throwing retry error in item " + credit.getId());
                          throw new RetryException("random retry exception in item: " + credit.getId());

                      case CreditService.EXCEPTION_SKIP:
                          logger.error("throwing skip error in item " + credit.getId());
                          throw new SkipException("random skip exception in item: " + credit.getId());
                      }
                  }

                  srv.guardar(credit);
              }
          }


          Tell me if you need anything more.

          Thanks for your interest



          • #6
            I think you may have discovered a bug - all the items will eventually get re-processed, but not until the retry is exhausted. Not sure what version this appeared in (what are you using?). Can you raise a ticket in JIRA?

            Your random number approach will make it hard to debug, so I would recommend switching to an item-based exception throwing algorithm if you want to trace what's going on.
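
            For example, something along these lines (just a sketch; the failOnId field and the one-shot flag are made up for illustration, reusing the RetryException from your configuration):

            Code:
            import java.util.List;

            import org.springframework.batch.item.ItemWriter;

            import com.jee.cam.batchdemo.exceptions.RetryException;

            // Fails exactly once, on a fixed item, so every run is reproducible.
            public class DeterministicFailWriter implements ItemWriter<IncrementoCliente> {

                private final Long failOnId;
                private boolean alreadyFailed = false;

                public DeterministicFailWriter(Long failOnId) {
                    this.failOnId = failOnId;
                }

                public void write(List<? extends IncrementoCliente> items) throws Exception {
                    for (IncrementoCliente credit : items) {
                        if (!alreadyFailed && failOnId.equals(credit.getId())) {
                            alreadyFailed = true; // the retry of this chunk will then succeed
                            throw new RetryException("deterministic failure on item " + credit.getId());
                        }
                        // persist the item here, e.g. srv.guardar(credit) as in your writer
                    }
                }
            }

            That way you always know which item fails and on which attempt, so the log becomes easy to read.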



            • #7
              The batch core library version is 2.1.1.RELEASE.

              I don't know how to raise a ticket. Is it enough to attach this forum thread? And where exactly?


              Maybe, if you have time, you could try to reproduce the behaviour.

              You can always throw the exception in the second item of a two-item chunk; then you will see that the process method gets called for the first item of the chunk, but not for the second.

              Thanks again



              • #8
                We noticed the same problem while setting up some integration tests to see whether Spring Batch can meet our organization's needs.

                The test is pretty simple and set up like this, so maybe you can use it to reproduce the behaviour:

                Code:
                <job id="integrationTest4" xmlns="http://www.springframework.org/schema/batch">
                    <step id="useCase4">
                        <batch:tasklet>
                            <batch:chunk reader="reader" processor="simpleProcessor" writer="failWriter"
                                    commit-interval="2" retry-limit="3">
                                <retryable-exception-classes>
                                    <include class="java.io.IOException"/>
                                </retryable-exception-classes>
                             </batch:chunk>
                        </batch:tasklet>
                    </step>
                </job>
                The input file only contains 10 records (easier for debugging) so a small commit-interval was chosen.


                The reader is a FlatFileItemReader. The writer simply extends FlatFileItemWriter to throw an IOException on every third write, and looks like this:
                Code:
                public class FailWriter<T> extends FlatFileItemWriter<T> {
                	private static Log logger = LogFactory.getLog(FailWriter.class);
                
                	private static int counter = 1;
                
                	@Override
                	public void write(List<? extends T> items) throws Exception {
                		counter++;
                
                		if (counter%3==0) {
                			throw new IOException("Randomizer");
                		}
                
                		super.write(items);
                	}
                }
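
                Note that the counter is also incremented when a failed chunk is retried, so every third invocation of write() throws, and the retry that follows a failure then succeeds.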

                After debugging it ourselves, we were able to make it work by changing the following piece of code in FaultTolerantChunkProcessor, line 217 and following:

                Code:
                if (!processorTransactional && cached != null && count.get() > scanLimit) {
                	/*
                	* If there is a cached chunk then we must be
                	* scanning for errors in the writer, in which case
                	* only the first one will be written, and for the
                	* rest we need to fill in the output from the
                	* cache.
                	*/
                
                	output = cached;
                }
                but we're not sure about possible consequences.

                Is this a possible fix for this problem? If so, can it be implemented in the next Spring Batch release?
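
                One way to check which items actually go through the processor again after a failure is a small counting processor (sketch only; the names are made up):

                Code:
                import java.util.HashMap;
                import java.util.Map;

                import org.springframework.batch.item.ItemProcessor;

                // Records how many times each item has been through process().
                public class CountingProcessor implements ItemProcessor<String, String> {

                    private final Map<String, Integer> calls = new HashMap<String, Integer>();

                    public String process(String item) throws Exception {
                        Integer n = calls.get(item);
                        calls.put(item, n == null ? 1 : n + 1);
                        return item;
                    }

                    public Map<String, Integer> getCalls() {
                        return calls;
                    }
                }

                After a run in which a chunk fails once, the call counts show exactly which items were re-processed.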
