Problem with ITEM_COUNT and COMMIT_COUNT

  • Problem with ITEM_COUNT and COMMIT_COUNT

    Hi,

    I am currently using Spring Batch version 1.1.2.RELEASE.

    In my job configuration I specified a skip limit of 2.

    I know that my writer is going to fail because I am inserting a field which is not there in the database. My expected behavior is that the job should terminate with errors after 2 records, and both ITEM_COUNT and COMMIT_COUNT should be zero.

    However, the actual results are different:
    1. I see ITEM_COUNT and COMMIT_COUNT keep increasing after every chunk.
    2. The job does not terminate after 2 records.
    3. On the console I see "ORA-00904: invalid identifier" for every write.
    4. The final job status is COMPLETED, and ITEM_COUNT and COMMIT_COUNT have values other than zero. However, no records are written into the destination database.
    5. Surprisingly, my processed flag is set to 1 for all records, indicating that they were processed successfully. I update the processed flag for the whole chunk once, in the afterChunk() method.

    FYI: if I make the same mistake in the reader, it works fine. It stops after 2 records and the job status is FAILED.

    Please explain whether I am doing anything wrong.

  • #2
    5. is actually not surprising. The ChunkListener gets a callback at the end of a transaction whether or not there were skips. You probably want to do the update of the process indicator in ItemWriter.flush() instead, having accounted for the ones that were skipped in a SkipListener, along the lines of the sketch below.
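
    A minimal sketch of what that could look like, against the Spring Batch 1.x ItemWriter and SkipListener contracts as best I recall them (write/flush/clear and onSkipInWrite); Format003Dao and markProcessed() are hypothetical names, not from this thread, and the Spring Batch imports are omitted as in the other snippets here:

    Code:
    import java.util.*;

    public class ProcessIndicatorWriter implements ItemWriter, SkipListener {

        private final ItemWriter delegate;             // e.g. the BatchSqlUpdateItemWriter
        private final Format003Dao dao;                // hypothetical DAO for the processed flag
        private final List buffer = new ArrayList();   // items written in the current chunk
        private final Set skipped = new HashSet();     // items the framework skipped

        public ProcessIndicatorWriter(ItemWriter delegate, Format003Dao dao) {
            this.delegate = delegate;
            this.dao = dao;
        }

        public void write(Object item) throws Exception {
            buffer.add(item);
            delegate.write(item);                      // let failures propagate to the framework
        }

        public void flush() throws FlushFailedException {
            delegate.flush();                          // only reached when the chunk really succeeds
            for (Iterator it = buffer.iterator(); it.hasNext();) {
                Object item = it.next();
                if (!skipped.contains(item)) {
                    dao.markProcessed((Format003) item);   // per item, not per chunk
                }
            }
            buffer.clear();
            skipped.clear();
        }

        public void clear() throws ClearFailedException {
            delegate.clear();
            buffer.clear();                            // chunk rolled back: forget these items
        }

        public void onSkipInRead(Throwable t) {
            // nothing to do for read skips
        }

        public void onSkipInWrite(Object item, Throwable t) {
            skipped.add(item);                         // remember it so flush() leaves it unmarked
        }
    }
    The same bean would be registered as the itemWriter and in the step's listener list so the skip callbacks get delivered.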

    I am less able to explain the apparent breach of your skip limit. It probably shows a bug, but not one that we found in our unit tests, and we do test scenarios similar to yours, so there may be a workaround.

    Did you set any other properties on the SkipLimitStepFactoryBean? Maybe some combination of the *Exceptions properties?



    • #3
      Following is the job configuration I have.
      I am using "BatchSqlUpdateItemWriter".

      Code:
      <bean id="step2" parent="skipLimitStep">
          <property name="itemReader" ref="itemReader" />
          <property name="itemWriter" ref="itemWriter" />
          <property name="commitInterval" value="100" />
          <property name="allowStartIfComplete" value="true" />
          <property name="skipLimit" value="1" />
          <property name="listeners">
              <list>
                  <ref bean="format003Listener" />
                  <ref bean="stepListener" />
              </list>
          </property>
      </bean>

      Let me know if you require any other information.



      • #4
        Does the writer actually throw an exception? You mentioned an ORA code, but there's no stack trace?



        • #5
          Stack Trace

          Hi Dave,

          I have attached the stack trace.

          In my writer I am printing the stack trace as below.

          Code:
          public void write(Object item) throws Exception {
              try {
                  Format003 format = (Format003) item;
                  delegate.write(format);
              }
              catch (Exception e) {
                  e.printStackTrace(); // this is where the stack trace is generated
                  logger.error("Error occurred while processing 003 format job " + e);
              }
          }
          Last edited by Kmisaal; Sep 2nd, 2008, 04:02 PM.



          • #6
            I might be mistaken, but it looks like you're swallowing the exception and not rethrowing:

            Code:
            public void write(Object item) throws Exception {
                try {
                    Format003 format = (Format003) item;
                    delegate.write(format);
                }
                catch (Exception e) {
                    e.printStackTrace(); // this is where the stack trace is generated
                    logger.error("Error occurred while processing 003 format job " + e);
                }
            }
            This is likely what is keeping the framework from failing, since it's seeing the write as successful due to the swallowed exception.

            Also, any time you post code or stack traces on the forum, please use a code tag.
            Last edited by lucasward; Aug 29th, 2008, 01:41 PM. Reason: typo on code tag



            • #7
              I tried removing the try/catch block from the writer:
              Code:
              public void write(Object item) throws Exception {
                  Format003 format = (Format003) item;
                  delegate.write(format);
              }
              Now whatever exception is thrown by delegate.write(format) will be rethrown.

              After doing this, the job stops after some time.
              One good result:
              1. The job status is FAILED, as expected.

              However, ITEM_COUNT = 102 and COMMIT_COUNT = 3. This is unexpected.
              My skip limit is 2 and my commit interval is 100.

              Therefore I expect it to stop after 2 records, with ITEM_COUNT = 0 and COMMIT_COUNT = 0.
              Last edited by Kmisaal; Aug 29th, 2008, 03:22 PM.



              • #8
                Is the item count correct? Were 102 records processed before the skip limit was exceeded? Is it just the commit count that's off?



                • #9
                  I queried the database and checked: it marked only 2 records as processed,
                  i.e. processed = 1 for only 2 records in the source table.

                  However, it shows ITEM_COUNT = 102 and COMMIT_COUNT = 3, which I think is inconsistent.

                  As I see it, if none of the records made it through to the destination, my processed flag should not be updated at all, not even for 2 records.
                  This may be because I am updating the processed flag in the afterChunk() method of my ChunkListener.

                  Please clarify the ITEM_COUNT and COMMIT_COUNT values for me.

                  Thank you for your input.



                  • #10
                    The item count is over 100 because you read and processed the first chunk (of 100) and it failed on the flush. That transaction rolled back, and the items were written one-by-one in the next transaction, and flushed immediately (see implementation of BatchSqlUpdateItemWriter).

                    In the next transaction the first item was re-processed, was skipped, and the transaction committed aggressively (we have debated whether this is necessary, and it is certainly conservative, but probably not a bad thing on the whole). Here I start to get a bit hazy, because you said you had set the skip limit to 2, but the XML you posted had it as 1.

                    Then in the next transaction the next record was detected as a previous failure and a skip would be attempted, but failed on account of the skip limit. Failed items also cause an increment of the item count because they are read again from the reader.

                    I count 3 or 4 transactions (depending on the real skip limit), but at least 2 rolled back, so I can't quite explain the commit count yet. The state of your business data at the end is as expected. What is your rollback count?
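
                    To make that concrete, one possible accounting (assuming the skip limit really was 1, as in the posted XML) looks like this:

                    Code:
                    Tx 1: items 1..100 read and written, flush fails -> rollback        ITEM_COUNT = 100
                    Tx 2: item 1 re-read, written singly, fails, skipped (skip no. 1),
                          transaction committed                                         ITEM_COUNT = 101
                    Tx 3: item 2 re-read, fails, skipping it would breach the limit
                          -> step fails                                                 ITEM_COUNT = 102
                    That would account for ITEM_COUNT = 102; COMMIT_COUNT = 3 is the part still missing an explanation.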

                    Your process indicator was updated in the wrong place (as I already pointed out), so it got updated whenever there was a commit, even if the item it was updating was skipped.
                    Last edited by Dave Syer; Aug 30th, 2008, 06:35 AM. Reason: clarity



                    • #11
                      Worthwhile to capture in the User Guide

                      I think this could be captured as a gotcha in the User Guide: don't consume the exception, but let the batch framework know that it has happened.



                      • #12
                        Hey Dave,

                        Thanks for explaining the ITEM_COUNT values.

                        My skip limit is set to 2; there was a mistake in the XML I posted. I am not sure how to find the rollback count. Can you now shed some light on COMMIT_COUNT = 3?

                        Also, please indicate how ITEM_COUNT should be used. Our monitoring team treats ITEM_COUNT as the number of records processed successfully,
                        which, going by your explanation, is not the correct meaning of ITEM_COUNT.

                        So ITEM_COUNT shows how many records were read? Is there any way to find out how many records have been processed successfully at a particular instant during the job execution? We need this for estimates such as what percentage is complete and how much is remaining. I thought ITEM_COUNT would help in some way, but the way it is updated is not what we expect.

                        Please tell us if there is a way.



                        • #13
                          If you are not expecting any failures, then the item count is fine. I guess if there are failures you have to subtract the rollback count times the commit interval, and the skip count, to get the number of items processed successfully. Or you could count them yourself in the writer and store the count in the execution context, or in your own separate table, as sketched below.
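
                          A rough sketch of the count-it-yourself idea, against the 1.x ItemWriter contract (write/flush/clear); ProgressDao and updateSuccessCount() are hypothetical stand-ins for "your own separate table", and the count is only an estimate if something later in the transaction rolls back:

                          Code:
                          public class CountingItemWriter implements ItemWriter {

                              private final ItemWriter delegate;
                              private final ProgressDao progressDao; // hypothetical DAO over your own tracking table
                              private int pending = 0;               // written in the current chunk, not yet flushed
                              private long succeeded = 0;            // total flushed successfully so far

                              public CountingItemWriter(ItemWriter delegate, ProgressDao progressDao) {
                                  this.delegate = delegate;
                                  this.progressDao = progressDao;
                              }

                              public void write(Object item) throws Exception {
                                  delegate.write(item);
                                  pending++;
                              }

                              public void flush() throws FlushFailedException {
                                  delegate.flush();                  // throws if the chunk fails
                                  succeeded += pending;              // count only after a successful flush
                                  pending = 0;
                                  progressDao.updateSuccessCount(succeeded);
                              }

                              public void clear() throws ClearFailedException {
                                  delegate.clear();
                                  pending = 0;                       // chunk rolled back: discard its count
                              }
                          }
                          Dividing that running total by the total number of source records gives the percent-complete estimate you asked about.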

                          Sorry for the hassle. We might be able to make some changes in 2.0 so that the counting better reflects what you expect. There is also a possibility that we might explicitly support percentage-complete estimation.
