Inconsistency in persisted Spring Batch metadata?

  • Inconsistency in persisted Spring Batch metadata?

    Hi all,

    I've come across something that might be an issue just by chance.

    I have an AOP logger that prints a log message every time saveOrUpdateExecutionContext is called on the JobRepository. That way I can show that the batch job is actually moving forward and processing elements (i.e. I log every commit that's supposed to happen).

    Today we have been having some network problems, and suddenly our batch processes started stalling during execution. We discovered that these long waits were due to DB access: our jobs were waiting for commits to complete. We keep the Spring Batch metadata in one DBMS while the actual work is done against another DBMS. That's why commits were working fine on the metadata side and having trouble on the other. Since I had my logger wired into the jobs, I had a shiny message stating that 50 elements had already been processed. And, of course, the Spring Batch metadata in the DB was displaying the same number of processed items, supposedly done in one single commit (commitCount was 1).

    That's when I jumped into the ItemOrientedStep code and discovered that it first calls the saveOrUpdateExecutionContext() method and afterwards tries to commit the current transaction. So here's my question: should this order be the other way around?

    Thanks for reading this long post and please forgive my stealing your time if this is irrelevant.
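    For what it's worth, here's a minimal plain-Java sketch of the mismatch I'm describing, with in-memory stand-ins for the two DBMSs (none of these names are Spring Batch API); it just shows how two independently committed stores can diverge when one commit fails:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the mismatch: two independent stores, each with
// its own commit and no two-phase coordination. Illustrative names only.
public class SplitCommitSketch {

    /** In-memory stand-in for one DBMS with trivial commit semantics. */
    static class Store {
        private final Map<String, Integer> committed = new HashMap<>();
        private final Map<String, Integer> pending = new HashMap<>();
        private final boolean healthy;

        Store(boolean healthy) { this.healthy = healthy; }

        void write(String key, int value) { pending.put(key, value); }

        void commit() {
            if (!healthy) {
                throw new IllegalStateException("commit stalled (network problem)");
            }
            committed.putAll(pending);
            pending.clear();
        }

        Integer committedValue(String key) { return committed.get(key); }
    }

    public static void main(String[] args) {
        Store metadataDb = new Store(true);   // metadata DBMS: reachable
        Store businessDb = new Store(false);  // business DBMS: commits hang/fail

        // One "chunk" of 50 items: both stores are written, then each
        // transaction commits separately.
        businessDb.write("itemsWritten", 50);
        metadataDb.write("executionContext.itemCount", 50);

        metadataDb.commit(); // succeeds: metadata now claims 50 items
        boolean businessCommitted = true;
        try {
            businessDb.commit(); // fails: business data never lands
        } catch (IllegalStateException e) {
            businessCommitted = false;
        }

        System.out.println(metadataDb.committedValue("executionContext.itemCount")); // 50
        System.out.println(businessCommitted); // false
    }
}
```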

  • #2
    ItemOrientedStep simply puts item processing and metadata persistence in the same transaction, fundamentally saying "do both or none", which makes perfect sense I suppose. My point is that the issue you described is a matter of transaction setup, not of ItemOrientedStep's execution logic.

    When working with multiple databases you need to use a transaction manager that supports two-phase commit (JTA) - maybe that's the catch?
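    For reference, in Spring XML a JTA setup might look something like the sketch below (the bean name is illustrative; the actual JTA implementation comes from your container or a standalone XA provider, and both DataSources need to be XA-capable):

```xml
<!-- Illustrative sketch only: JtaTransactionManager delegates to the
     JTA implementation supplied by the environment (e.g. an application
     server's UserTransaction). It replaces a single-resource
     transaction manager so that commits span both databases. -->
<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager" />
```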


    • #3
      I agree with Robert. Spring Batch meta-data is intended to be in the same transaction as the 'business transaction', which makes sense because things like the ExecutionContext are tied to whether or not the business transaction was successful. For example, if the FlatFileItemReader is storing the fact that it's on line 100 in the ExecutionContext, you would only want that persisted if the output from those lines was also persisted. As Robert said, if you have a real need to use two databases, you will need two-phase commit support.


      • #4
        That makes sense. We're actually not using 2 DBMS but 3, although we only write on one of them on each step (plus the Spring Batch metadata). We'll have to look for a JTA transaction manager, then.



        • #5
          Hi again.

          We've been talking a bit here, and we think that having a single transaction manager for both metadata and actual business data may not be a good solution: if you only have one transaction, what happens when it's rolled back? We think the batch meta-data won't end up being written, precisely because you have rolled the transaction back.


          • #6
            Having both the business and batch data updates in the same transaction means they should be in sync, and the transaction manager is supposed to handle that. When the transaction is rolled back, neither business nor batch data will be written - I don't see anything wrong with that.
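            A toy illustration of that guarantee, in plain Java rather than anything Spring-specific (the key names are made up): both writes go through the same buffered "transaction", so a rollback discards them together.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: business rows and batch metadata written in one transaction
// stay in sync — rollback discards both, commit persists both.
public class SingleTxSketch {

    static class Transaction {
        private final Map<String, String> database;
        private final Map<String, String> buffered = new HashMap<>();

        Transaction(Map<String, String> database) { this.database = database; }

        void write(String key, String value) { buffered.put(key, value); }
        void commit() { database.putAll(buffered); buffered.clear(); }
        void rollback() { buffered.clear(); }
    }

    public static void main(String[] args) {
        Map<String, String> db = new HashMap<>();

        Transaction tx = new Transaction(db);
        tx.write("business.order/42", "PROCESSED");          // business update
        tx.write("batch.executionContext.lineCount", "100"); // metadata update
        tx.rollback(); // chunk failed: neither write survives

        System.out.println(db.isEmpty()); // true
    }
}
```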


            • #7
              Keep in mind that much of the meta-data being stored relates to the business processing itself. For example, we keep an item count that is incremented every time an item is written. If you roll back the business transaction, this value needs to be rolled back as well. Furthermore, the value of the execution context is generally tied to the business transaction too, since it usually contains state such as the number of lines read, etc. Having said that, if there is a failure that causes the step to be aborted, the meta-data will be stored in its own transaction.


              • #8
                This is what we were missing:
                Originally posted by lucasward:
                if there is a failure [...] the meta-data will be stored in its own transaction.
                Thanks, robert and lucas.


                • #9
                  I've been looking around and I definitely can't find any place where a specific transaction for the jobRepository is created.

                  Looking at the configuration in simple-job-launcher-context.xml, I find the following AOP transaction management, which seems to be the key to all of this. Could you confirm whether my understanding of it is correct?
                  <aop:config>
                  	<aop:advisor
                  		pointcut="execution(* org.springframework.batch.core..*Repository+.*(..))"
                  		advice-ref="txAdvice" />
                  </aop:config>
                  <tx:advice id="txAdvice" transaction-manager="transactionManager">
                  	<tx:attributes>
                  		<tx:method name="create*" propagation="REQUIRES_NEW" isolation="SERIALIZABLE" />
                  		<tx:method name="*" />
                  	</tx:attributes>
                  </tx:advice>
                  What I see with this is that:
                  • every time you invoke any method named createXXX() on any instance of JobRepository, a new transaction is created and that method runs in it.
                  • every time you invoke any other method on any instance of JobRepository, it runs in the current transaction (as propagation REQUIRED is the default).

                  That way, going back to my earlier questions, and supposing we have a single transaction manager for both business logic and batch meta-data: when the business logic rolls back its transaction, itemCount and commitCount (among the other data that may be waiting to be committed) will not be persisted (because the rollback is performed in ItemOrientedStep). But, since there is another saveOrUpdate() before leaving the execution of the step, persisting that information (the one about the rollback) will be part of a new transaction that should have been created for that method.

                  Is all of this correct?
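                  To check my understanding of the two propagation settings in isolation, here's a toy transaction stack (plain Java, illustrative names, nothing here is Spring's actual implementation): a nested transaction opened as "new" commits on its own, while writes in the surrounding transaction vanish when it rolls back.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy model: each begin() starts an independent transaction (so a
// nested begin() models REQUIRES_NEW; REQUIRED would simply keep
// writing in the current one). Commit applies the buffer to the
// database regardless of what the outer transaction later does.
public class PropagationSketch {

    final Map<String, String> db = new HashMap<>();
    final Deque<Map<String, String>> txStack = new ArrayDeque<>();

    void begin() { txStack.push(new HashMap<>()); }
    void write(String k, String v) { txStack.peek().put(k, v); }
    void commit() { db.putAll(txStack.pop()); }
    void rollback() { txStack.pop(); }

    public static void main(String[] args) {
        PropagationSketch tm = new PropagationSketch();

        tm.begin();                                 // chunk transaction
        tm.write("business.row", "A");              // business write (REQUIRED)

        tm.begin();                                 // create* -> REQUIRES_NEW
        tm.write("batch.jobExecution", "STARTED");
        tm.commit();                                // commits independently

        tm.rollback();                              // chunk rolls back

        System.out.println(tm.db.containsKey("batch.jobExecution")); // true
        System.out.println(tm.db.containsKey("business.row"));       // false
    }
}
```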


                  • #10
                    Let me put it this way. Let's say you're reading in from a file and writing out to a database in your step. The file stores its current line number in the ExecutionContext. So, for a given chunk, you have items A, B, C, D, E, which correspond to lines 1, 2, 3, 4, 5. When you commit the output of A through E, you want the line number of 5 to also have been committed as part of that transaction. The item count and commit count are somewhat less important unless you're using them for definitive metrics on SLAs; if not, you could wait until the end to commit them, I suppose, although if something died midway through they might be pretty far off. If you really wanted to use two transaction managers without JTA, you would have to write some kind of best-effort transaction manager that called commit and rollback on both. But if something went wrong during that process you would have an invalid 'global transaction' between the two. Unless you have a strong need for two transaction managers, I would really just use one.
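                    The "best effort" approach could be sketched like this (illustrative interfaces, not a real PlatformTransactionManager): commit each resource in turn and roll back the rest on the first failure, accepting that anything already committed stays committed.

```java
import java.util.ArrayList;
import java.util.List;

// Best-effort multi-resource commit: no prepare phase, so a late
// failure after an earlier success leaves the stores diverged — the
// "invalid global transaction" case. That is exactly why this is
// weaker than JTA two-phase commit.
public class BestEffortCommit {

    interface Resource {
        void commit();
        void rollback();
    }

    /** Returns true only if every resource committed. */
    static boolean commitAll(List<Resource> resources) {
        List<Resource> uncommitted = new ArrayList<>(resources);
        for (Resource r : resources) {
            try {
                r.commit();
                uncommitted.remove(r);
            } catch (RuntimeException e) {
                // Roll back whatever has not committed yet; resources
                // that already committed cannot be undone.
                for (Resource u : uncommitted) {
                    u.rollback();
                }
                return false;
            }
        }
        return true;
    }
}
```

The failure branch is the whole problem: once the first resource has committed, a failure in the second leaves no way to restore global consistency.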


                    • #11
                      Yep, I do understand what you're saying and agree that one transaction manager is the best solution, now that you've said the metadata will use its own transaction after a rollback. I just want to understand how and when these two transactions are stopped and started when a chunk has to be rolled back, so that I can see when the information stating that a step and/or job failed is persisted.


                      • #12
                        It's not after a rollback that it uses its own transaction; rather, after a step fails, the storage of 'step status = failed' will be in its own transaction.

                        In 1.1, you'll probably see a change so that the SkipListener's onSkipInWrite will be called in its own transaction, but right now only the ItemRecoverer is called that way.