  • onSkipInWrite() not called


    I'm using Spring Batch 2.1.8 and myBatis 3.0.5 on Oracle. I have the following configuration:

    <job id="myJob">
        <step id="step1">
            <tasklet>
                <chunk reader="myReader" processor="myProcessor" writer="myWriter"
                       commit-interval="1" skip-limit="10">
                    <skippable-exception-classes>
                        <include class="java.lang.Exception"/>
                    </skippable-exception-classes>
                </chunk>
                <listeners>
                    <listener ref="logListener"/>
                </listeners>
            </tasklet>
        </step>
    </job>

    The logListener class implements the onSkipIn* methods using annotations, and when I get a write exception (e.g. "missing jdbcType for null parameters") the onSkipInWrite() method is called. Likewise, if the exceptions are not skippable, the onWriteError() method is called.

    However, when the exception is thrown at commit time (e.g. "duplicate key constraint violated"), the listener is not called, even though the exception is correctly skipped and the job completes successfully. The same problem occurs if the exceptions are not skippable: the onWriteError() method is not called.

    Where's the problem? Or, how can I catch an exception at commit time?

    Thanks in advance for any suggestions!
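For reference, a minimal sketch of such an annotation-based listener. The two annotations below are local stubs standing in for Spring Batch's org.springframework.batch.core.annotation.OnSkipInWrite and OnWriteError, so the sketch compiles on its own; the method signatures follow the SkipListener and ItemWriteListener callbacks.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.List;

// Local stubs for Spring Batch's listener annotations (the real ones live
// in org.springframework.batch.core.annotation).
@Retention(RetentionPolicy.RUNTIME) @interface OnSkipInWrite {}
@Retention(RetentionPolicy.RUNTIME) @interface OnWriteError {}

// Sketch of a logListener bean like the one referenced in the job config.
class LogListener {
    final List<String> messages = new ArrayList<String>();

    // Called once per item skipped during the write phase.
    @OnSkipInWrite
    public void onSkipInWrite(Object item, Throwable t) {
        messages.add("skipped in write: " + item + " (" + t.getMessage() + ")");
    }

    // Called when the writer throws; the whole failed chunk is passed in.
    @OnWriteError
    public void onWriteError(Exception ex, List<?> items) {
        messages.add("write error on " + items.size() + " item(s): " + ex.getMessage());
    }
}
```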

  • #2
    No idea for this issue?

    Another question: can I skip only the bad item, rather than all the items in the chunk? For example, by retrying with a commit interval of 1?

    Thanks for your help!


    • #3
      Dear alby,

      I have implemented ItemWriteListener directly in my writer, without annotations. In that setup, I remember the onWriteError() method was invoked correctly when an error occurred.

      Anyway, as to your second question: the way to skip the bad item is simply to return null instead of the item from the process() method of your ItemProcessor. SimpleChunkProcessor (used by ChunkOrientedTasklet) then passes only the non-null items to the ItemWriter; this does not depend on the commit interval at all. Hence, you could consider checking for bad items in the ItemProcessor layer, before they are handed to the ItemWriter; if that is possible, I guess this issue can be easily solved. Aside from that, when using Oracle, the duplicate-key problem can be solved effortlessly with a MERGE statement, I think.
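The suggestion above can be sketched as follows. This is a minimal, self-contained illustration: the one-method interface mirrors Spring Batch's org.springframework.batch.item.ItemProcessor contract, and Order with its null-key check is a hypothetical stand-in for the real item type and validation rule.

```java
// Simplified stand-in for Spring Batch's ItemProcessor<I, O> contract.
interface ItemProcessor<I, O> {
    O process(I item) throws Exception;
}

// Hypothetical item type for illustration.
class Order {
    final String key;   // e.g. the column under the unique constraint
    Order(String key) { this.key = key; }
}

// Returning null from process() tells the chunk-oriented step to drop the
// item: SimpleChunkProcessor filters out nulls before calling the ItemWriter.
class BadItemFilteringProcessor implements ItemProcessor<Order, Order> {
    @Override
    public Order process(Order item) {
        if (item.key == null) {
            return null;   // filter the bad item; it never reaches the writer
        }
        return item;
    }
}
```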



      • #4
        Thank you Chory for your answer.

        The problem is not related to the ItemWriteListener, because it works correctly for normal executions (the onWriteError() method too). The issue only occurs when the exception is thrown at commit time.
        In section 6.9.3 of the documentation (Database ItemWriters), I read that, e.g. for a DataIntegrityViolationException, "there is no way to know that an error will occur until [the items] are written out". The exception causes a transaction rollback and an item skip (if configured). This is clear to me. And I can also understand that onWriteError() is not called, because "as far as the step is concerned, all the items will be written out successfully"...

        However, it's unclear to me why, if an item is skipped (because I configured it that way), onSkipInWrite() is not called. Why not? Furthermore, all the items in the chunk are skipped (because of the transaction rollback).

        I agree with you on validating items before passing them to the ItemWriter, but this is not always possible (e.g. insert with an inner select that returns null).

        So, I don't know; I'm thinking of defining a retry on these exceptions in order to rewrite each item in the chunk individually...
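A retry of that kind can be expressed in the Spring Batch 2.1 XML namespace. A sketch, assuming the commit-time failure surfaces as Spring's DataIntegrityViolationException; the retry-limit value is illustrative:

```xml
<chunk reader="myReader" processor="myProcessor" writer="myWriter"
       commit-interval="1" skip-limit="10" retry-limit="3">
    <skippable-exception-classes>
        <include class="org.springframework.dao.DataIntegrityViolationException"/>
    </skippable-exception-classes>
    <retryable-exception-classes>
        <include class="org.springframework.dao.DataIntegrityViolationException"/>
    </retryable-exception-classes>
</chunk>
```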



        • #5
          Spring Batch and myBatis

          I tried the same job with iBatis and it works! It seems that the problem lies in myBatis, or simply in my ItemWriter implementation.

          I'm new to iBatis/myBatis, so I defined my own MyBatisBatchItemWriter starting from the "official" IbatisBatchItemWriter; but in myBatis the SqlMapClientCallback/SqlMapExecutor classes are absent.

          This is an excerpt of IbatisBatchItemWriter#write method:
          List<BatchResult> results = (List<BatchResult>) sqlMapClientTemplate.execute(
                  new SqlMapClientCallback() {
                      public Object doInSqlMapClient(SqlMapExecutor executor) throws SQLException {
                          for (T item : items) {
                              executor.update(statementId, item);
                          }
                          try {
                              return executor.executeBatchDetailed();
                          } catch (BatchException e) {
                              throw e.getBatchUpdateException();
                          }
                      }
                  });
          And this is what I wrote for myBatis:
          public void setSqlSessionFactory(SqlSessionFactory sqlSessionFactory) {
              if (sqlSession == null) {
                  this.sqlSession = new SqlSessionTemplate(sqlSessionFactory, ExecutorType.BATCH);
              }
          }

          public void write(final List<? extends T> items) {
              for (T item : items) {
                  sqlSession.update(statementId, item);
              }
          }
          Clearly, my implementation is not correct...
          How can I rewrite the method? Or, has anyone ever used myBatis in a Spring Batch project? Are myBatis readers/writers planned for future releases? If not, why?
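One reason batch mode defers errors is that the JDBC batch only executes when the queued statements are flushed. A possible direction, sketched here with a stubbed session type so it stands alone: flush the batch inside write(), so that a duplicate-key failure is raised inside the step (where skip/retry and listeners apply) instead of at commit time. The BatchSession interface below is a simplified stand-in; the real MyBatis 3 SqlSession exposes update(String, Object) and, in recent versions, flushStatements() returning List&lt;BatchResult&gt;.

```java
import java.util.List;

// Simplified stand-in for the MyBatis SqlSession calls used below.
interface BatchSession {
    void update(String statementId, Object parameter);
    List<String> flushStatements(); // stand-in for List<BatchResult>
}

// Sketch of a batch-mode writer: queue one update per item, then flush the
// JDBC batch inside write() so any constraint violation is thrown here,
// within the chunk, where skip/retry logic can observe it.
class BatchFlushingWriter<T> {
    private final BatchSession sqlSession;
    private final String statementId;

    BatchFlushingWriter(BatchSession sqlSession, String statementId) {
        this.sqlSession = sqlSession;
        this.statementId = statementId;
    }

    public List<String> write(List<? extends T> items) {
        for (T item : items) {
            sqlSession.update(statementId, item);
        }
        // Without this flush, the batch executes at transaction commit and
        // the exception escapes the step's skip handling.
        return sqlSession.flushStatements();
    }
}
```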



          • #6
            Solved! The problem described above is caused by ExecutorType.BATCH. Using SIMPLE or REUSE solves the issue, even though performance degrades...