multi-threading and long running transactions

  • #1

    I am working on improving the performance of an existing Spring Batch implementation. The current batch works on a largish data set (100k rows) and creates a PDF document for each row.

    My environment: the batch runs in a clustered WebSphere setup, in the context of a WorkManager thread. It calls EJBs to retrieve and write the data.

    It currently works, although slowly, as it creates one document after the other. The production setup has three separately deployed PDF-creation services, so multithreading will definitely speed up processing. The batch also only executes on one of the cluster members, which means that from a batch perspective the cluster only provides failover and no load balancing.

    My question is which approach to take. I have looked at partitioning, but that does not seem to be the right approach, as I cannot break the work up into batches small enough while still keeping the number of threads under control. In other words, 100k rows across 100 threads still means 1,000 PDFs in one step, and that is not really feasible, as I have to call an EJB service with a transaction timeout of 2 minutes. I suppose I could call the EJB 1,000 times from the writer or processor, but that just does not feel right.

    The perfect solution in my mind would take the 100k rows and process them 10 at a time in parallel, starting the next one as soon as one finishes, so that there are always 10 executing.
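    In plain Java terms, what I am describing is basically a fixed-size thread pool fed from a work queue. A rough sketch outside Spring Batch (fetchRowIds() and createPdf() are just stand-ins for the real data access and PDF service):
    Code:
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PdfPool {

        public static void main(String[] args) {
            // Never more than 10 documents in flight; as soon as one worker
            // finishes, it picks the next row off the queue.
            ExecutorService pool = Executors.newFixedThreadPool(10);
            for (final long id : fetchRowIds()) {
                pool.submit(new Runnable() {
                    public void run() {
                        createPdf(id); // stand-in for the EJB/PDF service call
                    }
                });
            }
            pool.shutdown(); // drain the queue, then stop the workers
        }

        private static List<Long> fetchRowIds() {
            List<Long> ids = new ArrayList<Long>();
            for (long i = 0; i < 100000; i++) {
                ids.add(i);
            }
            return ids;
        }

        private static void createPdf(long id) {
            System.out.println("creating pdf for row " + id);
        }
    }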

  • #2
    I don't see what's wrong with a partitioning approach, but I probably don't understand all the details of your data. If your partitioner can generate 10 partitions (so 10k items each) and you send them to a step with commit interval 100 (for example), you will get 10 threads processing in parallel and committing batch metadata every 100 items. You still have to generate 100k PDFs, if I understand the requirement, but you can choose the transaction size (commit interval) independently of the partition size. Maybe the optimum commit interval is smaller in your case; it is usually a function of the business processing. The optimum partition (aka grid) size is probably fixed by the hardware.
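    For illustration, a minimal range partitioner along the lines of the ColumnRangePartitioner in the Spring Batch samples; this is only a sketch, assuming you partition on an ID column and expose each range as MIN/MAX values in the step execution context:
    Code:
    package part;

    import java.util.HashMap;
    import java.util.Map;

    import javax.sql.DataSource;

    import org.springframework.batch.core.partition.support.Partitioner;
    import org.springframework.batch.item.ExecutionContext;
    import org.springframework.jdbc.core.JdbcTemplate;

    // Sketch: split the table's ID range into gridSize contiguous buckets and
    // expose each bucket as MIN/MAX in its own step execution context.
    public class IDPartitioner implements Partitioner {

        private DataSource dataSource;
        private String tableName;
        private String where;

        public Map<String, ExecutionContext> partition(int gridSize) {
            JdbcTemplate jdbc = new JdbcTemplate(dataSource);
            long min = jdbc.queryForObject("SELECT MIN(ID) FROM " + tableName + " WHERE " + where, Long.class);
            long max = jdbc.queryForObject("SELECT MAX(ID) FROM " + tableName + " WHERE " + where, Long.class);
            long targetSize = (max - min) / gridSize + 1; // size of each ID bucket

            Map<String, ExecutionContext> result = new HashMap<String, ExecutionContext>();
            long start = min;
            for (int i = 0; i < gridSize && start <= max; i++) {
                ExecutionContext context = new ExecutionContext();
                context.putLong("MIN", start);
                context.putLong("MAX", Math.min(start + targetSize - 1, max));
                result.put("partition" + i, context);
                start += targetSize;
            }
            return result;
        }

        public void setDataSource(DataSource dataSource) { this.dataSource = dataSource; }
        public void setTableName(String tableName) { this.tableName = tableName; }
        public void setWhere(String where) { this.where = where; }
    }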

    • #3
      Ah, I see. I was under the impression that the step would have to process all the items in a partition in one go. Come to think of it, the separation makes sense. I will try it that way and report back. Thanks for all the help.

      • #4
        OK, so it works. Kinda... It makes a lot of sense now, and looking back I don't know why I did not realise that the commit interval keeps each thread from doing too much in one transaction.

        I have a different issue now, and I do not know where to begin solving it. If I stop the job and restart it, the readers fail because they cannot start where they left off. To me it looks like the parameters are not substituted back into the query on restart. The exception:
        Code:
        bad SQL grammar [SELECT SORT_KEY FROM ( SELECT ID AS SORT_KEY, ROW_NUMBER() OVER (ORDER BY ID ASC) AS ROW_NUMBER FROM TABLE WHERE ID >= :min and ID <= :max) WHERE ROW_NUMBER = 30]; nested exception is com.ibm.db2.jcc.b.SqlException: DB2 SQL error: SQLCODE: -312, SQLSTATE: 42618, SQLERRMC: min
        Step:
        Code:
        	<beans:bean name="extractStep:master" class="org.springframework.batch.core.partition.support.PartitionStep">
        	    <beans:property name="jobRepository" ref="jobRepository" />
        	    <beans:property name="stepExecutionSplitter">
        			<beans:bean name="stepExecutionSplitter" class="org.springframework.batch.core.partition.support.SimpleStepExecutionSplitter">
        			    <beans:constructor-arg ref="jobRepository" />
        		   	    <beans:constructor-arg  ref="extractStep" />
        			    <beans:constructor-arg ref="extractIdPartitioner"/>
        			</beans:bean>
        	    </beans:property>
        	    <beans:property name="partitionHandler">
        	    	<beans:bean class="org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler">
        	    		<beans:property name="taskExecutor" ref="taskExecutor"/>
        			    <beans:property name="step" ref="extractStep" />
        			    <beans:property name="gridSize" value="10" />
        			</beans:bean>
        	    </beans:property>
        		<beans:property name="stepExecutionListeners">
        			<beans:list>
        				<beans:ref bean="extractIdPartitioner"/> <!-- Get the JobId -->
        			</beans:list>
        		</beans:property>
        	</beans:bean>
        	
        	<beans:bean id="extractIdPartitioner" class="part.IDPartitioner">
        		<beans:property name="dataSource" ref="dataSource"/>	
        		<beans:property name="tableName" value="TABLENAME"/>	
        		<beans:property name="where" value="PROCESSED = 0"/>
        	</beans:bean>	
        	
        	<step id="extractStep">
        		<tasklet allow-start-if-complete="true" >
        			<chunk reader="extractReader" writer="extractWriter" commit-interval="10" skip-limit="10">
        				<skippable-exception-classes>
        					java.lang.Exception
        				</skippable-exception-classes>
        			</chunk>	
        			<listeners>
        				<listener ref="extractStepListener"/>
        			</listeners>					
        		</tasklet>
        	</step>
        Reader:
        Code:
        	<beans:bean id="extractReader" scope="step" autowire-candidate="false" class="org.springframework.batch.item.database.JdbcPagingItemReader">
        		<beans:property name="dataSource" ref="dataSource" />
        		<beans:property name="rowMapper">
        			<beans:bean class="xx.MyRowMapper" />
        		</beans:property>
        		<beans:property name="queryProvider">
        			<beans:bean class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
        				<beans:property name="dataSource" ref="dataSource"/>
        				<beans:property name="fromClause" value="TABLENAME"/>
        				<beans:property name="selectClause" value="*"/>
        				<beans:property name="sortKey" value="ID"/>
        				<beans:property name="whereClause" value="ID &gt;= :min and ID &lt;= :max"/>
        			</beans:bean>
        		</beans:property>
        		<beans:property name="parameterValues">
        			<beans:map>
        				<beans:entry key="min" value="#{stepExecutionContext[MIN]}"/>
        				<beans:entry key="max" value="#{stepExecutionContext[MAX]}"/>
        			</beans:map>
        		</beans:property>
        	</beans:bean>

        Looking at the stepExecutionContext of one of the threads:
        Code:
        {"map":{"entry":[{"string":"itemCount","long":177},{"string":"org.springframework.batch.core.step.item.ChunkMonitor.OFFSET","int":3},{"string":"MIN","long":18132},{"string":"JdbcPagingItemReader.read.count","int":90},{"string":"MAX","long":18308}]}}

        • #5
          The DB2 error states that parameter names are not allowed in dynamic queries and that I should use ? instead. After doing that I got a different DB2 error, and I figured out that there is a bug in the version of the batch framework we are using (2.0.3): the doJumpToPage method in JdbcPagingItemReader does not pass the parameter values along on restart. It is fixed in later versions, so it looks like I will have to upgrade to get past this.
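          For anyone hitting the same problem, this is roughly how the query provider and parameter map look after the change; if I read the JdbcPagingItemReader docs correctly, with ? placeholders the keys of the parameterValues map are the placeholder positions, starting at 1:
          Code:
          <beans:property name="queryProvider">
              <beans:bean class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
                  <beans:property name="dataSource" ref="dataSource" />
                  <beans:property name="fromClause" value="TABLENAME" />
                  <beans:property name="selectClause" value="*" />
                  <beans:property name="sortKey" value="ID" />
                  <!-- DB2 rejects named parameters in dynamic SQL (SQLCODE -312), so use ? -->
                  <beans:property name="whereClause" value="ID &gt;= ? and ID &lt;= ?" />
              </beans:bean>
          </beans:property>
          <beans:property name="parameterValues">
              <beans:map>
                  <beans:entry key="1" value="#{stepExecutionContext[MIN]}" />
                  <beans:entry key="2" value="#{stepExecutionContext[MAX]}" />
              </beans:map>
          </beans:property>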
