  • reuse of job with different parameters outofmemoryexception - heap space

    Hi all,

    I have a job that reads a file, processes it, and deletes it.

    I reuse the same job by passing it to the JobLauncher again, but with a new JobParameters object containing a new timestamp.

    When I run this many times, say over a few hundred files, I get the above exception.

    I have stripped out all of the code that the job actually does, so it is literally just building new parameters, handing the job to the launcher, and letting it finish without doing anything.

    I still get the same error.

    Can I use a job in this way, running it over and over again with different parameters?


    Code:
    while (true) {
        // poll for new files in the directory and add them to the rdifiles list
        Resource r = new FileSystemResource(new File((String) rdifiles.get(i)));
        reader.getDelegate().setStrict(false);
        reader.getDelegate().setResource(r);
        ExecutionContext e = new ExecutionContext();
        reader.getDelegate().open(e);

        JobParameters jobParameters = new JobParametersBuilder()
                .addDate("now", new Date())
                .addString("JobType", "RDI")
                .toJobParameters();

        try {
            jobLauncher.run(job, jobParameters).getStatus();
        }
        catch (Exception ex) {
            // log the failure and move on to the next file
        }
    }

  • #2
    What kind of JobRepository are you using?

    • #3
      Here is the setup for the job repository.

      Cheers for help!

      Code:
      <bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />

      <bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
          <property name="transactionManager" ref="transactionManager"/>
      </bean>

      • #4
        The MapJobRepository keeps all of the information in memory. Since you are repeatedly launching jobs, you are adding to the repository, and, thus, using more memory each time until you run out.

        • #5
          So is there a way of clearing the repository memory, or an object that uses less memory?

          Or something like keeping only the last 50 jobs, etc.?

          What's the best course of action?

          • #6
            The best course of action is to use a database to store your repository information.

            If you still use the MapJobRepositoryFactoryBean, then you can use its static clear() method, which will clear all of the data.
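
            In case it helps, here is a minimal sketch of that second option (a sketch only, assuming the static clear() and the jobLauncher/job beans from the original post):

            Code:
            while (true) {
                // ...set the reader resource and build fresh parameters as in the original post...
                JobParameters jobParameters = new JobParametersBuilder()
                        .addDate("now", new Date())
                        .addString("JobType", "RDI")
                        .toJobParameters();
                try {
                    jobLauncher.run(job, jobParameters);
                }
                catch (Exception ex) {
                    // log and move on to the next file
                }
                // wipe the in-memory execution data so the map-backed
                // repository does not grow with every launch
                MapJobRepositoryFactoryBean.clear();
            }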

            • #7
              I have just tried this. After each job finishes I call

              MapJobRepositoryFactoryBean.clear();

              I'm still getting the same issue.

              I build a list of 1000 files that need to be processed.

              Then I change the resource, run the job, change the resource, run the job, and so on.

              Code:
              2009-07-06 11:13:35,183 ERROR main [org.springframework.batch.core.step.AbstractStep] - <Encountered an error executing the step: class java.lang.OutOfMemoryError: Java heap space>
              java.lang.OutOfMemoryError: Java heap space
              	at java.util.Arrays.copyOf(Unknown Source)
              	at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source)
              	at java.lang.AbstractStringBuilder.append(Unknown Source)
              	at java.lang.StringBuffer.append(Unknown Source)
              	at com.edfe.orchard.correspondenceRoutingService.io.writing.SimpleWriter.write(SimpleWriter.java:15)
              	at org.springframework.batch.core.step.item.SimpleChunkProcessor.writeItems(SimpleChunkProcessor.java:155)
              	at org.springframework.batch.core.step.item.SimpleChunkProcessor.doWrite(SimpleChunkProcessor.java:136)
              	at org.springframework.batch.core.step.item.SimpleChunkProcessor.write(SimpleChunkProcessor.java:209)
              	at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:193)
              	at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:70)
              	at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:264)
              	at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:67)
              	at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:352)
              	at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:212)
              	at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:143)
              	at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:239)
              	at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:198)
              	at org.springframework.batch.core.job.AbstractJob.handleStep(AbstractJob.java:348)
              	at org.springframework.batch.core.job.flow.FlowJob.access$0(FlowJob.java:1)
              	at org.springframework.batch.core.job.flow.FlowJob$JobFlowExecutor.executeStep(FlowJob.java:137)
              	at org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:60)
              	at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:144)
              	at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:124)
              	at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:105)
              	at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:250)
              	at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:110)
              	at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:49)
              	at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:105)
              	at com.edfe.orchard.correspondenceRoutingService.Launcher.run(Launcher.java:191)
              	at com.edfe.orchard.correspondenceRoutingService.ContextSetter.main(ContextSetter.java:31)

              • #8
                Also, is there a limit on the size of file that can be read by a flat file item reader?

                This might be the cause of the issue.

                I have a 17 MB text file that I'm reading line by line.

                Does it try to buffer the whole thing?

                At the moment I have a multi-line record reader set up with a delegate flat file reader that reads, say, 10 lines to make up a complete object.

                • #9
                  What is your commit interval? The framework holds the entire chunk in memory as it writes, so you will run out of memory if your commit interval is too large.
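
                  Very roughly, and just as an illustration rather than the actual framework code (assuming an ItemReader/ItemWriter pair and a commitInterval variable), chunk processing behaves like this:

                  Code:
                  // simplified sketch: the step reads up to commitInterval items into a
                  // buffer, then writes and commits them as one chunk, so the whole
                  // chunk sits in memory until the write completes
                  List<Object> chunk = new ArrayList<Object>();
                  for (int i = 0; i < commitInterval; i++) {
                      Object item = itemReader.read();
                      if (item == null) {
                          break; // end of input
                      }
                      chunk.add(item);
                  }
                  itemWriter.write(chunk);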

                  • #10
                    I just changed the commit interval to one and I still get the same effect.

                    I also have the property "StartLimit", which I have tried setting both low and high.

                    • #11
                      I don't have any problem launching simple jobs. From the stack trace it looks like SimpleWriter is doing something with a StringBuffer. Can we see the implementation?

                      • #12
                        Correct, it was the writer: it was keeping a buffer of every message, so I would end up with a few thousand of them.

                        Sorted this now.

                        Thank you for your help.
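
                        For anyone who hits the same thing, here is a rough sketch of the shape of the fix (not the real SimpleWriter, just an illustration): write each chunk out as it arrives instead of accumulating every message in one long-lived StringBuffer.

                        Code:
                        import java.io.Writer;
                        import java.util.List;

                        import org.springframework.batch.item.ItemWriter;

                        public class SimpleWriter implements ItemWriter<String> {

                            private final Writer out;

                            public SimpleWriter(Writer out) {
                                this.out = out;
                            }

                            public void write(List<? extends String> items) throws Exception {
                                // build output for this chunk only; nothing is kept between
                                // calls, so memory stays flat across thousands of items
                                StringBuilder chunk = new StringBuilder();
                                for (String item : items) {
                                    chunk.append(item).append('\n');
                                }
                                out.write(chunk.toString());
                                out.flush();
                            }
                        }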

                        • #13
                          Hi Dave,
                          Found this thread while having the same issue as the original poster.

                          I went through the MapJobRepositoryFactoryBean code and, if I understood it correctly, job instances and DAOs are stored in static collections. Does this mean that even if we destroy the Spring context for each run of the batch job, we still retain the collections since they are static? Are they not tied to the Spring bean life cycle?

                          So we always have to call 'clear' to ensure that we get a fresh repo and cleared memory (unless, of course, we restart the application and get a fresh JVM). Do you think it would be better if Spring Batch cleared these collections automatically?

                          Thanks.

                          On second thought, MapJobRepositoryFactoryBean should only be used for testing.
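
                          For what it's worth, here is a sketch of the pattern that follows from this (the bean names and config list are made up for illustration, and it assumes the static clear() discussed above): closing the context does not release the static maps, so clear() still has to be called explicitly.

                          Code:
                          public void runAll(List<String> configLocations) throws Exception {
                              for (String configLocation : configLocations) {
                                  ClassPathXmlApplicationContext ctx =
                                          new ClassPathXmlApplicationContext(configLocation);
                                  try {
                                      JobLauncher launcher = (JobLauncher) ctx.getBean("jobLauncher");
                                      Job job = (Job) ctx.getBean("job");
                                      launcher.run(job, new JobParametersBuilder()
                                              .addDate("now", new Date()).toJobParameters());
                                  }
                                  finally {
                                      ctx.close();
                                      // the map-backed DAOs are static, so closing the context
                                      // does not free them; clear explicitly between runs
                                      MapJobRepositoryFactoryBean.clear();
                                  }
                              }
                          }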
