Problem with Spring Batch 2.0 ... ideas?

  • #16
    Originally posted by Dave Syer
    Then it needs to be registered as a listener - the Step factory beans only detect listeners that are directly injected (as reader, writer, processor or tasklet). If you put a log statement or a breakpoint in your beforeStep you should be able to confirm that it is actually being called.
    Yeah, I confirmed it ... it's being called. Thanks a bunch.

    It seems to work fine now, although it's running VERY slow. That wouldn't be because of the snapshot jar, would it?



    • #17
      What do you mean? Why would a snapshot cause your job to run slow? Can you provide some more detail there? Or is your execution plan just slow because it involves large in-memory collections?



      • #18
        Originally posted by Dave Syer
        What do you mean? Why would a snapshot cause your job to run slow? Can you provide some more detail there? Or is your execution plan just slow because it involves large in-memory collections?
        Oh, I have no clue why it would cause it to go slow. I wasn't sure if there was code checked in that you knew wasn't hammered out all the way; I just wanted to rule that out before I started digging in.

        Actually, our execution doesn't involve large in-memory chunks (especially the examples). The beforeStep actually uses a separate class to parse the file, and that under the covers launches a thread that puts chunks of it into a blocking queue, so in the end we're never going to have too much in memory. (Yeah, it's a pain ... but basically we can have anywhere from 10k files to a 30GB set of files to parse with this.)
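        The pattern described above -- a parser thread feeding fixed-size chunks into a bounded blocking queue so memory stays capped regardless of input size -- can be sketched with the JDK alone. This is a minimal illustration, not the poster's actual code; all names are made up:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ChunkingProducer {

    // An empty chunk signals end of input to the consumer.
    static final List<String> END = List.of();

    public static BlockingQueue<List<String>> startProducer(List<String> lines, int chunkSize) {
        // Small bounded queue: put() blocks when the consumer falls behind,
        // so memory use stays capped no matter how large the input is.
        BlockingQueue<List<String>> queue = new ArrayBlockingQueue<>(4);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < lines.size(); i += chunkSize) {
                    queue.put(lines.subList(i, Math.min(i + chunkSize, lines.size())));
                }
                queue.put(END); // no more chunks
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.setDaemon(true);
        producer.start();
        return queue;
    }
}
```

        A consumer (e.g. the step's reader) would then take() chunks off the queue until it sees the empty end-of-input marker.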

        Good news is it seems to work pretty well so far (we have phase 1 in production). The only downside is the writers go against EJB3s, and the JBoss pooling wasn't set to strict, so the server died because it made 13 million objects without ever trying to pool them.



        • #19
          spring quartz

          Hi,

          I'm a beginner with Spring and Quartz, and I have a problem in my project.

          I would like that, when my job fails or hangs, the scheduler relaunches it and the job can resume where it stopped.

          These are the different steps I took. First, here is my scheduler class:

          Code:
          @DisallowConcurrentExecution
          public class JobLauncherDetails extends QuartzJobBean /*implements JobExecutionListener*/ {
          
          	private static final String JOB_DATA_MAP_MAX_RETRY = "maxRetry";
          	//private static final String JOB_DATA_MAP_NB_RETRIES = "nbRetries";
          	private final static String JOB_LOCATOR_CONTEXT_KEY = "jobLocator";
          	private final static String JOB_LAUNCHER_CONTEXT_KEY = "jobLauncher";
          	private static final String JOB_DATA_MAP_START_DATE = "startDate";
          	private static final String JOB_PARAM_LISTENER_DELAY_KEY = "listenerDelay";
          	private static final long JOB_PARAM_DEFAULT_LISTENER_DELAY = 1000;
          
          	private static Logger log = LoggerFactory.getLogger(JobLauncherDetails.class);
          
          	/**
          	 * Special key in job data map for the name of a job to run.
          	 */
          	private static final String JOB_NAME = "jobName";
          
          	private JobLocator jobLocator;
          	private JobLauncher jobLauncher;
          	private String jobName;
          	private JobListener MyjobListener;
          	/**
          	 * Method called by Quartz trigger. The given {@link JobExecutionContext}
          	 * gives info on the Job to run.
          	 */
          	protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
          		try {
          			jobLocator = (JobLocator) context.getScheduler().getContext().get(JOB_LOCATOR_CONTEXT_KEY);
          			jobLauncher = (JobLauncher) context.getScheduler().getContext().get(JOB_LAUNCHER_CONTEXT_KEY);
          			MyjobListener = (JobListener) context.getScheduler().getContext().get("jobListener"); // assumed scheduler-context key
          		} catch (SchedulerException se) {
          			log.error("Unable to get jobLocator and jobLauncher from scheduler context.", se);
          		}
          
          		if (jobLocator == null || jobLauncher == null) {
          			log.error("Unable to run a job without valid jobLocator and jobLauncher.");
          		} else {
          			JobDetail jobDetail = context.getJobDetail();
          			Map<String, Object> jobDataMap = context.getMergedJobDataMap();
          			
          			if (jobDataMap == null || jobDataMap.size() == 0) {
          				log.error("Unable to run a job without a valid jobDataMap (no job name provided...).");
          			} else {
          				
          				jobName = (String) jobDataMap.get(JOB_NAME);
          
          				if (jobName == null || jobName.isEmpty()) {
          					log.error("Unable to run a job: no job name provided...");
          				} else {
          					
          					if(context.getRefireCount()==0) {
          						// Add a date to the jobDataMap so that the job is unique.
          						// It is useful to distinguish 2 instances of the same trigger.
          						// Otherwise, a JobExecutionAlreadyRunningException will be launched
          						// (no need to modify this parameter for a refired job)
          						jobDataMap.put(JOB_DATA_MAP_START_DATE, new Date());
          					}
          
          					JobParameters jobParameters = getJobParametersFromJobMap(jobDataMap);
          					Scheduler sched= context.getScheduler();
          					try {
          						sched.getListenerManager().addJobListener(new NcaJobListener());
          						
          					} catch (SchedulerException e) {
          						log.error("Unable to register the job listener.", e);
          					}
          					
          					if (log.isInfoEnabled())
          						log.info("\n**********************************************************************\n" 
          								+ "* Quartz trigger - start job: {}. Key: {}\n"
          								+ "* Firing unique ID: {}\n" 
          								+ "* Refire count: {}\n" 
          								+ "* Job parameters: {}\n"
          								+ "* isConcurrentExectionDisallowed: {}\n"
          								+ "**********************************************************************\n",
          								new Object[] { jobName, jobDetail==null?"":jobDetail.getKey(), context.getFireInstanceId(), 
          												context.getRefireCount(), jobParameters==null?"":jobParameters.toString(), 
          												context.getJobDetail().isConcurrentExectionDisallowed() });
          					
          					JobExecution jobExec = null;
          					BatchStatus jobStatus = BatchStatus.UNKNOWN;
          
          					try {
          						jobExec = jobLauncher.run(jobLocator.getJob(jobName), jobParameters);
          						jobStatus = jobExec.getStatus();
          						log.info("Job Id: {}", jobExec.getId());
          					} catch (Exception ex) {
          						if(!(ex instanceof JobExecutionException)) {
          							log.error("Error while running Job [{}]. Rescheduling if possible.\nError: {}\n******************************", jobName, ex.toString());
          							// TODO: remove this log...
          							log.error("Error full stack:", ex);
          							
          						} else {
          							throw (JobExecutionException) ex;
          						}
          					}
          
          					if (log.isInfoEnabled())
          						log.info("\n**********************************************************************\n" 
          								+ "* Quartz trigger - end job: {}\n"
          								+ "* Firing unique ID: {}\n" 
          								+ "* Refire count: {}\n" 
          								+ "* Job parameters: {}\n"
          								+ "* Start date: {}\n"
          								+ "* End date: {}\n"
          								+ "* Status: {}\n"
          								+ "**********************************************************************\n",
          								new Object[] { jobName, context.getFireInstanceId(), context.getRefireCount(), 
          												jobParameters==null?"":jobParameters.toString(),
          												jobExec==null?"":jobExec.getStartTime(),
          												jobExec==null?"":jobExec.getEndTime(),
          												jobStatus });
          				}
          			}
          		}
          	}
          
          	/**
          	 * Copy parameters that are of the correct type over to
          	 * {@link JobParameters}, ignoring jobName.
          	 * 
          	 * @return a {@link JobParameters} instance
          	 */
          	private JobParameters getJobParametersFromJobMap(Map<String, Object> jobDataMap) {
          
          		JobParametersBuilder builder = new JobParametersBuilder();
          
          		for (Entry<String, Object> entry : jobDataMap.entrySet()) {
          			String key = entry.getKey();
          			Object value = entry.getValue();
          			if (value instanceof String && !key.equals(JOB_NAME)) {
          				builder.addString(key, (String) value);
          			} else if (value instanceof Float || value instanceof Double) {
          				builder.addDouble(key, ((Number) value).doubleValue());
          			} else if (value instanceof Integer || value instanceof Long) {
          				builder.addLong(key, ((Number) value).longValue());
          			} else if (value instanceof Date) {
          				builder.addDate(key, (Date) value);
          			} else {
          				log.debug("JobDataMap contains values which are not job parameters (ignoring).");
          			}
          		}
          
          		return builder.toJobParameters();
          	}
          
          	/*@Override
          	public void afterJob(JobExecution jobExecution) {
          		// TODO Auto-generated method stub
          		log.info("*************************** Job ended with status: {} ********************", jobExecution.getExitStatus());
          
          		if (ExitStatus.FAILED.equals(jobExecution.getExitStatus())) {
          			log.error("******************************\nJob [{}] failed. Reschedule if possible.\n******************************", jobName);
          
          			// rescheduleJob(jobExecution);
          		}
          	}
          
          	@Override
          	public void beforeJob(JobExecution jobExecution) {
          		log.info("************** Before running job {} **************", jobName);
          	}*/
          
          }
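          The type dispatch in getJobParametersFromJobMap above can be exercised on its own. Here is a stdlib-only sketch of the same conversion rule (the class and method names are illustrative, not from Spring Batch): Strings pass through except the reserved jobName key, Float/Double widen to Double, Integer/Long widen to Long, Dates pass through, and anything else is dropped.

```java
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

public class JobParamRule {

    public static Map<String, Object> normalize(Map<String, Object> jobDataMap) {
        Map<String, Object> params = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : jobDataMap.entrySet()) {
            String key = entry.getKey();
            Object value = entry.getValue();
            if (value instanceof String && !key.equals("jobName")) {
                params.put(key, value);                              // keep strings, skip the job name
            } else if (value instanceof Float || value instanceof Double) {
                params.put(key, ((Number) value).doubleValue());     // floats become doubles
            } else if (value instanceof Integer || value instanceof Long) {
                params.put(key, ((Number) value).longValue());       // ints become longs
            } else if (value instanceof Date) {
                params.put(key, value);                              // dates pass through
            }
            // any other type is ignored, exactly as in the listener above
        }
        return params;
    }
}
```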



          • #20
            Second, I have my job-failure listener class:

            Code:
            public class JobFailureListener implements JobExecutionListener {
            	private static Logger log = LoggerFactory.getLogger(JobFailureListener.class);
            	private static final String JOB_DATA_MAP_MAX_RETRY = "maxRetry";
            	private static final String JOB_NAME = "jobName";
            	private String jobName;
            	public void beforeJob(JobExecution jobExecution) {
            		log.info("************** Before running job, current status: {} **************", jobExecution.getStatus());
            	}
            
            	public void afterJob(JobExecution jobExecution, JobExecutionContext context) throws JobExecutionException {

            		if (jobExecution.getStatus() == BatchStatus.COMPLETED) {
            			System.out.println("!!!!!!!!!!!!!!!!! it works !!!!!!!!!!!!!!!!");
            		} else if (!jobExecution.getAllFailureExceptions().isEmpty()) {
            			ExitStatus exitStatus = ExitStatus.FAILED;
            			log.error("******************************\nJob [{}] failed. Reschedule if possible.\n******************************", jobName);

            			// reschedule if possible
            			rescheduleJob(jobExecution, context);

            			for (Throwable e : jobExecution.getAllFailureExceptions()) {
            				exitStatus = exitStatus.addExitDescription(e);
            			}

            			jobExecution.setExitStatus(exitStatus);
            		}
            	}
            
            	private void rescheduleJob(JobExecution jobExec, JobExecutionContext context) throws JobExecutionException {
            		rescheduleJob(jobExec, context, null);
            	}
            
            	private void rescheduleJob(JobExecution jobExec, JobExecutionContext context, Throwable excep) throws JobExecutionException {
            		// TODO: handle the retry policy, e.g.:
            		// http://stackoverflow.com/questions/4408858/quartz-retry-when-failure
            	
            		if(jobExec == null)
            			throw new IllegalArgumentException("jobExec cannot be null.");
            		
            		if(context == null)
            			throw new IllegalArgumentException("context cannot be null.");
            		
            		String fireInstanceId =   context.getFireInstanceId();
            		int refireCount = context.getRefireCount();
            		
            		if (log.isInfoEnabled())
            			log.info("\n**********************************************************************\n" 
            					+ "* Quartz trigger - rescheduling job: {}\n"
            					+ "* Firing unique ID: {}\n" 
            					+ "* Refire count: {}\n" 
            					+ "**********************************************************************\n",
            					new Object[] { jobName, fireInstanceId, refireCount});
            		
            		JobDataMap dataMap = context.getJobDetail().getJobDataMap();
            		int maxRetry = dataMap.getIntValue(JOB_DATA_MAP_MAX_RETRY);
            		
            		JobExecutionException jobExecutionException = null;
            		
                    if(maxRetry>0 && refireCount<maxRetry) {        	
                    	jobExecutionException = new JobExecutionException("Error: the job [" + jobName + "] didn't end properly, refire it immediately.", excep);
                    	jobExecutionException.setRefireImmediately(true);
                    	//boolean refireImmediatelyResult = jobExecutionException.refireImmediately();
                    	
                    	log.info("************** Job Id: {} - rescheduled. ", jobExec.getId());
                    	
                    	throw jobExecutionException;
                    } else {
                    	//jobExecutionException = new JobExecutionException("Error: the job [" + jobName + "] didn't end properly. No more retry possible, sending alarm.", excep);
                    	
                    	log.error("\n**********************************************************************\n" 
            					+ "* Quartz trigger - No more retry possible for job: {}\n"
            					+ "* Firing unique ID: {}\n" 
            					+ "* UNSCHEDULING TRIGGER + SENDING EXPLOIT ALARM...\n"
            					+ "**********************************************************************\n",
            					jobName, fireInstanceId);
                    	
                    	AlarmDef alarmDef = AlarmExploit.searchBatchFailedAlarmDefByBatchName(jobName);
                    	AlarmExploit.generateAlarmWithoutTemplate(alarmDef, "Batch failed", "The following batch failed (no more retry possible): " + jobName, Locale.ENGLISH.toString(), null);
                    }
            	}
            
            	@Override
            	public void afterJob(JobExecution jobExecution) {
            		// JobExecutionListener callback; the Quartz-aware afterJob overload above performs the retry handling
            	}
            	
            }
            I also added a sleep in my job, and when I run it I get a message saying that my job is finished or doesn't exist, when it is actually just sleeping.
            Please, can I have a response, an example, or other suggestions? I need someone to help me.

            Thanks a lot
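            The maxRetry/refireCount bookkeeping that rescheduleJob implements can be sketched independently of Quartz. This is a generic illustration of the same decision (RetryPolicy is an invented name, not part of Quartz or Spring Batch): re-run the task immediately until it succeeds or the retry budget is spent, then rethrow the last failure.

```java
import java.util.concurrent.Callable;

public class RetryPolicy {

    public static <T> T runWithRetry(Callable<T> task, int maxRetry) throws Exception {
        Exception last = null;
        // maxRetry retries on top of the initial attempt, mirroring refireCount < maxRetry
        for (int attempt = 0; attempt <= maxRetry; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e; // remember the failure, retry immediately
            }
        }
        throw last; // no more retries possible: give up, as the listener does
    }
}
```

            In Quartz itself, the "retry immediately" branch is what throwing a JobExecutionException with setRefireImmediately(true) achieves, as in the rescheduleJob method above.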

