Spring Batch - Code Example

In the last article on Spring Batch we went through the concepts and interfaces provided by the Spring Batch framework.

In this article let’s see how we can use those concepts to create a batch job.

For simplicity, below is the simple batch job we will take as an example.

  • Read from a csv file.
  • Process data – select only records where age > 30
  • Write to another csv file.
  1. Configuration file snippet for this job

<batch:job id="reportJob">
    <batch:listeners>
        <batch:listener ref="customJobListener" />
    </batch:listeners>
    <batch:step id="step1">
        <tasklet>
            <chunk reader="csvFileItemReader" writer="cvsFileItemWriter"
                   processor="filterCSVProcessor" commit-interval="1">
                <listeners>
                    <listener ref="customStepListener" />
                    <listener ref="customItemReaderListener" />
                    <listener ref="customItemWriterListener" />
                    <listener ref="customItemProcessListener" />
                </listeners>
            </chunk>
        </tasklet>
    </batch:step>
</batch:job>

In this configuration file you first define a job with a job id. Then you define any listeners, which provide call-backs at specific points in the lifecycle of a job, such as before or after the job starts.

Then you define a step, which also has a unique id. A step is described in terms of an item reader, processor and writer, which together form a unit defined with the chunk element.

  • reader – The ItemReader that provides the items for processing.
  • processor – The ItemProcessor that transforms or filters each item that was read.
  • writer – The ItemWriter that writes out the items returned by the processor.
  • commit-interval – The number of items that will be processed before the transaction is committed.
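To make the chunk semantics concrete, here is a rough plain-Java sketch of the read-process-write loop that the chunk element drives. This is an illustration only, not Spring Batch's actual implementation; all names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Illustrative sketch of chunk-oriented processing; names are hypothetical,
// not Spring Batch internals.
public class ChunkLoopSketch {

    // Reads items one at a time, processes each (null = filtered out),
    // and "writes" the accumulated items once the commit interval is reached.
    public static <I, O> List<List<O>> runChunks(Iterator<I> reader,
                                                 Function<I, O> processor,
                                                 int commitInterval) {
        List<List<O>> committedChunks = new ArrayList<>(); // each inner list = one transaction
        List<O> chunk = new ArrayList<>();
        while (reader.hasNext()) {
            O processed = processor.apply(reader.next());
            if (processed != null) {               // null means the item was filtered
                chunk.add(processed);
            }
            if (chunk.size() == commitInterval) {
                committedChunks.add(new ArrayList<>(chunk)); // write + commit
                chunk.clear();
            }
        }
        if (!chunk.isEmpty()) {                    // flush the final partial chunk
            committedChunks.add(chunk);
        }
        return committedChunks;
    }
}
```

With commit-interval set to 1, as in the configuration above, every item is written and committed individually.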

Then you define listeners for the step, which again provide call-backs at specific points in the lifecycle of a step, such as before or after the step runs, or before and after an item is read.

The chunk element is defined within the tasklet tag.

Below is a code snippet for the item reader listener.

public class CustomItemReaderListener implements ItemReadListener<User> {

    public void beforeRead() {
        System.out.println("CustomItemReaderListener : beforeRead()");
    }

    public void afterRead(User user) {
        System.out.println("CustomItemReaderListener : afterRead()");
    }

    public void onReadError(Exception ex) {
        System.out.println("CustomItemReaderListener : onReadError()");
    }
}

Similarly, you define the other listeners for the job, step, reader, writer and processor, which are implementations of the respective listener interfaces provided by the Spring Batch framework.

 

  2. Then we need to launch this job. Here is a code snippet.

String[] springConfig = {
    "spring/batch/config/context.xml",
    "spring/batch/jobs/job-report.xml"
};

ApplicationContext context = new ClassPathXmlApplicationContext(springConfig);

JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
Job job = (Job) context.getBean("reportJob");

try {
    JobExecution execution = jobLauncher.run(job, new JobParameters());
    System.out.println("Job Exit Status : " + execution.getStatus());
} catch (Exception e) {
    e.printStackTrace();
}

System.out.println("Done with batch");

First you create a JobLauncher instance from the "jobLauncher" bean defined in context.xml.

Then you create a Job instance from "reportJob", which is defined in the job configuration file.

When you run your Job instance with the help of the JobLauncher, you get back a JobExecution instance, whose status tells you whether your job executed successfully or not.

 

  3. Now let's see how the reader and writer are configured in our job-report.xml

<bean id="csvFileItemReader" class="org.springframework.batch.item.file.FlatFileItemReader">
    <!-- Read a csv file -->
    <property name="resource" value="file:csv/input/read.csv" />
    <property name="lineMapper">
        <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
            <!-- split it -->
            <property name="lineTokenizer">
                <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
                    <property name="names" value="name,age,phone" />
                </bean>
            </property>
            <property name="fieldSetMapper">
                <!-- map to an object -->
                <bean class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
                    <property name="prototypeBeanName" value="user" />
                </bean>
            </property>
        </bean>
    </property>
</bean>

<bean id="cvsFileItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
    <!-- write to this csv file -->
    <property name="resource" value="file:csv/output/write.csv" />
    <property name="shouldDeleteIfExists" value="true" />
    <property name="lineAggregator">
        <bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
            <property name="delimiter" value="," />
            <property name="fieldExtractor">
                <bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
                    <property name="names" value="name,age,phone" />
                </bean>
            </property>
        </bean>
    </property>
</bean>

You first define the beans responsible for reading and writing. In this example we use the readers and writers provided by the Spring Batch framework, i.e. FlatFileItemReader and FlatFileItemWriter, since we are working with comma-separated records. You can plug in your own custom classes as well.

Then you specify the csv file to be read or written using the resource property.
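Conceptually, the flat-file reader pulls one line at a time from the resource and hands it to the line mapper. Here is a minimal plain-Java sketch of that behaviour (reading from an in-memory string rather than a file resource; the class name is hypothetical and is not part of Spring Batch):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Hypothetical sketch of a flat-file reader: returns one line per read(),
// or null when the input is exhausted (Spring Batch's end-of-data signal).
public class FlatFileReaderSketch {
    private final BufferedReader reader;

    public FlatFileReaderSketch(String content) {
        this.reader = new BufferedReader(new StringReader(content));
    }

    public String read() {
        try {
            return reader.readLine(); // null signals there are no more items
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```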

  • Reading:

We have defined a line mapper. Line mappers tokenize each line into a FieldSet and then map it to an item, in our case the domain class User.
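To illustrate what the line mapper does, here is a hedged plain-Java sketch: tokenize a delimited line into named fields, then bind those fields onto a User object. In the real job this work is done by DelimitedLineTokenizer and BeanWrapperFieldSetMapper; the User class shown here is an assumed shape of the domain object, inferred from the "name,age,phone" columns.

```java
import java.util.HashMap;
import java.util.Map;

public class LineMapperSketch {

    // Assumed domain object matching the "name,age,phone" columns.
    public static class User {
        private String name;
        private int age;
        private String phone;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
        public String getPhone() { return phone; }
        public void setPhone(String phone) { this.phone = phone; }
    }

    // Tokenize: split the line and pair each value with its column name.
    public static Map<String, String> tokenize(String line, String[] names) {
        String[] values = line.split(",");
        Map<String, String> fieldSet = new HashMap<>();
        for (int i = 0; i < names.length; i++) {
            fieldSet.put(names[i], values[i]);
        }
        return fieldSet;
    }

    // Map: copy the named fields onto a new User instance.
    public static User mapLine(String line) {
        Map<String, String> fs = tokenize(line, new String[] {"name", "age", "phone"});
        User user = new User();
        user.setName(fs.get("name"));
        user.setAge(Integer.parseInt(fs.get("age")));
        user.setPhone(fs.get("phone"));
        return user;
    }
}
```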

  • Writing:

We have defined a line aggregator, which converts an object into a delimited list of strings. The default delimiter is a comma.

Then we need to extract the fields from our domain object User. For this we use a field extractor which, given an array of property names, reflectively calls the getters on the item and returns an array of all the values.
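The writing side can be sketched the same way: a hedged plain-Java illustration of what BeanWrapperFieldExtractor and DelimitedLineAggregator do together, reflectively calling getters and joining the values with the delimiter. The class and the small User bean below are hypothetical, for demonstration only.

```java
import java.lang.reflect.Method;
import java.util.StringJoiner;

public class LineAggregatorSketch {

    // Hypothetical bean used only to demonstrate extraction.
    public static class User {
        private final String name;
        private final int age;
        public User(String name, int age) { this.name = name; this.age = age; }
        public String getName() { return name; }
        public int getAge() { return age; }
    }

    // Reflectively call getX() for each property name and collect the values.
    public static Object[] extract(Object item, String[] names) {
        try {
            Object[] values = new Object[names.length];
            for (int i = 0; i < names.length; i++) {
                String getter = "get" + Character.toUpperCase(names[i].charAt(0))
                        + names[i].substring(1);
                Method m = item.getClass().getMethod(getter);
                values[i] = m.invoke(item);
            }
            return values;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Join the extracted values into one delimited output line.
    public static String aggregate(Object item, String[] names, String delimiter) {
        StringJoiner joiner = new StringJoiner(delimiter);
        for (Object value : extract(item, names)) {
            joiner.add(String.valueOf(value));
        }
        return joiner.toString();
    }
}
```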

  • Processing:

We have our custom item processor, as we want to keep only the records where age > 30.

For this we defined a bean:

<bean id="filterCSVProcessor" class="com.springbatch.processor.FilterCSVProcessor" />

And here is the implementation:

public class FilterCSVProcessor implements ItemProcessor<User, User> {

    public User process(User user) throws Exception {
        if (user.getAge() > 30) {
            return user;
        }
        return null; // returning null drops the record from the chunk
    }
}

That’s all. Complete source code can be found here.