First of all, it's important to know that Mule allows you to process messages in batches.
Within a Mule application, batch processing provides a construct for asynchronously processing larger-than-memory data sets that are split into individual records.
Batch jobs let you describe a reliable process that automatically splits up source data and stores it in persistent queues, which makes it possible to process very large data sets reliably.
In the event that the application is redeployed or Mule crashes, the job execution is able to resume at the point it stopped.
A batch job is a scope that splits large messages into records that Mule processes asynchronously. In the same way flows process messages, batch jobs process records.
Within an application, we can use a Batch Job scope, which splits messages into individual records, performs actions on each record, and then reports on the results, potentially pushing the processed output to other systems or queues.
A batch job contains one or more batch steps that act upon records as they move through the batch job.
Each batch step in a batch job contains processors that act upon a record to transform, route, enrich, or otherwise process data contained within it.
By leveraging the functionality of existing Mule processors, the batch step offers a lot of flexibility regarding how a batch job processes records.
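To make that structure concrete, here is a minimal sketch of a Mule 4 batch job with two steps. The flow name, job name, step names, and the in-line sample data are placeholders chosen for illustration; loggers stand in for real processors:

```xml
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:batch="http://www.mulesoft.org/schema/mule/batch"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
                          http://www.mulesoft.org/schema/mule/batch http://www.mulesoft.org/schema/mule/batch/current/mule-batch.xsd">

    <flow name="importContactsFlow">
        <!-- Any message source works; a scheduler is used here so the example is self-contained -->
        <scheduler>
            <scheduling-strategy>
                <fixed-frequency frequency="60" timeUnit="SECONDS"/>
            </scheduling-strategy>
        </scheduler>

        <!-- A small in-line array stands in for the real source data (file, database, API, ...) -->
        <set-payload value='#[[{name: "Ana", email: "ana@example.com"}, {name: "Bob", email: "bob@example.com"}]]'/>

        <batch:job jobName="importContactsBatchJob">
            <batch:process-records>
                <!-- Step 1: each record passes through this step's processors individually -->
                <batch:step name="transformStep">
                    <logger level="INFO" message="#['Transforming record: ' ++ write(payload, 'application/json')]"/>
                </batch:step>
                <!-- Step 2: by default, only records that did not fail in earlier steps reach this step -->
                <batch:step name="loadStep">
                    <logger level="INFO" message="#['Loading record: ' ++ write(payload, 'application/json')]"/>
                </batch:step>
            </batch:process-records>
            <!-- Runs once per job instance with a summary of processed and failed records -->
            <batch:on-complete>
                <logger level="INFO" message="#['Batch finished. Total records: ' ++ (payload.totalRecords as String)]"/>
            </batch:on-complete>
        </batch:job>
    </flow>
</mule>
```

In a real application, the loggers in each step would be replaced by transformers, connectors, or routers, and the source data would typically come from a file, database query, or API call rather than a hard-coded payload.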
A batch job executes when the flow reaches the process-records section of the batch job. When triggered, Mule creates a new batch job instance.
When the job instance becomes executable, the batch engine submits a task for each record block to the I/O pool to process each record.
Parallelism occurs automatically, at the record block level. The Mule runtime engine uses its autotuning capabilities to determine how many threads to use and the level of parallelism to apply.
When the batch job starts executing, Mule splits the incoming message into records, stores them in a persistent queue, and queries and schedules those records in blocks of records to process.
By default, the runtime queues and schedules records in blocks of 100 records. You can customize this block size according to the performance you require.
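For example, assuming a Mule 4 application, the block size (and the failure tolerance) can be set directly on the batch job element; the job name, step name, and values below are purely illustrative:

```xml
<!-- blockSize: how many records each processing task takes from the queue at a time (default 100) -->
<!-- maxFailedRecords: how many failed records the job instance tolerates before it stops (default 0) -->
<batch:job jobName="largeFileImportJob" blockSize="200" maxFailedRecords="10">
    <batch:process-records>
        <batch:step name="processStep">
            <logger level="INFO" message="#['Processing record: ' ++ write(payload, 'application/json')]"/>
        </batch:step>
    </batch:process-records>
</batch:job>
```

Larger blocks mean fewer trips to the persistent queue but more records held in memory per thread, so the right value depends on the size of each record and the available heap.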
Scenarios in which we can use batch processing: typical examples include synchronizing data sets between applications (for example, keeping contacts in sync between a CRM and a database), extracting, transforming, and loading (ETL) large files into a target system, and processing large volumes of records coming from an API or message queue.