Bad performances of multithreaded tasklet using chunks due to throttling algorithm [BATCH-2081] #1516
Philippe Mouawad commented: Hello, see also the attached run logs, and note those 3 lines in particular. As you can see, for 35 seconds (the time taken by one long-running writer), only 3 of the 15 threads are working; see the attached thread dump. With the current algorithm there are big variations in throughput, with long periods during which only the slow writers are working. Is this by design, or is it an issue?
Philippe Mouawad commented: 2 thread dumps taken while the slow writer was working.
Philippe Mouawad commented: Log of a run (I interrupted it).
Philippe Mouawad commented: Project showing the issue.
Philippe Mouawad commented: See http://stackoverflow.com/questions/18262857/spring-batch-tasklet-with-multi-threaded-executor-has-very-bad-performances-re for at least a workaround.
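A mitigation commonly suggested for this class of problem (a sketch, not necessarily the exact workaround from the linked question) is to raise the tasklet's throttle-limit, which defaults to 4, so the step can keep all of the executor's pool threads supplied with work. Note that this does not change the result-queue draining behaviour described in this issue; bean names here are illustrative:

```xml
<!-- Illustrative sketch: set throttle-limit to at least the pool size
     (here 15) so the step is not additionally capped at 4 concurrent
     chunks. Bean ids are hypothetical. -->
<batch:tasklet task-executor="threadPoolTaskExecutorForBatch" throttle-limit="15">
    <batch:chunk reader="itemReader" writer="itemWriter" commit-interval="10"/>
</batch:tasklet>
```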
Thank you for opening the issue. Can you retry with the latest release of Spring Batch (5.0.2) and report back the results?
Philippe Mouawad opened BATCH-2081 and commented
I have a batch with the following configuration:
I notice that the execution of the batch is not performing well.
Analysing what happens, I notice that frequently:
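(The reporter's actual configuration is attached to the original issue and is not inlined here. Going by the thread names in the dumps below, a plausible reconstruction is a ThreadPoolTaskExecutor driving a chunk-oriented tasklet, roughly like this; all bean ids and values are illustrative:)

```xml
<!-- Illustrative reconstruction only, not the reporter's actual config. -->
<bean id="threadPoolTaskExecutorForBatch"
      class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="15"/>
    <property name="maxPoolSize" value="15"/>
</bean>

<batch:step id="multiThreadedStep">
    <batch:tasklet task-executor="threadPoolTaskExecutorForBatch">
        <batch:chunk reader="itemReader" writer="itemWriter" commit-interval="10"/>
    </batch:tasklet>
</batch:step>
```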
Name: threadPoolTaskExecutorForBatch-X (DOING NOTHING)
State: WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5c45bb65
Total blocked: 15 703 Total waited: 15 347
Stack trace:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
java.lang.Thread.run(Thread.java:662)

Name: SimpleAsyncTaskExecutor-1
State: WAITING on java.lang.Object@9e58027
Total blocked: 14 536 Total waited: 23 229
Stack trace:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:485)
org.springframework.batch.repeat.support.ResultHolderResultQueue.take(ResultHolderResultQueue.java:139)
org.springframework.batch.repeat.support.ResultHolderResultQueue.take(ResultHolderResultQueue.java:33)
org.springframework.batch.repeat.support.TaskExecutorRepeatTemplate.getNextResult(TaskExecutorRepeatTemplate.java:144)
org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:215)
org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:144)
org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:253)
org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:195)
org.springframework.batch.core.job.SimpleStepHandler.handleStep(SimpleStepHandler.java:137)
org.springframework.batch.core.job.flow.JobFlowExecutor.executeStep(JobFlowExecutor.java:64)
org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:60)
org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:152)
org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:131)
org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:135)
org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:301)
org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:134)
java.lang.Thread.run(Thread.java:662)
This happens when the processing (write) of one item takes a long time and all the other threads have finished working on the items already queued. Instead of reading new items and letting the idle threads work, the result queue waits for the last long-running thread to finish (the count > results.size() condition) before handing data to the others.
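The throughput collapse described above can be illustrated with a minimal sketch (plain java.util.concurrent, not Spring Batch code) of the same dispatch pattern: a coordinator submits a batch of chunks, then blocks until the entire batch has completed before submitting more. One slow writer per batch therefore idles every other worker thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the pattern described in the issue: the coordinator waits
// for ALL in-flight results (like the count > results.size() check)
// before dispatching new work, so fast threads sit idle behind the
// slowest writer. All timings are illustrative.
public class ThrottleSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        long start = System.nanoTime();
        for (int window = 0; window < 2; window++) {
            List<Future<?>> inFlight = new ArrayList<>();
            for (int i = 0; i < 4; i++) {
                final boolean slow = (i == 0); // one writer is 10x slower
                inFlight.add(pool.submit(() -> {
                    try {
                        Thread.sleep(slow ? 500 : 50);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }));
            }
            // Coordinator drains the whole window before submitting more:
            // the three fast threads finish in ~50 ms but then idle for
            // ~450 ms waiting on the slow one.
            for (Future<?> f : inFlight) {
                f.get();
            }
        }
        pool.shutdown();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Both windows are gated by the 500 ms writer, so the total is at
        // least 1000 ms even though most tasks only need 50 ms.
        System.out.println("elapsed >= 1000 ms: " + (elapsedMs >= 1000));
    }
}
```

Running it prints that total elapsed time is bounded below by the slow writer's duration per window, which matches the "only slow writers are working" periods visible in the attached logs.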
Affects: 2.1.8, 2.1.9, 2.2.0.RC1, 2.2.0.RC2, 2.2.1
Attachments:
1 vote, 2 watchers