Traditionally, we wanted to execute works submitted as part of a single work plan in the order they were submitted, because a work such as "delete document 1" could be followed by "create document 1", for example. Executing these works in the reverse order would lead to a very different result.
This is even more likely to happen now that we process entities on flushes (), for example if you change an indexed entity, flush, change the entity again, and commit.
However, this strict ordering is not always necessary: when every work pertains to a different entity, there is no reason not to execute them in parallel, and benefit from the related performance boost when the index is sharded.
We should investigate ways to get rid of the serial execution of works, both for the Lucene backend and the Elasticsearch backend.
One solution could be to inspect works before they are executed and merge works that relate to the same document: delete + add becomes update, update + update keeps only the second update, add + delete is skipped entirely, add + update becomes an add with the document from the update, etc. Once we do that, we could get rid of serial orchestrators completely and remove a lot of the complexity introduced by DefaultElasticsearchWorkSequenceBuilder: we could expect all works to be executable in parallel, and in bulk requests.
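A minimal sketch of such pairwise merging, assuming a hypothetical WorkKind enum and merge method (illustrative names only, not actual Hibernate Search types):

```java
import java.util.Optional;

public class WorkMerger {
    // Hypothetical work kinds; the real works also carry the document to index.
    enum WorkKind { ADD, UPDATE, DELETE }

    /**
     * Merges two successive works targeting the same document.
     * Returns Optional.empty() when the pair cancels out (add then delete).
     */
    static Optional<WorkKind> merge(WorkKind first, WorkKind second) {
        switch ( first ) {
            case ADD:
                switch ( second ) {
                    case DELETE:
                        return Optional.empty(); // add + delete => skipped entirely
                    default:
                        // add + update => an add carrying the updated document
                        return Optional.of( WorkKind.ADD );
                }
            case UPDATE:
            case DELETE:
                switch ( second ) {
                    case ADD:
                        return Optional.of( WorkKind.UPDATE ); // delete + add => update
                    default:
                        return Optional.of( second ); // keep only the later work
                }
            default:
                throw new IllegalStateException( "Unknown work kind: " + first );
        }
    }
}
```

With pairs collapsed this way, each document ends up with at most one pending work, so the remaining works all target distinct documents and could safely run in parallel.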
The question is: what would be the scope of such merging? Within each work plan? Within each "batch" of works, which includes several work plans (e.g. each batch processed by org.hibernate.search.backend.elasticsearch.orchestration.impl.ElasticsearchBatchingWorkOrchestrator#executor)? One could argue that, within each "batch" of works, the relative ordering of work plans may not properly reflect the relative ordering of database changes, so we cannot know for sure how to merge the works. But it could be a best-effort approach.
Another solution would be to just detect that multiple works relate to the same document, and fall back to serial execution in that case, but use parallel execution otherwise. I'm not sure this would be much simpler, though.
Fixed as part of : we now have multiple queues per shard, allowing parallel execution of works to some extent.
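The queue-per-shard idea can be sketched as follows: each work is routed to one of N single-threaded queues by hashing its document id, so works on the same document stay ordered while works on distinct documents proceed in parallel. Class and method names below are illustrative, not the actual implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionedOrchestrator {
    private final List<ExecutorService> queues;

    PartitionedOrchestrator(int queueCount) {
        queues = new ArrayList<>();
        for ( int i = 0; i < queueCount; i++ ) {
            // Each queue is single-threaded, so works on it execute in submission order.
            queues.add( Executors.newSingleThreadExecutor() );
        }
    }

    /** Deterministically maps a document id to a queue index. */
    static int queueIndexFor(String documentId, int queueCount) {
        return Math.floorMod( documentId.hashCode(), queueCount );
    }

    /** Works on the same document id always land on the same queue, preserving their order. */
    Future<?> submit(String documentId, Runnable work) {
        return queues.get( queueIndexFor( documentId, queues.size() ) ).submit( work );
    }

    void shutdown() {
        queues.forEach( ExecutorService::shutdown );
    }
}
```

This gives up cross-document ordering (which was never needed) but keeps the per-document ordering guarantee, which is the "to some extent" above: parallelism is bounded by the number of queues, and two distinct documents that happen to hash to the same queue still serialize.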