Currently (as of HSEARCH-1375, at least):
Mass indexing failures (in purge, flush, optimize, or entity indexing) are reported to the failure handler both by the mass indexer and by the background thread in the backend.
Automatic indexing failures trigger an exception in the user thread and are also reported to the failure handler by the background thread in the backend.
We should not report failures twice:
For most operations (purge, flush, optimize, indexing outside of a plan), propagating the exception is enough: the requester will report the failure to the failure handler if necessary.
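The "requester reports, backend only propagates" idea above could look roughly like the sketch below. All names (FailureHandler, backendPurge, requestPurge) are hypothetical, chosen for illustration; this is not the real Hibernate Search API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class RequesterSideReporting {
    // Hypothetical failure handler; in the real code this would be the
    // user-configured failure handler, but the interface here is illustrative.
    interface FailureHandler {
        void handle(String operation, Throwable failure);
    }

    static final List<String> reported = new ArrayList<>();

    // The backend only propagates the exception through the returned future;
    // it does not call the failure handler itself.
    static CompletableFuture<Void> backendPurge(boolean fail) {
        return fail
                ? CompletableFuture.failedFuture(new RuntimeException("purge failed"))
                : CompletableFuture.completedFuture(null);
    }

    // The requester decides whether and how to report, so each failure
    // reaches the failure handler exactly once.
    static void requestPurge(boolean fail, FailureHandler handler) {
        try {
            backendPurge(fail).join();
        } catch (CompletionException e) {
            handler.handle("purge", e.getCause());
        }
    }
}
```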
For indexing plans, we should return an execution report to the mapper and have the mapper report failures, if any. How this is done would depend on the automatic indexing synchronization strategy: "queued" would simply report the exception to the failure handler, while "committed"/"searchable" would be able to just re-throw the exception in the user thread.
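The execution-report idea above might be sketched as follows. The report class, the failure handler interface, and the two handling methods are all hypothetical names for illustration, not the actual Hibernate Search types.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class PlanExecutionReportSketch {
    // Hypothetical report returned by the backend to the mapper after
    // executing an indexing plan.
    static class ExecutionReport {
        final List<String> failingEntities = new ArrayList<>();
        Throwable throwable;
        Optional<Throwable> throwable() { return Optional.ofNullable(throwable); }
    }

    interface FailureHandler {
        void handle(Throwable failure, List<String> failingEntities);
    }

    // "queued": the user thread has already moved on, so the mapper reports
    // the failure to the failure handler instead of throwing.
    static void handleQueued(ExecutionReport report, FailureHandler handler) {
        report.throwable().ifPresent(t -> handler.handle(t, report.failingEntities));
    }

    // "committed"/"searchable": the user thread is still waiting on the plan,
    // so the mapper can simply re-throw and let the exception surface there.
    static void handleSynchronous(ExecutionReport report) {
        report.throwable().ifPresent(t -> {
            throw new RuntimeException("Indexing failure", t);
        });
    }
}
```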
The only case where the backend should be allowed to report a failure a second time is in the Lucene backend, when a workset was executed without being committed and the next workset fails. In that case, we should report somewhere that the first workset may not have been committed after all, independently of how we report the failure of the second workset.
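The Lucene-backend case above amounts to tracking executed-but-uncommitted worksets and reporting them separately when a later workset fails. The sketch below is purely illustrative; workset representation and reporting strings are invented, not the real implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class UncommittedWorksetTracking {
    // Worksets executed since the last commit: their requesters already saw
    // success, but the changes are not durable yet.
    final List<String> uncommitted = new ArrayList<>();
    // Stand-in for the failure handler: collects what gets reported.
    final List<String> reports = new ArrayList<>();

    void executeWithoutCommit(String workset) {
        uncommitted.add(workset);
    }

    void commit() {
        uncommitted.clear();
    }

    // The failing workset's own failure is propagated to its requester
    // (modeled here as one report), while the possible loss of earlier
    // uncommitted worksets is reported separately, since those requesters
    // were already told their worksets succeeded.
    void fail(String workset) {
        reports.add("failure: " + workset);
        for (String w : uncommitted) {
            reports.add("may not have been committed: " + w);
        }
        uncommitted.clear();
    }
}
```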