Harmonize the behavior of regexp queries across backends regarding analysis
The Lucene backend doesn't analyze/normalize regexp patterns, which is IMO the correct behavior: it avoids problems with regexp metacharacters.
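A quick illustration of the metacharacter problem, using plain `java.util.regex` rather than either backend's API (the pattern and inputs are made up for the example): running a lowercasing normalizer over the *pattern* itself silently rewrites character classes, so the query matches a different set of values.

```java
import java.util.Locale;
import java.util.regex.Pattern;

public class RegexpNormalizationDemo {
    public static void main(String[] args) {
        // The user's pattern relies on a metacharacter range inside a character class.
        String pattern = "tag[A-Z]+";

        // Pattern used as-is (what the Lucene backend does):
        System.out.println(Pattern.matches(pattern, "tagABC")); // true
        System.out.println(Pattern.matches(pattern, "tagabc")); // false

        // A lowercasing normalizer applied to the pattern rewrites "[A-Z]" to "[a-z]"...
        String normalized = pattern.toLowerCase(Locale.ROOT); // "tag[a-z]+"

        // ...so the "same" query now matches different values.
        System.out.println(Pattern.matches(normalized, "tagABC")); // false
        System.out.println(Pattern.matches(normalized, "tagabc")); // true
    }
}
```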
The Elasticsearch backend, on the other hand, does normalize regexp patterns.
This isn't consistent.
--Maybe we should normalize patterns by default and provide a skipAnalysis() method, which wouldn't work on Elasticsearch for now, since they don't provide any way to customize analysis for regexp queries.-- => Actually even that is not an option: Elasticsearch is not consistent in how it treats analyzed and normalized fields, either. Regexps on normalized fields are normalized, but regexps on analyzed fields are not.
So in the end, I think we should check the code in Elasticsearch: if there's a hint that this behavior might be unintentional, maybe we can get it fixed upstream.