Measure time more precisely when computing execution time of Lucene queries

Description

We currently rely on TimingSource#monotonicTimeEstimate to compute query execution time, which has relatively low resolution and sometimes returns 0ms when the query executes in less than 5ms. This is particularly problematic if these numbers are used in statistics (averages, ...).

There's really no reason to be that conservative: calling System.nanoTime while collecting results in a Lucene Collector would be a problem, but calling it once before a query and once after a query is not.

It's nice to rely on a pluggable interface such as TimingSource, though: it could be useful for testing.

Let's add another method to TimingSource, say nanoTime, implemented with System.nanoTime() in DefaultTimingSource. Then let's call this method before query execution and after query execution, and set the query execution time ("took") to the difference.
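A minimal sketch of what this could look like; the package layout and the existing members of TimingSource are simplified assumptions for illustration, not the actual Hibernate Search sources:

{code:java}
// Sketch only: in the real codebase these types live in separate files
// and carry more members; simplified here to show the proposed addition.
public interface TimingSource {

	// Existing, periodically refreshed, low-resolution estimate (kept for timeouts).
	long monotonicTimeEstimate();

	// Proposed: high-resolution reading, taken once before and once after a query.
	long nanoTime();
}

class DefaultTimingSource implements TimingSource {

	private volatile long currentTimeEstimateMillis = System.currentTimeMillis();

	@Override
	public long monotonicTimeEstimate() {
		// In the real implementation this value is refreshed by a background task;
		// simplified to a plain field read here.
		return currentTimeEstimateMillis;
	}

	@Override
	public long nanoTime() {
		return System.nanoTime();
	}
}
{code}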

Important: let's not change anything about how timeouts are handled. In particular, we must not mix values obtained through TimingSource.nanoTime with values obtained through TimingSource.monotonicTimeEstimate.
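To make the separation concrete, here is a hypothetical caller-side sketch building on the interface above; names like timeQuery and isTimedOut are illustrative only, not the actual Hibernate Search API:

{code:java}
import java.time.Duration;

// Illustrative only: shows "took" computed exclusively from nanoTime readings,
// while timeout checks keep using monotonicTimeEstimate, unchanged.
class QueryExecutionTiming {

	private final TimingSource timingSource;

	QueryExecutionTiming(TimingSource timingSource) {
		this.timingSource = timingSource;
	}

	Duration timeQuery(Runnable queryExecution) {
		// "took" is the difference between two nanoTime readings;
		// never combined with monotonicTimeEstimate values.
		long startNanos = timingSource.nanoTime();
		queryExecution.run();
		long endNanos = timingSource.nanoTime();
		return Duration.ofNanos(endNanos - startNanos);
	}

	boolean isTimedOut(long deadlineMillis) {
		// Timeouts keep relying on the coarse estimate, as before.
		return timingSource.monotonicTimeEstimate() >= deadlineMillis;
	}
}
{code}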

Environment

None

Resolution

Fixed

Assignee

Fabio Massimo Ercoli

Reporter

Yoann Rodière

Labels

None

Suitable for new contributors

None

Feedback Requested

None

Components

Fix versions

Priority

Major