The Android API documentation (for example, for uptimeMillis()) and the Cortex-A9 Technical Reference Manual (regarding the TSC counter) assure you that "this clock is guaranteed to be monotonic".
A monotonic clock is a clock that generates a monotonic sequence of numbers as a function of time; a sequence (aₙ) is monotonically increasing if aₙ₊₁ ≥ aₙ for all n ∈ ℕ.
So a monotonic sequence looks like this: 1, 2, 2, 2, 3, 4, 5... That means that subsequent calls to a primitive like uptimeMillis() can return the same value more than once.
So, using a monotonic clock, we can be sure that no value will be smaller than the previous one; but on real hardware it is impossible to implement a perfect monotonic clock, and sooner or later the clock will be reset, or the sequence will wrap around. Even 64-bit variables have a limit! uptimeMillis() will wrap around after 292471209 years; nanoTime() after 292 years.
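Those wrap-around horizons follow directly from the range of a signed 64-bit value; a quick back-of-the-envelope check (plain Java, using a 365-day year):

```java
public class WrapAround {
    public static void main(String[] args) {
        // Long.MAX_VALUE = 2^63 - 1, the largest value a Java long can hold.
        double msPerYear = 1000.0 * 60 * 60 * 24 * 365;   // milliseconds in a year
        double nsPerYear = msPerYear * 1_000_000.0;        // nanoseconds in a year

        // uptimeMillis() counts milliseconds in a long:
        System.out.printf("millis wrap after ~%.0f years%n", Long.MAX_VALUE / msPerYear);
        // nanoTime() counts nanoseconds in a long:
        System.out.printf("nanos wrap after ~%.0f years%n", Long.MAX_VALUE / nsPerYear);
    }
}
```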
For practical purposes we don't care about the wrap-around problem, but the fact that consecutive values can be equal is exactly what we need for evaluating the timer granularity.
We can evaluate the clock granularity by measuring the difference between a value and the first different value in the sequence: for example, if the sequence is 1, 1, 1, 1, 50, 50, 50, the granularity is 49. With this code we can measure the granularity, and we can reduce the error by repeating the test multiple times and taking the smallest value:
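A plain-Java sketch of that measurement (the helper name and the use of java.util.function.LongSupplier are my own, so that any clock source can be plugged in; on Android, SystemClock.uptimeMillis() would be measured the same way):

```java
import java.util.function.LongSupplier;

public class Granularity {
    // Returns the smallest observed step of the given clock, in the clock's own units.
    static long granularity(LongSupplier clock, int trials) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < trials; i++) {
            long start = clock.getAsLong();
            long next;
            // Spin until the clock returns a different value.
            do {
                next = clock.getAsLong();
            } while (next == start);
            // Keep the smallest step seen, to reduce the measurement error.
            best = Math.min(best, next - start);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("nanoTime granularity (ns): "
                + granularity(System::nanoTime, 100));
        System.out.println("currentTimeMillis granularity (ms): "
                + granularity(System::currentTimeMillis, 10));
    }
}
```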
In the following table you can find the granularity measured in two completely different phones:
nanoTime() has a granularity that is several orders of magnitude better than the other timing functions: its unit is 1000 times finer (ns vs. ms), and its measured resolution is 100 times higher.
Now we know how the different timing methods perform in terms of granularity and resolution. We should run some benchmarks in order to decide which one is the best timing primitive.
Looking at the Android source code, we know that three timing primitives (SystemClock.uptimeMillis(), SystemClock.currentThreadTimeMillis(), System.nanoTime()) rely on clock_gettime();
System.currentTimeMillis() invokes the gettimeofday() primitive, which shares the same backend as clock_gettime(). Should we assume the same performance for all the different primitives?
In the following graph you can see the results of some benchmarking; each method is called 100k times, and the elapsed time is measured using nanoTime().
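The loop behind such a benchmark could be sketched as follows (plain Java; the method names and the 'sink' accumulator trick are my own, the latter keeping the JIT from optimizing the clock calls away):

```java
import java.util.function.LongSupplier;

public class ClockBench {
    // Calls the clock 'iterations' times and returns the total elapsed time in ns.
    static long bench(LongSupplier clock, int iterations) {
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += clock.getAsLong();   // accumulate so the call has an observable effect
        }
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.print("");  // keep 'sink' live
        return elapsed;
    }

    public static void main(String[] args) {
        int n = 100_000;
        System.out.println("nanoTime:          " + bench(System::nanoTime, n) + " ns");
        System.out.println("currentTimeMillis: " + bench(System::currentTimeMillis, n) + " ns");
    }
}
```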
Looking at the benchmarking results, it's clear that nanoTime() performs better.
This analysis is valid as a general rule of thumb; if timing performance on a specific platform really matters to you, please benchmark! A lot of variables are involved in reading the hardware clock through kernel primitives; never assume!
We can definitely use nanoTime() for measuring elapsed time on Android devices. If we are developing native applications, clock_gettime() will give us the same results with less overhead. If we really want to reduce this overhead, the two possible solutions are:
In any case, please remember that after an uptime of 292 years, the user can be disappointed by the timer reset!
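On that note, the usual elapsed-time pattern with nanoTime() survives even a wrap-around, as long as you subtract the two readings before comparing them (a minimal plain-Java sketch; the helper name is my own):

```java
public class Elapsed {
    // Wrap-safe elapsed time in milliseconds between two nanoTime() readings.
    static long elapsedMs(long startNanos, long endNanos) {
        // Subtract first: the difference is correct even if nanoTime()
        // wrapped between the two readings, thanks to two's-complement arithmetic.
        return (endNanos - startNanos) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long t0 = System.nanoTime();
        Thread.sleep(50);                // the work being timed
        long t1 = System.nanoTime();
        System.out.println("elapsed: " + elapsedMs(t0, t1) + " ms");
    }
}
```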