First, the macro in os/Linux.c actually converts
jiffies to milliseconds (10^-3), not microseconds (10^-6).
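For concreteness, the conversion I'm describing has roughly
this shape (a hypothetical reconstruction with a guessed name
and HZ of 100; the real macro in os/Linux.c may differ):

    /* Hypothetical sketch of the conversion described above.
     * At HZ == 100 each jiffy is 10 ms, so the result is in
     * milliseconds, not microseconds. */
    #define HZ 100
    #define JIFFIES_TO_TIME(j) ((j) / HZ * 1000)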
Second, there are several problems caused by the fact
that the values scanned in and converted are stored in
plain (signed) ints. The kernel uses unsigned integers
while this code uses signed, which halves the range
before overflow. Converting from jiffies to milliseconds
(a factor of 10 at the usual 100 Hz tick rate) cuts the
remaining range by another factor of 10. These combine
to cause an overflow after a process has used just under
25 days of CPU time (2^31 milliseconds), which is not
uncommon for some busy server daemons; it should
normally take almost 500 days (2^32 jiffies at 100 Hz)
to reach this overflow.
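As a quick sanity check of that arithmetic (a standalone
example, assuming a 32-bit int and a 100 Hz tick rate):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* Signed 32-bit counter holding milliseconds. */
        double signed_ms_days = (double)INT_MAX / 1000.0 / 86400.0;
        /* Unsigned 32-bit counter holding raw jiffies at 100 Hz. */
        double unsigned_jiffy_days = (double)UINT_MAX / 100.0 / 86400.0;

        printf("signed int of milliseconds: %.1f days\n",
               signed_ms_days);         /* ~24.9 days */
        printf("unsigned int of jiffies:   %.1f days\n",
               unsigned_jiffy_days);    /* ~497.1 days */
        return 0;
    }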
of the order of the conversion, all fractional second
information is lost. I understand why that order was
chosen, but given the above information it would
probably be better to use a long long to store these
time values instead.
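Something like the following is what I have in mind (same
hypothetical macro shape and HZ of 100 as above):

    /* Widen to long long before converting, and multiply before
     * dividing so the sub-second part of each jiffy is kept.
     * 2^63 milliseconds is roughly 292 million years, so overflow
     * stops being a practical concern. */
    #define JIFFIES_TO_TIME_LL(j) ((long long)(j) * 1000LL / HZ)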
If this is acceptable, I could probably work on a patch
to submit.