> On Friday 21 October 2005 13:47, Serge Goodenko wrote:
> > How can I increase the resolution of UML kernel timer?
> Elaborate on what you mean. You sound like a kernel hacker, and I need to know
> whether you want to increase the resolution of jiffies (in which case you must
> surely increase HZ in UML) or of some other API.
Well, in fact I'm quite a newbie at kernel development and I don't really know what would be better.
My problem is to measure the interval between incoming TCP ACKs (or, more generally, the interval between incoming packets).
I tried to do this using jiffies (e.g. tcp_time_stamp) and found it is definitely not enough for me (I suppose resolution on the order of microseconds is needed), so increasing HZ to 1000 is not enough either.
> For instance, IIRC the standard kernel has support for interpolators, which is
> used for gettimeofday().
> To use that, you need an external source. And maybe UML implements (or this
> could be done) a "source" using host gettimeofday(). After doing that, you're
> only limited by host gettimeofday() precision, and to increase that you may:
> > Should I apply any patches to host kernel (like HRT or ktimers) or what?
> But first UML must use host gettimeofday().
Ok, I see. So I just need to explore the UML code to see whether (and how) it uses the host's gettimeofday(), or else write some code to make it do so?
> > I need to use higher than standard jiffies resolution (10ms) in kernel
> > network code.
> Don't remember where, but at least you can try increasing HZ for UML
> (include/asm-um/param.h). That is reasonable up to 1000. For a host, it has
> been suggested that 10000 would work, but I don't know about the guest.
For the host: as far as I remember, menuconfig allows a maximum HZ of 1000 (I will check). Can I manually increase HZ to, say, 10000 in the source or in the .config file?
> Also, HZ (i.e. the timer interrupt) is implemented using signals (i.e. alarm()), which
> are both slow and imprecise (no better than host jiffies).
Hm... this suggests that, in my case, it might really be better not to use jiffies in UML but some other API?