From: Giacomo G. <gi...@ga...> - 2003-09-16 14:00:09
Hi all,

the precision of the timer in Shark comes from the calibration routine that counts how many CPU clocks fit in 1 msec. This value (clk_per_msec) is printed during the init step of a Shark demo. I rewrote this calibration routine starting from the Linux one, but the two are not identical, and a difference of a few microseconds is normal (see the sketch after the quoted message below). Anyway, after a test session with an external timer (controlled through the parallel port), I think Linux is the less precise of the two.

The timer precision depends on the relation between the PIT and the TSC. Since the PIT has a 16-bit counter, this precision cannot be better than 1 part in 65535. With the RTC corrections (the IRQ8Handler) I tried to reduce the long-term drift; the effect is not visible over 1 second, but over hours.

The old timer routine of Shark (the current one in OsLib) uses this 1197 value, which is no longer significant (the system tick is now separate from the timer count), and it introduced a larger error.

bye
Giacomo

Ian Broster wrote:

>We've got a serious problem of clock drift in Shark.
>Over 1 second, sys_gettime() loses about 50us
>compared to the machine running linux.
>
>I note that oslib/kl/event.c and oslib/ll/sys/ll/time.h
>make use of a magic number 1197, relating to
>1.19718 MHz. Should this be 1.193182 MHz?
>
>I'm not sure about this, particularly as the
>ratio of these two frequencies (1.10032) does not
>quite match our clock drift discrepancy (1.0059).
>
>
>ian
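
For readers following the thread: below is a minimal sketch (not the actual Shark or OSLib code) of the kind of PIT/TSC calibration Giacomo describes, i.e. counting TSC cycles against a known number of PIT ticks to obtain a clk_per_msec value. It assumes an x86 machine where PIT channel 2 is free, its gate is controlled through port 0x61, and the nominal PIT input clock is 1.193182 MHz; the function and constant names (calibrate_clk_per_msec, CALIB_TICKS) are made up for illustration.

/* Sketch of a PIT/TSC calibration loop, assuming x86 and a free PIT channel 2. */
#include <stdint.h>

#define PIT_FREQ_HZ  1193182u   /* nominal PIT input clock */
#define CALIB_TICKS  59659u     /* ~50 ms of PIT ticks, fits in the 16-bit counter */

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ __volatile__("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

uint32_t calibrate_clk_per_msec(void)
{
    uint64_t t0, t1;

    /* Enable the gate of PIT channel 2, keep the speaker output off. */
    outb(0x61, (inb(0x61) & ~0x02) | 0x01);

    /* Channel 2, lobyte/hibyte access, mode 0 (interrupt on terminal count). */
    outb(0x43, 0xB0);
    outb(0x42, CALIB_TICKS & 0xFF);
    outb(0x42, CALIB_TICKS >> 8);

    t0 = rdtsc();
    /* Busy-wait until the channel 2 OUT pin goes high (terminal count reached). */
    while (!(inb(0x61) & 0x20))
        ;
    t1 = rdtsc();

    /* elapsed = CALIB_TICKS / PIT_FREQ_HZ seconds, so
       clk_per_msec = tsc_cycles * PIT_FREQ_HZ / (CALIB_TICKS * 1000). */
    return (uint32_t)((t1 - t0) * PIT_FREQ_HZ / ((uint64_t)CALIB_TICKS * 1000));
}

Since the interval is measured in whole PIT ticks of a 16-bit counter, the result is quantized to roughly 1 part in 65535, which matches the precision limit mentioned above; the choice of the PIT input frequency constant (1.19718 vs 1.193182 MHz) then determines the systematic part of any remaining drift.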