From: David G. <dga...@gm...> - 2018-05-18 17:18:13
2018-05-18 16:58 GMT+02:00 Thorsten Otto <ad...@th...>:
> On Freitag, 18. Mai 2018 16:33:28 CEST David Gálvez wrote:
>> the loop count would be 0 for any given value of nanoseconds below
>> 1000, so the loop count will be rounded up to 1 to get at least the
>> delay requested.
>
> Yes, but if the loop count is so low that it is not really precise for
> microsecs, what's the purpose then of computing it based on nanosecs?
> Even a 060@95Mhz is not 1000 times faster than a 68000.

OK, I understand that for slow CPUs a nanosecond delay calculation
doesn't make much sense, because a single pass through the loop is
already going to take at least 1 usec. But I think it makes sense for
the 060 and ColdFire; I don't know yet about the 040.

> Also, given that one of the factors is already essentially shifted
> right by 28 bits, that leaves only 4 bits, multiplied by the ns value,
> then divided by 1000, and thus only a range of 1-16 loops, if I read
> the code correctly.

I'm not sure what you are implying here, that ndelay() for the 060
doesn't make sense either? With a 060 at 66 Mhz, for 150 nsec I get a
loop count value of 10.

If I implemented this, I should do something for slow CPUs too. The
code could check whether the kernel is running on a slow machine and
skip the calculation in that case, but I'm not sure this function
should do that. I could do it in getloops4ns() but not in ndelay(),
because it would add still more overhead. I guess it's up to the
driver's author to check the machine (or ship machine-specific
binaries) and decide whether to use ndelay() or a couple of "nop"
instructions.