From: Roger B. <an...@xp...> - 2018-05-18 20:19:31
On 18 May 2018 at 10:57, David Gálvez wrote:
>
> I'd like to add a delay function for nanoseconds for drivers to use,
> similar to udelay() and mdelay(). The problem is that the loop-count
> calculation probably takes as long as the time the driver needs to
> wait, or maybe longer, on our machines. So I was thinking of another
> approach: adding two functions. The first, getloops4ns(), calculates
> the number of loops for a given number of nanoseconds; in its loading
> code the driver calls it once for each delay it needs. For example,
> if it needs 150 ns and 500 ns it will do:
>
> delay_150ns = getloops4ns(150)
> delay_500ns = getloops4ns(500)
>
> Then, when the delay is needed, it calls the second function, which
> just loops:
>
> ndelay_loops(delay_150ns)
> ndelay_loops(delay_500ns)
>
> That way we don't pay the loop-calculation overhead at the point of
> use.
>
> The ndelay() function will still be available.
>
> I've attached the patch for you to take a look at, in case you want
> to comment on anything.

Something similar to this is done in EmuTOS, in case you haven't noticed.

I think that, unless you really expect MiNT to run natively on modern
PCs, aiming for nanosecond-level accuracy is a bit optimistic.

The EmuTOS approach is to calculate the number of loops for a
one-millisecond delay during EmuTOS initialisation. Individual drivers
multiply (rarely) or divide that value as required during their own
initialisation. If the MiNT mdelay() function is accurate, you could
use the same approach.

The key thing is to make the actual delay loop itself as repeatable as
possible. EmuTOS does a subq.l on a register, which should be
relatively immune to caching and pipeline variability.

Of course, mdelay() may already do all this; I haven't looked, so
apologies if I'm preaching to the converted.

Roger
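
For concreteness, here is a minimal C sketch of the scheme both mails
describe, assuming a calibration value named loops_per_ms that the kernel
would measure once at boot (in the spirit of the EmuTOS millisecond
calibration). getloops4ns() and ndelay_loops() are the names from the
proposal above; loops_per_ms, its placeholder value and the function
bodies are illustrative, not the actual FreeMiNT or EmuTOS code.

/* Measured once during kernel initialisation: delay-loop iterations
 * per millisecond on this machine. The constant here is only a
 * placeholder for whatever the boot-time calibration finds. */
static unsigned long loops_per_ms = 2500;

/* Convert a nanosecond request into a loop count. Called once at
 * driver-load time, so the multiply and divide are not paid on every
 * delay. Fine for the small values this is meant for; very long
 * delays would overflow a 32-bit unsigned long and belong in
 * mdelay() anyway. */
static unsigned long
getloops4ns(unsigned long ns)
{
	/* 1 ms = 1000000 ns; round up so we never delay too little. */
	return (loops_per_ms * ns + 999999UL) / 1000000UL;
}

/* The delay itself: a bare count-down with no calculation at all.
 * The volatile counter keeps the compiler from removing the loop;
 * on real m68k hardware this would be a short assembly loop (e.g. a
 * subq.l/bne pair) so that its timing stays repeatable. */
static void
ndelay_loops(unsigned long loops)
{
	volatile unsigned long n = loops;

	while (n--)
		;
}

A driver would then call delay_150ns = getloops4ns(150) once at load
time and ndelay_loops(delay_150ns) wherever the pause is actually
needed, exactly as in the quoted mail.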