Jan Kiszka wrote:
>> Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq)?
>> Disabling an irq line should be pretty fast on non-x86; besides, if the
>> original code used spin_lock_bh, it may be because the critical
>> section is expected to be lengthy, so irqsave/irqrestore may cause high
>> latencies in the rest of the system.
>
> Generally speaking, disabling preemption can cause high latencies as
> well, since most critical paths include a task switch. However, anything
> that is more light-weight than irqsave/restore and has compatible
> semantics would still be welcome. It just requires some relevant
> performance and/or latency gain to justify complications in the locking
> model.
>
> We actually have two or so cases in RTnet where drivers contain long
> locked code paths (on the order of a few hundred microseconds). In those
> cases, disabling the IRQ line without any preemption lock is the only
> way to keep system latencies in reasonable limits. But those are still
> corner cases.
The interrupt latency also matters in some cases (which is why, for
instance, Xenomai now switches contexts with irqs enabled on ARM and
powerpc; this does not improve scheduling latency, only irq latency).
This new kind of spinlocking would be a solution to replace
spin_lock_bh.
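
Just to illustrate, here is a rough sketch of what such a bundled
primitive could look like, built from the existing rtdm_irq_disable/
rtdm_irq_enable and rtdm_lock_get/rtdm_lock_put services. The
_disable_irq/_enable_irq names are only a proposal, nothing like this
exists in RTDM today, and whether the lock may be held without a
preemption lock is exactly the open question above:

#include <rtdm/rtdm_driver.h>

/* Sketch only: take the lock with just the device's IRQ line masked,
 * instead of masking all local interrupts. */
static inline void rtdm_lock_get_disable_irq(rtdm_lock_t *lock,
					     rtdm_irq_t *irq_handle)
{
	rtdm_irq_disable(irq_handle);	/* mask this device's line only */
	rtdm_lock_get(lock);		/* serialize against other CPUs */
}

static inline void rtdm_lock_put_enable_irq(rtdm_lock_t *lock,
					    rtdm_irq_t *irq_handle)
{
	rtdm_lock_put(lock);
	rtdm_irq_enable(irq_handle);	/* unmask once the section ends */
}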
I suspect the drivers which will cause long critical sections are the
ones where the packets need to be copied using PIO, instead of DMA.
Such drivers still exist, especially in the non-x86 world.
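
For such a PIO driver, a hypothetical receive path using the sketch
above could look like this (demo_priv, data_port and the copy loop are
made up for the example):

struct demo_priv {
	rtdm_lock_t lock;
	rtdm_irq_t irq_handle;
	void __iomem *data_port;
};

static void demo_rx_pio(struct demo_priv *priv, u16 *buf, size_t words)
{
	size_t i;

	rtdm_lock_get_disable_irq(&priv->lock, &priv->irq_handle);

	/* Long PIO copy, possibly hundreds of microseconds on a slow
	 * bus; only this device's line is masked, so other interrupts
	 * keep being serviced on the rest of the system. */
	for (i = 0; i < words; i++)
		buf[i] = ioread16(priv->data_port);

	rtdm_lock_put_enable_irq(&priv->lock, &priv->irq_handle);
}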
--
Gilles.