On 07.10.2010 14:11, Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> On 07.10.2010 13:00, Gilles Chanteperdrix wrote:
>>> Jan Kiszka wrote:
>>>>> Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq)?
>>>>> Disabling an irq line should be pretty fast on non-x86. Besides, if
>>>>> the original code used spin_lock_bh, it may be because the critical
>>>>> section is expected to be lengthy, so irqsave/irqrestore could cause
>>>>> high latencies in the rest of the system.
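
For reference, such a pairing could look roughly like this (hypothetical
helper names taken from your proposal, not part of today's RTDM API; it
assumes the masked line is the only interrupt context contending for the
lock):

static inline void rtdm_lock_get_disable_irq(rtdm_lock_t *lock,
                                             rtdm_irq_t *irq_handle)
{
        /* Mask only this device's line instead of all local interrupts. */
        rtdm_irq_disable(irq_handle);
        /* Serialize against other CPUs; local IRQs stay enabled. */
        rtdm_lock_get(lock);
}

static inline void rtdm_lock_put_enable_irq(rtdm_lock_t *lock,
                                            rtdm_irq_t *irq_handle)
{
        rtdm_lock_put(lock);
        rtdm_irq_enable(irq_handle);
}

A real implementation would additionally have to keep the holder from
being preempted while the lock is held, which is what the rest of this
thread is about.
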
>>>> Generally speaking, disabling preemption can cause high latencies as
>>>> well, since most critical paths include a task switch. However,
>>>> anything that is more light-weight than irqsave/restore and has
>>>> compatible semantics would still be welcome. It just requires some
>>>> relevant performance and/or latency gain to justify complications in
>>>> the locking model.
>>>>
>>>> We actually have two or so cases in RTnet where drivers contain long
>>>> locked code paths (on the order of a few hundred microseconds). In
>>>> those cases, disabling the IRQ line without any preemption lock is the
>>>> only way to keep system latencies within reasonable limits. But those
>>>> are still corner cases.
>>> The interrupt latency also matters in some cases (which is why, for
>>> instance, Xenomai now switches contexts with irqs enabled on ARM and
>>> powerpc; this does not improve scheduling latency, only irq latency).
>>> This new kind of spinlocking would be a solution to replace
>>> spin_lock_bh.
>>
>> We still have no BH in Xenomai to protect against (and I don't think we
>> want them). So every porting case still needs careful analysis to
>> translate from the Linux model to RTDM. What would be possible with a
>> preemption-disabling spin_lock is inter-task synchronization without IRQ
>> disabling - a kind of light-weight mutex (wrt. its worst case, i.e.
>> contention).
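
A minimal sketch of what I mean, assuming the nucleus'
xnpod_lock_sched()/xnpod_unlock_sched() pair may be used from driver
code (the rtdm_lock_*_preempt names are made up):

static inline void rtdm_lock_get_preempt(rtdm_lock_t *lock)
{
        xnpod_lock_sched();     /* no Xenomai task can preempt us now */
        rtdm_lock_get(lock);    /* serialize against other CPUs */
}

static inline void rtdm_lock_put_preempt(rtdm_lock_t *lock)
{
        rtdm_lock_put(lock);
        xnpod_unlock_sched();
}

IRQ handlers must never take such a lock - local interrupts remain
enabled, so this would deadlock on the same CPU. That restriction is
precisely what makes it cheap.
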
>
> I was rather thinking that, when porting a Linux driver to RTDM, the BH
> code is migrated to the irq handler. So the spinlock would provide
> mutual exclusion between the task code and the irq handler.
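
That is already expressible today, but only with the irqsave variant on
the task side. Something like this (struct demo_dev and the demo_* names
are made up for illustration):

static int demo_irq_handler(rtdm_irq_t *irq_handle)
{
        struct demo_dev *dev = rtdm_irq_get_arg(irq_handle, struct demo_dev);

        rtdm_lock_get(&dev->lock);      /* IRQs are already off here */
        /* ... the former BH work ... */
        rtdm_lock_put(&dev->lock);

        return RTDM_IRQ_HANDLED;
}

static void demo_task_path(struct demo_dev *dev)
{
        rtdm_lockctx_t ctx;

        rtdm_lock_get_irqsave(&dev->lock, ctx);
        /* ... critical section shared with the handler ... */
        rtdm_lock_put_irqrestore(&dev->lock, ctx);
}

The proposed primitives would let demo_task_path mask only the device's
line instead of all local interrupts.
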
>
>>
>>> I suspect the drivers which will cause long critical sections are the
>>> ones where the packets need to be copied using PIO, instead of DMA.
>>> Such drivers still exist, especially in the non-x86 world.
>>
>> Right, but not all of them have excessive access times. And we had
>> better address the expensive cases with IRQ threads (an overdue to-do
>> for RTDM) so that normal mutexes can be used.
>
> You really do not want irq threads on non-x86 machines: they turn irq
> latencies into kernel-space thread scheduling latencies.

On at least some ARM variants, I bet. In any case, the drivers that would
want to make use of this suffer from high latencies anyway. If reading a
larger Ethernet frame takes hundreds of microseconds, you don't want IRQ
context for this job (writing frames is only half of the problem).
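
Until RTDM grows native IRQ threads, a driver can already hand the copy
off to a service task along these lines (a sketch only, error handling
omitted, demo_* names made up):

static rtdm_task_t demo_rx_task;
static rtdm_event_t demo_rx_event;

static int demo_irq_handler(rtdm_irq_t *irq_handle)
{
        /* Only acknowledge the device here, defer the long PIO copy. */
        rtdm_event_signal(&demo_rx_event);
        return RTDM_IRQ_HANDLED;
}

static void demo_rx_task_proc(void *arg)
{
        /* Loop until the event is destroyed or waiting fails. */
        while (rtdm_event_wait(&demo_rx_event) == 0)
                demo_copy_frames(arg);  /* long PIO copy, now preemptible */
}

/* during driver init: */
rtdm_event_init(&demo_rx_event, 0);
rtdm_task_init(&demo_rx_task, "demo-rx", demo_rx_task_proc, dev,
               RTDM_TASK_HIGHEST_PRIORITY, 0);

The copy then runs at a defined task priority instead of blocking all
interrupts or all tasks on that CPU.
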
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux