rtnet-developers Mailing List for RTnet - Real-Time Networking for Linux (Page 9)
Message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2004 | | | | | | | | | | 9 | 12 | 6 |
| 2005 | 13 | 1 | 2 | | 7 | 13 | 6 | 15 | 1 | 3 | 2 | 11 |
| 2006 | 2 | | 13 | | 6 | 7 | 8 | 13 | 28 | 5 | 17 | 5 |
| 2007 | 2 | 18 | 22 | 5 | 4 | 2 | 5 | 22 | | 3 | 4 | 2 |
| 2008 | | 3 | 15 | 7 | 2 | 18 | 19 | 6 | 7 | 2 | 3 | 1 |
| 2009 | 8 | 2 | | 3 | 4 | 2 | 7 | | 2 | 8 | 16 | 16 |
| 2010 | 5 | 2 | | 16 | | | 1 | 1 | 8 | 20 | | 10 |
| 2011 | 15 | 33 | 5 | 8 | 5 | | | 2 | 21 | 21 | 12 | 7 |
| 2012 | 2 | 1 | | | | | 1 | 2 | 5 | | | 2 |
| 2013 | | | | | 3 | 7 | | | | 1 | | 1 |
| 2014 | | 3 | 5 | | 1 | 1 | 1 | | 1 | 2 | 8 | |
| 2015 | | | 4 | | | | | | | | | |
| 2016 | 2 | | | | | | | 1 | | | | |
| 2018 | | | | 1 | | | | | | | 1 | |
| 2019 | | | | | 1 | | | | | | | |
From: Anders B. <and...@co...> - 2010-10-20 15:33:27
|
This patch, together with http://rtnet.git.sourceforge.net/git/gitweb.cgi?p=rtnet/rtnet;a=commit;h=fc574959b590e001357fcfb8bfb2853571c0ec14 and http://rtnet.git.sourceforge.net/git/gitweb.cgi?p=rtnet/rtnet;a=commit;h=a914a3ed980a74d9e97a3c438b10eebd2ca6c2ec, has made it possible to compile rtnet-0.9.12 with linux-2.6.35.7, xenomai-2.5.5.2, and the latest ipipe. Regards /Anders |
|
From: Edward H. <qua...@op...> - 2010-10-08 19:33:52
|
Jan/Gilles, Thanks for the lively debate on this. The insights you provided on interrupt handling are invaluable. Apparently the Broadcom NetXtreme Gigabit dual NICs were developed as a great convenience for Linux user-space functions, as they provide significant flexibility in terms of wake-on-LAN, NWay configuration, inter-NIC load balancing, etc. Unfortunately, in order to provide such features, the entire driver is populated with functions for managing the soft interrupts that implement them. More significant is that these paired NICs share buffers, so the softirqs are even propagated into all send and receive functions. So, microsurgery is not an easy option. Moreover, managing these soft interrupts appears to generate too much latency for a real-time application. So, I have come to the conclusion that the dual NetXtreme NICs (built into this HP motherboard) are not a good candidate for transporting high-frequency UDP traffic, as features inherent in their architecture significantly degrade latency. If you feel this assessment is incorrect, please don't hesitate to chime in. RSVP. In lieu of this, I have decided to build an rtnet device driver for the Intel PRO/1000 gigabit 82574 PCIe-based board. This will require an upgrade to the existing e1000e driver. The board is inexpensive ($30.00) and the existing Linux driver looks easier to convert to rtnet. In addition, there is already a good example in the current rtnet experimental e1000e driver. I hope to complete the driver within the next week. I will provide an update when the testing is completed. Best Regards, Ed On Thu, 2010-10-07 at 14:33 +0200, Jan Kiszka wrote: > Am 07.10.2010 14:11, Gilles Chanteperdrix wrote: > > Jan Kiszka wrote: > >> Am 07.10.2010 13:00, Gilles Chanteperdrix wrote: > >>> Jan Kiszka wrote: > >>>>> Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq) ? > >>>>> Disabling an irq line should be pretty fast on non-x86, besides, if the > >>>>> original code used spin_lock_bh, it is maybe because the critical > >>>>> section is expected to be lengthy, so irqsave/irqrestore may cause high > >>>>> latencies in the rest of the system. > >>>> Generally spoken, disabling preemption can cause high latencies as well > >>>> as most critical paths include a task switch. However, anything that is > >>>> more light-weight than irqsave/restore and has compatible semantics > >>>> would still be welcome. It just requires some relevant performance > >>>> and/or latency gain to justify complications in the locking model. > >>>> > >>>> We actually have two or so cases in RTnet where drivers contain long > >>>> locked code paths (orders of a few hundred microseconds). In those > >>>> cases, disabling the IRQ line without any preemption lock is the only > >>>> way to keep system latencies in reasonable limits. But those are still > >>>> corner cases. > >>> The interrupt latency also matters in some cases (which is why, for > >>> instance, Xenomai switches contexts with irqs on ARM and powerpc now, > >>> this does not improve scheduling latency, only irq latency). This new > >>> kind of spinlocking would be a solution to replace spinlock_bh. > >> > >> We still have no BH in Xenomai to protect against (and I don't think we > >> want them). So every porting case still needs careful analysis to > >> translate from the Linux model to RTDM. What would be possible with a > >> preemption-disabling spin_lock is inter-task synchronization without IRQ > >> disabling - kind of light-weight mutexes (/wrt their worst case, ie. > >> contention). > > > > I was rather thinking that when porting a linux driver to rtdm, the BH > > code was migrated to the irq handler. So, the spinlock would allow to > > have mutual exclusion between the task code and the irq handler. > > > >> > >>> I suspect the drivers which will cause long critical sections are the > >>> ones where the packets needs to be copied using PIO, instead of DMA. > >>> Such drivers still exist, especially in the non-x86 word. > >> > >> Right, but not all have excessive access times. And we should better > >> address the expensive cases with IRQ threads (which is an overdue to-do > >> for RTDM) so that normal mutexes can be used. > > > > You really do not want irq threads on non-x86 machines: they turn irq > > latencies into kernel-space threads scheduling latencies. > > At least some ARM-variants, I bet. Anyway, the drivers that would want > to make use of this will suffer from high latencies anyway. If reading a > larger Ethernet frame takes hundreds of micros, you don't want IRQ > context for this job (writing frames is only half of the problem). > > Jan > |
|
From: Jan K. <jan...@si...> - 2010-10-07 12:33:16
|
Am 07.10.2010 14:11, Gilles Chanteperdrix wrote: > Jan Kiszka wrote: >> Am 07.10.2010 13:00, Gilles Chanteperdrix wrote: >>> Jan Kiszka wrote: >>>>> Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq) ? >>>>> Disabling an irq line should be pretty fast on non-x86, besides, if the >>>>> original code used spin_lock_bh, it is maybe because the critical >>>>> section is expected to be lengthy, so irqsave/irqrestore may cause high >>>>> latencies in the rest of the system. >>>> Generally spoken, disabling preemption can cause high latencies as well >>>> as most critical paths include a task switch. However, anything that is >>>> more light-weight than irqsave/restore and has compatible semantics >>>> would still be welcome. It just requires some relevant performance >>>> and/or latency gain to justify complications in the locking model. >>>> >>>> We actually have two or so cases in RTnet where drivers contain long >>>> locked code paths (orders of a few hundred microseconds). In those >>>> cases, disabling the IRQ line without any preemption lock is the only >>>> way to keep system latencies in reasonable limits. But those are still >>>> corner cases. >>> The interrupt latency also matters in some cases (which is why, for >>> instance, Xenomai switches contexts with irqs on ARM and powerpc now, >>> this does not improve scheduling latency, only irq latency). This new >>> kind of spinlocking would be a solution to replace spinlock_bh. >> >> We still have no BH in Xenomai to protect against (and I don't think we >> want them). So every porting case still needs careful analysis to >> translate from the Linux model to RTDM. What would be possible with a >> preemption-disabling spin_lock is inter-task synchronization without IRQ >> disabling - kind of light-weight mutexes (/wrt their worst case, ie. >> contention). > > I was rather thinking that when porting a linux driver to rtdm, the BH > code was migrated to the irq handler. So, the spinlock would allow to > have mutual exclusion between the task code and the irq handler. > >> >>> I suspect the drivers which will cause long critical sections are the >>> ones where the packets needs to be copied using PIO, instead of DMA. >>> Such drivers still exist, especially in the non-x86 word. >> >> Right, but not all have excessive access times. And we should better >> address the expensive cases with IRQ threads (which is an overdue to-do >> for RTDM) so that normal mutexes can be used. > > You really do not want irq threads on non-x86 machines: they turn irq > latencies into kernel-space threads scheduling latencies. At least some ARM-variants, I bet. Anyway, the drivers that would want to make use of this will suffer from high latencies anyway. If reading a larger Ethernet frame takes hundreds of micros, you don't want IRQ context for this job (writing frames is only half of the problem). Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux |
|
From: Gilles C. <gil...@xe...> - 2010-10-07 12:12:12
|
Jan Kiszka wrote: > Am 07.10.2010 13:00, Gilles Chanteperdrix wrote: >> Jan Kiszka wrote: >>>> Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq) ? >>>> Disabling an irq line should be pretty fast on non-x86, besides, if the >>>> original code used spin_lock_bh, it is maybe because the critical >>>> section is expected to be lengthy, so irqsave/irqrestore may cause high >>>> latencies in the rest of the system. >>> Generally spoken, disabling preemption can cause high latencies as well >>> as most critical paths include a task switch. However, anything that is >>> more light-weight than irqsave/restore and has compatible semantics >>> would still be welcome. It just requires some relevant performance >>> and/or latency gain to justify complications in the locking model. >>> >>> We actually have two or so cases in RTnet where drivers contain long >>> locked code paths (orders of a few hundred microseconds). In those >>> cases, disabling the IRQ line without any preemption lock is the only >>> way to keep system latencies in reasonable limits. But those are still >>> corner cases. >> The interrupt latency also matters in some cases (which is why, for >> instance, Xenomai switches contexts with irqs on ARM and powerpc now, >> this does not improve scheduling latency, only irq latency). This new >> kind of spinlocking would be a solution to replace spinlock_bh. > > We still have no BH in Xenomai to protect against (and I don't think we > want them). So every porting case still needs careful analysis to > translate from the Linux model to RTDM. What would be possible with a > preemption-disabling spin_lock is inter-task synchronization without IRQ > disabling - kind of light-weight mutexes (/wrt their worst case, ie. > contention). I was rather thinking that when porting a linux driver to rtdm, the BH code was migrated to the irq handler. So, the spinlock would provide mutual exclusion between the task code and the irq handler. > >> I suspect the drivers which will cause long critical sections are the >> ones where the packets needs to be copied using PIO, instead of DMA. >> Such drivers still exist, especially in the non-x86 word. > > Right, but not all have excessive access times. And we should better > address the expensive cases with IRQ threads (which is an overdue to-do > for RTDM) so that normal mutexes can be used. You really do not want irq threads on non-x86 machines: they turn irq latencies into kernel-space thread scheduling latencies. -- Gilles. |
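The pattern Gilles describes — the Linux BH work folded into the real-time IRQ handler, with a single rtdm_lock providing the task-vs-handler exclusion — looks roughly like the sketch below. The driver structure and function names (mydrv_*) are hypothetical; the locking and IRQ services are the standard RTDM ones discussed in this thread:

```c
#include <rtdm/rtdm_driver.h>

struct mydrv_priv {                 /* hypothetical driver state */
    rtdm_lock_t  lock;              /* replaces the spinlock + BH exclusion */
    rtdm_irq_t   irq_handle;
    unsigned int rx_pending;        /* state formerly touched from the BH */
};

/* RT IRQ handler: also runs the work that lived in the Linux BH path.
 * In handler context the plain lock form is the usual RTDM choice. */
static int mydrv_interrupt(rtdm_irq_t *irq_handle)
{
    struct mydrv_priv *priv =
        rtdm_irq_get_arg(irq_handle, struct mydrv_priv);

    rtdm_lock_get(&priv->lock);
    priv->rx_pending++;             /* ex-BH work */
    rtdm_lock_put(&priv->lock);

    return RTDM_IRQ_HANDLED;
}

/* Task context: must mask IRQs while holding the lock, or the handler
 * above could spin against us on the same CPU. */
static void mydrv_update_state(struct mydrv_priv *priv)
{
    rtdm_lockctx_t ctx;

    rtdm_lock_get_irqsave(&priv->lock, ctx);
    priv->rx_pending = 0;           /* section shared with the handler */
    rtdm_lock_put_irqrestore(&priv->lock, ctx);
}
```

The asymmetry is the point: the handler side stays cheap, while the task side pays for IRQ masking only around the (ideally short) shared section.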
|
From: Jan K. <jan...@si...> - 2010-10-07 11:18:56
|
Am 07.10.2010 13:00, Gilles Chanteperdrix wrote: > Jan Kiszka wrote: >>> Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq) ? >>> Disabling an irq line should be pretty fast on non-x86, besides, if the >>> original code used spin_lock_bh, it is maybe because the critical >>> section is expected to be lengthy, so irqsave/irqrestore may cause high >>> latencies in the rest of the system. >> >> Generally spoken, disabling preemption can cause high latencies as well >> as most critical paths include a task switch. However, anything that is >> more light-weight than irqsave/restore and has compatible semantics >> would still be welcome. It just requires some relevant performance >> and/or latency gain to justify complications in the locking model. >> >> We actually have two or so cases in RTnet where drivers contain long >> locked code paths (orders of a few hundred microseconds). In those >> cases, disabling the IRQ line without any preemption lock is the only >> way to keep system latencies in reasonable limits. But those are still >> corner cases. > > The interrupt latency also matters in some cases (which is why, for > instance, Xenomai switches contexts with irqs on ARM and powerpc now, > this does not improve scheduling latency, only irq latency). This new > kind of spinlocking would be a solution to replace spinlock_bh. We still have no BH in Xenomai to protect against (and I don't think we want them). So every porting case still needs careful analysis to translate from the Linux model to RTDM. What would be possible with a preemption-disabling spin_lock is inter-task synchronization without IRQ disabling - kind of light-weight mutexes (/wrt their worst case, ie. contention). > > I suspect the drivers which will cause long critical sections are the > ones where the packets needs to be copied using PIO, instead of DMA. > Such drivers still exist, especially in the non-x86 word. Right, but not all have excessive access times. And we should better address the expensive cases with IRQ threads (which is an overdue to-do for RTDM) so that normal mutexes can be used. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux |
|
From: Gilles C. <gil...@xe...> - 2010-10-07 11:01:12
|
Jan Kiszka wrote: >> Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq) ? >> Disabling an irq line should be pretty fast on non-x86, besides, if the >> original code used spin_lock_bh, it is maybe because the critical >> section is expected to be lengthy, so irqsave/irqrestore may cause high >> latencies in the rest of the system. > > Generally spoken, disabling preemption can cause high latencies as well > as most critical paths include a task switch. However, anything that is > more light-weight than irqsave/restore and has compatible semantics > would still be welcome. It just requires some relevant performance > and/or latency gain to justify complications in the locking model. > > We actually have two or so cases in RTnet where drivers contain long > locked code paths (orders of a few hundred microseconds). In those > cases, disabling the IRQ line without any preemption lock is the only > way to keep system latencies in reasonable limits. But those are still > corner cases. The interrupt latency also matters in some cases (which is why, for instance, Xenomai now switches contexts with irqs enabled on ARM and powerpc; this does not improve scheduling latency, only irq latency). This new kind of spinlocking would be a solution to replace spinlock_bh. I suspect the drivers which will cause long critical sections are the ones where the packets need to be copied using PIO, instead of DMA. Such drivers still exist, especially in the non-x86 world. -- Gilles. |
|
From: Jan K. <jan...@si...> - 2010-10-07 10:52:39
|
Am 07.10.2010 11:41, Gilles Chanteperdrix wrote: > Jan Kiszka wrote: >> Am 07.10.2010 11:27, Gilles Chanteperdrix wrote: >>> Jan Kiszka wrote: >>>> Am 07.10.2010 11:13, Gilles Chanteperdrix wrote: >>>>> Jan Kiszka wrote: >>>>>> Am 04.10.2010 22:13, Edward Hoffman wrote: >>>>>>> I am attempting to port the Broadcom tg3 NetXtreme linux driver to >>>>>>> rtnet. >>>>>> Your port will be welcome! >>>>>> >>>>>>> The Broadcom driver internally utilizes >>>>>>> spin_lock_bh/spin_unlock_bh to disable/enable software interrupts. >>>>>>> There is no rtdm lock equivalent. >>>>>>> I believe that software interrupts are used by the driver for the >>>>>>> following reasons: >>>>>>> 1. To allow for simultaneous DMA read write operations >>>>>>> 2. To use shared local variables in a critical section of the driver. >>>>>>> 3. To receive changes to settings variables in realtime from the >>>>>>> userpace Nextreme utilities. (not sure about this one). >>>>>>> >>>>>>> What RTDM lock functions should I use to substitute for >>>>>>> spin_lock_bh/spin_unlock_bh? Should I substitute >>>>>>> rtdm_lock_get_irqsave/rtdm_lock_put_irqrestore? Or does rtdm need to ne >>>>>>> extended for softirqs used internally by the driver? >>>>>>> >>>>>>> I looked through the other device drivers and could not detect the >>>>>>> correct substitute. >>>>>> As RTDM provides no abstraction of bottom-halves, there are also no >>>>>> specific locking services to disable them. You have to use IRQ-disabling >>>>>> services unless you are in IRQ context. >>>>> Maybe if you want to only exclude the driver's irq, you may only disable >>>>> the irq line at IC level, with rtdm_irq_disable? >>>> Usually discouraged due to potentially slow line-disabling services and >>>> the breakage this will cause when the line is shared. >>> Why would this cause breakage when the line is shared? >> >> If the disabling context is preemptible (which is normally the >> motivation when avoiding irqsave), the line may remain off until that >> task happens to be scheduled in again. That can easily cause >> prio-inversions to other devices sharing the line. > > Yes, we should obviously disable preemption when doing that. What would > probably work is: > - disabling preemption > - disabling the irq line > - locking the spinlock. > > Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq) ? > Disabling an irq line should be pretty fast on non-x86, besides, if the > original code used spin_lock_bh, it is maybe because the critical > section is expected to be lengthy, so irqsave/irqrestore may cause high > latencies in the rest of the system. Generally speaking, disabling preemption can cause high latencies as well, as most critical paths include a task switch. However, anything that is more light-weight than irqsave/restore and has compatible semantics would still be welcome. It just requires some relevant performance and/or latency gain to justify complications in the locking model. We actually have two or so cases in RTnet where drivers contain long locked code paths (on the order of a few hundred microseconds). In those cases, disabling the IRQ line without any preemption lock is the only way to keep system latencies in reasonable limits. But those are still corner cases. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux |
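The corner case Jan mentions — a locked path of a few hundred microseconds, e.g. a PIO frame copy — would look roughly like this sketch: only the device's own line is masked, so the rest of the system keeps its interrupt latency. All names are hypothetical; rtdm_irq_disable()/rtdm_irq_enable() are the RTDM services referenced in this thread, and, per the exchange further down, this is only safe when the line is not shared:

```c
#include <rtdm/rtdm_driver.h>

struct mydrv_priv {
    rtdm_irq_t irq_handle;          /* registered via rtdm_irq_request() */
    /* ... */
};

/* Hypothetical long PIO transfer from the device FIFO. */
static void mydrv_copy_frame_pio(struct mydrv_priv *priv)
{
    /* lengthy register-by-register copy — details omitted */
}

static void mydrv_pio_receive(struct mydrv_priv *priv)
{
    /* Mask only this device's line at the interrupt controller,
     * leaving all other system interrupts live during the copy. */
    rtdm_irq_disable(&priv->irq_handle);

    mydrv_copy_frame_pio(priv);     /* long section, handler excluded */

    rtdm_irq_enable(&priv->irq_handle);
}
```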
|
From: Gilles C. <gil...@xe...> - 2010-10-07 09:41:35
|
Jan Kiszka wrote: > Am 07.10.2010 11:27, Gilles Chanteperdrix wrote: >> Jan Kiszka wrote: >>> Am 07.10.2010 11:13, Gilles Chanteperdrix wrote: >>>> Jan Kiszka wrote: >>>>> Am 04.10.2010 22:13, Edward Hoffman wrote: >>>>>> I am attempting to port the Broadcom tg3 NetXtreme linux driver to >>>>>> rtnet. >>>>> Your port will be welcome! >>>>> >>>>>> The Broadcom driver internally utilizes >>>>>> spin_lock_bh/spin_unlock_bh to disable/enable software interrupts. >>>>>> There is no rtdm lock equivalent. >>>>>> I believe that software interrupts are used by the driver for the >>>>>> following reasons: >>>>>> 1. To allow for simultaneous DMA read write operations >>>>>> 2. To use shared local variables in a critical section of the driver. >>>>>> 3. To receive changes to settings variables in realtime from the >>>>>> userpace Nextreme utilities. (not sure about this one). >>>>>> >>>>>> What RTDM lock functions should I use to substitute for >>>>>> spin_lock_bh/spin_unlock_bh? Should I substitute >>>>>> rtdm_lock_get_irqsave/rtdm_lock_put_irqrestore? Or does rtdm need to ne >>>>>> extended for softirqs used internally by the driver? >>>>>> >>>>>> I looked through the other device drivers and could not detect the >>>>>> correct substitute. >>>>> As RTDM provides no abstraction of bottom-halves, there are also no >>>>> specific locking services to disable them. You have to use IRQ-disabling >>>>> services unless you are in IRQ context. >>>> Maybe if you want to only exclude the driver's irq, you may only disable >>>> the irq line at IC level, with rtdm_irq_disable? >>> Usually discouraged due to potentially slow line-disabling services and >>> the breakage this will cause when the line is shared. >> Why would this cause breakage when the line is shared? > > If the disabling context is preemptible (which is normally the > motivation when avoiding irqsave), the line may remain off until that > task happens to be scheduled in again. That can easily cause > prio-inversions to other devices sharing the line. Yes, we should obviously disable preemption when doing that. What would probably work is: - disabling preemption - disabling the irq line - locking the spinlock. Maybe that could be bundled in rtdm_lock_get_disable_irq(lock, irq) ? Disabling an irq line should be pretty fast on non-x86, besides, if the original code used spin_lock_bh, it is maybe because the critical section is expected to be lengthy, so irqsave/irqrestore may cause high latencies in the rest of the system. -- Gilles. |
|
From: Jan K. <jan...@si...> - 2010-10-07 09:33:51
|
Am 07.10.2010 11:27, Gilles Chanteperdrix wrote: > Jan Kiszka wrote: >> Am 07.10.2010 11:13, Gilles Chanteperdrix wrote: >>> Jan Kiszka wrote: >>>> Am 04.10.2010 22:13, Edward Hoffman wrote: >>>>> I am attempting to port the Broadcom tg3 NetXtreme linux driver to >>>>> rtnet. >>>> Your port will be welcome! >>>> >>>>> The Broadcom driver internally utilizes >>>>> spin_lock_bh/spin_unlock_bh to disable/enable software interrupts. >>>>> There is no rtdm lock equivalent. >>>>> I believe that software interrupts are used by the driver for the >>>>> following reasons: >>>>> 1. To allow for simultaneous DMA read write operations >>>>> 2. To use shared local variables in a critical section of the driver. >>>>> 3. To receive changes to settings variables in realtime from the >>>>> userpace Nextreme utilities. (not sure about this one). >>>>> >>>>> What RTDM lock functions should I use to substitute for >>>>> spin_lock_bh/spin_unlock_bh? Should I substitute >>>>> rtdm_lock_get_irqsave/rtdm_lock_put_irqrestore? Or does rtdm need to ne >>>>> extended for softirqs used internally by the driver? >>>>> >>>>> I looked through the other device drivers and could not detect the >>>>> correct substitute. >>>> As RTDM provides no abstraction of bottom-halves, there are also no >>>> specific locking services to disable them. You have to use IRQ-disabling >>>> services unless you are in IRQ context. >>> Maybe if you want to only exclude the driver's irq, you may only disable >>> the irq line at IC level, with rtdm_irq_disable? >> >> Usually discouraged due to potentially slow line-disabling services and >> the breakage this will cause when the line is shared. > > Why would this cause breakage when the line is shared? If the disabling context is preemptible (which is normally the motivation when avoiding irqsave), the line may remain off until that task happens to be scheduled in again. That can easily cause prio-inversions to other devices sharing the line. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux |
|
From: Gilles C. <gil...@xe...> - 2010-10-07 09:27:48
|
Jan Kiszka wrote: > Am 07.10.2010 11:13, Gilles Chanteperdrix wrote: >> Jan Kiszka wrote: >>> Am 04.10.2010 22:13, Edward Hoffman wrote: >>>> I am attempting to port the Broadcom tg3 NetXtreme linux driver to >>>> rtnet. >>> Your port will be welcome! >>> >>>> The Broadcom driver internally utilizes >>>> spin_lock_bh/spin_unlock_bh to disable/enable software interrupts. >>>> There is no rtdm lock equivalent. >>>> I believe that software interrupts are used by the driver for the >>>> following reasons: >>>> 1. To allow for simultaneous DMA read write operations >>>> 2. To use shared local variables in a critical section of the driver. >>>> 3. To receive changes to settings variables in realtime from the >>>> userpace Nextreme utilities. (not sure about this one). >>>> >>>> What RTDM lock functions should I use to substitute for >>>> spin_lock_bh/spin_unlock_bh? Should I substitute >>>> rtdm_lock_get_irqsave/rtdm_lock_put_irqrestore? Or does rtdm need to ne >>>> extended for softirqs used internally by the driver? >>>> >>>> I looked through the other device drivers and could not detect the >>>> correct substitute. >>> As RTDM provides no abstraction of bottom-halves, there are also no >>> specific locking services to disable them. You have to use IRQ-disabling >>> services unless you are in IRQ context. >> Maybe if you want to only exclude the driver's irq, you may only disable >> the irq line at IC level, with rtdm_irq_disable? > > Usually discouraged due to potentially slow line-disabling services and > the breakage this will cause when the line is shared. Why would this cause breakage when the line is shared? -- Gilles. |
|
From: Gilles C. <gil...@xe...> - 2010-10-07 09:27:22
|
Jan Kiszka wrote: > Am 04.10.2010 22:13, Edward Hoffman wrote: >> I am attempting to port the Broadcom tg3 NetXtreme linux driver to >> rtnet. > > Your port will be welcome! > >> The Broadcom driver internally utilizes >> spin_lock_bh/spin_unlock_bh to disable/enable software interrupts. >> There is no rtdm lock equivalent. >> I believe that software interrupts are used by the driver for the >> following reasons: >> 1. To allow for simultaneous DMA read write operations >> 2. To use shared local variables in a critical section of the driver. >> 3. To receive changes to settings variables in realtime from the >> userpace Nextreme utilities. (not sure about this one). >> >> What RTDM lock functions should I use to substitute for >> spin_lock_bh/spin_unlock_bh? Should I substitute >> rtdm_lock_get_irqsave/rtdm_lock_put_irqrestore? Or does rtdm need to ne >> extended for softirqs used internally by the driver? >> >> I looked through the other device drivers and could not detect the >> correct substitute. > > As RTDM provides no abstraction of bottom-halves, there are also no > specific locking services to disable them. You have to use IRQ-disabling > services unless you are in IRQ context. Maybe if you want to only exclude the driver's irq, you may only disable the irq line at IC level, with rtdm_irq_disable? -- Gilles. |
|
From: Jan K. <jan...@si...> - 2010-10-07 09:22:46
|
Am 07.10.2010 11:13, Gilles Chanteperdrix wrote: > Jan Kiszka wrote: >> Am 04.10.2010 22:13, Edward Hoffman wrote: >>> I am attempting to port the Broadcom tg3 NetXtreme linux driver to >>> rtnet. >> >> Your port will be welcome! >> >>> The Broadcom driver internally utilizes >>> spin_lock_bh/spin_unlock_bh to disable/enable software interrupts. >>> There is no rtdm lock equivalent. >>> I believe that software interrupts are used by the driver for the >>> following reasons: >>> 1. To allow for simultaneous DMA read write operations >>> 2. To use shared local variables in a critical section of the driver. >>> 3. To receive changes to settings variables in realtime from the >>> userpace Nextreme utilities. (not sure about this one). >>> >>> What RTDM lock functions should I use to substitute for >>> spin_lock_bh/spin_unlock_bh? Should I substitute >>> rtdm_lock_get_irqsave/rtdm_lock_put_irqrestore? Or does rtdm need to ne >>> extended for softirqs used internally by the driver? >>> >>> I looked through the other device drivers and could not detect the >>> correct substitute. >> >> As RTDM provides no abstraction of bottom-halves, there are also no >> specific locking services to disable them. You have to use IRQ-disabling >> services unless you are in IRQ context. > > Maybe if you want to only exclude the driver's irq, you may only disable > the irq line at IC level, with rtdm_irq_disable? Usually discouraged due to potentially slow line-disabling services and the breakage this will cause when the line is shared. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux |
|
From: Jan K. <jan...@si...> - 2010-10-07 09:09:36
|
Am 04.10.2010 22:13, Edward Hoffman wrote: > I am attempting to port the Broadcom tg3 NetXtreme linux driver to > rtnet. Your port will be welcome! > The Broadcom driver internally utilizes > spin_lock_bh/spin_unlock_bh to disable/enable software interrupts. > There is no rtdm lock equivalent. > I believe that software interrupts are used by the driver for the > following reasons: > 1. To allow for simultaneous DMA read write operations > 2. To use shared local variables in a critical section of the driver. > 3. To receive changes to settings variables in realtime from the > userpace Nextreme utilities. (not sure about this one). > > What RTDM lock functions should I use to substitute for > spin_lock_bh/spin_unlock_bh? Should I substitute > rtdm_lock_get_irqsave/rtdm_lock_put_irqrestore? Or does rtdm need to ne > extended for softirqs used internally by the driver? > > I looked through the other device drivers and could not detect the > correct substitute. As RTDM provides no abstraction of bottom-halves, there are also no specific locking services to disable them. You have to use IRQ-disabling services unless you are in IRQ context. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux |
|
From: Edward H. <qua...@op...> - 2010-10-04 20:13:19
|
I am attempting to port the Broadcom tg3 NetXtreme linux driver to rtnet. The Broadcom driver internally utilizes spin_lock_bh/spin_unlock_bh to disable/enable software interrupts. There is no rtdm lock equivalent. I believe that software interrupts are used by the driver for the following reasons: 1. To allow for simultaneous DMA read/write operations 2. To use shared local variables in a critical section of the driver. 3. To receive changes to settings variables in realtime from the userspace NetXtreme utilities (not sure about this one). What RTDM lock functions should I use to substitute for spin_lock_bh/spin_unlock_bh? Should I substitute rtdm_lock_get_irqsave/rtdm_lock_put_irqrestore? Or does rtdm need to be extended for softirqs used internally by the driver? I looked through the other device drivers and could not detect the correct substitute. RSVP. Thanks. Best Regards, Ed |
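The substitution Jan recommends above amounts to the following mechanical conversion — a sketch with hypothetical tg3-style names, not the actual tg3 code:

```c
#include <linux/types.h>
#include <rtdm/rtdm_driver.h>

struct tg3_rt_priv {
    rtdm_lock_t lock;   /* was: spinlock_t lock, taken via spin_lock_bh() */
    u32 mac_mode;       /* hypothetical shared state */
};

/* Linux original:
 *     spin_lock_bh(&tp->lock);
 *     tp->mac_mode = mode;
 *     spin_unlock_bh(&tp->lock);
 *
 * RTDM has no bottom halves, so any softirq-side work moves into the
 * RT IRQ handler and the _bh lock becomes an IRQ-masking lock
 * (initialize it once with rtdm_lock_init()): */
static void tg3_rt_set_mac_mode(struct tg3_rt_priv *tp, u32 mode)
{
    rtdm_lockctx_t ctx;

    rtdm_lock_get_irqsave(&tp->lock, ctx);
    tp->mac_mode = mode;            /* also touched from the IRQ handler */
    rtdm_lock_put_irqrestore(&tp->lock, ctx);
}
```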
|
From: Glen W. <gl...@je...> - 2010-09-29 10:47:39
|
One other thing EtherNet/IP does is implement Ethernet QoS, which makes its packets have higher priority. This only works when going through layer-3 switches and cards. -- Glen Wernersbach President & CTO Jetsoft Development Co 629 Old St. Rt. 74 - Suite 210 Cincinnati Ohio 45244 Custom Programming Web Site: www.JetsoftDev.com Retail Product Web Site: www.ScanHelp.com Phone: 513-528-6660 Fax: 513-528-3470 On Sep 28, 2010, at 4:39 PM, Jan Kiszka <jan...@we...> wrote: > Am 28.09.2010 16:22, Glen Wernersbach wrote: >> Hi Guys, >> >> Woule you see any reason while we could not the open source Ethernet IP >> project on top of RTNet? >> >> Basically this builds on Linux stack but I want to run my stuff inside of it >> real time. For that I need a real time driver for the ethenet that emulates >> linux. >> >> Ethernet IP project: >> https://sourceforge.net/projects/opener/ > > I cannot asses what you could gain this way (do Ethernet IP devices > provide real-time qualities via UDP/TCP?), but porting that stack over > RTnet's UDP/TCP services should be feasible. The portability layer of > opener looks rather thin and should be well covered by Xenomai's POSIX > skin. However, our TCP implementation is still fresh, so you /may/ run > into some of it current limitations. If you find such quirks, please let > us know. > > Jan > |
|
From: Glen W. <gl...@je...> - 2010-09-28 21:56:23
|
In EtherNet/IP, TCP is used for non-critical data such as parameters and to set what the contents of the UDP packets are. I can only control the EtherNet/IP implementation on my device. Thanks, Glen -- Glen Wernersbach President & CTO Jetsoft Development Co 629 Old St. Rt. 74 - Suite 210 Cincinnati Ohio 45244 Custom Programming Web Site: www.JetsoftDev.com Retail Product Web Site: www.ScanHelp.com Phone: 513-528-6660 Fax: 513-528-3470 On Sep 28, 2010, at 5:35 PM, Jan Kiszka <jan...@we...> wrote: > Am 28.09.2010 23:17, Glen Wernersbach wrote: >> Jan, >> >> My basic concern was to keep the data flowing from the NIC to my program as >> close to hard real time as possible. I know doing Linux calls can break hard >> real time especially when I want to receive a UDP packet every 1ms. > > Is that UDP packet part of your control loop? Why do you need the whole > opener stack then? Or is there more? > >> >> The only thing Ethernet IP really gives you real time is optional support >> for the Ethernet 1588 timing standard so you know what the jitter was in >> receiving the packet over the Ethernet line. The 1588 support is not yet >> implemented in opener. The rest is just a protocol for handling critical >> data over TCP and UDP. Unfortunately, I have a device that needs to work on >> Allen Bradley controllers which only support this real time protocol. >> >> All I want to do is replace the Linux calls in networkhandler.c >> (socket,bind,listen...) with RTAI real time calls. > > Additionally, you will have to configure the socket resources, namely > how many buffers each socket should pre-allocate for its real-time > operation (that's the major difference to normal networking stack that > do on-demand allocations). See the examples. > > Jan > |
|
From: Jan K. <jan...@we...> - 2010-09-28 21:44:50
|
Am 28.09.2010 23:17, Glen Wernersbach wrote: > Jan, > > My basic concern was to keep the data flowing from the NIC to my program as > close to hard real time as possible. I know doing Linux calls can break hard > real time especially when I want to receive a UDP packet every 1ms. Is that UDP packet part of your control loop? Why do you need the whole opener stack then? Or is there more? > > The only thing Ethernet IP really gives you real time is optional support > for the Ethernet 1588 timing standard so you know what the jitter was in > receiving the packet over the Ethernet line. The 1588 support is not yet > implemented in opener. The rest is just a protocol for handling critical > data over TCP and UDP. Unfortunately, I have a device that needs to work on > Allen Bradley controllers which only support this real time protocol. > > All I want to do is replace the Linux calls in networkhandler.c > (socket,bind,listen...) with RTAI real time calls. Additionally, you will have to configure the socket resources, namely how many buffers each socket should pre-allocate for its real-time operation (that's the major difference from normal networking stacks, which do on-demand allocations). See the examples. Jan |
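A sketch of the pre-allocation Jan describes, assuming the RTNET_RTIOC_EXTPOOL ioctl from RTnet's rtnet.h (the mechanism RTnet's bundled examples use); the helper name and sizes are made up:

```c
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rtnet.h>              /* RTNET_RTIOC_* — RTnet userspace header */

/* Open an RTnet UDP socket and pre-allocate extra rtskbs for it. Under
 * the Xenomai POSIX skin, socket()/bind()/recvfrom() are transparently
 * routed to RTnet once the rtudp module and friends are loaded. */
static int open_rt_udp(uint16_t port, unsigned int extra_bufs)
{
    struct sockaddr_in addr;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    if (sock < 0)
        return -1;

    /* RTnet does not allocate buffers on demand in the real-time path,
     * so extend this socket's pool up front. */
    if (ioctl(sock, RTNET_RTIOC_EXTPOOL, &extra_bufs) < 0) {
        close(sock);
        return -1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);

    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}
```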
|
From: Glen W. <gl...@je...> - 2010-09-28 21:18:01
|
Jan, My basic concern was to keep the data flowing from the NIC to my program as close to hard real time as possible. I know doing Linux calls can break hard real time, especially when I want to receive a UDP packet every 1 ms. The only real-time feature EtherNet/IP really gives you is optional support for the IEEE 1588 timing standard, so you know what the jitter was in receiving the packet over the Ethernet line. The 1588 support is not yet implemented in opener. The rest is just a protocol for handling critical data over TCP and UDP. Unfortunately, I have a device that needs to work on Allen-Bradley controllers, which only support this real-time protocol. All I want to do is replace the Linux calls in networkhandler.c (socket, bind, listen...) with RTAI real-time calls. Glen On 9/28/10 4:39 PM, "Jan Kiszka" <jan...@we...> wrote: > Am 28.09.2010 16:22, Glen Wernersbach wrote: >> Hi Guys, >> >> Woule you see any reason while we could not the open source Ethernet IP >> project on top of RTNet? >> >> Basically this builds on Linux stack but I want to run my stuff inside of it >> real time. For that I need a real time driver for the ethenet that emulates >> linux. >> >> Ethernet IP project: >> https://sourceforge.net/projects/opener/ > > I cannot asses what you could gain this way (do Ethernet IP devices > provide real-time qualities via UDP/TCP?), but porting that stack over > RTnet's UDP/TCP services should be feasible. The portability layer of > opener looks rather thin and should be well covered by Xenomai's POSIX > skin. However, our TCP implementation is still fresh, so you /may/ run > into some of it current limitations. If you find such quirks, please let > us know. > > Jan > -- Glen Wernersbach President & CTO Jetsoft Development Co. 629 Old St Rt. 74 Suite 210 Cincinnati, Oh 45244 Custom Programming Web Site: www.jetsoftdev.com Retail Products Web Site: www.scanhelp.com Phone: 513-528-6660 Fax: 513-528-3470 ---- "Support Dyslexia Research" |
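The receive side of the 1 ms cycle Glen describes could look like the sketch below, built on the socket setup shown earlier. RTNET_RTIOC_TIMEOUT — a per-socket nanosecond timeout on blocking receives, again taken from RTnet's examples rather than confirmed against this exact setup — turns a missed cycle into an error return instead of an unbounded block:

```c
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <rtnet.h>              /* RTNET_RTIOC_TIMEOUT — see note above */

/* Cyclic receiver, intended to run in a SCHED_FIFO thread created
 * through the Xenomai POSIX skin; pacing comes from the 1 ms sender. */
static void rx_loop(int sock)
{
    int64_t timeout_ns = 2000000;   /* 2 ms: longer than one cycle */
    char frame[1500];

    ioctl(sock, RTNET_RTIOC_TIMEOUT, &timeout_ns);

    for (;;) {
        ssize_t len = recvfrom(sock, frame, sizeof(frame), 0, NULL, NULL);

        if (len < 0)
            break;                  /* deadline miss or socket error */

        /* process the cyclic payload within the remaining cycle budget */
    }
}
```

Creating the thread with SCHED_FIFO attributes under the POSIX skin keeps the recvfrom() wakeup in primary (hard real-time) mode, which is the property Glen is after.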
|
From: Jan K. <jan...@we...> - 2010-09-28 20:39:54
|
Am 28.09.2010 16:22, Glen Wernersbach wrote: > Hi Guys, > > Woule you see any reason while we could not the open source Ethernet IP > project on top of RTNet? > > Basically this builds on Linux stack but I want to run my stuff inside of it > real time. For that I need a real time driver for the ethenet that emulates > linux. > > Ethernet IP project: > https://sourceforge.net/projects/opener/ I cannot assess what you could gain this way (do Ethernet IP devices provide real-time qualities via UDP/TCP?), but porting that stack over RTnet's UDP/TCP services should be feasible. The portability layer of opener looks rather thin and should be well covered by Xenomai's POSIX skin. However, our TCP implementation is still fresh, so you /may/ run into some of its current limitations. If you find such quirks, please let us know. Jan |
|
From: Jan K. <jan...@we...> - 2010-09-28 20:14:18
|
Am 19.09.2010 05:52, Edward Hoffman wrote: > In the current version rtnet-0.9.12 following the standard build > instructions on xenomai site (RTnet: Installation and Testing) > the firewire driver stack does not get built I have tried using bothe > the config files generated by menuconfig and manually specifying the > parameters. Are there any updated instructions or fixes? I'm afraid the problem already starts with the rtfirewire stack itself. Last time I tried to build it (more than a year ago), it failed on recent kernels with recent Xenomai. Jan |
|
From: Glen W. <gl...@je...> - 2010-09-28 14:23:31
|
Hi Guys, Would you see any reason why we could not run the open source EtherNet/IP project on top of RTnet? Basically this builds on the Linux stack, but I want to run my stuff inside of it in real time. For that I need a real-time driver for the Ethernet that emulates Linux. Ethernet IP project: https://sourceforge.net/projects/opener/ Glen -- Glen Wernersbach President & CTO Jetsoft Development Co. 629 Old St Rt. 74 Suite 210 Cincinnati, Oh 45244 Custom Programming Web Site: www.jetsoftdev.com Retail Products Web Site: www.scanhelp.com Phone: 513-528-6660 Fax: 513-528-3470 ---- "Support Dyslexia Research" |
|
From: Edward H. <qua...@op...> - 2010-09-19 03:52:07
|
In the current version, rtnet-0.9.12, following the standard build instructions on the xenomai site (RTnet: Installation and Testing), the firewire driver stack does not get built. I have tried using both the config files generated by menuconfig and manually specifying the parameters. Are there any updated instructions or fixes? RSVP. Ed |
|
From: Jan K. <jan...@we...> - 2010-08-02 00:56:19
|
Am 22.07.2010 15:01, Adam wrote: > Would it be possible to implement the SERCOS III protocol > (http://en.wikipedia.org/wiki/SERCOS_III) in RTnet? There is a non-OS specific, > LGPL licensed SERCOS III library available called COSEMA > (http://sourceforge.net/projects/cosema/) originally developed by Bosch that > might be useful. There is also http://dwarf.members.selfnet.de/hpblog/?p=35. I'm not very deep into the Sercos III programming model, and the code you cited looks like one has to learn the spec by heart first to understand it. However, I have a vague feeling RTnet could not contribute much to a Sercos stack. Hans-Peter, please correct me if I'm wrong. Jan |
|
From: Adam <vo...@gm...> - 2010-07-22 19:05:17
|
Would it be possible to implement the SERCOS III protocol (http://en.wikipedia.org/wiki/SERCOS_III) in RTnet? There is a non-OS specific, LGPL licensed SERCOS III library available called COSEMA (http://sourceforge.net/projects/cosema/) originally developed by Bosch that might be useful. Thanks. Adam |
|
From: Jan K. <jan...@si...> - 2010-04-27 10:24:20
|
Sebastian Smolorz wrote: > Jan Kiszka wrote: >> Sebastian Smolorz wrote: >>> Hi Jan, >>> >>> something went wrong with the latest stable release - e.g. in the >>> documentation directory, where README.tcp, README.eth1394 and >>> README.gigabit are missing, see >>> >>> http://www.rts.uni-hannover.de/rtnet/lxr/source/Documentation/ >> Yep, lacking dist rules. The first one is new to the club, the other two >> were broken before. >> >> Still, nothing should be missing that is relevant for building the code. >> Did you find more discrepancies? > > No I didn't. But the missing README.tcp could leave people helpless who > download 0.9.12 and want to try out RT-TCP. Yes, true. Maybe I will roll out an update soon, also to enable 2.6.34 kernel support that is now available in git. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux |
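The "lacking dist rules" Jan mentions would boil down to listing the README files for distribution in the Documentation makefile — a sketch only, assuming that directory is automake-managed like the rest of the tree:

```makefile
# Documentation/Makefile.am (sketch — file list taken from this thread)
EXTRA_DIST = README.tcp README.eth1394 README.gigabit
```

With the files in EXTRA_DIST, `make dist` would include them in the release tarball even though nothing builds from them.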