rtnet-users Mailing List for RTnet - Real-Time Networking for Linux
From: Gilles C. <gil...@xe...> - 2014-11-21 14:19:31
|
On Fri, Nov 21, 2014 at 12:22:18PM +0100, Jan Kiszka wrote: > On 2014-11-21 12:10, Gilles Chanteperdrix wrote: > > On Fri, Nov 21, 2014 at 11:32:30AM +0100, Jan Kiszka wrote: > >> On 2014-11-21 10:08, Gilles Chanteperdrix wrote: > >>> Hi, > >>> > >>> as some of you may have seen, I have sent the pull request to > >>> Philippe for the integration of RTnet in Xenomai 3, those of you who > >>> want will be able to test it when Xenomai 3 next release candidate > >>> is released. What will be in that release candidate is what was in > >>> RTnet git, patched up to adapt it to the Linux and Xenomai API > >>> changes that broke its compilation, and to add the bits and pieces > >>> to be able to run some tests on the hardware I have (namely, the > >>> 8139too, r8169, at91_ether and macb drivers). > >> > >> For x86, support for e1000e and possibly also igb will be very helpful. > >> Those NICs dominate the market now, specifically due to their on-chip / > >> on-board presence. I think those two drives are in better shape than the > >> legacy ones. > > > > When compiling Xenomai 3 with RTnet on x86 (32 and 64), I enabled > > all the PCI drivers. So, they all compile as far as I know. I have > > not tested them of course, but since the rtnet stack has not changed > > (yet), they should continue to work if they were in a working state. > > Ah, ok, perfect. > > > > > > >>> - the NAPI will be implemented. The NAPI thread will run with the > >>> priority of the highest priority waiting thread, and will call > >>> rt_stack_deliver, in order not to increase the RX latency compared > >>> to the current solution. This will make porting recent drivers easy > >>> and has the additional advantage of irq handlers not creating large > >>> irq masking sections like in the current design, which even borders > >>> priority inversion if the bulk of the received packets is for RTmac > >>> vnics or rtnetproxy. > >> > >> Will be an interesting feature. However, whenever you share a link for > >> RT and non-RT packets, you do have an unavoidable prio-inversion risk. > >> The way to mitigate this is non-RT traffic control. > > > > This can only made on the sending side (which the solution I propose > > for tx queuing should somewhat achieve, BTW). On the receive side, > > the best we can do is get the NAPI thread to inherit the priority of > > the highest priority waiter, and reschedule as soon as it delivers a > > packet to a thread. So, the NAPI thread should not delay high > > priority tasks not currently waiting for a packet if there is no > > higher priority thread waiting for a packet. > > > >> > >>> > >>> Maybe the RTnet drivers contain some modifications for low latency > >>> like reducing the TX ring length, but I believe putting these > >>> modifications in the drivers is not a so good idea: > >> > >> The key modifications that were needed for drivers so far are: > >> - TX/RX time stamping support > >> - disabling of IRQ coalescing features for low-latency signaling > >> - support for pre-mapping rings (to avoid triggering IOMMU paths > >> during runtime) > > > > Ok, thanks. Could you point me at a drivers which have these > > modifications? Particularly the third one, because I believe > > mainline has RX/TX time stamping as well, and the NAPI should handle > > the second one. > > Regarding time stamping: yes, mainline may have this now. You just need > to check when it happens. The original philosophy was to have that as > close to triggering the TX / receiving an RX event as feasible. 
> > Regarding IRQ coalescing: This is a hardware feature that aims at > optimizing throughput and lowering CPU load. As such, it works against > the goal of lowering individual event latencies. However, maybe such > things are controllable in standard drivers today, just not in a > consistent way. If you mean RX irq coalescing, I guess NAPI is just the same thing done in software. For TX, I realized recently that the "TX complete" irq, was kind of bad for performances, this results in way to many interrupts when trying and generating traffic at full bandwidth. Especially since we are not that much in a hurry for reclaiming the TX packets memory, this does not impact latency visible to application, so coalescing TX irq is desirable as well. But this all seem to be doable in software. I will check that. > And regarding premapping: Just look that rt_igb or rt_e1000e. See e.g. > c05d7bbfba. This change is mandatory for RT, at least for dual-domain > setups. Ok, I will have a look, thanks. -- Gilles. |
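A rough sketch of the "TX-complete coalescing in software" idea raised above. Every identifier here (tx_ring, hw_completed_count, free_packet) is hypothetical and not RTnet or driver API; the point is simply that descriptor reclaim can be deferred and batched instead of being driven by one interrupt per transmitted packet:

#define TX_RING_SIZE   64
#define TX_RECLAIM_LOW 16   /* reclaim only when free slots drop below this */

struct tx_ring {
    unsigned int head;            /* next slot to fill                      */
    unsigned int tail;            /* oldest slot not yet reclaimed          */
    void *buf[TX_RING_SIZE];      /* per-slot packet memory to free later   */
};

/* hypothetical hooks, standing in for real driver/ring accessors */
extern unsigned int hw_completed_count(void);  /* descriptors finished since last call */
extern void free_packet(void *buf);            /* release packet memory                */

static unsigned int tx_free_slots(const struct tx_ring *r)
{
    return TX_RING_SIZE - (r->head - r->tail);
}

static void tx_reclaim(struct tx_ring *r)
{
    unsigned int n = hw_completed_count();

    while (n--) {
        free_packet(r->buf[r->tail % TX_RING_SIZE]);
        r->tail++;
    }
}

/* Instead of reclaiming from a per-packet "TX complete" interrupt, completed
 * buffers are reclaimed lazily whenever the ring runs low, which batches the
 * work without delaying the packets themselves. */
static int tx_enqueue(struct tx_ring *r, void *packet)
{
    if (tx_free_slots(r) < TX_RECLAIM_LOW)
        tx_reclaim(r);

    if (tx_free_slots(r) == 0)
        return -1;                /* ring genuinely full; caller must back off */

    r->buf[r->head % TX_RING_SIZE] = packet;
    r->head++;
    /* ...fill the hardware descriptor and ring the doorbell here... */
    return 0;
}

Since reclaiming TX buffers is not latency-critical for the application, deferring it this way reduces interrupt load without affecting the latency that receivers observe.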
From: Jan K. <jan...@we...> - 2014-11-21 11:22:30
|
On 2014-11-21 12:10, Gilles Chanteperdrix wrote: > On Fri, Nov 21, 2014 at 11:32:30AM +0100, Jan Kiszka wrote: >> On 2014-11-21 10:08, Gilles Chanteperdrix wrote: >>> Hi, >>> >>> as some of you may have seen, I have sent the pull request to >>> Philippe for the integration of RTnet in Xenomai 3, those of you who >>> want will be able to test it when Xenomai 3 next release candidate >>> is released. What will be in that release candidate is what was in >>> RTnet git, patched up to adapt it to the Linux and Xenomai API >>> changes that broke its compilation, and to add the bits and pieces >>> to be able to run some tests on the hardware I have (namely, the >>> 8139too, r8169, at91_ether and macb drivers). >> >> For x86, support for e1000e and possibly also igb will be very helpful. >> Those NICs dominate the market now, specifically due to their on-chip / >> on-board presence. I think those two drives are in better shape than the >> legacy ones. > > When compiling Xenomai 3 with RTnet on x86 (32 and 64), I enabled > all the PCI drivers. So, they all compile as far as I know. I have > not tested them of course, but since the rtnet stack has not changed > (yet), they should continue to work if they were in a working state. Ah, ok, perfect. > > >>> - the NAPI will be implemented. The NAPI thread will run with the >>> priority of the highest priority waiting thread, and will call >>> rt_stack_deliver, in order not to increase the RX latency compared >>> to the current solution. This will make porting recent drivers easy >>> and has the additional advantage of irq handlers not creating large >>> irq masking sections like in the current design, which even borders >>> priority inversion if the bulk of the received packets is for RTmac >>> vnics or rtnetproxy. >> >> Will be an interesting feature. However, whenever you share a link for >> RT and non-RT packets, you do have an unavoidable prio-inversion risk. >> The way to mitigate this is non-RT traffic control. > > This can only made on the sending side (which the solution I propose > for tx queuing should somewhat achieve, BTW). On the receive side, > the best we can do is get the NAPI thread to inherit the priority of > the highest priority waiter, and reschedule as soon as it delivers a > packet to a thread. So, the NAPI thread should not delay high > priority tasks not currently waiting for a packet if there is no > higher priority thread waiting for a packet. > >> >>> >>> Maybe the RTnet drivers contain some modifications for low latency >>> like reducing the TX ring length, but I believe putting these >>> modifications in the drivers is not a so good idea: >> >> The key modifications that were needed for drivers so far are: >> - TX/RX time stamping support >> - disabling of IRQ coalescing features for low-latency signaling >> - support for pre-mapping rings (to avoid triggering IOMMU paths >> during runtime) > > Ok, thanks. Could you point me at a drivers which have these > modifications? Particularly the third one, because I believe > mainline has RX/TX time stamping as well, and the NAPI should handle > the second one. Regarding time stamping: yes, mainline may have this now. You just need to check when it happens. The original philosophy was to have that as close to triggering the TX / receiving an RX event as feasible. Regarding IRQ coalescing: This is a hardware feature that aims at optimizing throughput and lowering CPU load. As such, it works against the goal of lowering individual event latencies. 
However, maybe such things are controllable in standard drivers today, just not in a consistent way. And regarding premapping: Just look that rt_igb or rt_e1000e. See e.g. c05d7bbfba. This change is mandatory for RT, at least for dual-domain setups. Jan |
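To illustrate the "as close to the RX event as feasible" point about time stamping: the stamp is taken in the receive interrupt/poll handler itself, before the buffer travels through any queue, so that queueing and scheduling delays do not leak into the recorded time. In the sketch below only rtdm_clock_read() is real RTDM API; rx_buf and deliver_to_stack() are placeholders:

#include <rtdm/rtdm_driver.h>   /* rtdm_clock_read(), nanosecs_abs_t */

struct rx_buf {
    unsigned char  data[1518];
    unsigned int   len;
    nanosecs_abs_t time_stamp;   /* analogous to the rtskb time stamp field */
};

extern void deliver_to_stack(struct rx_buf *buf);   /* placeholder hand-off */

static void rx_handler(struct rx_buf *buf)
{
    /* stamp first, before any copying or list manipulation */
    buf->time_stamp = rtdm_clock_read();

    deliver_to_stack(buf);
}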
From: Gilles C. <gil...@xe...> - 2014-11-21 11:10:26
|
On Fri, Nov 21, 2014 at 11:32:30AM +0100, Jan Kiszka wrote: > On 2014-11-21 10:08, Gilles Chanteperdrix wrote: > > Hi, > > > > as some of you may have seen, I have sent the pull request to > > Philippe for the integration of RTnet in Xenomai 3, those of you who > > want will be able to test it when Xenomai 3 next release candidate > > is released. What will be in that release candidate is what was in > > RTnet git, patched up to adapt it to the Linux and Xenomai API > > changes that broke its compilation, and to add the bits and pieces > > to be able to run some tests on the hardware I have (namely, the > > 8139too, r8169, at91_ether and macb drivers). > > For x86, support for e1000e and possibly also igb will be very helpful. > Those NICs dominate the market now, specifically due to their on-chip / > on-board presence. I think those two drives are in better shape than the > legacy ones. When compiling Xenomai 3 with RTnet on x86 (32 and 64), I enabled all the PCI drivers. So, they all compile as far as I know. I have not tested them of course, but since the rtnet stack has not changed (yet), they should continue to work if they were in a working state. > > - the NAPI will be implemented. The NAPI thread will run with the > > priority of the highest priority waiting thread, and will call > > rt_stack_deliver, in order not to increase the RX latency compared > > to the current solution. This will make porting recent drivers easy > > and has the additional advantage of irq handlers not creating large > > irq masking sections like in the current design, which even borders > > priority inversion if the bulk of the received packets is for RTmac > > vnics or rtnetproxy. > > Will be an interesting feature. However, whenever you share a link for > RT and non-RT packets, you do have an unavoidable prio-inversion risk. > The way to mitigate this is non-RT traffic control. This can only made on the sending side (which the solution I propose for tx queuing should somewhat achieve, BTW). On the receive side, the best we can do is get the NAPI thread to inherit the priority of the highest priority waiter, and reschedule as soon as it delivers a packet to a thread. So, the NAPI thread should not delay high priority tasks not currently waiting for a packet if there is no higher priority thread waiting for a packet. > > > > > Maybe the RTnet drivers contain some modifications for low latency > > like reducing the TX ring length, but I believe putting these > > modifications in the drivers is not a so good idea: > > The key modifications that were needed for drivers so far are: > - TX/RX time stamping support > - disabling of IRQ coalescing features for low-latency signaling > - support for pre-mapping rings (to avoid triggering IOMMU paths > during runtime) Ok, thanks. Could you point me at a drivers which have these modifications? Particularly the third one, because I believe mainline has RX/TX time stamping as well, and the NAPI should handle the second one. > > > - it means that the modification is to be made in each driver, and > > needs to be maintained; > > - in the particular case of the TX ring, reducing the number of > > queued packets in hardware is not a really precise way to guarantee > > a bounded latency, because the latency is proportional to the number > > of bytes queued, not on the number of packets, and ethernet packets > > have widely varying sizes. > > > > So, I propose to try and implement these modifications in the rtdev > > stack. 
For the case of the TX ring, implementing a TX queue which > > keeps track of how much time are worth the number of bytes currently > > queued in hardware (including preamble and inter packets gap) and > > stop queuing when this value reaches a configurable threshold (the > > maximum TX latency), and restart the queue when this value reaches > > the interrupt latency. The queue will be ordered by sender priority, > > so that when a high priority thread queues a packet, the packet will > > never take more than the threshold to reach the wire, even if a low > > priority or the RTmac vnic drivers or rtnetproxy are using the full > > bandwidth (which will remain possible if the threshold is higher > > than the current average irq latency). > > > > Please feel free to send any reaction to this mail. > > Thanks for picking up this task, it is very welcome and should help to > keep this project alive! I ran out of time to take care of it, as people > surely noticed. But it was always my plan as well to hand this over to > the stronger Xenomai community when the code is in acceptable state. > It's great to see this happening now! Thanks, as you know I have used RTnet a long time ago on an (ARM IXP465 based) ISP mixed IPBX/DSL internet gateway when working for this ISP, and I always wanted to contribute back the ideas I had implemented or could not even implement at the time. -- Gilles. |
From: Jan K. <jan...@we...> - 2014-11-21 10:32:44
|
On 2014-11-21 10:08, Gilles Chanteperdrix wrote: > Hi, > > as some of you may have seen, I have sent the pull request to > Philippe for the integration of RTnet in Xenomai 3, those of you who > want will be able to test it when Xenomai 3 next release candidate > is released. What will be in that release candidate is what was in > RTnet git, patched up to adapt it to the Linux and Xenomai API > changes that broke its compilation, and to add the bits and pieces > to be able to run some tests on the hardware I have (namely, the > 8139too, r8169, at91_ether and macb drivers). For x86, support for e1000e and possibly also igb will be very helpful. Those NICs dominate the market now, specifically due to their on-chip / on-board presence. I think those two drives are in better shape than the legacy ones. > > Doing that job, I have found a few things to fix or to do > differently, and am now able to say how I would like to do it. This > mail is cross-posted on the RTnet mailing list, because I believe it > somewhat concerns RTnet users and developers. > > We can divide RTnet into three parts: > - the tools > - the stack > - the drivers > > The tools are in good shape, I do not see any reason to fix them, > except maybe for the rtcfg stuff where an ioctl uses filp_open in > kernel-space which I find a bit useless, but this is not important, > and can wait. > > The stack is not in bad shape either. The code needs a bit of > refreshing, for instance using Linux list and hash lists instead of > open coding linked list. But this will not cause any API change, so > can wait too. > > The drivers, on the other hand, are a bit more worrying. They are > based on seriously outdated versions of the corresponding Linux > drivers, with support for much less hardware, and probably missing > some bug fixes. So, putting the drivers into a better shape and > making it easy to track mainline changes will be my first priority. > > With the 3 drivers I had to adapt, I tested the two possible > approach to updating a driver. For r8169 I hand picked in Linux > driver what was needed to support the chip I have (a 8136) and > adapted it to the code difference between the two versions of the > driver. For at91_ether and macb drivers, I restarted from the > current state of the mainline Linux. The second approach is easier > and more satisfying than the first, because at least you can get all > the mainline fixes, but to my taste, not easy enough. > > I believe the first order of business is to change the rtdev API so > that this port is easy, and in fact, if possible has a first > automated step. So, I intend to change rtdm and rtdev API to reach > this goal: > - rt_stack_connect, rt_rtdev_connect, rt_stack_mark will be removed > and replaced with mechanisms integrated in the rtdev API > - rtdm_irq_request/rtdm_irq_free will be modified to have almost the > same interface as request_irq, in particular removing the need for a > driver to provide the storage for the rtdm_irq_t handles, which will > then become unneeded. This makes life easier for drivers which > register multiple irq handlers. > - rtdm_devm_irq_request or rtdev_irq_request will be introduced with > a behaviour similar to devm_request_irq, that is automatic > unregistration of the irqs handlers at device destruction. Because > automatically adding the missing calls to rtdm_irq_free to a code > using devm_request_irq is hard. Sounds good. > - the NAPI will be implemented. 
The NAPI thread will run with the > priority of the highest priority waiting thread, and will call > rt_stack_deliver, in order not to increase the RX latency compared > to the current solution. This will make porting recent drivers easy > and has the additional advantage of irq handlers not creating large > irq masking sections like in the current design, which even borders > priority inversion if the bulk of the received packets is for RTmac > vnics or rtnetproxy. Will be an interesting feature. However, whenever you share a link for RT and non-RT packets, you do have an unavoidable prio-inversion risk. The way to mitigate this is non-RT traffic control. > > Maybe the RTnet drivers contain some modifications for low latency > like reducing the TX ring length, but I believe putting these > modifications in the drivers is not a so good idea: The key modifications that were needed for drivers so far are: - TX/RX time stamping support - disabling of IRQ coalescing features for low-latency signaling - support for pre-mapping rings (to avoid triggering IOMMU paths during runtime) > - it means that the modification is to be made in each driver, and > needs to be maintained; > - in the particular case of the TX ring, reducing the number of > queued packets in hardware is not a really precise way to guarantee > a bounded latency, because the latency is proportional to the number > of bytes queued, not on the number of packets, and ethernet packets > have widely varying sizes. > > So, I propose to try and implement these modifications in the rtdev > stack. For the case of the TX ring, implementing a TX queue which > keeps track of how much time are worth the number of bytes currently > queued in hardware (including preamble and inter packets gap) and > stop queuing when this value reaches a configurable threshold (the > maximum TX latency), and restart the queue when this value reaches > the interrupt latency. The queue will be ordered by sender priority, > so that when a high priority thread queues a packet, the packet will > never take more than the threshold to reach the wire, even if a low > priority or the RTmac vnic drivers or rtnetproxy are using the full > bandwidth (which will remain possible if the threshold is higher > than the current average irq latency). > > Please feel free to send any reaction to this mail. Thanks for picking up this task, it is very welcome and should help to keep this project alive! I ran out of time to take care of it, as people surely noticed. But it was always my plan as well to hand this over to the stronger Xenomai community when the code is in acceptable state. It's great to see this happening now! Jan |
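For reference, the IRQ-API convergence proposed above (and acknowledged with "Sounds good") amounts to moving from a caller-owned handle to request_irq-style registration. The first prototype below is the existing Xenomai 2.x RTDM call, the second is the mainline Linux call it would come to resemble, and the third is only an illustration of the proposed devm-style variant; it does not exist as an API:

/* Existing RTDM call (Xenomai 2.x, <rtdm/rtdm_driver.h>): the caller owns
 * and must keep the rtdm_irq_t handle alive. */
int rtdm_irq_request(rtdm_irq_t *irq_handle, unsigned int irq_no,
                     rtdm_irq_handler_t handler, unsigned long flags,
                     const char *device_name, void *arg);

/* Mainline Linux equivalent the proposal wants to resemble: no handle,
 * the (irq, dev_id) pair identifies the registration. */
int request_irq(unsigned int irq, irq_handler_t handler,
                unsigned long flags, const char *name, void *dev);

/* Purely illustrative sketch of the proposed managed variant: the handler
 * would be released automatically when the owning device goes away,
 * mirroring devm_request_irq(). This prototype is hypothetical. */
int rtdm_devm_irq_request(struct device *dev, unsigned int irq,
                          rtdm_irq_handler_t handler, unsigned long flags,
                          const char *name, void *arg);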
From: Gilles C. <gil...@xe...> - 2014-11-21 09:09:06
|
Hi, as some of you may have seen, I have sent the pull request to Philippe for the integration of RTnet in Xenomai 3, those of you who want will be able to test it when Xenomai 3 next release candidate is released. What will be in that release candidate is what was in RTnet git, patched up to adapt it to the Linux and Xenomai API changes that broke its compilation, and to add the bits and pieces to be able to run some tests on the hardware I have (namely, the 8139too, r8169, at91_ether and macb drivers). Doing that job, I have found a few things to fix or to do differently, and am now able to say how I would like to do it. This mail is cross-posted on the RTnet mailing list, because I believe it somewhat concerns RTnet users and developers. We can divide RTnet into three parts: - the tools - the stack - the drivers The tools are in good shape, I do not see any reason to fix them, except maybe for the rtcfg stuff where an ioctl uses filp_open in kernel-space which I find a bit useless, but this is not important, and can wait. The stack is not in bad shape either. The code needs a bit of refreshing, for instance using Linux list and hash lists instead of open coding linked list. But this will not cause any API change, so can wait too. The drivers, on the other hand, are a bit more worrying. They are based on seriously outdated versions of the corresponding Linux drivers, with support for much less hardware, and probably missing some bug fixes. So, putting the drivers into a better shape and making it easy to track mainline changes will be my first priority. With the 3 drivers I had to adapt, I tested the two possible approach to updating a driver. For r8169 I hand picked in Linux driver what was needed to support the chip I have (a 8136) and adapted it to the code difference between the two versions of the driver. For at91_ether and macb drivers, I restarted from the current state of the mainline Linux. The second approach is easier and more satisfying than the first, because at least you can get all the mainline fixes, but to my taste, not easy enough. I believe the first order of business is to change the rtdev API so that this port is easy, and in fact, if possible has a first automated step. So, I intend to change rtdm and rtdev API to reach this goal: - rt_stack_connect, rt_rtdev_connect, rt_stack_mark will be removed and replaced with mechanisms integrated in the rtdev API - rtdm_irq_request/rtdm_irq_free will be modified to have almost the same interface as request_irq, in particular removing the need for a driver to provide the storage for the rtdm_irq_t handles, which will then become unneeded. This makes life easier for drivers which register multiple irq handlers. - rtdm_devm_irq_request or rtdev_irq_request will be introduced with a behaviour similar to devm_request_irq, that is automatic unregistration of the irqs handlers at device destruction. Because automatically adding the missing calls to rtdm_irq_free to a code using devm_request_irq is hard. - the NAPI will be implemented. The NAPI thread will run with the priority of the highest priority waiting thread, and will call rt_stack_deliver, in order not to increase the RX latency compared to the current solution. This will make porting recent drivers easy and has the additional advantage of irq handlers not creating large irq masking sections like in the current design, which even borders priority inversion if the bulk of the received packets is for RTmac vnics or rtnetproxy. 
Maybe the RTnet drivers contain some modifications for low latency like reducing the TX ring length, but I believe putting these modifications in the drivers is not a so good idea: - it means that the modification is to be made in each driver, and needs to be maintained; - in the particular case of the TX ring, reducing the number of queued packets in hardware is not a really precise way to guarantee a bounded latency, because the latency is proportional to the number of bytes queued, not on the number of packets, and ethernet packets have widely varying sizes. So, I propose to try and implement these modifications in the rtdev stack. For the case of the TX ring, implementing a TX queue which keeps track of how much time are worth the number of bytes currently queued in hardware (including preamble and inter packets gap) and stop queuing when this value reaches a configurable threshold (the maximum TX latency), and restart the queue when this value reaches the interrupt latency. The queue will be ordered by sender priority, so that when a high priority thread queues a packet, the packet will never take more than the threshold to reach the wire, even if a low priority or the RTmac vnic drivers or rtnetproxy are using the full bandwidth (which will remain possible if the threshold is higher than the current average irq latency). Please feel free to send any reaction to this mail. Regards. -- Gilles. |
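The TX-queue idea above hinges on accounting queued bytes as wire time rather than as a packet count. A minimal sketch of that bookkeeping, with illustrative names and constants (this is not RTnet code):

#include <stdint.h>

#define PREAMBLE_SFD_BYTES   8    /* 7 bytes preamble + 1 byte SFD           */
#define FCS_BYTES            4    /* assuming frame_len excludes the FCS     */
#define IFG_BYTES           12    /* minimum inter-frame gap                 */

static uint64_t frame_wire_time_ns(unsigned int frame_len, unsigned int link_mbps)
{
    uint64_t bits = (uint64_t)(frame_len + PREAMBLE_SFD_BYTES +
                               FCS_BYTES + IFG_BYTES) * 8;
    return bits * 1000 / link_mbps;        /* nanoseconds at link_mbps Mbit/s */
}

struct tx_budget {
    uint64_t queued_ns;       /* wire time currently sitting in hardware      */
    uint64_t max_latency_ns;  /* stop queuing above this threshold            */
    uint64_t resume_ns;       /* re-open the queue once queued_ns falls below
                                 this (roughly the expected IRQ latency); the
                                 wake-up logic itself is omitted here          */
};

/* Called before handing a frame to the NIC: returns 0 if it may be queued,
 * -1 if the sender must wait until enough bytes have left the wire. */
static int tx_budget_try_queue(struct tx_budget *b, unsigned int frame_len,
                               unsigned int link_mbps)
{
    uint64_t t = frame_wire_time_ns(frame_len, link_mbps);

    if (b->queued_ns + t > b->max_latency_ns)
        return -1;
    b->queued_ns += t;
    return 0;
}

/* Called from the TX-complete path for each frame that actually left. */
static void tx_budget_complete(struct tx_budget *b, unsigned int frame_len,
                               unsigned int link_mbps)
{
    uint64_t t = frame_wire_time_ns(frame_len, link_mbps);

    b->queued_ns = (b->queued_ns > t) ? b->queued_ns - t : 0;
}

At 100 Mbit/s, for example, a full-size 1518-byte frame accounts for roughly 123 us of wire time while a minimum 64-byte frame accounts for roughly 7 us, which is why bounding the ring by packet count instead of bytes gives such a loose latency guarantee.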
From: Gilles C. <gil...@xe...> - 2014-10-14 05:09:31
|
On Tue, Oct 14, 2014 at 04:46:33AM +0000, YOGESH GARG wrote: > Hi All, > > > > As Xenomai 3 stable version is released, is there any plan by RTnet developers > to support Xenomai 3. The plan is to integrate rtnet into xenomai 3. I will do it soon. -- Gilles. |
From: YOGESH G. <yog...@sa...> - 2014-10-14 04:46:44
|
Hi All, As Xenomai 3 stable version is released, is there any plan by RTnet developers to support Xenomai 3. Regards, Yogesh Garg, HME Part, SRI-B, Bangalore (M: +919986648900) |
From: Stéphane A. <san...@fr...> - 2014-09-17 12:30:15
|
Hi, I am thinking about an implementation I need, and I am not sure which way will be best. 1/ I need to catch TCP/IP packets routed from the non-realtime NIC eth0. 2/ I encapsulate the TCP/IP frames in the final protocol and send the frame request to rteth0 (realtime). 3/ I will receive the reply on the realtime rteth0; I will then remove the encapsulation from this frame and send it back to the non-realtime net through eth0. In this scenario, will I need routing in the RTnet stack? I don't want to manage ARP messages. Do I need a tun/tap interface? ETH0 ===> encapsulating packet with an application ===> RTETH0; reply: RTETH0 => removing encapsulation => ETH0. Regards, Steph |
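One possible shape for the encapsulation step in the scenario above, assuming the rtpacket module is loaded: an AF_PACKET socket bound to rteth0 sends the encapsulated frame directly to the peer's MAC address, so no RTnet IP routes or ARP handling are needed on the realtime side. The EtherType and the helper below are made up, and the exact set of supported socket ioctls should be checked against the RTnet raw-ethernet examples:

#include <string.h>
#include <netinet/in.h>          /* htons()                               */
#include <netpacket/packet.h>    /* struct sockaddr_ll                    */
#include <net/if.h>              /* struct ifreq, IFNAMSIZ                */
#include <sys/ioctl.h>           /* SIOCGIFINDEX                          */
#include <rtnet.h>               /* rt_dev_socket(), rt_dev_sendto(), ... */

#define ENCAP_ETHERTYPE 0x9000   /* made-up EtherType for the encapsulation */

/* Hedged sketch: send one captured eth0 payload out of rteth0 as a raw
 * Ethernet frame, addressed directly to the peer's MAC. */
static int send_encapsulated(const void *payload, size_t len,
                             const unsigned char peer_mac[6])
{
    struct ifreq ifr;
    struct sockaddr_ll addr;
    int sock, ret;

    sock = rt_dev_socket(AF_PACKET, SOCK_DGRAM, htons(ENCAP_ETHERTYPE));
    if (sock < 0)
        return sock;

    /* resolve rteth0 to an interface index */
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "rteth0", IFNAMSIZ);
    ret = rt_dev_ioctl(sock, SIOCGIFINDEX, &ifr);
    if (ret < 0)
        goto out;

    memset(&addr, 0, sizeof(addr));
    addr.sll_family   = AF_PACKET;
    addr.sll_protocol = htons(ENCAP_ETHERTYPE);
    addr.sll_ifindex  = ifr.ifr_ifindex;
    addr.sll_halen    = 6;
    memcpy(addr.sll_addr, peer_mac, 6);

    /* SOCK_DGRAM packet sockets build the Ethernet header from the address */
    ret = rt_dev_sendto(sock, payload, len, 0,
                        (struct sockaddr *)&addr, sizeof(addr));
out:
    rt_dev_close(sock);
    return ret;
}

A real forwarder would of course open the socket once and keep it; whether a tun/tap-style interface is preferable instead depends on whether the non-realtime stack should see the realtime link as an ordinary network device.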
From: Long Q. <lq...@jh...> - 2014-09-10 06:08:13
|
Hi all, I was doing some timing test between RTnet and non-RT ethernet module. I tried nomac and rtmac without TDMA. The problem I come with is same for the two configurations. A socket program on RTnet keeps sending and receiving Ethernet frame, the non-RT side is just an echo program. When the program continuously sends out frames from RTnet without any interval, the socket will receive same echo frame for 3 times. But when an interval of some millisecond is inserted, one echo frame will be received, which is what I expected. rtnet, rt_e1000e, rtmac, nomac, rtipv4, rtcap, rtudp, rtpacket modules are inserted when the program is running. My RTnet version is 0.9.13 and Xenomai version is 2.6.38.? Does anyone has any idea about this? Thanks! Qian |
From: Hidde V. (E2M) <hve...@e2...> - 2014-07-01 14:18:09
|
On Mon, Jun 30, 2014 at 6:35 PM, Gilles Chanteperdrix <gil...@xe...> wrote: > On 06/30/2014 06:24 PM, Hidde Verstoep (E2M) wrote: >> Hello, >> >> I'm trying to get RTnet to work on the Beaglebone Black (BBB). >> However, I have come to a point where I don't know how to proceed. I >> think here is a good place to ask the questions I have. >> >> Installing Xenomai was quite simple using this tutorial: >> http://brunosmartins.info/xenomai-on-the-beaglebone-black-in-14-easy-steps/. >> I only deviated from these steps to ensure I compiled the the network >> driver components as modules. >> >> There are three modules for the network driver (as far as I can tell). >> Loading the non-RTnet modules goes as follows: >> $ modprobe smsc >> $ modprobe davinci_mdio >> $ modprobe ti_cpsw >> Insert the network cable, and after a while the link comes up and >> everything works fine. See dmesg output in the attached file >> ticpsw.log. >> >> My ported version for the modules (as well as the r8169 driver I >> ported earlier) can be found here: >> https://github.com/hiddeate2m/rtnet/tree/master/. This is the entire >> RTnet source tree. The specific modules can be found in >> drivers/rt_smsc.c, drivers/rt_davinci_mdio.c and drivers/ticps/. The >> modules can be build and installed as usual. >> >> Loading the modules ported to RTnet is similar: >> $ insmod modules/rtnet.ko >> $ insmod modules/rt_smsc.ko >> $ insmod modules/rt_davinci_mdio.ko >> $ insmod modules/rt_ticpsw.ko >> $ ./sbin/rtifconfig rteth0 up >> Insert the network cable. When the link comes up after a while the BBB >> freezes completely. All the dmesg output I managed to capture can be >> found in the attached file rt_ticpsw.log. >> >> One important difference between the logs is of course the WARNING >> from ipipe.c. I searched the file, but could not find out if this is >> important or not. Does anyone know? > > The warning comes from the fact that you are running an SMP kernel on a > machine with only one processor. You can avoid it by compiling without > CONFIG_SMP. Another advantage of disabling CONFIG_SMP is that you will > gain better performances. > > -- > Gilles. Thanks Gilles. Disabling CONFIG_SMP did indeed remove the warning. It also approximately halved the worst case latency. Unfortunately it did not change the behaviour of the ported network driver. The attached logs are now almost identical. Do you (or anyone else) have any tips on how to debug this issue? I would like to have a trace from the point where the BBB freezes. However, I don't know how I could get something like that. Are there any (Xenomai specific?) resources available that could help me debug this issue further? Hidde |
From: Elker C. <elk...@gm...> - 2014-07-01 09:31:42
|
Hi all, after looking at driver source, I've figured out that rt_e1000e driver do not support link detection. I've added basic ethtool support for GLINK on e1000e, see attached patch. Regards, Elker Il 25/06/2014 11.53, Elker Cavina ha scritto: > Hi > I'm using RTNet-0.9.13 with RTAI-3.9.2; my embedded pc has two e1000e > ethernet ports (so rt_e1000e is employed). > > The first port is configured as standard linux (eth1), second port > gets configured to rtnet (firstly unbinded by linux driver, then > configured as rteth0). So far I'm working correctly in rtnet side; > I'm also using rtcap and configured rteth0 on linux side and tcpdump > works great to sniff realtime packets. > > Now I want to detect if carrier is lost on rtnet port; on linux > managed ethernet port looking at /sys/class/net/eth1/carrier I can > detect link loss, however in /sys/class/net/rteth0/carrier I always > get link active even on cable disconnected. > > Digging into code I can see carrier detected and managed in > rtdev->link_state, but I can't find the link to the sys filesystem or > an ioctl call to detect link status. > > Is possible to detect link status on rt_e1000e driver? > If not, can someone point to some code to implement this? > > Thanks, > Elker |
From: Gilles C. <gil...@xe...> - 2014-06-30 16:35:17
|
On 06/30/2014 06:24 PM, Hidde Verstoep (E2M) wrote: > Hello, > > I'm trying to get RTnet to work on the Beaglebone Black (BBB). > However, I have come to a point where I don't know how to proceed. I > think here is a good place to ask the questions I have. > > Installing Xenomai was quite simple using this tutorial: > http://brunosmartins.info/xenomai-on-the-beaglebone-black-in-14-easy-steps/. > I only deviated from these steps to ensure I compiled the the network > driver components as modules. > > There are three modules for the network driver (as far as I can tell). > Loading the non-RTnet modules goes as follows: > $ modprobe smsc > $ modprobe davinci_mdio > $ modprobe ti_cpsw > Insert the network cable, and after a while the link comes up and > everything works fine. See dmesg output in the attached file > ticpsw.log. > > My ported version for the modules (as well as the r8169 driver I > ported earlier) can be found here: > https://github.com/hiddeate2m/rtnet/tree/master/. This is the entire > RTnet source tree. The specific modules can be found in > drivers/rt_smsc.c, drivers/rt_davinci_mdio.c and drivers/ticps/. The > modules can be build and installed as usual. > > Loading the modules ported to RTnet is similar: > $ insmod modules/rtnet.ko > $ insmod modules/rt_smsc.ko > $ insmod modules/rt_davinci_mdio.ko > $ insmod modules/rt_ticpsw.ko > $ ./sbin/rtifconfig rteth0 up > Insert the network cable. When the link comes up after a while the BBB > freezes completely. All the dmesg output I managed to capture can be > found in the attached file rt_ticpsw.log. > > One important difference between the logs is of course the WARNING > from ipipe.c. I searched the file, but could not find out if this is > important or not. Does anyone know? The warning comes from the fact that you are running an SMP kernel on a machine with only one processor. You can avoid it by compiling without CONFIG_SMP. Another advantage of disabling CONFIG_SMP is that you will gain better performances. -- Gilles. |
From: Hidde V. (E2M) <hve...@e2...> - 2014-06-30 16:24:47
|
Hello, I'm trying to get RTnet to work on the Beaglebone Black (BBB). However, I have come to a point where I don't know how to proceed. I think here is a good place to ask the questions I have. Installing Xenomai was quite simple using this tutorial: http://brunosmartins.info/xenomai-on-the-beaglebone-black-in-14-easy-steps/. I only deviated from these steps to ensure I compiled the the network driver components as modules. There are three modules for the network driver (as far as I can tell). Loading the non-RTnet modules goes as follows: $ modprobe smsc $ modprobe davinci_mdio $ modprobe ti_cpsw Insert the network cable, and after a while the link comes up and everything works fine. See dmesg output in the attached file ticpsw.log. My ported version for the modules (as well as the r8169 driver I ported earlier) can be found here: https://github.com/hiddeate2m/rtnet/tree/master/. This is the entire RTnet source tree. The specific modules can be found in drivers/rt_smsc.c, drivers/rt_davinci_mdio.c and drivers/ticps/. The modules can be build and installed as usual. Loading the modules ported to RTnet is similar: $ insmod modules/rtnet.ko $ insmod modules/rt_smsc.ko $ insmod modules/rt_davinci_mdio.ko $ insmod modules/rt_ticpsw.ko $ ./sbin/rtifconfig rteth0 up Insert the network cable. When the link comes up after a while the BBB freezes completely. All the dmesg output I managed to capture can be found in the attached file rt_ticpsw.log. One important difference between the logs is of course the WARNING from ipipe.c. I searched the file, but could not find out if this is important or not. Does anyone know? If this is not important, I would appreciate it if someone could go over the changes I made and check whether I missed something obvious. I don't have a lot of experience developing kernel modules, so that would be entirely possible. One specific doubt I have is about this change: https://github.com/hiddeate2m/rtnet/commit/6d146ecb3e96f6e8246bab75f0215426758273b6#diff-4b6eba354f4ac056e652c3d4f3468d63R722. Does this method really support the rtnet_device structure? It is similar to the construct found in the rt_macb driver though. Kind regards, Hidde Verstoep E2M Technologies B.V. |
From: Wojciech D. <woj...@gm...> - 2014-06-30 11:48:09
|
I have found similar solution to the one presented at https://www.mail-archive.com/rtnet-users@.../msg01326.html In my case, it was required to edit the rtnet script. I have added 5 seconds of delay in submit_cfg() function, just after bringing rteth0 device up. Does anyone know why the rt_e1000e driver can't handle this? Wojciech Domski Domski.pl Wojciech.Domski.pl W dniu 2014-04-01 12:40, Wojciech Domski pisze: > Dear all, > > I am experiencing some problems using the rt_e1000e driver for RTnet. > > What I am doing is pretty simple, however not everything is working as > it supposed to. After I start the rtnet service I'm keep getting such > communicates from syslog: > > TDMA: Failed to transmit sync frame! > > and occasionally: > > rt_e1000e: Reset adapter > > After quite long time (at least few minutes) > the driver manages to establish the connection > and then everything is working as it should. > > As my set-up I'm using mother board with those ethernet devices: > > 00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network > Connection (rev 04) > 04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network > Connection > a 8-port switch > a dedicated device bases on a microcontroller running RTnet. > > As I described before I was able to establish the transmission and > then everything is working as it should. The time needed for > establishing the connection is however unacceptable. > > Also I have tried repeating the same scenario but this time > only using the switch and everything is as previous. > Moreover, I tested this on different machines and I'm experiencing > the same problem. > > I've found this on mailing list: > > https://www.mail-archive.com/rtnet-users@.../msg01326.html > > > encouraged I've tried it by editing rtnet script by adding a substantial > delay > of few seconds after all the drivers were loaded. Unfortunately, > that didn't solved the problem. > > Different approach was to try the patch for the newest version > for e1000e driver and this one also didn't solve the problem. > > Best regards, > > -- > Wojciech Domski > > Domski.pl > > Wojciech.Domski.pl > > > > ------------------------------------------------------------------------------ > The best possible search technologies are now affordable for all companies. > Download your FREE open source Enterprise Search Engine today! > Our experts will assist you in its installation for $59/mo, no commitment. > Test it for FREE on our Cloud platform anytime! > http://pubads.g.doubleclick.net/gampad/clk?id=145328191&iu=/4140/ostg.clktrk > _______________________________________________ > RTnet-users mailing list > RTn...@li... > https://lists.sourceforge.net/lists/listinfo/rtnet-users |
From: Klemen D. <kle...@ya...> - 2014-06-29 19:46:24
|
Hello everybody, I am testing the RTnet TDMA master/slave configuration on two machines with 100 Mb Ethernet cards. I ran the TDMA calibration and dmesg showed the result: "[ 3878.693492] TDMA: calibrated master-to-slave packet delay: 3 us (min/max: 2/4 us)". I am not sure how it is possible that the number is that small. The smallest possible Ethernet frame, 64 B in length, transmitted at 100 Mb/s would need at least 64*8/100E6 = 5.12 us to travel from master to slave. Does anybody know why I get such a result? Regards Klemen |
From: Elker C. <elk...@gm...> - 2014-06-25 09:54:15
|
Hi, I'm using RTnet-0.9.13 with RTAI-3.9.2; my embedded PC has two e1000e Ethernet ports (so rt_e1000e is employed). The first port is configured as standard Linux (eth1); the second port gets configured for RTnet (first unbound from the Linux driver, then configured as rteth0). So far everything works correctly on the RTnet side; I'm also using rtcap with rteth0 configured on the Linux side, and tcpdump works great for sniffing realtime packets. Now I want to detect if carrier is lost on the RTnet port; on the Linux-managed Ethernet port I can detect link loss by looking at /sys/class/net/eth1/carrier, however /sys/class/net/rteth0/carrier always reports the link as active even with the cable disconnected. Digging into the code I can see the carrier is detected and managed in rtdev->link_state, but I can't find the link to the sysfs filesystem or an ioctl call to detect link status. Is it possible to detect link status with the rt_e1000e driver? If not, can someone point me to some code to implement this? Thanks, Elker |
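For reference, the standard Linux user-space query for link state looks like the sketch below (a plain ethtool ioctl, no RTnet API involved). Whether rteth0 answers it depends on the RT driver exposing ETHTOOL_GLINK, which is what the GLINK patch mentioned elsewhere in this thread aims to add:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>              /* struct ifreq                          */
#include <linux/ethtool.h>       /* ETHTOOL_GLINK, struct ethtool_value   */
#include <linux/sockios.h>       /* SIOCETHTOOL                           */

/* Returns 1 for link up, 0 for link down, -1 if the query is unsupported. */
static int get_link(const char *ifname)
{
    struct ethtool_value ev = { .cmd = ETHTOOL_GLINK };
    struct ifreq ifr;
    int fd, ret;

    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&ev;

    ret = ioctl(fd, SIOCETHTOOL, &ifr);
    close(fd);

    return ret < 0 ? -1 : (int)ev.data;
}

int main(void)
{
    int link = get_link("rteth0");

    printf("rteth0 link: %s\n", link < 0 ? "unknown" : link ? "up" : "down");
    return 0;
}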
From: Mnatsakanyan, M. <m.m...@tu...> - 2014-06-20 11:23:03
|
Hi I use RTNet with Xenomai and I set up a connection with Windows machine (my purpose is to avoid switches to secondary mode). Now I am able to send packets from RT to non-RT station (I use UDP). The packets sent from non-RT station show up in Wireshark on RT station but rt_dev_recvfrom() does not receive them. In kernel log I see this: RTnet: no one cared for packet with layer 3 protocol type 0x4500 Can anyone suggest where should I look for a problem? The same situation is with rtping: packets are sent out and returned packets show up in Wireshark but rtping does not get them. I tried two configurations but found no difference in behavior: 1. without RTmac 2. with RTmac and nomac Regards, Mari |
From: Klemen D. <kle...@ya...> - 2014-05-30 09:45:44
|
Hello everybody, Has anybody successfully tested capturing the sync frame on RTAI? I tried to run the tdma-api.c example on RTAI (I modified the code a bit), but when I issue rt_dev_ioctl(fd, RTMAC_RTIOC_WAITONCYCLE_EX, &waitinfo); it returns "error code -38". Any idea why that is? I set the machine as TDMA master on the rteth0 interface and the sync frame is transmitted; I sniffed the LAN with Wireshark and it is there. The code I tested is pasted below. Regards Klemen
---------------------------------------------------------------------
#include <stdlib.h>
#include <stdio.h>
#include <signal.h>
#include <sys/mman.h>
#include <rtnet.h>
#include <rtmac.h>

int fd;

int main(int argc, char *argv[])
{
    RT_TASK *task;
    struct rtmac_waitinfo waitinfo;
    int err;

    mlockall(MCL_CURRENT | MCL_FUTURE);

    fd = rt_dev_open("TDMA0", O_RDWR);
    if (fd < 0) {
        fprintf(stderr, "failed to open TDMA0\n");
        exit(1);
    }

    rt_make_hard_real_time();

    waitinfo.type = TDMA_WAIT_ON_SYNC;
    waitinfo.size = sizeof(waitinfo);

    while (1) {
        do {
            err = rt_dev_ioctl(fd, RTMAC_RTIOC_WAITONCYCLE_EX, &waitinfo);
            if (err) {
                fprintf(stderr, "failed to issue RTMAC_RTIOC_WAITONCYCLE_EX, "
                        "error code %d\n", err);
                rt_dev_close(fd);
                exit(1);
            }
        } while (waitinfo.cycle_no % 100 != 0);

        printf("cycle #%ld, start %.9f s, offset %lld ns\n",
               waitinfo.cycle_no,
               (waitinfo.cycle_start + waitinfo.clock_offset) / 1000000000.0,
               (unsigned long long)waitinfo.clock_offset);
    }

    rt_make_soft_real_time();
    rt_dev_close(fd);
    return(0);
}
|
From: Mariusz J. <mar...@wp...> - 2014-05-27 19:37:53
|
Dnia Wtorek, 27 Maja 2014 19:00 Gilles Chanteperdrix <gil...@xe...> napisał(a) > On 05/27/2014 12:47 PM, Mariusz Janiak wrote: > > Hi, > > > >> There are so many reasons that could happen. At first, are you sure > >> , your xenomai system has not any jitter involved ? And how have > >> you checked it ? > > > > It seems that, Xenomai is running stable, we have tested it with > > standard xeno utilities. We have observed similar problem with > > rt_e1000e driver on several different machines. The problem is > > following, tdma module periodically allocate memory for > > synchronization frame and then issue tx service from driver. When > > driver is not sending a frames, it does not release memory in tx > > interrupt. After a while, RTnet stack is out of memory and tdma > > report "TDMA: Failed to transmit sync frame!" (function > > tdma_xmit_sync_frame(...) in tdma_proto.c). Unfortunately I am not > > understand how it can be related with jitter. I wonder why driver in > > not sending a frame after calling rtmac_xmit(rtskb); by tdma module. Hi Gilles, > Are you sure that the problem is not simply that the phy is in its > auto-negotiation phase? Do you poll the mdio registers? > I have suspected a problem with PHY but I wasn't able to get deeper into driver code, especially PHY configuration and control part. I do not feel confident in this area. In a first step we have tried to understand why a standard linux driver operate without such problem, and what has changed since last rt driver update. We thought that problem may be outdated rt dirver. Thanks you suggestion we have new clue, now. Assuming that you are right, what should driver do in this case? Drop tx frames or wait until auto-negotiation will finish? Mariusz |
From: Gilles C. <gil...@xe...> - 2014-05-27 17:15:27
|
On 05/27/2014 12:47 PM, Mariusz Janiak wrote: > Hi, > >> There are so many reasons that could happen. At first, are you sure >> , your xenomai system has not any jitter involved ? And how have >> you checked it ? > > It seems that, Xenomai is running stable, we have tested it with > standard xeno utilities. We have observed similar problem with > rt_e1000e driver on several different machines. The problem is > following, tdma module periodically allocate memory for > synchronization frame and then issue tx service from driver. When > driver is not sending a frames, it does not release memory in tx > interrupt. After a while, RTnet stack is out of memory and tdma > report "TDMA: Failed to transmit sync frame!" (function > tdma_xmit_sync_frame(...) in tdma_proto.c). Unfortunately I am not > understand how it can be related with jitter. I wonder why driver in > not sending a frame after calling rtmac_xmit(rtskb); by tdma module. Are you sure that the problem is not simply that the phy is in its auto-negotiation phase? Do you poll the mdio registers? -- Gilles. |
From: Mariusz J. <mar...@wp...> - 2014-05-27 10:47:43
|
Hi, > There are so many reasons that could happen. > At first, are you sure , your xenomai system has not any jitter involved > ? And how have you checked it ? It seems that, Xenomai is running stable, we have tested it with standard xeno utilities. We have observed similar problem with rt_e1000e driver on several different machines. The problem is following, tdma module periodically allocate memory for synchronization frame and then issue tx service from driver. When driver is not sending a frames, it does not release memory in tx interrupt. After a while, RTnet stack is out of memory and tdma report "TDMA: Failed to transmit sync frame!" (function tdma_xmit_sync_frame(...) in tdma_proto.c). Unfortunately I am not understand how it can be related with jitter. I wonder why driver in not sending a frame after calling rtmac_xmit(rtskb); by tdma module. Mariusz > > Regards, > > S.Ancelot > > > > > > On 26/05/2014 11:16, Mariusz Janiak wrote: > > > Dear all, > > > > > > We are looking for a person who is able to update the rt_e1000e driver to make it supporting the newer Intel NIC like 82574L, 82579LM, 82567V correctly under the Xenomai 2.6.3 with a kernel at least 3.8.13 or newer. The problem is following, while RTnet modules and drivers load correctly without any warnings, NIC is not able to send ethernet frames for some random time. During that time, RTnet stack report that is not able to transmit synchronization frame ("TDMA: Failed to transmit sync > > > frame!"), the rt_e1000e driver reset device several times, and two leds integrated with connector are not blinking. After a while, device start to operate normally, sync frame are sending normally, and there is no further issues. The main problem is, that we don't know when RTnet stack start to work, so we have to wait unpredictable period of time. Also just after NIC start sending sync frames, it has buffered several old ones, so it send them one ofter another as fast as possible, what > make > > > problem for our custom devices during synchronization phase. Currently, we observe similar non acceptable behavior on following machines: > > > - Supermicro MBD-X9SPV-M4-3QE-O, > > > - Advantech PCM-3363D-1GS8A1E, > > > - Commel Mini-ITX Motherboard LV-67C. > > > with latest RTnet git version, Xenomai 2.6.3, kernel 3.8.13 and Ubuntu 12.04. > > > > > > We expect that an involved person should prepare a patch which remove described issues and applies cleanly to the latest RTnet git version. The patch should be available for a community on the same license as RTnet stack. Patch will be tested on listed devices. > > > > > > We offer a contract to perform a specified task, one of the parties will be > > > > > > Wroclaw University of Technology > > > Wybrzeże Wyspiańskiego 27 > > > 50-370 Wrocław > > > tel. 71 320 26 00 > > > NIP: 896-000-58-51 > > > REGON: 000001614 > > > www.pwr.edu.pl > > > > > > Please send your letter of application including, short CV, expected salary (EUR or PLN) and estimated time period required for patch preparation, to > > > > > > Mariusz Janiak > > > Chair of Cybernetics and Robotics, > > > Wroclaw University of Technology > > > Janiszewskiego 11/17, > > > Wroclaw, Poland, > > > tel: +48 71 320 26 44 > > > email: mar...@pw... > > > > > > Best regards, > > > Mariusz > > > > > > > > > > > > > > > ------------------------------------------------------------------------------ > > > The best possible search technologies are now affordable for all companies. > > > Download your FREE open source Enterprise Search Engine today! 
> > > Our experts will assist you in its installation for $59/mo, no commitment. > > > Test it for FREE on our Cloud platform anytime! > > > http://pubads.g.doubleclick.net/gampad/clk?id=145328191&iu=/4140/ostg.clktrk > > > _______________________________________________ > > > RTnet-users mailing list > > > RTn...@li... > > > https://lists.sourceforge.net/lists/listinfo/rtnet-users > > > > > > ------------------------------------------------------------------------------ > > The best possible search technologies are now affordable for all companies. > > Download your FREE open source Enterprise Search Engine today! > > Our experts will assist you in its installation for $59/mo, no commitment. > > Test it for FREE on our Cloud platform anytime! > > http://pubads.g.doubleclick.net/gampad/clk?id=145328191&iu=/4140/ostg.clktrk > > _______________________________________________ > > RTnet-users mailing list > > RTn...@li... > > https://lists.sourceforge.net/lists/listinfo/rtnet-users |
From: Stéphane A. <san...@fr...> - 2014-05-27 07:41:29
|
Hi, There are so many reasons that could happen. At first, are you sure , your xenomai system has not any jitter involved ? And how have you checked it ? Regards, S.Ancelot On 26/05/2014 11:16, Mariusz Janiak wrote: > Dear all, > > We are looking for a person who is able to update the rt_e1000e driver to make it supporting the newer Intel NIC like 82574L, 82579LM, 82567V correctly under the Xenomai 2.6.3 with a kernel at least 3.8.13 or newer. The problem is following, while RTnet modules and drivers load correctly without any warnings, NIC is not able to send ethernet frames for some random time. During that time, RTnet stack report that is not able to transmit synchronization frame ("TDMA: Failed to transmit sync > frame!"), the rt_e1000e driver reset device several times, and two leds integrated with connector are not blinking. After a while, device start to operate normally, sync frame are sending normally, and there is no further issues. The main problem is, that we don't know when RTnet stack start to work, so we have to wait unpredictable period of time. Also just after NIC start sending sync frames, it has buffered several old ones, so it send them one ofter another as fast as possible, what make > problem for our custom devices during synchronization phase. Currently, we observe similar non acceptable behavior on following machines: > - Supermicro MBD-X9SPV-M4-3QE-O, > - Advantech PCM-3363D-1GS8A1E, > - Commel Mini-ITX Motherboard LV-67C. > with latest RTnet git version, Xenomai 2.6.3, kernel 3.8.13 and Ubuntu 12.04. > > We expect that an involved person should prepare a patch which remove described issues and applies cleanly to the latest RTnet git version. The patch should be available for a community on the same license as RTnet stack. Patch will be tested on listed devices. > > We offer a contract to perform a specified task, one of the parties will be > > Wroclaw University of Technology > Wybrzeże Wyspiańskiego 27 > 50-370 Wrocław > tel. 71 320 26 00 > NIP: 896-000-58-51 > REGON: 000001614 > www.pwr.edu.pl > > Please send your letter of application including, short CV, expected salary (EUR or PLN) and estimated time period required for patch preparation, to > > Mariusz Janiak > Chair of Cybernetics and Robotics, > Wroclaw University of Technology > Janiszewskiego 11/17, > Wroclaw, Poland, > tel: +48 71 320 26 44 > email: mar...@pw... > > Best regards, > Mariusz > > > > > ------------------------------------------------------------------------------ > The best possible search technologies are now affordable for all companies. > Download your FREE open source Enterprise Search Engine today! > Our experts will assist you in its installation for $59/mo, no commitment. > Test it for FREE on our Cloud platform anytime! > http://pubads.g.doubleclick.net/gampad/clk?id=145328191&iu=/4140/ostg.clktrk > _______________________________________________ > RTnet-users mailing list > RTn...@li... > https://lists.sourceforge.net/lists/listinfo/rtnet-users |
From: Mariusz J. <mar...@wp...> - 2014-05-26 09:16:25
|
Dear all, We are looking for a person who is able to update the rt_e1000e driver so that it correctly supports newer Intel NICs such as the 82574L, 82579LM and 82567V under Xenomai 2.6.3 with a kernel of at least 3.8.13. The problem is the following: although the RTnet modules and drivers load correctly without any warnings, the NIC is unable to send Ethernet frames for a random amount of time. During that time the RTnet stack reports that it cannot transmit the synchronization frame ("TDMA: Failed to transmit sync frame!"), the rt_e1000e driver resets the device several times, and the two LEDs integrated in the connector do not blink. After a while the device starts to operate normally, sync frames are sent as expected, and there are no further issues. The main problem is that we do not know when the RTnet stack will start to work, so we have to wait an unpredictable period of time. Also, just after the NIC starts sending sync frames, it has several old ones buffered, so it sends them one after another as fast as possible, which causes problems for our custom devices during the synchronization phase. Currently we observe similarly unacceptable behavior on the following machines:
- Supermicro MBD-X9SPV-M4-3QE-O,
- Advantech PCM-3363D-1GS8A1E,
- Commel Mini-ITX Motherboard LV-67C,
with the latest RTnet git version, Xenomai 2.6.3, kernel 3.8.13 and Ubuntu 12.04. We expect the involved person to prepare a patch that removes the described issues and applies cleanly to the latest RTnet git version. The patch should be made available to the community under the same license as the RTnet stack, and will be tested on the listed devices. We offer a contract for this specific task; one of the parties will be Wroclaw University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, tel. 71 320 26 00, NIP: 896-000-58-51, REGON: 000001614, www.pwr.edu.pl. Please send your letter of application, including a short CV, expected salary (EUR or PLN) and the estimated time required for patch preparation, to Mariusz Janiak, Chair of Cybernetics and Robotics, Wroclaw University of Technology, Janiszewskiego 11/17, Wroclaw, Poland, tel: +48 71 320 26 44, email: mar...@pw... Best regards, Mariusz |
From: Mnatsakanyan, M. <m.m...@tu...> - 2014-05-13 12:36:00
|
Hi, I need to set up communication between a real-time and a non-real-time machine. I am not an expert in this field, so I have tried to read all relevant documents and posts carefully. There are several posts related to this problem, but they did not help me solve it. Can someone check what I am doing wrong, or just list the steps I need to follow?
---------- Real-time setup -----------
Linux kernel version 3.8.13, Xenomai version 2.6.3, RTnet version 0.9.13, NIC driver: r8169.
Installation of RTnet and the RT version of the driver was successful, and I am able to bring up the interfaces. I am trying to establish communication with a non-RT Windows machine. Here is what I did; let's say ipaddr, macaddr and gwaddr are the IP, MAC and gateway addresses of my RT machine, and ipaddr_w, macaddr_w those of the Windows machine. I did not use the rtnet script, as it also inserts the TDMA module, which I cannot use in this case.
rmmod r8169
mknod /dev/rtnet c 10 240
insmod rtnet.ko
insmod rtipv4.ko
insmod rt_r8169.ko
insmod rtmac.ko
insmod nomac.ko
insmod rtudp.ko
insmod rtpacket.ko
insmod rt_loopback.ko
insmod rtcap.ko
rtifconfig rtlo up 127.0.0.1
rtifconfig rteth1 up ipaddr netmask 255.255.255.0
nomaccfg rteth1 attach
rtroute add ipaddr netmask 255.255.255.0 gw gwaddr
rtroute add gwaddr macaddr dev rteth1
rtroute add ipaddr_w macaddr_w dev rteth1
I can confirm that rteth1 and rtlo are up. When I try to ping from both machines, I get "request timed out" (or 100% packet loss). Am I missing something? Best regards, Mari |
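[Editor's sketch] For a setup like the one above, one way to check whether UDP traffic flows from the RT machine to the non-RT machine independently of ping/ICMP is a tiny UDP sender on the RTnet side. This is only a minimal sketch under the assumption that, as in the examples shipped with RTnet, UDP sockets are reachable through the ordinary BSD socket calls when the program is built against the Xenomai 2.x POSIX skin; the address 192.168.0.10 and port 5000 are placeholders for the Windows machine's ipaddr_w and an arbitrary test port.

/* udp_send_test.c - minimal RTnet UDP send test (sketch, assumptions above).
 * Build against the Xenomai POSIX skin so socket()/sendto() are mapped onto
 * RTnet's real-time UDP sockets (rtudp). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in dst;
    const char msg[] = "rtnet test";

    int s = socket(AF_INET, SOCK_DGRAM, 0);      /* UDP socket */
    if (s < 0) {
        perror("socket");
        return 1;
    }

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);                       /* placeholder test port */
    dst.sin_addr.s_addr = inet_addr("192.168.0.10");  /* placeholder for ipaddr_w */

    if (sendto(s, msg, sizeof(msg), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(s);
    return 0;
}

If a packet capture on the Windows side shows these datagrams arriving while ping still fails, the problem is more likely on the ICMP/ARP side (e.g. missing static ARP or route entries on the Windows machine) than in the basic RTnet bring-up.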
From: Mariusz J. <mar...@wp...> - 2014-05-07 11:09:56
|
Hi, I am not an expert in this field; maybe Jan can help you. You can try to add the vendor ID and device ID (from lspci -nn) to the pci_device_id table rtl8169_pci_tbl[] (see the sketch after this message). You also have to identify what has changed in the r8169 Linux driver since the last update of the rt_r8169 RTnet driver and port these changes to the RTnet driver. Mariusz
On Wednesday, 7 May 2014 at 11:48, Mnatsakanyan, M. <m.m...@tu...> wrote:
> Hello,
>
> Thank you for the response.
>
> My non-RT driver module is r8169 and I used the rt_r8169 driver available in RTnet.
> So I guess the problem is with the NIC.
>
> I would like to try to update the driver to support the NIC I have.
>
> Could you guide me through what has to be done?
>
> This is the controller I have:
> Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 01)
>
> Best regards,
> Mari
> ________________________________________
> From: Mariusz Janiak [mar...@wp...]
> Sent: Tuesday, May 06, 2014 6:43 PM
> To: Mnatsakanyan, M.
> Cc: rtn...@li...
> Subject: RE: [RTnet-users] RTNet compilation failed
>
> Hi,
>
> I think you have chosen the wrong driver, or your NIC is not supported by the selected RT driver. Unfortunately the RT drivers have not been updated recently, so there can be problems with newer NICs.
>
> Best regards,
> Mariusz
>
> On Tuesday, 6 May 2014 at 16:23, Mnatsakanyan, M. <m.m...@tu...> wrote:
> > Thank you, that solved the problem with compilation, but now I am having a different problem.
> >
> > Now I get errors on "rtnet start". Again, I followed the installation & test instructions.
> >
> > The errors are as follows:
> >
> > rteth0: ERROR while getting interface flags: No such device
> > rteth0-mac: ERROR while getting interface flags: No such device
> > ioctl: No such device
> > ioctl: No such device
> > ioctl: No such device
> > ioctl: No such device
> > ioctl (add): No such device
> > vnic0: ERROR while getting interface flags: No such device
> > SIOCSIFADDR: No such device
> > vnic0: ERROR while getting interface flags: No such device
> > SIOCSIFNETMASK: No such device
> > Waiting for all slaves...ioctl: No such device
> > ioctl: No such device
> >
> > Regards,
> > Mari
> > ________________________________________
> > From: Mariusz Janiak [mar...@wp...]
> > Sent: Tuesday, May 06, 2014 1:45 PM
> > To: Mnatsakanyan, M.
> > Cc: rtn...@li...
> > Subject: Re: [RTnet-users] RTNet compilation failed
> >
> > Hi,
> >
> > Try the RTnet git version; this issue has already been resolved there. The problem is with __devinit and __devexit, which are not supported by newer kernels. You can also remove them yourself.
> >
> > Best regards,
> > Mariusz
> >
> > On Tuesday, 6 May 2014 at 13:29, Mnatsakanyan, M. <m.m...@tu...> wrote:
> > > Hi,
> > >
> > > I am having a problem with compilation.
> > > I used the instructions on the page http://www.xenomai.org/index.php/RTnet:Installation_%26_Testing
> > >
> > > Here are the versions I use:
> > >
> > > RTnet: 0.9.13
> > > Xenomai: 2.6.3
> > > Linux kernel: 3.8.13
> > >
> > > In the config, I selected only the Realtek 8169 (Gigabit) driver, which corresponds to my network card.
> > >
> > > I get the following errors:
> > >
> > > /usr/src/rtnet/drivers/rt_r8169.c:218:47: error: expected '=', ',', ';', 'asm' or '__attribute__' before '__devinitdata'
> > > static struct pci_device_id rtl8169_pci_tbl[] __devinitdata = {
> > > ^
> > > /usr/src/rtnet/drivers/rt_r8169.c:656:22: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'rtl8169_init_board'
> > > static int __devinit rtl8169_init_board ( struct pci_dev *pdev, struct rtnet_device **dev_out, unsigned long *ioaddr_out)
> > > ^
> > > /usr/src/rtnet/drivers/rt_r8169.c:821:22: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'rtl8169_init_one'
> > > static int __devinit rtl8169_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
> > > ^
> > > /usr/src/rtnet/drivers/rt_r8169.c:1066:23: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'rtl8169_remove_one'
> > > static void __devexit rtl8169_remove_one (struct pci_dev *pdev)
> > > ^
> > > /usr/src/rtnet/drivers/rt_r8169.c:2071:12: error: 'rtl8169_pci_tbl' undeclared here (not in a function)
> > > id_table: rtl8169_pci_tbl,
> > > ^
> > > /usr/src/rtnet/drivers/rt_r8169.c:2072:10: error: 'rtl8169_init_one' undeclared here (not in a function)
> > > probe: rtl8169_init_one,
> > > ^
> > > /usr/src/rtnet/drivers/rt_r8169.c:2073:11: error: 'rtl8169_remove_one' undeclared here (not in a function)
> > > remove: rtl8169_remove_one,
> > >
> > > Regards,
> > > Mari |
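[Editor's sketch] Two concrete fixes come up in this thread: adding the missing PCI ID for the newer RTL8111/8168/8411 variants to rtl8169_pci_tbl[], and dealing with the __devinit/__devexit annotations that kernels from around 3.8 onward no longer define. The following is only a sketch of what both could look like, not a tested patch: the device ID 0x8168 is the ID commonly reported for RTL8111/8168 chips, but the value shown by lspci -nn on the affected board is what should actually be added, and the table layout and function bodies are assumed to follow the existing rt_r8169.c code.

/* Sketch only: illustrating the two changes discussed above. */
#include <linux/pci.h>
#include <linux/version.h>

/* Newer kernels removed __devinit/__devexit/__devinitdata. Instead of deleting
 * the annotations (as suggested above), one can define them away so the same
 * source still builds on older kernels; adjust the version check if needed. */
#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 8, 0)
#define __devinit
#define __devexit
#define __devinitdata
#endif

static struct pci_device_id rtl8169_pci_tbl[] __devinitdata = {
	{ 0x10ec, 0x8169, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, /* RTL8169, as already handled */
	{ 0x10ec, 0x8168, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, /* new entry: RTL8111/8168/8411 */
	{ 0, },
};
MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);

static int __devinit rtl8169_init_one(struct pci_dev *pdev,
				      const struct pci_device_id *ent)
{
	/* ... probe code as in rt_r8169.c ... */
	return 0;
}

static void __devexit rtl8169_remove_one(struct pci_dev *pdev)
{
	/* ... removal code as in rt_r8169.c ... */
}

Whether the annotations are removed outright or compiled away behind a version check is a style choice; the essential part for supporting the RTL8111/8168 is that its PCI ID ends up in the table the driver registers, and that any behavioral changes in the mainline r8169 driver for that chip are ported over as well.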