Thread: [RTnet-developers] RTnet in Xenomai 3, status and future
From: Gilles C. <gil...@xe...> - 2014-11-21 09:09:06
Hi,

as some of you may have seen, I have sent the pull request to Philippe for the integration of RTnet in Xenomai 3; those of you who want to will be able to test it when the next Xenomai 3 release candidate is released. What will be in that release candidate is what was in the RTnet git, patched up to adapt it to the Linux and Xenomai API changes that broke its compilation, and to add the bits and pieces needed to run some tests on the hardware I have (namely, the 8139too, r8169, at91_ether and macb drivers).

Doing that job, I have found a few things to fix or to do differently, and I am now able to say how I would like to do it. This mail is cross-posted on the RTnet mailing list, because I believe it somewhat concerns RTnet users and developers.

We can divide RTnet into three parts:
- the tools
- the stack
- the drivers

The tools are in good shape; I do not see any reason to fix them, except maybe for the rtcfg stuff, where an ioctl uses filp_open in kernel space, which I find a bit pointless, but this is not important and can wait.

The stack is not in bad shape either. The code needs a bit of refreshing, for instance using Linux lists and hash lists instead of open-coding linked lists. But this will not cause any API change, so it can wait too.

The drivers, on the other hand, are a bit more worrying. They are based on seriously outdated versions of the corresponding Linux drivers, with support for much less hardware, and probably missing some bug fixes. So, putting the drivers into better shape and making it easy to track mainline changes will be my first priority.

With the 3 drivers I had to adapt, I tested the two possible approaches to updating a driver. For r8169, I hand-picked from the Linux driver what was needed to support the chip I have (an 8136) and adapted it to the code differences between the two versions of the driver. For the at91_ether and macb drivers, I restarted from the current state of mainline Linux. The second approach is easier and more satisfying than the first, because at least you get all the mainline fixes, but to my taste it is still not easy enough.

I believe the first order of business is to change the rtdev API so that this port is easy and, if possible, has a first automated step. So, I intend to change the rtdm and rtdev APIs to reach this goal:
- rt_stack_connect, rt_rtdev_connect and rt_stack_mark will be removed and replaced with mechanisms integrated in the rtdev API;
- rtdm_irq_request/rtdm_irq_free will be modified to have almost the same interface as request_irq, in particular removing the need for a driver to provide the storage for the rtdm_irq_t handles, which will then become unneeded; this makes life easier for drivers which register multiple irq handlers (see the sketch below);
- rtdm_devm_irq_request or rtdev_irq_request will be introduced with a behaviour similar to devm_request_irq, that is, automatic unregistration of the irq handlers at device destruction, because automatically adding the missing calls to rtdm_irq_free to code using devm_request_irq is hard;
- NAPI will be implemented. The NAPI thread will run with the priority of the highest-priority waiting thread and will call rt_stack_deliver, in order not to increase the RX latency compared to the current solution. This will make porting recent drivers easy, and it has the additional advantage that irq handlers no longer create large irq-masking sections like in the current design, which even borders on priority inversion if the bulk of the received packets is for the RTmac vnics or rtnetproxy.
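A rough sketch of what the reworked irq calls could look like; the names, types and exact signatures below are assumptions meant to mirror the Linux request_irq()/devm_request_irq() calls, nothing here is final:

  struct device;   /* as in <linux/device.h> */

  /* Sketch only -- hypothetical prototypes, not committed API. */
  typedef int (*rtdm_irq_handler_t)(int irq, void *dev_id);

  /*
   * No caller-provided rtdm_irq_t handle anymore: the storage is kept
   * internally, so a driver registering several handlers does not have
   * to carry an array of handles around.
   */
  int rtdm_irq_request(unsigned int irq, rtdm_irq_handler_t handler,
                       unsigned long flags, const char *name, void *dev_id);
  void rtdm_irq_free(unsigned int irq, void *dev_id);

  /*
   * Managed variant: handlers registered through this call are released
   * automatically when the underlying device is destroyed, like
   * devm_request_irq() in mainline Linux.
   */
  int rtdm_devm_irq_request(struct device *dev, unsigned int irq,
                            rtdm_irq_handler_t handler, unsigned long flags,
                            const char *name, void *dev_id);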
Maybe the RTnet drivers contain some modifications for low latency, like reducing the TX ring length, but I believe putting these modifications in the drivers is not such a good idea:
- it means that the modification has to be made in each driver, and needs to be maintained;
- in the particular case of the TX ring, reducing the number of queued packets in hardware is not a really precise way to guarantee a bounded latency, because the latency is proportional to the number of bytes queued, not to the number of packets, and ethernet packets have widely varying sizes.

So, I propose to try and implement these modifications in the rtdev stack. For the case of the TX ring, this means implementing a TX queue which keeps track of how much wire time the bytes currently queued in hardware are worth (including preamble and inter-packet gap), stops queuing when this value reaches a configurable threshold (the maximum TX latency), and restarts the queue when this value drops back to the interrupt latency (see the sketch below). The queue will be ordered by sender priority, so that when a high-priority thread queues a packet, the packet will never take more than the threshold to reach the wire, even if a low-priority thread, the RTmac vnic drivers or rtnetproxy are using the full bandwidth (which will remain possible as long as the threshold is higher than the current average irq latency).

Please feel free to send any reaction to this mail.

Regards.

--
Gilles.
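To make the wire-time accounting in the proposal above concrete, here is a minimal sketch; the helper name and the queue integration are made up for illustration, only the Ethernet framing constants (preamble/SFD, inter-frame gap, FCS, minimum frame size) come from the standard:

  #include <stdint.h>

  #define ETH_PREAMBLE_SFD  8   /* 7 bytes of preamble + 1 byte SFD */
  #define ETH_IFG          12   /* inter-frame gap, in byte times */
  #define ETH_FCS           4   /* CRC appended by the MAC */
  #define ETH_MIN_FRAME    64   /* minimum on-wire frame size, FCS included */

  /*
   * Nanoseconds a frame of 'len' bytes (headers + payload, no FCS)
   * occupies the wire on a link running at 'link_mbps' (10/100/1000).
   */
  static inline uint64_t eth_wire_time_ns(unsigned int len, unsigned int link_mbps)
  {
          unsigned int on_wire = len + ETH_FCS;

          if (on_wire < ETH_MIN_FRAME)
                  on_wire = ETH_MIN_FRAME;            /* short frames are padded */
          on_wire += ETH_PREAMBLE_SFD + ETH_IFG;      /* fixed per-frame overhead */

          return (uint64_t)on_wire * 8ULL * 1000ULL / link_mbps;
  }

The TX queue would keep a running sum of eth_wire_time_ns() for every packet handed to the hardware (a full-size frame is roughly 122 us at 100 Mbit/s, 12 us at 1 Gbit/s), stop accepting packets once the sum exceeds the configured maximum TX latency, and wake the senders up again when TX completions bring it back down to the interrupt latency.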
From: Jan K. <jan...@we...> - 2014-11-21 10:32:44
Attachments:
signature.asc
On 2014-11-21 10:08, Gilles Chanteperdrix wrote:
> Hi,
>
> as some of you may have seen, I have sent the pull request to Philippe for the integration of RTnet in Xenomai 3; those of you who want to will be able to test it when the next Xenomai 3 release candidate is released. What will be in that release candidate is what was in the RTnet git, patched up to adapt it to the Linux and Xenomai API changes that broke its compilation, and to add the bits and pieces needed to run some tests on the hardware I have (namely, the 8139too, r8169, at91_ether and macb drivers).

For x86, support for e1000e and possibly also igb will be very helpful. Those NICs dominate the market now, specifically due to their on-chip / on-board presence. I think those two drivers are in better shape than the legacy ones.

> Doing that job, I have found a few things to fix or to do differently, and I am now able to say how I would like to do it. This mail is cross-posted on the RTnet mailing list, because I believe it somewhat concerns RTnet users and developers.
>
> We can divide RTnet into three parts:
> - the tools
> - the stack
> - the drivers
>
> The tools are in good shape; I do not see any reason to fix them, except maybe for the rtcfg stuff, where an ioctl uses filp_open in kernel space, which I find a bit pointless, but this is not important and can wait.
>
> The stack is not in bad shape either. The code needs a bit of refreshing, for instance using Linux lists and hash lists instead of open-coding linked lists. But this will not cause any API change, so it can wait too.
>
> The drivers, on the other hand, are a bit more worrying. They are based on seriously outdated versions of the corresponding Linux drivers, with support for much less hardware, and probably missing some bug fixes. So, putting the drivers into better shape and making it easy to track mainline changes will be my first priority.
>
> With the 3 drivers I had to adapt, I tested the two possible approaches to updating a driver. For r8169, I hand-picked from the Linux driver what was needed to support the chip I have (an 8136) and adapted it to the code differences between the two versions of the driver. For the at91_ether and macb drivers, I restarted from the current state of mainline Linux. The second approach is easier and more satisfying than the first, because at least you get all the mainline fixes, but to my taste it is still not easy enough.
>
> I believe the first order of business is to change the rtdev API so that this port is easy and, if possible, has a first automated step. So, I intend to change the rtdm and rtdev APIs to reach this goal:
> - rt_stack_connect, rt_rtdev_connect and rt_stack_mark will be removed and replaced with mechanisms integrated in the rtdev API;
> - rtdm_irq_request/rtdm_irq_free will be modified to have almost the same interface as request_irq, in particular removing the need for a driver to provide the storage for the rtdm_irq_t handles, which will then become unneeded; this makes life easier for drivers which register multiple irq handlers;
> - rtdm_devm_irq_request or rtdev_irq_request will be introduced with a behaviour similar to devm_request_irq, that is, automatic unregistration of the irq handlers at device destruction, because automatically adding the missing calls to rtdm_irq_free to code using devm_request_irq is hard.

Sounds good.

> - NAPI will be implemented. The NAPI thread will run with the priority of the highest-priority waiting thread and will call rt_stack_deliver, in order not to increase the RX latency compared to the current solution. This will make porting recent drivers easy, and it has the additional advantage that irq handlers no longer create large irq-masking sections like in the current design, which even borders on priority inversion if the bulk of the received packets is for the RTmac vnics or rtnetproxy.

Will be an interesting feature. However, whenever you share a link for RT and non-RT packets, you do have an unavoidable prio-inversion risk. The way to mitigate this is non-RT traffic control.

> Maybe the RTnet drivers contain some modifications for low latency, like reducing the TX ring length, but I believe putting these modifications in the drivers is not such a good idea:

The key modifications that were needed for drivers so far are:
- TX/RX time stamping support
- disabling of IRQ coalescing features for low-latency signaling
- support for pre-mapping rings (to avoid triggering IOMMU paths during runtime)

> - it means that the modification has to be made in each driver, and needs to be maintained;
> - in the particular case of the TX ring, reducing the number of queued packets in hardware is not a really precise way to guarantee a bounded latency, because the latency is proportional to the number of bytes queued, not to the number of packets, and ethernet packets have widely varying sizes.
>
> So, I propose to try and implement these modifications in the rtdev stack. For the case of the TX ring, this means implementing a TX queue which keeps track of how much wire time the bytes currently queued in hardware are worth (including preamble and inter-packet gap), stops queuing when this value reaches a configurable threshold (the maximum TX latency), and restarts the queue when this value drops back to the interrupt latency. The queue will be ordered by sender priority, so that when a high-priority thread queues a packet, the packet will never take more than the threshold to reach the wire, even if a low-priority thread, the RTmac vnic drivers or rtnetproxy are using the full bandwidth (which will remain possible as long as the threshold is higher than the current average irq latency).
>
> Please feel free to send any reaction to this mail.

Thanks for picking up this task, it is very welcome and should help to keep this project alive! I ran out of time to take care of it, as people surely noticed. But it was always my plan as well to hand this over to the stronger Xenomai community when the code is in acceptable state. It's great to see this happening now!

Jan
From: Gilles C. <gil...@xe...> - 2014-11-21 11:10:26
Attachments:
signature.asc
On Fri, Nov 21, 2014 at 11:32:30AM +0100, Jan Kiszka wrote:
> On 2014-11-21 10:08, Gilles Chanteperdrix wrote:
> > Hi,
> >
> > as some of you may have seen, I have sent the pull request to Philippe for the integration of RTnet in Xenomai 3; those of you who want to will be able to test it when the next Xenomai 3 release candidate is released. What will be in that release candidate is what was in the RTnet git, patched up to adapt it to the Linux and Xenomai API changes that broke its compilation, and to add the bits and pieces needed to run some tests on the hardware I have (namely, the 8139too, r8169, at91_ether and macb drivers).
>
> For x86, support for e1000e and possibly also igb will be very helpful. Those NICs dominate the market now, specifically due to their on-chip / on-board presence. I think those two drivers are in better shape than the legacy ones.

When compiling Xenomai 3 with RTnet on x86 (32 and 64), I enabled all the PCI drivers. So, they all compile as far as I know. I have not tested them of course, but since the rtnet stack has not changed (yet), they should continue to work if they were in a working state.

> > - NAPI will be implemented. The NAPI thread will run with the priority of the highest-priority waiting thread and will call rt_stack_deliver, in order not to increase the RX latency compared to the current solution. This will make porting recent drivers easy, and it has the additional advantage that irq handlers no longer create large irq-masking sections like in the current design, which even borders on priority inversion if the bulk of the received packets is for the RTmac vnics or rtnetproxy.
>
> Will be an interesting feature. However, whenever you share a link for RT and non-RT packets, you do have an unavoidable prio-inversion risk. The way to mitigate this is non-RT traffic control.

This can only be done on the sending side (which the solution I propose for tx queuing should somewhat achieve, BTW). On the receive side, the best we can do is get the NAPI thread to inherit the priority of the highest-priority waiter, and reschedule as soon as it delivers a packet to a thread. So, the NAPI thread should not delay high-priority tasks that are not currently waiting for a packet, unless there is an even higher-priority thread waiting for one (a sketch of this follows below).

> > Maybe the RTnet drivers contain some modifications for low latency, like reducing the TX ring length, but I believe putting these modifications in the drivers is not such a good idea:
>
> The key modifications that were needed for drivers so far are:
> - TX/RX time stamping support
> - disabling of IRQ coalescing features for low-latency signaling
> - support for pre-mapping rings (to avoid triggering IOMMU paths during runtime)

Ok, thanks. Could you point me at the drivers which have these modifications? Particularly the third one, because I believe mainline has RX/TX time stamping as well, and NAPI should handle the second one.

> > - it means that the modification has to be made in each driver, and needs to be maintained;
> > - in the particular case of the TX ring, reducing the number of queued packets in hardware is not a really precise way to guarantee a bounded latency, because the latency is proportional to the number of bytes queued, not to the number of packets, and ethernet packets have widely varying sizes.
> >
> > So, I propose to try and implement these modifications in the rtdev stack. For the case of the TX ring, this means implementing a TX queue which keeps track of how much wire time the bytes currently queued in hardware are worth (including preamble and inter-packet gap), stops queuing when this value reaches a configurable threshold (the maximum TX latency), and restarts the queue when this value drops back to the interrupt latency. The queue will be ordered by sender priority, so that when a high-priority thread queues a packet, the packet will never take more than the threshold to reach the wire, even if a low-priority thread, the RTmac vnic drivers or rtnetproxy are using the full bandwidth (which will remain possible as long as the threshold is higher than the current average irq latency).
> >
> > Please feel free to send any reaction to this mail.
>
> Thanks for picking up this task, it is very welcome and should help to keep this project alive! I ran out of time to take care of it, as people surely noticed. But it was always my plan as well to hand this over to the stronger Xenomai community when the code is in acceptable state. It's great to see this happening now!

Thanks. As you know, I used RTnet a long time ago, on an ISP's (ARM IXP465 based) mixed IPBX/DSL internet gateway while working for that ISP, and I always wanted to contribute back the ideas I had implemented, or could not even implement, at the time.

--
Gilles.
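To illustrate the policy described above, here is a minimal sketch of how the NAPI thread's priority could track the receivers; every name in it (rx_waiter, napi_thread_priority, ...) is hypothetical, this is not existing RTnet or RTDM API:

  #include <linux/list.h>

  /* One entry per thread currently blocked on a receive. */
  struct rx_waiter {
          struct list_head link;
          int prio;                     /* priority of the sleeping receiver */
  };

  static LIST_HEAD(rx_waiters);         /* protected by the stack lock (not shown) */

  /*
   * Priority at which the NAPI thread runs: the highest priority among
   * the threads waiting for a packet, or a low idle priority when nobody
   * waits, so that it never delays unrelated higher-priority work.
   */
  static int napi_thread_priority(int idle_prio)
  {
          struct rx_waiter *w;
          int prio = idle_prio;

          list_for_each_entry(w, &rx_waiters, link)
                  if (w->prio > prio)
                          prio = w->prio;

          return prio;
  }

The boost would be recomputed whenever a receiver starts or stops waiting, and the thread would reschedule right after delivering a packet, so it never runs ahead of the thread it just woke up.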
From: Jan K. <jan...@we...> - 2014-11-21 11:22:30
Attachments:
signature.asc
On 2014-11-21 12:10, Gilles Chanteperdrix wrote:
> On Fri, Nov 21, 2014 at 11:32:30AM +0100, Jan Kiszka wrote:
>> On 2014-11-21 10:08, Gilles Chanteperdrix wrote:
>>> Hi,
>>>
>>> as some of you may have seen, I have sent the pull request to Philippe for the integration of RTnet in Xenomai 3; those of you who want to will be able to test it when the next Xenomai 3 release candidate is released. What will be in that release candidate is what was in the RTnet git, patched up to adapt it to the Linux and Xenomai API changes that broke its compilation, and to add the bits and pieces needed to run some tests on the hardware I have (namely, the 8139too, r8169, at91_ether and macb drivers).
>>
>> For x86, support for e1000e and possibly also igb will be very helpful. Those NICs dominate the market now, specifically due to their on-chip / on-board presence. I think those two drivers are in better shape than the legacy ones.
>
> When compiling Xenomai 3 with RTnet on x86 (32 and 64), I enabled all the PCI drivers. So, they all compile as far as I know. I have not tested them of course, but since the rtnet stack has not changed (yet), they should continue to work if they were in a working state.

Ah, ok, perfect.

>>> - NAPI will be implemented. The NAPI thread will run with the priority of the highest-priority waiting thread and will call rt_stack_deliver, in order not to increase the RX latency compared to the current solution. This will make porting recent drivers easy, and it has the additional advantage that irq handlers no longer create large irq-masking sections like in the current design, which even borders on priority inversion if the bulk of the received packets is for the RTmac vnics or rtnetproxy.
>>
>> Will be an interesting feature. However, whenever you share a link for RT and non-RT packets, you do have an unavoidable prio-inversion risk. The way to mitigate this is non-RT traffic control.
>
> This can only be done on the sending side (which the solution I propose for tx queuing should somewhat achieve, BTW). On the receive side, the best we can do is get the NAPI thread to inherit the priority of the highest-priority waiter, and reschedule as soon as it delivers a packet to a thread. So, the NAPI thread should not delay high-priority tasks that are not currently waiting for a packet, unless there is an even higher-priority thread waiting for one.
>
>>> Maybe the RTnet drivers contain some modifications for low latency, like reducing the TX ring length, but I believe putting these modifications in the drivers is not such a good idea:
>>
>> The key modifications that were needed for drivers so far are:
>> - TX/RX time stamping support
>> - disabling of IRQ coalescing features for low-latency signaling
>> - support for pre-mapping rings (to avoid triggering IOMMU paths during runtime)
>
> Ok, thanks. Could you point me at the drivers which have these modifications? Particularly the third one, because I believe mainline has RX/TX time stamping as well, and NAPI should handle the second one.

Regarding time stamping: yes, mainline may have this now. You just need to check when it happens. The original philosophy was to have that as close to triggering the TX / receiving an RX event as feasible.

Regarding IRQ coalescing: this is a hardware feature that aims at optimizing throughput and lowering CPU load. As such, it works against the goal of lowering individual event latencies. However, maybe such things are controllable in standard drivers today, just not in a consistent way.

And regarding premapping: just look at rt_igb or rt_e1000e, see e.g. c05d7bbfba. This change is mandatory for RT, at least for dual-domain setups.

Jan
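On the "controllable in standard drivers" point, mainline does expose coalescing through the ethtool interface; here is a minimal sketch of turning it off that way (the field names come from linux/ethtool.h, but the exact set_coalesce prototype and which fields a given driver honours vary between kernel versions and drivers, so treat this as an illustration only):

  #include <linux/errno.h>
  #include <linux/ethtool.h>
  #include <linux/netdevice.h>

  /* Ask the driver to signal every packet immediately instead of
   * batching interrupts; returns -EOPNOTSUPP if the driver does not
   * expose coalescing control at all. */
  static int disable_irq_coalescing(struct net_device *dev)
  {
          struct ethtool_coalesce ec = {
                  .rx_coalesce_usecs = 0,
                  .rx_max_coalesced_frames = 1,
                  .tx_coalesce_usecs = 0,
                  .tx_max_coalesced_frames = 1,
          };

          if (!dev->ethtool_ops || !dev->ethtool_ops->set_coalesce)
                  return -EOPNOTSUPP;

          return dev->ethtool_ops->set_coalesce(dev, &ec);
  }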
From: Gilles C. <gil...@xe...> - 2014-11-21 14:19:31
Attachments:
signature.asc
On Fri, Nov 21, 2014 at 12:22:18PM +0100, Jan Kiszka wrote:
> On 2014-11-21 12:10, Gilles Chanteperdrix wrote:
> > On Fri, Nov 21, 2014 at 11:32:30AM +0100, Jan Kiszka wrote:
> >> On 2014-11-21 10:08, Gilles Chanteperdrix wrote:
> >>> Hi,
> >>>
> >>> as some of you may have seen, I have sent the pull request to Philippe for the integration of RTnet in Xenomai 3; those of you who want to will be able to test it when the next Xenomai 3 release candidate is released. What will be in that release candidate is what was in the RTnet git, patched up to adapt it to the Linux and Xenomai API changes that broke its compilation, and to add the bits and pieces needed to run some tests on the hardware I have (namely, the 8139too, r8169, at91_ether and macb drivers).
> >>
> >> For x86, support for e1000e and possibly also igb will be very helpful. Those NICs dominate the market now, specifically due to their on-chip / on-board presence. I think those two drivers are in better shape than the legacy ones.
> >
> > When compiling Xenomai 3 with RTnet on x86 (32 and 64), I enabled all the PCI drivers. So, they all compile as far as I know. I have not tested them of course, but since the rtnet stack has not changed (yet), they should continue to work if they were in a working state.
>
> Ah, ok, perfect.
>
> >>> - NAPI will be implemented. The NAPI thread will run with the priority of the highest-priority waiting thread and will call rt_stack_deliver, in order not to increase the RX latency compared to the current solution. This will make porting recent drivers easy, and it has the additional advantage that irq handlers no longer create large irq-masking sections like in the current design, which even borders on priority inversion if the bulk of the received packets is for the RTmac vnics or rtnetproxy.
> >>
> >> Will be an interesting feature. However, whenever you share a link for RT and non-RT packets, you do have an unavoidable prio-inversion risk. The way to mitigate this is non-RT traffic control.
> >
> > This can only be done on the sending side (which the solution I propose for tx queuing should somewhat achieve, BTW). On the receive side, the best we can do is get the NAPI thread to inherit the priority of the highest-priority waiter, and reschedule as soon as it delivers a packet to a thread. So, the NAPI thread should not delay high-priority tasks that are not currently waiting for a packet, unless there is an even higher-priority thread waiting for one.
> >
> >>> Maybe the RTnet drivers contain some modifications for low latency, like reducing the TX ring length, but I believe putting these modifications in the drivers is not such a good idea:
> >>
> >> The key modifications that were needed for drivers so far are:
> >> - TX/RX time stamping support
> >> - disabling of IRQ coalescing features for low-latency signaling
> >> - support for pre-mapping rings (to avoid triggering IOMMU paths during runtime)
> >
> > Ok, thanks. Could you point me at the drivers which have these modifications? Particularly the third one, because I believe mainline has RX/TX time stamping as well, and NAPI should handle the second one.
>
> Regarding time stamping: yes, mainline may have this now. You just need to check when it happens. The original philosophy was to have that as close to triggering the TX / receiving an RX event as feasible.
>
> Regarding IRQ coalescing: this is a hardware feature that aims at optimizing throughput and lowering CPU load. As such, it works against the goal of lowering individual event latencies. However, maybe such things are controllable in standard drivers today, just not in a consistent way.

If you mean RX irq coalescing, I guess NAPI is just the same thing done in software. For TX, I realized recently that the "TX complete" irq is kind of bad for performance: it results in way too many interrupts when trying to generate traffic at full bandwidth. And since we are not in that much of a hurry to reclaim the TX packets' memory, and this does not impact the latency visible to applications, coalescing TX irqs is desirable as well. But this all seems to be doable in software. I will check that.

> And regarding premapping: just look at rt_igb or rt_e1000e, see e.g. c05d7bbfba. This change is mandatory for RT, at least for dual-domain setups.

Ok, I will have a look, thanks.

--
Gilles.
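A sketch of what "doable in software" could look like on the TX side; struct rtnet_device and struct rtskb are the existing rtdev types, but every helper below (tx_ring_reclaim_completed and friends) is hypothetical:

  #include <linux/errno.h>

  struct rtnet_device;   /* existing rtdev types */
  struct rtskb;

  /* hypothetical helpers, declared only to keep the sketch self-contained */
  void tx_ring_reclaim_completed(struct rtnet_device *rtdev);
  int tx_ring_has_room(struct rtnet_device *rtdev);
  void tx_irq_enable(struct rtnet_device *rtdev);
  int tx_ring_queue(struct rtnet_device *rtdev, struct rtskb *skb);

  /*
   * Keep the "TX complete" irq masked most of the time: completed
   * descriptors are reclaimed lazily when the next packet is queued,
   * and the irq is only re-enabled when the ring is full and we
   * actually have to wait for room.
   */
  static int rtdev_hard_xmit(struct rtnet_device *rtdev, struct rtskb *skb)
  {
          tx_ring_reclaim_completed(rtdev);   /* no irq was needed for these */

          if (!tx_ring_has_room(rtdev)) {
                  tx_irq_enable(rtdev);       /* now we do care about completion */
                  return -EBUSY;              /* or block until the irq signals room */
          }

          return tx_ring_queue(rtdev, skb);
  }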
From: Gilles C. <gil...@xe...> - 2014-11-21 21:48:47
On Fri, Nov 21, 2014 at 10:53:05AM +0100, Richard Cochran wrote:
> Hi Gilles,
>
> On Fri, Nov 21, 2014 at 10:08:57AM +0100, Gilles Chanteperdrix wrote:
> > The drivers, on the other hand, are a bit more worrying. They are based on seriously outdated versions of the corresponding Linux drivers, with support for much less hardware, and probably missing some bug fixes. So, putting the drivers into better shape and making it easy to track mainline changes will be my first priority.
>
> This is the reason that, every time I considered using rtnet, I ended up deciding against it.

Well, on the other hand, picking one driver and improving it for the project which needs it was not an insurmountable task.

> > Please feel free to send any reaction to this mail.
>
> Good luck,

Thanks.

> this work is *long* overdue!

Well, for it to be overdue, it would have to have been due in the first place. And since, as far as I know, nobody paid for it or was expecting a result, and nobody ever promised anything (well, up to now; I have kind of made the promise now), I do not really think it was due. Or maybe I am just misinterpreting the meaning of overdue. Anyway, to make things perfectly clear on that subject, I am not doing this because I feel the Xenomai project owes it to anyone, only because I like the idea of working on that project and because it may be useful for a lot of Xenomai dual-kernel applications.

Maybe the next step will be to integrate support for PTP. In that case, I hope we will get your help to do the job.

Regards.

--
Gilles.
From: Gilles C. <gil...@xe...> - 2014-11-22 08:56:56
On Fri, Nov 21, 2014 at 11:08:44PM +0100, Richard Cochran wrote:
> On Fri, Nov 21, 2014 at 10:48:39PM +0100, Gilles Chanteperdrix wrote:
> > On Fri, Nov 21, 2014 at 10:53:05AM +0100, Richard Cochran wrote:
> > > This is the reason that, every time I considered using rtnet, I ended up deciding against it.
> >
> > Well, on the other hand, picking one driver and improving it for the project which needs it was not an insurmountable task.
>
> Except when you have a napi driver. Then, although not insurmountable, it still is a formidable task. (But that is changing now ;)
>
> > > this work is *long* overdue!
> >
> > Well, for it to be overdue, it would have to have been due in the first place. And since, as far as I know, nobody paid for it or was expecting a result, and nobody ever promised anything (well, up to now; I have kind of made the promise now), I do not really think it was due. Or maybe I am just misinterpreting the meaning of overdue.
>
> I did not mean that you or anyone else owed anyone anything. I only meant to say that it was unfortunate that rtnet has stagnated, especially considering the wide industry interest in real time Ethernet.

If you need a project and you let it stagnate, then you cannot really complain; you are just as responsible as every other user doing the same.

> And for paying the bill, no one wants to pay for preempt_rt either!

I did not say no one pays the bill for Xenomai! Just that no one paid for refreshing RTnet. For the record, just speaking for myself, it is quite the contrary. The contribution to RTnet of support for NIC statistics, and the Xenomai and RTnet support for the select service, to name the most significant patches, were made while I was paid to work with Xenomai and RTnet. Since then, I have been doing my maintainer job mostly as a hobbyist, which, BTW, I believe did not result in a dramatic decrease in either the quality or the quantity of my contributions. You can find on this page: http://www.denx.de/en/Software/SoftwareXenomaiProjects some ports of Xenomai to ARM boards for which I earned money (and a few more which are not on the page: Freescale i.MX53, i.MX6Q, i.MX28 and ST SPEAr600). And finally, I have received donations of boards from several companies, for which I thank them; it really helps me do the maintainer job. I am not going to list the companies here, because I am not sure all of them want it to be publicly known that they use Xenomai (Xenomai has that kind of users), but I can give the list of boards: a Cogent Computer CSB637, a custom board based on the AT91SAM9260, an Advantech board based on an AMD Geode, an AOpen mini-PC based on an Intel Core 2 Duo, a Calao systems USBA9263, an ISEE IGEPv2, recently an Atmel Xplained board based on the brand new AT91SAMA5D3, and, just yesterday, a mail from a company offering me an i.MX6Q based board.

> > Maybe the next step will be to integrate support for PTP. In that case, I hope we will get your help to do the job.
>
> Happy to help, if I can. If the drivers can stay close to mainline, then probably there isn't too much to do. The ptp stack itself does not need real time guarantees in order to operate. The one thing I can think of that might be needed is synchronization from the MAC clock to the Xenomai system clock. This is going to be hard.

Maybe easier, we could add a posix clock ID, like CLOCK_PTP, to have access to that clock from primary mode.

> Also, regarding rtnet, the i210 is an inexpensive card that allows transmission scheduling at a given time. That might be something to take advantage of.

Ok, thanks, I will look into it.

Regards.

--
Gilles.
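For reference, this is how a NIC's PTP hardware clock is reachable from user space in mainline today: the PHC is a dynamic posix clock behind /dev/ptpN, and the file descriptor is converted into a clockid_t. A Xenomai CLOCK_PTP, as mused above, would be the primary-mode counterpart of this; the sketch below is plain Linux, not Xenomai:

  #include <fcntl.h>
  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  #define CLOCKFD 3
  #define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

  int main(void)
  {
          int fd = open("/dev/ptp0", O_RDONLY);   /* PHC of the NIC */
          struct timespec ts;

          if (fd < 0) {
                  perror("open /dev/ptp0");
                  return 1;
          }

          if (clock_gettime(FD_TO_CLOCKID(fd), &ts) == 0)
                  printf("PHC time: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);

          close(fd);
          return 0;
  }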
From: Gilles C. <gil...@xe...> - 2014-11-22 09:17:14
On Sat, Nov 22, 2014 at 12:23:49AM +0100, Jeroen Van den Keybus wrote:
> > - NAPI will be implemented. The NAPI thread will run with the priority of the highest-priority waiting thread and will call rt_stack_deliver, in order not to increase the RX latency compared to the current solution.
>
> Would you just poll this thread continuously, at high rate? I'm asking because I'd expect the poll method to be called only at moderate rates, and registers may be read or written.

You cannot really guarantee low RX latency if the NAPI thread does not poll the hardware as soon as a packet is received. One thing to understand, though, is that the NAPI thread will generally be sleeping, and will just be woken up on the reception of the first RX irq; at that point the RX irq will be disabled at the NIC level, all the received packets will be processed, then the RX irq will be re-enabled at the NIC level, and the NAPI thread will go back to sleep (see the sketch below).

> > Maybe the RTnet drivers contain some modifications for low latency, like reducing the TX ring length, but I believe putting these modifications in the drivers is not such a good idea:
> > - it means that the modification has to be made in each driver, and needs to be maintained;
> > - in the particular case of the TX ring, reducing the number of queued packets in hardware is not a really precise way to guarantee a bounded latency, because the latency is proportional to the number of bytes queued, not to the number of packets, and ethernet packets have widely varying sizes.
>
> I fully agree.
>
> > Please feel free to send any reaction to this mail.
>
> I've always been wondering if it would be possible to emulate the entire Linux netdev system in realtime instead, essentially avoiding any driver modification at all.

The plan is to approach this goal. We have to rename the functions to avoid symbol name clashes, but I would like the conversion to be really automatic. One thing I will not do, however, is spend time trying to emulate the last 20% of the netdev API that would cost me 80% of the development effort.

> However, this would also include the handling of the various task registrations (INIT_WORK) a driver typically does (the e1000e registers five). But these may still be handled by Linux, perhaps.

Actually, I have added the rtdm_schedule_nrt_work() service, which allows work queues to be triggered in the Linux domain from the Xenomai domain:
http://git.xenomai.org/xenomai-gch.git/commit/?h=for-forge&id=ee703d9785f7e242057ef689bc500b965cd75294

> Though it could very well be a large undertaking, I'm not sure if modifying the individual drivers (and often the various codepaths for different card variants inside them) would entail less effort in the end.

Well, part, if not the majority, of the maintenance work is getting the drivers to follow mainline changes. We need this because it lets us support new hardware, and because the mainline changes may contain fixes. To make that job easy, we need to modify the drivers as little as possible to integrate them into RTnet, because every change we make will have to be carried from one version to the next, and has a chance of being broken by the mainline changes.

--
Gilles.
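To illustrate the RX path described above, here is a minimal sketch of the irq handler / NAPI thread interplay; struct rtnet_device and rt_stack_deliver() exist in the rtdev stack, but every other helper name below is hypothetical:

  struct rtnet_device;

  /* hypothetical helpers, declared only to keep the sketch self-contained */
  void rx_irq_disable(struct rtnet_device *rtdev);
  void rx_irq_enable(struct rtnet_device *rtdev);
  int rx_ring_has_packets(struct rtnet_device *rtdev);
  void rx_ring_process_one(struct rtnet_device *rtdev);
  void rtnapi_thread_wakeup(struct rtnet_device *rtdev);
  void rtnapi_thread_sleep(struct rtnet_device *rtdev);

  /* Runs in irq context: mask further RX irqs, then hand over to the thread. */
  static void rtnapi_rx_irq(struct rtnet_device *rtdev)
  {
          rx_irq_disable(rtdev);
          rtnapi_thread_wakeup(rtdev);   /* boosts the thread to the highest waiter priority */
  }

  /*
   * The NAPI thread: sleeps until the first RX irq, drains the ring,
   * re-enables the RX irq and goes back to sleep. rt_stack_deliver()
   * would then push the queued packets up to the waiting receivers,
   * rescheduling as soon as one of them becomes runnable.
   */
  static void rtnapi_thread(void *arg)
  {
          struct rtnet_device *rtdev = arg;

          for (;;) {
                  rtnapi_thread_sleep(rtdev);

                  while (rx_ring_has_packets(rtdev))
                          rx_ring_process_one(rtdev);

                  rx_irq_enable(rtdev);
          }
  }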