Dear all,
I am using the demo_mn_console app on a Windows 11 64-bit host, coupled with the demo_cn_console on a Linux host.
Using openConfigurator, I modified the demo CDC file to use a single CN and to test different cycle times (starting from the default 50 ms and going down, hopefully to 1 ms).
Everything works smoothly down to a cycle time of 5 ms, but any value below that does not work.
Some setup details: the demo_mn app is compiled with the default "Linked to Application" option; I also tried the kernel-based configuration, but I wasn't able to get it working.
What I experience is very similar to what is described here:
https://sourceforge.net/p/openpowerlink/discussion/technology/thread/c345a4b6/
with the CN and MN continuously resetting their stacks.
One more bit of information: with the same setup, I have a simple self-developed C# app based on the Windows ping APIs, and with it I measure an RTT of around 350 µs between the two hosts, so a cycle time of 2 ms should be doable.
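In case it helps, here is a minimal Python equivalent of that measurement (my actual app is C#; this stdlib-only sketch uses TCP connect time as a rough stand-in for an ICMP ping, so it slightly overestimates the RTT):

```python
import socket
import threading
import time

def tcp_rtt_us(host: str, port: int, timeout: float = 1.0) -> float:
    """Approximate round-trip time via the TCP handshake, in microseconds.

    This measures connect() latency rather than a true echo, so treat the
    result as an upper bound on the wire RTT.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1e6

# Self-contained demo against a local throwaway listener; point it at the
# other host's IP and an open port to probe across the network instead.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=lambda: listener.accept(), daemon=True).start()

rtt = tcp_rtt_us("127.0.0.1", port)
print(f"RTT ~ {rtt:.0f} us")
```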
I would kindly ask you to share your thoughts and comments, to help me understand whether this is an inherent limitation on the Windows side or whether some workaround exists.
I have also read here https://github.com/OpenAutomationTechnologies/openPOWERLINK_V2/blob/master/doc/supported-platforms/windows.md#ndis-intermediate-driver
that the driver uses "the NDIS timer object framework for high resolution timer support. As the minimum resolution of these timers is 1ms, cycle times lower than 5ms are not possible with this solution."
And here the same 5 ms limitation appears. However, I thought it applied only to the NDIS version, which I am not using in my case.
Thank you very much for your kind help
BR,
Federico
Hi Federico,
Sorry for the late response. Hope you have progressed from your last post.
In case you are still working on this, let me know if you need any other information.
With regards to your original post: the demo_mn_console and demo_cn_console applications, especially on Windows, typically cannot run stably below 5 ms. Since these application configurations run purely in the user layer, they do not provide the deterministic real-time scheduling needed for POWERLINK network operation, and are therefore meant only for quick evaluation or demos. You can get marginally better results by running the application and the daemon separately.
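To make the user-space limitation concrete, here is a rough back-of-the-envelope. The 1 ms granularity figure is an assumption typical of Windows user-mode timers, not a measured property of the stack:

```python
# Illustrative budget check: if the OS can only wake the scheduler with a
# granularity of ~1 ms (an assumed figure for Windows user-mode timers,
# not a stack measurement), how large is the worst-case scheduling error
# relative to the POWERLINK cycle time?

TIMER_GRANULARITY_US = 1000  # assumed worst-case wakeup error, microseconds

def jitter_fraction(cycle_time_us: int) -> float:
    """Fraction of one cycle that a single timer quantum can consume."""
    return TIMER_GRANULARITY_US / cycle_time_us

for cycle_us in (50_000, 5_000, 2_000, 1_000):
    print(f"{cycle_us / 1000:>4.0f} ms cycle -> up to {jitter_fraction(cycle_us):.0%} late")
```

At 5 ms the worst-case error is already 20% of the cycle; below that, the timer quantum rivals the cycle itself, which matches the continuous stack resets you observed.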
Besides, in these configurations the network driver/interface is also shared with other applications on Windows, so you will observe traffic other than Ethernet POWERLINK traffic on the network, which is not desirable.
If you are looking for a deployable Windows-based POWERLINK master/slave, consider the NDIS-driver-based configuration or the Windows PCIe-based implementation. The NDIS configuration recommends a cycle time of 5 ms based on extensive standard testing configurations (e.g. 5 CNs, 10 CNs, etc.), but you can try reducing it, check the stability of the network for your deployment target, and fix the cycle time that suits you. For even lower cycle times, the Windows PCIe variant is better suited, as it runs the real-time scheduling part of openPOWERLINK (including the DLL) on a PCIe FPGA card (I'd check the current availability of the card first, though).
The thread you referred to is useful, as it mentions a couple of other parameters that can (or need to) be adjusted when optimizing for a lower cycle time. Basically, the scheduler resets the network state when it can no longer keep the frames scheduled on time (within the configured tolerance).
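One such parameter is the loss-of-SoC tolerance. A hedged sketch of sizing it, assuming the EPSG DS 301 convention that object 0x1C14 (DLL_LossOfSocTolerance_U32) is expressed in nanoseconds — verify the index and unit against your stack and openConfigurator version:

```python
# Sketch: size the loss-of-SoC tolerance so an occasional late SoC does
# not reset the cycle. The 0x1C14 / nanosecond convention follows EPSG
# DS 301; verify against your stack version before relying on it.

def soc_tolerance_ns(worst_case_jitter_us: float, safety_factor: float = 2.0) -> int:
    """Tolerance in nanoseconds: worst-case jitter times a safety margin."""
    return int(worst_case_jitter_us * 1_000 * safety_factor)

# With ~1 ms of user-space timer jitter, the tolerance alone would have
# to be 2 ms -- already longer than the 1 ms cycle you are aiming for.
print(soc_tolerance_ns(1_000))
```

This also shows why raising the tolerance only buys stability at moderate cycle times; it cannot compensate for jitter comparable to the cycle itself.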
If Windows is not your final target, a Linux master gives you the option to run the network down to 250 µs with any of the supported network interface cards.
Best,
John