Hello everybody,
I want to connect 12 similar CNs (Powerlink IP-Core on a Spartan-6 FPGA), each transmitting a payload of 864 bytes, to a Powerlink network using 4-port hubs (topology as indicated in the attachment, with only 5 nodes shown).
I followed the calculation in this thread to determine my minimum cycle time, which came to 982 µs, so I set the cycle time on the MN (B&R APC620 with Linux RT-PREEMPT) to 1 ms.
I tested this setup by connecting the nodes one after another, and at first it works fine (nodes 1 and 2 working), but as soon as more than two nodes are connected, the third node cannot be configured correctly. After this, the previously working nodes 1 and 2 also fall back into the PreOperational state and cannot be configured correctly again. I provide the MN output of this behavior in the attachment Case01.txt and the Wireshark dump in the attachment Case01.pcap (starting with the last cycle before node 3 was connected). Does anybody know what could cause this behavior?
I also tried increasing the PResTimeout for the nodes from the default value of 27 µs to 75 µs, and tried a startup where 5 nodes are already connected before the MN is started. Again, the configuration is interrupted and it is not possible to reach the Operational state. I provide the MN output of this behavior in the attachment Case02.txt and the Wireshark dump in the attachment Case02.pcap.
I would appreciate any suggestions for avoiding this behavior (especially since the communication should work for 12 nodes in the end). Thanks in advance!
Best regards
Obviously it's only possible for me to add attachments when creating a topic, so I have added them in this post.
Hi Ziggisto,
Most likely you have to increase the cycle time, the PResTimeout values for the CNs (increasing the timeout values for CNs farther away from the MN), and the asynchronous timeout value (the time between SoA and ASnd). The reason is that each CN's hub in the line adds propagation delay to the PReq frame plus the PRes frame until it arrives back at the MN.
Alternatively, you can also make use of PResChaining mode for all CNs, which suits a line topology much better. In PResChaining, the PRes sending is time-triggered and therefore compensates for the hub delays.
Best regards,
Wolfgang
Hi Wolfgang,
thanks for your advice. Unfortunately, increasing the cycle time is a solution I really have to avoid. I had a look at the PRes Chaining concept in Part C of the Draft Standard, which looks promising. However, if I understand section 3.1 correctly, the propagation delay measurement is handled by frames in the asynchronous phase of the cycle. Doesn't that mean that I will face exactly the same behavior as described in my last post, since that was also triggered by frames in the asynchronous phase (config frames from the MN to the newly connected CN)?
Best regards!
Hi Ziggisto,
You are right, the asynchronous timeout has to be increased in any case.
Anyhow, have you given PollResponse Chaining a try yet?
Best regards,
Wolfgang
Hi Wolfgang,
I'll give it a try today, but what I've noticed so far is that the behavior described in Case 01 also occurs if the three CNs are connected to the same hub (star topology).
I'll keep you up to date. Thanks for your help!
Last edit: ziggisto 2014-12-18
Hi again,
I tried to set up PRes Chaining in the network using Part C of the Draft Standard (with the Xilinx Spartan-6 CN implementations), but I might have missed something, because the CNs get stuck during the transmission of the configuration from MN to CN (first nothing happens for around 10 s, and then I get a ConfError followed by E_NMT_BP01_CF_VERIFY 0x8428). Here's what I did:
- set EPL_DLL_PRES_CHAINING_CN to TRUE
- set EPL_DLL_PRES_CHAINING_MN to TRUE
- set bit 14 of the NodeAssignment object 1F81 to "PResChaining node (No PReq is sent to this node)" for the corresponding CNs
I couldn't find any other required steps (in my case, there is no data transferred in the PReq from the MN to any CN, so I guess I don't have to consider a different mapping due to the lack of PReqs). Did I miss something?
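For reference, a minimal sketch of how these three settings look in code (illustrative only; apart from the two chaining defines, the constant and helper names below are my own, not taken from the stack):

```c
#include <stdint.h>

/* Stack configuration (e.g. EplCfg.h): enable PRes Chaining on CN and MN side */
#define EPL_DLL_PRES_CHAINING_CN   TRUE
#define EPL_DLL_PRES_CHAINING_MN   TRUE

/* NodeAssignment (object 0x1F81, sub-index = node ID): bit 14 marks the node
 * as "PResChaining node (No PReq is sent to this node)". */
#define NODEASSIGN_PRES_CHAINING   (1UL << 14)   /* bit 14 = 0x00004000 */

/* Illustrative helper: mark an existing node-assignment value as chained. */
static uint32_t markNodeAsChained(uint32_t nodeAssignment)
{
    return nodeAssignment | (uint32_t)NODEASSIGN_PRES_CHAINING;
}
```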
Another thing I would like to clarify in more detail is why the isochronous phase can be affected by connecting a node to the network at all, if the initialization and configuration data is transmitted only in the asynchronous phase. I was convinced there is a strict separation between these phases (even if an ASnd takes too long or is lost), so I was a bit confused by this observation.
Furthermore, it is not really clear to me why the cycle time would have to be increased for more than 2 nodes, even if they are connected to the same hub (see my last post). Increasing the PRes and ASnd timeouts did not help; only increasing the cycle time did. But following the theoretical calculation in THIS THREAD, the 1 ms cycle should allow more than 10 nodes to send TPDOs of 864 bytes each, so even if I consider e.g. PReq-PRes latencies and jitter, it is strange that the communication only works for 2 nodes.
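To make the reasoning explicit, here is the kind of back-of-envelope estimate I mean (the per-frame overhead and per-node latency values are assumptions for illustration, not measurements from my setup):

```c
#include <stdio.h>

/* Rough, best-case cycle time estimate for N CNs with an 864-byte PRes payload
 * at 100 Mbit/s.  The per-frame overhead, per-node latency and frame sizes are
 * assumed values for illustration, not measurements from this setup. */
int main(void)
{
    const double bit_ns      = 10.0;   /* 100 Mbit/s -> 10 ns per bit            */
    const double overhead_b  = 38.0;   /* preamble + Eth header + FCS + IFG      */
    const double preq_bytes  = 60.0 + overhead_b;   /* minimum-size PReq         */
    const double pres_bytes  = 864.0 + overhead_b;  /* CN TPDO payload of 864 B  */
    const double soc_bytes   = 60.0 + overhead_b;
    const double soa_bytes   = 60.0 + overhead_b;
    const double node_lat_ns = 1000.0; /* hub propagation + CN response (~IFG)   */
    const int    num_cn      = 12;

    double cycle_ns = (soc_bytes + soa_bytes) * 8.0 * bit_ns;
    for (int i = 0; i < num_cn; i++)
        cycle_ns += (preq_bytes + pres_bytes) * 8.0 * bit_ns + node_lat_ns;

    printf("estimated best-case cycle: %.1f us for %d CNs\n",
           cycle_ns / 1000.0, num_cn);
    return 0;
}
```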
Thanks for your help, I appreciate it. Maybe I got some points wrong, so this clarification is really valuable for my usage of EPL.
Best regards!
Last edit: ziggisto 2014-12-18
Hello,
I did a further investigation of the behavior by measuring the RX and TX signals of all nodes in the network and comparing the signals to Figure 25 of the EPSG Draft Standard 301 (page 48). It seems that the problem is caused by the fact that in my test network the delay t_PRes-PReq_MN is between 150 and 200 µs after each PRes reception. In theory (and in my "best case" cycle time calculation), however, this delay should be the IFG of approx. 1 µs. Hence operation with a 1 ms cycle is not possible for more than 3 nodes. The draft standard states that this delay is defined by the MN device description entry D_NMT_MNPRes2PReq_U32. I found the parameter in the generic CiA302-4_MN XDD file in the MNFeatures as NMTMNPRes2PReq="0". So my questions here are:
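A quick check of what this gap does to the cycle budget (rounded values; the frame and overhead times are rough assumptions):

```c
#include <stdio.h>

/* How many CNs fit into a 1 ms cycle if every PReq/PRes exchange is followed
 * by a ~175 us t_PRes-PReq_MN gap?  Frame and overhead times are rough
 * assumptions for illustration. */
int main(void)
{
    const double framePairUs = 80.0;   /* PReq + 864-byte PRes on the wire (approx.) */
    const double gapUs       = 175.0;  /* observed t_PRes-PReq_MN, mid-range         */
    const double overheadUs  = 50.0;   /* SoC, SoA, asynchronous phase (assumed)     */

    for (int n = 1; n <= 5; n++)
    {
        double cycleUs = overheadUs + n * (framePairUs + gapUs);
        printf("%d CN(s): %6.0f us %s 1000 us\n",
               n, cycleUs, cycleUs <= 1000.0 ? "<=" : ">");
    }
    return 0;
}
```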
1.) If this parameter is set to 0 in the XDD file, does it mean that the delay should be "as low as possible" (= IFG)?
2.) Is there any way to configure or reduce this delay, or is it determined only by the performance of the MN implementation? Since this parameter is not part of the Object Dictionary, I don't know where to configure it apart from the XDD file.
Thanks in advance! If you find the time, it would be very helpful for me if you could also comment on my last post.
Best regards!
One more remark: I found out that increasing the CPU load (as described in this Linux RT-PREEMPT MN paper) leads to a reduction of this delay to approximately 70 µs in this case. Still, it is not clear to me whether anything in the configuration could cause an undesired increase of this delay.
Best regards!
Hi Ziggisto,
If you want to change the delay that the MN waits after the SoC message is transferred, you have to change an init parameter. You can find this init parameter in the demo_mn_console app in main.c, line 225 (V2.1.0); it is called "initParam.waitSocPreq" and by default it is set to 150 µs. This is the time you describe in your post of 2015-01-07. You can decrease this value down to the time it takes for the SoC frame to arrive at the farthest node, also taking into account the jitter in the system. You can analyze this using the B&R X20ET8819 Powerlink analyzer, or you can calculate it.
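As a minimal sketch (the API type name is from the V2 headers as far as I remember, so please double-check it against your stack version):

```c
#include <oplk/oplk.h>   /* openPOWERLINK V2.x API header */

/* Sketch based on demo_mn_console/main.c: waitSocPreq is given in nanoseconds.
 * Only the field discussed here is shown; the demo fills many more. */
static void setWaitSocPreq(tOplkApiInitParam* pInitParam)
{
    /* Default is 150000 ns (150 us).  Reduce it towards the time the SoC needs
     * to propagate to the farthest node, plus a margin for jitter. */
    pInitParam->waitSocPreq = 150000;
}
```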
Kind regards!
Last edit: Niels 2015-01-26
Hi,
@Niels
You mean 150 µs. The default value is set to 150000 ns = 150 µs, as the EPL specification defines WaitSoCPreq to be in ns.
Regards
Hi,
thanks for your answers! Niels, I'm already using this parameter set to 0 (which gives the best results), but it only affects the delay from the SoC to the first PReq. The delay I'm describing in my post from 2015-01-07 is a different one; refer to Figure 25 of the EPSG Draft Standard 301 (page 48) together with my post.
Best regards!
Hi all,
I know this is an old post; however, I have noticed a strange behavior in my OPLK network setup (related to the issue described by Ziggisto), so I just want to share my perspective. Any comments from OPLK experts are very welcome.
I have a setup with two OPLK nodes (one MN and one CN), both based on Altera DE2-115 boards, and one CN based on a Hilscher CIFX card. The network operates in standard PReq-PRes mode. I noticed that the PResTimeout value defines the time between two PReqs, irrespective of the instant when the PRes from a CN is received, i.e. it seems that the OPLK MN is designed in a way that PReq frame sending is time-triggered (instead of event-triggered), which (in my opinion) deviates from the standard. AFAIK, the standard document defines that the PReq frame for the next CN is sent immediately after PRes reception (with an MN-implementation-specific delay known as the PResPReqMN delay, which is at best equal to the IFG), and that the PResTimeout value is only used as an error detection mechanism (i.e. to detect late or lost PRes frames). I came to this conclusion when changing this value in openConfigurator (which does not allow the timeout to be configured below 25 µs, i.e. it issues a warning).
Just in case someone is interested in providing comments: I have used OPLK V2.2.0 (demo_mn_embedded and demo_cn_embedded).
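To illustrate the difference I mean with some made-up numbers: with a time-triggered MN the isochronous phase grows with the configured PResTimeout of every CN, whereas with an event-triggered MN it grows only with the actual round-trip times.

```c
#include <stdio.h>

/* Illustration of the observation above (all numbers invented):
 * time-triggered MN  -> the next PReq goes out when the PResTimeout elapses,
 * event-triggered MN -> the next PReq goes out right after the PRes arrives. */
int main(void)
{
    const double presTimeoutUs[3]     = { 25.0, 25.0, 25.0 };  /* configured */
    const double actualRoundTripUs[3] = { 12.0, 15.0, 18.0 };  /* measured   */

    double timeTriggeredUs = 0.0, eventTriggeredUs = 0.0;
    for (int i = 0; i < 3; i++)
    {
        timeTriggeredUs  += presTimeoutUs[i];       /* waits the full timeout */
        eventTriggeredUs += actualRoundTripUs[i];   /* waits only the RTT     */
    }

    printf("isochronous phase: time-triggered %.0f us, event-triggered %.0f us\n",
           timeTriggeredUs, eventTriggeredUs);
    return 0;
}
```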
@Ziggisto
I do not know if the above applies to your setup (i.e. if you are using OPLK as the MN), but you could try to decrease (not increase) the PResTimeout value in order to reduce the cycle duration. Also, you could try to place your MN at the center hub (in your picture) instead of the left one, to reduce the number of hub levels (and therefore the propagation delay) between the MN and the most distant CN.
All best!
Hello Mladen,
you're right, the openPOWERLINK MN implementation uses the PResTimeout for setting up the hrestimer, which starts the PReq frame transmission when it elapses. In the case of the i210 and openMAC, the frame transmission is done with hardware support.
In my view it does not bring any benefit to network performance if the MN is able to send the subsequent frame immediately, because under worst-case (but operational) conditions we have the "just in time" case (4.7.6.1.1.1, case 2) anyway.
Best regards,
Joerg
Hi Joerg,
Thank you for the quick answer.
I agree with you about the network performance. However, this way you put more effort into the engineering process, given that (in order to achieve the best performance) one needs to calculate the round-trip delay for each CN, which can be a very tedious and error-prone task (especially in the case of large networks with more complex topologies). With the event-triggered approach (as defined by the standard document), you only need to calculate the worst-case round-trip delay and simply set it as the timeout value for every CN in the network.
One additional remark: I noticed that openConfigurator limits this value to a minimum of 25 µs; can you explain why? If we consider a network with only a few (daisy-chained) CNs, the configuration tool should allow lower values (assuming the CN response time is equal to the IFG), especially for the nodes closer to the MN.
Regards,
Mladen
Hi Mladen,
the openPOWERLINK stack versions <= V1.6 used the approach of transmitting the next PReq immediately after receiving a PRes. Because of the large latencies involved in transmitting Ethernet frames from the stack to the Ethernet controller to the wire, and in receiving Ethernet frames vice versa, we decided to change the implementation approach in version 1.7. Especially the interrupt latency has a large performance impact on pure software implementations. So we decoupled the transmit path from the receive path and use the time-triggered transmit approach.
If you have good high-res timers (with very low jitter, like on an FPGA), you can use very low values for PResTimeout.
And yes, you need to calculate the PResTimeout very accurately (especially if you want to reach low cycle times); that is part of how POWERLINK works. If you use PResChaining, the calculation based on the actual measured round-trip times is done by the MN at run-time.
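As an illustration of such a per-CN calculation (the delay values are only placeholders; whether the PReq/PRes transmission times have to be added on top depends on how the timeout is defined in your setup):

```c
#include <stdio.h>

/* Illustrative per-CN round-trip estimate: PReq propagation to the CN,
 * CN response latency, PRes propagation back, plus a jitter margin.
 * All delay values are placeholders. */
static double roundTripNs(int hubLevels)
{
    const double hubDelayNs   = 500.0;   /* per hub level, per direction */
    const double cableDelayNs = 50.0;    /* per segment, per direction   */
    const double cnLatencyNs  = 1000.0;  /* CN PReq -> PRes response     */
    const double marginNs     = 2000.0;  /* jitter margin                */

    return 2.0 * hubLevels * (hubDelayNs + cableDelayNs) + cnLatencyNs + marginNs;
}

int main(void)
{
    for (int hubs = 1; hubs <= 4; hubs++)
        printf("CN behind %d hub level(s): round trip >= %.1f us\n",
               hubs, roundTripNs(hubs) / 1000.0);
    return 0;
}
```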
cu,
Daniel
SYS TEC electronic GmbH
Hi Daniel,
Thank you for the additional explanations.
I am not questioning the reasons behind this choice. I just want to say that the openPOWERLINK implementation deviates from the specification.
Are you sure that the time-triggered PReq sending approach has been included since v1.7? As far as I can see, the event-triggered approach was still used in v1.8.x (look at line 3877 in the EplDllk.c file, function EplDllkChangeState(), in oplk v1.08.4). The next PReq is sent as soon as a PRes frame is received. I think the change was introduced in v2.
I am also wondering why the auto-response feature of openMAC is not used for sending the next PReq (the PRes filter in the MN could be adjusted every time a new PReq is sent).
And a last question: is there any plan for creating a user-friendly application that would calculate optimal PRes timeouts automatically (based on topology information entered by a user)? This way, network configuration would be easier to manage. Such a tool could be integrated into openCONFIGURATOR as well.
Regards,
Mladen
Hi Mladen,
I wouldn't say so. Both implementation approaches are specification-compliant. The specification mentions the option of transmitting the PReq immediately after receiving the PRes; that is the earliest point in time the PReq can be transmitted. The latest point in time is the elapse of the PRes timeout. For openPOWERLINK we chose to always use the latter.
The mentioned code line is encapsulated by a conditional compile switch: the code still includes the old event-triggered implementation, which can be enabled by defining EPL_DLL_DISABLE_EDRV_CYCLIC to TRUE. The default, however, is the time-triggered approach.
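For clarity, that switch is just a compile-time define (the configuration header name is project-specific):

```c
/* e.g. in EplCfg.h: re-enable the legacy event-triggered PReq transmission.
 * If left at FALSE (the default), the time-triggered path is used. */
#define EPL_DLL_DISABLE_EDRV_CYCLIC   TRUE
```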
There is only a limited number of Ethernet buffer descriptors available for auto-response (16, if I remember correctly), so the number of CNs would be limited to a very low value. Furthermore, the error handling (elapse of the PRes timeout) would be very complicated.
I agree, this kind of tool would be very nice. But from our side there are currently no plans to implement such kind of tool. Maybe Wolfgang Seiss has more information about this.
cu,
Daniel
SYS TEC electronic GmbH