linuxptp-users Mailing List for linuxptp (Page 128)
PTP IEEE 1588 stack for Linux
Brought to you by:
rcochran
From: Keller, J. E <jac...@in...> - 2015-05-06 21:19:15
Hi, +e1000-devel

On Tue, 2015-05-05 at 13:58 +0000, Cognault Marc wrote:
> Hello,
>
> We are trying to deploy linuxptp on the following platform:
>
>   Intel Core i7 4700EQ
>   Ethernet controller: Intel I218-LM
>   Debian 8 - kernel 3.16.7
>   Driver: e1000e 3.1.0.2
>
> Although we use the latest e1000e release, synchronization fails from time to time in a way similar to this issue:
> https://www.mail-archive.com/linuxptp-users%40lists.sourceforge.net/msg00089.html
>
> ptp4l log output:
>
> ptp4l[282013.033]: rms 4967 max 13253 freq +25 +/- 6813 delay 24447 +/- 2423
> ptp4l[282014.034]: rms 5018 max 13444 freq -433 +/- 6870 delay 24234 +/- 2564
> ptp4l[282015.035]: rms 5649 max 14758 freq -492 +/- 7760 delay 23901 +/- 2745
> ptp4l[282016.036]: rms 4768 max 12375 freq -46 +/- 6558 delay 23501 +/- 3154
> ptp4l[282017.036]: rms 4923 max 12944 freq -236 +/- 6765 delay 24439 +/- 2169
> ptp4l[282018.038]: rms 9913 max 26066 freq -1041 +/- 13614 delay 21056 +/- 6639
> ptp4l[282018.163]: clockcheck: clock jumped forward or running faster than expected!
> ptp4l[282018.163]: port 1: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT
> ptp4l[282018.413]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
> ptp4l[282019.039]: rms 35184372090195 max 70368744180407 freq -2994 +/- 7974 delay 24286 +/- 2607
> ptp4l[282020.040]: rms 6215 max 11297 freq -3239 +/- 8480 delay 23951 +/- 2655
> ptp4l[282021.040]: rms 5511 max 11347 freq +1238 +/- 7033 delay 24534 +/- 2700
> ptp4l[282022.041]: rms 4514 max 10977 freq +634 +/- 6153 delay 25364 +/- 247
>
> This issue happens when both ptp4l and phc2sys are running, and does not happen if phc2sys is not started.
>
> As this issue should have been solved with e1000e rev 3.0.4, does somebody know where the problem lies (linuxptp, e1000e, the I218, ...)?
>
> Regards,
> Marc Cognault

I have forwarded this to e1000-devel so that the proper maintainers of e1000e see your message. Hopefully they can shed some light on this.

Regards,
Jake
From: Keller, J. E <jac...@in...> - 2015-05-06 18:27:44
On Tue, 2015-05-05 at 17:54 +0200, Richard Cochran wrote:
> On Tue, May 05, 2015 at 01:58:24PM +0000, Cognault Marc wrote:
> > This issue happens when both ptp4l and phc2sys are running, and does not happen if phc2sys is not started.
> >
> > As this issue should have been solved with e1000e rev 3.0.4, does somebody know where the problem lies (linuxptp, e1000e, the I218, ...)?
>
> This sounds very similar to a reported hardware issue with the
> I217-LM. I have no idea about when or if Intel patched their driver.
> One person wrote me that he could trigger the bug by simply reading
> out the clock in a tight loop in a test program.
>
> HTH,
> Richard

Richard,

You don't by chance still have that email? I am trying to find it in the archive.

Regards,
Jake
From: Richard C. <ric...@gm...> - 2015-05-05 15:54:15
On Tue, May 05, 2015 at 01:58:24PM +0000, Cognault Marc wrote:
> This issue happens when both ptp4l and phc2sys are running, and does not happen if phc2sys is not started.
>
> As this issue should have been solved with e1000e rev 3.0.4, does somebody know where the problem lies (linuxptp, e1000e, the I218, ...)?

This sounds very similar to a reported hardware issue with the I217-LM. I have no idea about when or if Intel patched their driver. One person wrote me that he could trigger the bug by simply reading out the clock in a tight loop in a test program.

HTH,
Richard
From: Cognault M. <mar...@th...> - 2015-05-05 13:58:38
Hello,

We are trying to deploy linuxptp on the following platform:

  Intel Core i7 4700EQ
  Ethernet controller: Intel I218-LM
  Debian 8 - kernel 3.16.7
  Driver: e1000e 3.1.0.2

Although we use the latest e1000e release, synchronization fails from time to time in a way similar to this issue:
https://www.mail-archive.com/linuxptp-users%40lists.sourceforge.net/msg00089.html

ptp4l log output:

ptp4l[282013.033]: rms 4967 max 13253 freq +25 +/- 6813 delay 24447 +/- 2423
ptp4l[282014.034]: rms 5018 max 13444 freq -433 +/- 6870 delay 24234 +/- 2564
ptp4l[282015.035]: rms 5649 max 14758 freq -492 +/- 7760 delay 23901 +/- 2745
ptp4l[282016.036]: rms 4768 max 12375 freq -46 +/- 6558 delay 23501 +/- 3154
ptp4l[282017.036]: rms 4923 max 12944 freq -236 +/- 6765 delay 24439 +/- 2169
ptp4l[282018.038]: rms 9913 max 26066 freq -1041 +/- 13614 delay 21056 +/- 6639
ptp4l[282018.163]: clockcheck: clock jumped forward or running faster than expected!
ptp4l[282018.163]: port 1: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT
ptp4l[282018.413]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[282019.039]: rms 35184372090195 max 70368744180407 freq -2994 +/- 7974 delay 24286 +/- 2607
ptp4l[282020.040]: rms 6215 max 11297 freq -3239 +/- 8480 delay 23951 +/- 2655
ptp4l[282021.040]: rms 5511 max 11347 freq +1238 +/- 7033 delay 24534 +/- 2700
ptp4l[282022.041]: rms 4514 max 10977 freq +634 +/- 6153 delay 25364 +/- 247

This issue happens when both ptp4l and phc2sys are running, and does not happen if phc2sys is not started.

As this issue should have been solved with e1000e rev 3.0.4, does somebody know where the problem lies (linuxptp, e1000e, the I218, ...)?

Regards,
Marc Cognault
From: Richard C. <ric...@gm...> - 2015-04-21 19:48:31
On Tue, Apr 21, 2015 at 06:36:27PM +0000, Matthew Huff wrote:
> I've signed up to this list as well as SourceForge, but when I try to access the archives, I get an Error 403, "Read access required". Any suggestions?

This happened to me before when SF reset everyone's passwords. But I wouldn't bother with the SF site. The UI is awful, putting it mildly. Try gmane instead:

http://news.gmane.org/gmane.comp.linux.ptp.user
http://news.gmane.org/gmane.comp.linux.ptp.devel

Thanks,
Richard
From: Matthew H. <mh...@ox...> - 2015-04-21 19:11:08
I've signed up to this list as well as SourceForge, but when I try to access the archives, I get an Error 403, "Read access required". Any suggestions?

----
Matthew Huff           | 1 Manhattanville Rd
Director of Operations | Purchase, NY 10577
OTA Management LLC     | Phone: 914-460-4039
aim: matthewbhuff      | Fax: 914-694-5669
From: Richard C. <ric...@gm...> - 2015-04-03 09:56:01
On Thu, Apr 02, 2015 at 11:40:28PM +0000, Daniel Le wrote:
> Thanks much for your help. I'll review the design. Meanwhile I have a couple more questions if you don't mind.

Regarding the design, it would be better to expose the FPGA clock to ptp4l directly. That means letting ptp4l set and adjust the FPGA clock. You don't have the PHC subsystem in your pre-3.0 kernel, but you can emulate the interface in the FPGA driver using ioctls. For the time stamps, you can hack your MAC driver to obtain the time stamps and deliver them via SO_TIMESTAMPING.

> (1) Could you point me to the place in the code where the ptp4l slave sets the kernel CLOCK_REALTIME for the first time with SW time stamping? I guess it does that right after transitioning from Uncalibrated into Slave state. I'm thinking the FPGA clock time should be stepped at that moment.

That happens in the function clock_synchronize() in clock.c:

	switch (state) {
	case SERVO_UNLOCKED:
		break;
	case SERVO_JUMP:
		clockadj_set_freq(c->clkid, -adj);
		clockadj_step(c->clkid,
			      -tmv_to_nanoseconds(c->master_offset)); /* <---- here */
		...
	case SERVO_LOCKED:
		...
	}

> (2) I tried to revert to sendto() and sk_receive() and ran into the following error messages:
>
>   timed out while polling for tx timestamp
>   increasing tx_timestamp_timeout may correct this issue, but it is likely caused by a driver bug
>   port <portN>: send delay request failed
>
> Does that mean sendto() of the original program needs customization?

I am not sure what you have done to the kernel and user space code. Without seeing the code, it is not possible for me to answer this question.

Sorry,
Richard
From: Daniel Le <dan...@ex...> - 2015-04-02 23:40:36
Thanks much for your help. I'll review the design. Meanwhile I have a couple more questions if you don't mind.

(1) Could you point me to the place in the code where the ptp4l slave sets the kernel CLOCK_REALTIME for the first time with SW time stamping? I guess it does that right after transitioning from Uncalibrated into Slave state. I'm thinking the FPGA clock time should be stepped at that moment.

(2) I tried to revert to sendto() and sk_receive() and ran into the following error messages:

  timed out while polling for tx timestamp
  increasing tx_timestamp_timeout may correct this issue, but it is likely caused by a driver bug
  port <portN>: send delay request failed

Does that mean sendto() of the original program needs customization?

Daniel

-----Original Message-----
From: Richard Cochran [mailto:ric...@gm...]
Sent: Thursday, April 02, 2015 12:19 PM
To: Daniel Le
Cc: lin...@li...
Subject: Re: [Linuxptp-users] SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT

On Thu, Apr 02, 2015 at 03:38:38PM +0000, Daniel Le wrote:
> [DL] The FPGA has its own clock and a proprietary slewing mechanism to sync to a time source. It does not use phc2sys because my embedded system doesn't have a 3.x Linux kernel.
>
> [DL] In the case of a PTP time source, the FPGA engine on the NIC periodically reads the kernel system time (do_gettimeofday) in order to step/slew to the system time, which is synchronized to the PTP grandmaster time.

(IOW, software time stamping)

> [DL] The ptp4l program is run with the -S option; however, when sending/receiving packets via IPv4 transport in udp_send() and udp_recv(), a timestamping pipe is used to get the FPGA hardware timestamps of the packets, instead of the functions sendto() and sk_receive().

(IOW, hardware time stamping)

This design seems wrong to me. Why not let the FPGA have its own clock, and then synchronize the Linux system time to it? That is how all the other devices do it. In any case, you have some elaborate custom kernel and ptp4l modifications. I really can't help you with those, sorry.

> [DL] Could you further elaborate on this clockcheck_sample functionality (such as uncorrected frequency)?

Please take a look at the code. It is not all that complicated.

> Is my understanding of the following correct?
> - The synchronized clock is the PTP clock and is maintained by PTP packet TX/RX timestamps per the 1588 standard.

Yes.

> - The system monotonic clock (CLOCK_MONOTONIC) is the Linux kernel system clock.

No. See 'man 3 clock_gettime'.

> [DL] What is the threshold to determine that the clock jumped forward/backward too much?

The code does this: check the sanity of the synchronized clock by comparing its uncorrected frequency with the system monotonic clock. When the measured frequency offset is larger than the value of the sanity_freq_limit option (20% by default), a warning message will be printed and the servo will be reset. Setting the option to zero disables the check. This is useful for detecting when the clock is broken or adjusted by another program.

> [DL] Upon a system boot-up or restart, how does the PTP slave clock set the system clock initially? Is CLOCK_REALTIME involved?

First of all, the ptp4l program will not start at boot-up or restart unless you arrange for that to happen. Secondly, the ptp4l program steps its target clock (either the PHC in the case of HW time stamping, or CLOCK_REALTIME for SW time stamping) to match that of the remote master, according to the 'first_step_threshold' configuration option.

HTH,
Richard
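The two options named in this exchange, sanity_freq_limit and first_step_threshold, are ordinary ptp4l configuration settings. An illustrative config fragment follows; the option names are real, and the values shown are the defaults as I understand them from the ptp4l man page (verify against your linuxptp version):

```ini
[global]
# Step (rather than slew) the clock on the first clock update if the
# offset exceeds this many seconds; 0.0 disables the first step.
first_step_threshold   0.00002

# Reset the servo and warn when the clock's uncorrected frequency
# differs from CLOCK_MONOTONIC by more than this many parts per
# billion (200000000 ppb = 20%); 0 disables the clockcheck.
sanity_freq_limit      200000000
```

Raising sanity_freq_limit only hides the symptom discussed here; if another process (such as a custom FPGA slewing mechanism) is adjusting the same clock, the root cause is the double adjustment, not the check.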
From: Richard C. <ric...@gm...> - 2015-04-02 16:18:48
On Thu, Apr 02, 2015 at 03:38:38PM +0000, Daniel Le wrote:
> [DL] The FPGA has its own clock and a proprietary slewing mechanism to sync to a time source. It does not use phc2sys because my embedded system doesn't have a 3.x Linux kernel.
>
> [DL] In the case of a PTP time source, the FPGA engine on the NIC periodically reads the kernel system time (do_gettimeofday) in order to step/slew to the system time, which is synchronized to the PTP grandmaster time.

(IOW, software time stamping)

> [DL] The ptp4l program is run with the -S option; however, when sending/receiving packets via IPv4 transport in udp_send() and udp_recv(), a timestamping pipe is used to get the FPGA hardware timestamps of the packets, instead of the functions sendto() and sk_receive().

(IOW, hardware time stamping)

This design seems wrong to me. Why not let the FPGA have its own clock, and then synchronize the Linux system time to it? That is how all the other devices do it. In any case, you have some elaborate custom kernel and ptp4l modifications. I really can't help you with those, sorry.

> [DL] Could you further elaborate on this clockcheck_sample functionality (such as uncorrected frequency)?

Please take a look at the code. It is not all that complicated.

> Is my understanding of the following correct?
> - The synchronized clock is the PTP clock and is maintained by PTP packet TX/RX timestamps per the 1588 standard.

Yes.

> - The system monotonic clock (CLOCK_MONOTONIC) is the Linux kernel system clock.

No. See 'man 3 clock_gettime'.

> [DL] What is the threshold to determine that the clock jumped forward/backward too much?

The code does this: check the sanity of the synchronized clock by comparing its uncorrected frequency with the system monotonic clock. When the measured frequency offset is larger than the value of the sanity_freq_limit option (20% by default), a warning message will be printed and the servo will be reset. Setting the option to zero disables the check. This is useful for detecting when the clock is broken or adjusted by another program.

> [DL] Upon a system boot-up or restart, how does the PTP slave clock set the system clock initially? Is CLOCK_REALTIME involved?

First of all, the ptp4l program will not start at boot-up or restart unless you arrange for that to happen. Secondly, the ptp4l program steps its target clock (either the PHC in the case of HW time stamping, or CLOCK_REALTIME for SW time stamping) to match that of the remote master, according to the 'first_step_threshold' configuration option.

HTH,
Richard
From: Daniel Le <dan...@ex...> - 2015-04-02 15:38:46
Please see inline.

-----Original Message-----
From: Richard Cochran [mailto:ric...@gm...]
Sent: Thursday, April 02, 2015 2:32 AM
To: Daniel Le
Cc: lin...@li...
Subject: Re: [Linuxptp-users] SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT

On Wed, Apr 01, 2015 at 10:18:27PM +0000, Daniel Le wrote:
> My PTP slave clock appears to lose sync with a grandmaster clock when
> under heavy load and, worse, it can't recover. The sync is good when
> there is low or no other traffic. This slave clock uses software
> timestamping to adjust the host system time. The PTP transmit and
> receive packets are time stamped by a non-1588-aware NIC's FPGA clock
> which is sync'd to the host system clock, i.e. the NIC regularly gets
> the host system time to step/slew to it.

This sounds fishy to me. You say your slave uses SW time stamping, but that the FPGA provides time stamps. That is HW time stamping! Also, since the Linux system time is purely software, how do you get its time into the FPGA? By using phc2sys?

[DL] The FPGA has its own clock and a proprietary slewing mechanism to sync to a time source. It does not use phc2sys because my embedded system doesn't have a 3.x Linux kernel.

[DL] In the case of a PTP time source, the FPGA engine on the NIC periodically reads the kernel system time (do_gettimeofday) in order to step/slew to the system time, which is synchronized to the PTP grandmaster time.

[DL] The ptp4l program is run with the -S option; however, when sending/receiving packets via IPv4 transport in udp_send() and udp_recv(), a timestamping pipe is used to get the FPGA hardware timestamps of the packets, instead of the functions sendto() and sk_receive().

> The log shows:
> - port <port#>: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT
>
> and the following repetitive messages:
> - clockcheck: clock jumped forward or running faster than expected!
> - clockcheck: clock jumped backward or running slower than expected!
>
> I would appreciate information to debug this, as well as an explanation of what may be happening.

That message comes from the function clockcheck_sample(), in clockcheck.c. It does the following:

	/* Check the sanity of the synchronized clock by comparing its
	   uncorrected frequency with the system monotonic clock. If the
	   synchronized clock is the system clock, the measured frequency
	   offset will be the current frequency correction of the system
	   clock. */

This is a sanity check against CLOCK_MONOTONIC. Probably there is a bug in your custom HW design or in the system/FPGA synchronization method.

[DL] Could you further elaborate on this clockcheck_sample functionality (such as uncorrected frequency)? Is my understanding of the following correct?
- The synchronized clock is the PTP clock and is maintained by PTP packet TX/RX timestamps per the 1588 standard.
- The system monotonic clock (CLOCK_MONOTONIC) is the Linux kernel system clock.

[DL] What is the threshold to determine that the clock jumped forward/backward too much?

[DL] Upon a system boot-up or restart, how does the PTP slave clock set the system clock initially? Is CLOCK_REALTIME involved?

Thank you.
Daniel
From: Richard C. <ric...@gm...> - 2015-04-02 06:32:13
On Wed, Apr 01, 2015 at 10:18:27PM +0000, Daniel Le wrote:
> My PTP slave clock appears to lose sync with a grandmaster clock
> when under heavy load and, worse, it can't recover. The sync is good
> when there is low or no other traffic. This slave clock uses
> software timestamping to adjust the host system time. The PTP
> transmit and receive packets are time stamped by a non-1588-aware
> NIC's FPGA clock which is sync'd to the host system clock, i.e. the
> NIC regularly gets the host system time to step/slew to it.

This sounds fishy to me. You say your slave uses SW time stamping, but that the FPGA provides time stamps. That is HW time stamping! Also, since the Linux system time is purely software, how do you get its time into the FPGA? By using phc2sys?

> The log shows:
> - port <port#>: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT
>
> and the following repetitive messages:
> - clockcheck: clock jumped forward or running faster than expected!
> - clockcheck: clock jumped backward or running slower than expected!
>
> I would appreciate information to debug this, as well as an explanation of what may be happening.

That message comes from the function clockcheck_sample(), in clockcheck.c. It does the following:

	/* Check the sanity of the synchronized clock by comparing its
	   uncorrected frequency with the system monotonic clock. If the
	   synchronized clock is the system clock, the measured frequency
	   offset will be the current frequency correction of the system
	   clock. */

This is a sanity check against CLOCK_MONOTONIC. Probably there is a bug in your custom HW design or in the system/FPGA synchronization method.

Thanks,
Richard
From: Daniel Le <dan...@ex...> - 2015-04-01 22:18:35
Hello,

My PTP slave clock appears to lose sync with a grandmaster clock when under heavy load and, worse, it can't recover. The sync is good when there is low or no other traffic. This slave clock uses software timestamping to adjust the host system time. The PTP transmit and receive packets are time stamped by a non-1588-aware NIC's FPGA clock which is sync'd to the host system clock, i.e. the NIC regularly gets the host system time to step/slew to it.

The log shows:
- port <port#>: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT

and the following repetitive messages:
- clockcheck: clock jumped forward or running faster than expected!
- clockcheck: clock jumped backward or running slower than expected!

I would appreciate information to debug this, as well as an explanation of what may be happening.

Thanks,
Daniel
From: Chandra M. <sma...@al...> - 2015-03-20 09:52:12
Hi Richard,

Thank you for the details. As soon as I get free from my project, I will share the details of the 10 Gbps 1588 system. My understanding is that looking at the offset, its deviation from the rms, and the ppb deviation tells us clearly the efficacy of 1588. Definitely, there is saturation beyond a certain number. However, from our experiments, 512 Syncs per second have looked very promising.

You are absolutely right - 1588 cannot beat SyncE, due to the latter's operation at layer 1 as opposed to 1588's packet-based operation at layer 2 and above. However, there is an effort to dispel the myth (if I dare say so) that syntonization HAS TO BE achieved with only SyncE. Many requirements out there are not that stringent.

In any case, I thank you all for your patient and cordial responses.

Thanking you in anticipation,
Regards,
Chandra
(c): 0175508142 (O): 701.6412
"Knowledge speaks, Wisdom listens"

-----Original Message-----
From: Richard Cochran [mailto:ric...@gm...]
Sent: Friday, March 20, 2015 12:29 AM
To: Chandra Mallela
Cc: Miroslav Lichvar; lin...@li...
Subject: Re: [Linuxptp-users] Expected throughput of the ptp4l

On Wed, Mar 18, 2015 at 09:29:02AM +0100, Richard Cochran wrote:
> I have seen good results with 1, 2, 4, and 8 packets per second on a
> low end embedded system. With 128, some time stamps are dropped due
> to hardware/driver constraints.

Here is a random metric from the boards on my desk. The CPUs are the TI AM335x, but using the DP83640 PHY as the PTP Hardware Clock. The slave is directly connected to the master with a 1 meter cable over a 100 Mbit link. Measuring the edges of a 1 kHz output at various Sync rates, and with a DelayReq rate of 1 Hz, I see the following differences.

  Rate   Offset
  ----------------
  2^0    +/- 75 ns
  2^-3   +/- 25 ns
  2^-4   +/- 20 ns
  2^-5   +/- 20 ns

As expected, increasing the DelayReq rate to 2^-5 makes no difference. So with this hardware, I have already reached the limit of synchronization performance. Increasing the Sync rate to 512 frames per second would not improve the picture.

In contrast, using SyncE on the exact same hardware immediately yields an offset of +/- 1 nanosecond. (Actually, it probably is smaller, but I can't measure it with my lousy scope.)

HTH,
Richard

________________________________
Confidentiality Notice. This message may contain information that is confidential or otherwise protected from disclosure. If you are not the intended recipient, you are hereby notified that any use, disclosure, dissemination, distribution, or copying of this message, or any attachments, is strictly prohibited. If you have received this message in error, please advise the sender by reply e-mail, and delete the message and any attachments. Thank you.
From: Richard C. <ric...@gm...> - 2015-03-19 16:29:43
On Wed, Mar 18, 2015 at 09:29:02AM +0100, Richard Cochran wrote:
> I have seen good results with 1, 2, 4, and 8 packets per second on a
> low end embedded system. With 128, some time stamps are dropped due
> to hardware/driver constraints.

Here is a random metric from the boards on my desk. The CPUs are the TI AM335x, but using the DP83640 PHY as the PTP Hardware Clock. The slave is directly connected to the master with a 1 meter cable over a 100 Mbit link. Measuring the edges of a 1 kHz output at various Sync rates, and with a DelayReq rate of 1 Hz, I see the following differences.

  Rate   Offset
  ----------------
  2^0    +/- 75 ns
  2^-3   +/- 25 ns
  2^-4   +/- 20 ns
  2^-5   +/- 20 ns

As expected, increasing the DelayReq rate to 2^-5 makes no difference. So with this hardware, I have already reached the limit of synchronization performance. Increasing the Sync rate to 512 frames per second would not improve the picture.

In contrast, using SyncE on the exact same hardware immediately yields an offset of +/- 1 nanosecond. (Actually, it probably is smaller, but I can't measure it with my lousy scope.)

HTH,
Richard
From: Richard C. <ric...@gm...> - 2015-03-18 12:29:03
On Wed, Mar 18, 2015 at 12:11:57PM +0000, Chandra Mallela wrote:
> My understanding is that logAnnounceInterval (default 1 in every two seconds) is to be set up at the master and announceReceiptTimeout (default 3 messages before the last message reception) at the slave. Am I right?

Yes and no. The range of choices for these two parameters is determined by the profile. For any PTP network, all of the nodes must use the same profile and the same particular values in order to ensure the correct operation of the BMC algorithm. The 'master' and 'slave' roles are not static on a given port; rather, they can change depending on the outcome of the BMC election.

Referring to IEEE 1588 might help clarify these and other points for you. Also, take a look at the telecom profile.

HTH,
Richard
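As a concrete illustration of the point above, both options are set in the ptp4l configuration and must match across the domain, since any port may become master or slave after a BMC election. The values below are the linuxptp defaults:

```ini
[global]
# Announce messages are sent every 2^logAnnounceInterval seconds,
# so 1 means one Announce every 2 seconds (the default).
logAnnounceInterval     1

# A port declares its master lost after missing this many
# consecutive Announce intervals (the default is 3).
announceReceiptTimeout  3
```

With these defaults, a slave port times out roughly 3 * 2 = 6 seconds after the last Announce message and re-runs the BMC algorithm.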
From: Chandra M. <sma...@al...> - 2015-03-18 12:12:16
Hi Richard,

I think the following fell through the cracks. My main intention is to understand the usage of the options 'logAnnounceInterval' and 'announceReceiptTimeout'. Could you throw some light on them? Do you have any link that elucidates the relation between their usage and the PTP profiles?

My understanding is that logAnnounceInterval (default 1 in every two seconds) is to be set up at the master and announceReceiptTimeout (default 3 messages before the last message reception) at the slave. Am I right?

Thanking you in anticipation,
Regards,
Chandra
(c): 0175508142 (O): 701.6412
"Knowledge speaks, Wisdom listens"

-----Original Message-----
From: Richard Cochran [mailto:ric...@gm...]
Sent: Wednesday, March 18, 2015 1:57 AM
To: Chandra Mallela
Cc: lin...@li...
Subject: Re: [Linuxptp-users] logAnnounceInterval Vs announceReceiptTimeout

On Tue, Mar 17, 2015 at 05:21:47PM +0000, Chandra Mallela wrote:
> ***Ours is a typical telecom-networking profile. Could you point me to any info link that defines the parameters for different profiles?

First of all, the telecom profile is unicast only. We do not support unicast. Secondly, from the telecom profile:

  A.3.4 REQUEST_UNICAST_TRANSMISSION TLV

  For requesting unicast Sync messages: The configurable range shall be
  one message every 16 seconds to 128 messages per second. No default
  rate is specified.

  For requesting unicast Delay_Resp messages: The configurable range
  shall be one message every 16 seconds to 128 messages per second. No
  default rate is specified.

  ITU-T G.8265.1, page 20

So, your 512 figure is *way* out of spec.

> ***I understand the 1-step part. However, reducing DelayReq might not make sense. Average path delay should be at least half the rate of Sync packets to statistically arrive at better values of path delay. 1 Hz seems quite low, especially for telecom applications.

If the path is not changing, then it does not make sense to measure it so often. I would expect a telecom network to be engineered to have fixed delays.

> I will google about SO_SELECT_ERR_QUEUE. However, if you have any good link, please let me know.

This means using Linux kernel version 3.10 or newer.

Thanks,
Richard
From: Chandra M. <sma...@al...> - 2015-03-18 10:03:35
Hi Richard,

I am sorry for frustrating you. I hear you louder for sure!! You have got me right - yes, the ability of 1588 should not stop with just offset correction. I want to understand what it takes for 1588 to match SyncE. Yes, there are many roadblocks, and what you have mentioned/admonished is already under our scrutiny.

Thank you all for sharing your answers/opinions.

Thanking you in anticipation,
Regards,
Chandra
(c): 0175508142 (O): 701.6412
"Knowledge speaks, Wisdom listens"

-----Original Message-----
From: Richard Cochran [mailto:ric...@gm...]
Sent: Wednesday, March 18, 2015 4:29 PM
To: Chandra Mallela
Cc: Miroslav Lichvar; lin...@li...
Subject: Re: [Linuxptp-users] Expected throughput of the ptp4l

On Wed, Mar 18, 2015 at 02:58:46AM +0000, Chandra Mallela wrote:
> From my interactions with the customer as an architect, I could see the need for 512 Syncs per second, due to high-end frequency correction (think of a 1588 application to replace/compete with a SyncE application). In a pure telecom application, high frequency Sync packets might encroach into data traffic. However, in back-hauling (be it telecom or networking), where high throughput is already a given parameter, high frequency Sync packets are desirable for frequency corrections. On offset correction, I can take liberties on DelayReq sets.

If you have SyncE in place already, then you don't need the 512 rate. In fact, all you need is *one* offset measurement in order to correct the offset, once the clocks are locked. If you are trying to replace a SyncE system with a non-SyncE PTP system, then you can forget trying to match the SyncE synchronization performance. It just is not feasible, to my knowledge.

> Do we have the performance metrics of ptp4l on a standard Linux OS? At what rates of packets does the stack break down? If we have any data, it will be good for us to know. If you have any ideas as to how to arrive at the numbers, please let me know - I can try it out with the system at my hand.

The answer is, "it depends." I have seen good results with 1, 2, 4, and 8 packets per second on a low end embedded system. With 128, some time stamps are dropped due to hardware/driver constraints. Any performance metric is tied to the particular hardware used, and so even if I had such metrics, they would not help you too much.

I already told you this more than once: If you really want to run 512 packets per second, then you are going to have to do some careful hardware selection and some serious system tuning. It won't be easy.

Good luck,
Richard
From: Ledda W. E. <Wil...@it...> - 2015-03-18 09:20:46
|
I'm working on a project with very stringent requirements (50 ns RMS accuracy), and we are able to reach these requirements using the "default" Sync interval of 1 second and a Delay request interval of 8 seconds. I'm attaching a simple picture that is probably one of the best results I've observed after hundreds of tests. The results were collected with an Intel i350 network chipset on Red Hat 6.5 with the MRG-R kernel. Without a real-time kernel, the observed results are quite close, just with a little more jitter.

The point is: what is your requirement? Is it about the sync rate? (Why?) Is it about the ns accuracy? If it is about the accuracy, why insist on 512 Sync packets per second if you can reach the same results at 1 Hz?

As Richard says...

> If you really want to run 512 packets per second, then you are going to have to do some careful hardware selection and some serious system tuning.
> It won't be easy.
> Good luck,

HTH,
William

-----Original Message-----
From: Richard Cochran [mailto:ric...@gm...]
Sent: 18 March 2015 09:29
To: Chandra Mallela
Cc: lin...@li...
Subject: Re: [Linuxptp-users] Expected throughput of the ptp4l

On Wed, Mar 18, 2015 at 02:58:46AM +0000, Chandra Mallela wrote:
> From my interactions with the customer as an architect, I could see the need for 512 Syncs per second, due to high-end frequency correction (think of a 1588 application to replace/compete with a SyncE application. In a pure telecom application, high-frequency Sync packets might encroach into data traffic. However, in back-hauling, be it telecom or networking, where high throughput is already a given parameter, high-frequency Sync packets are desirable for frequency corrections.) On offset correction, I can take liberties on the DelayReq sets.

If you have SyncE in place already, then you don't need the 512 rate. In fact, all you need is *one* offset measurement in order to correct the offset, once the clocks are locked.

If you are trying to replace a SyncE system with a non-SyncE PTP system, then you can forget trying to match the SyncE synchronization performance. It just is not feasible, to my knowledge.

> Do we have the performance metrics of ptp4l in a standard Linux OS? At what rates of packets does the stack break down? If we have any data, it will be good for us to know. If you have any ideas as to how to arrive at the numbers, please let me know - I can try it out with the system at my hand.

The answer is, "it depends." I have seen good results with 1, 2, 4, and 8 packets per second on a low-end embedded system. With 128, some time stamps are dropped due to hardware/driver constraints. Any performance metric is tied to the particular hardware used, and so even if I had such metrics, they would not help you too much.

I already told you this more than once: if you really want to run 512 packets per second, then you are going to have to do some careful hardware selection and some serious system tuning. It won't be easy.

Good luck,
Richard

------------------------------------------------------------------------------
Dive into the World of Parallel Programming The Go Parallel Website, sponsored by Intel and developed in partnership with Slashdot Media, is your hub for all things parallel software development, from weekly thought leadership blogs to news, videos, case studies, tutorials and more. Take a look and join the conversation now. http://goparallel.sourceforge.net/
_______________________________________________
Linuxptp-users mailing list
Lin...@li...
https://lists.sourceforge.net/lists/listinfo/linuxptp-users
|
From: Richard C. <ric...@gm...> - 2015-03-18 08:29:14
|
On Wed, Mar 18, 2015 at 02:58:46AM +0000, Chandra Mallela wrote:
> From my interactions with the customer as an architect, I could see the need for 512 Syncs per second, due to high-end frequency correction (think of a 1588 application to replace/compete with a SyncE application. In a pure telecom application, high-frequency Sync packets might encroach into data traffic. However, in back-hauling, be it telecom or networking, where high throughput is already a given parameter, high-frequency Sync packets are desirable for frequency corrections.) On offset correction, I can take liberties on the DelayReq sets.

If you have SyncE in place already, then you don't need the 512 rate. In fact, all you need is *one* offset measurement in order to correct the offset, once the clocks are locked.

If you are trying to replace a SyncE system with a non-SyncE PTP system, then you can forget trying to match the SyncE synchronization performance. It just is not feasible, to my knowledge.

> Do we have the performance metrics of ptp4l in a standard Linux OS? At what rates of packets does the stack break down? If we have any data, it will be good for us to know. If you have any ideas as to how to arrive at the numbers, please let me know - I can try it out with the system at my hand.

The answer is, "it depends." I have seen good results with 1, 2, 4, and 8 packets per second on a low-end embedded system. With 128, some time stamps are dropped due to hardware/driver constraints. Any performance metric is tied to the particular hardware used, and so even if I had such metrics, they would not help you too much.

I already told you this more than once: if you really want to run 512 packets per second, then you are going to have to do some careful hardware selection and some serious system tuning. It won't be easy.

Good luck,
Richard
|
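[For readers of the archive: the message rates debated in this thread map onto ptp4l configuration options as base-2 logarithms of the interval in seconds. A minimal sketch follows; the option names are from the ptp4l man page, and the -9 values illustrate the rate under discussion, not a recommendation.]

```
# ptp4l.conf sketch (illustrative values only)
[global]
# 2^-9 s between Sync messages = 512 per second (the rate Chandra asks about)
logSyncInterval         -9
# DelayReq rate; Richard reports 0 (1/s) through -3 (8/s) working well
# on low-end embedded hardware, with timestamp loss starting around -7 (128/s)
logMinDelayReqInterval  -9
```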
From: Miroslav L. <mli...@re...> - 2015-03-18 08:24:50
|
On Tue, Mar 17, 2015 at 05:28:45PM +0000, Chandra Mallela wrote:
> 512 Sync packets are a reasonable rate for frequency corrections. In the case of ToD, the fractional HW arithmetic (2^-16) can give you the capability to achieve the highest accuracy possible, if the stack can supply such fine-grained values.

AFAIK the Linux time stamping and clock interfaces are limited to nanoseconds. As an exercise to see when things start to fall apart, a sync interval of -9 (512 messages per second) may be interesting, but I'm not sure if there is any practical value.

> As for 'what is that supposed to achieve?', in an ideal scenario, targeting 50 ppb for CDMA is what I look at. I am further trying to analyze SyncE requirements from a 1588 perspective, which seems too tough at this moment due to OS scheduling jitter itself.

Do you have any specifications on the clock stability and jitter? Unless you are dealing with some very unstable clocks or large jitters, with careful PI configuration keeping the frequency error below 50 ppb shouldn't be difficult, even at the default 1 Hz sync rate.

--
Miroslav Lichvar
|
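[The "careful PI configuration" Miroslav mentions is exposed through ptp4l's servo options. A sketch, with purely illustrative values; the option names are from the ptp4l man page, where 0.0 means "derive the constant from the sync interval".]

```
# ptp4l.conf sketch: PI servo tuning (illustrative, not a recommendation)
[global]
# proportional and integral gains of the PI clock servo;
# smaller gains filter measurement jitter harder but converge more slowly
pi_proportional_const   0.0
pi_integral_const       0.0
```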
From: Chandra M. <sma...@al...> - 2015-03-18 02:58:59
|
Hi Richard,

From my interactions with the customer as an architect, I could see the need for 512 Syncs per second, due to high-end frequency correction (think of a 1588 application to replace/compete with a SyncE application. In a pure telecom application, high-frequency Sync packets might encroach into data traffic. However, in back-hauling, be it telecom or networking, where high throughput is already a given parameter, high-frequency Sync packets are desirable for frequency corrections.) On offset correction, I can take liberties on the DelayReq sets.

Do we have the performance metrics of ptp4l in a standard Linux OS? At what rates of packets does the stack break down? If we have any data, it will be good for us to know. If you have any ideas as to how to arrive at the numbers, please let me know - I can try it out with the system at my hand.

Thanking you in anticipation,
Regards,
Chandra

(c): 0175508142
(O): 701.6412

"Knowledge speaks, Wisdom listens"

-----Original Message-----
From: Richard Cochran [mailto:ric...@gm...]
Sent: Wednesday, March 18, 2015 2:06 AM
To: Chandra Mallela
Cc: Miroslav Lichvar; lin...@li...
Subject: Re: [Linuxptp-users] Expected throughput of the ptp4l

On Tue, Mar 17, 2015 at 05:28:45PM +0000, Chandra Mallela wrote:
> As for 'what is that supposed to achieve?', in an ideal scenario, targeting 50 ppb for CDMA is what I look at. I am further trying to analyze SyncE requirements from a 1588 perspective, which seems too tough at this moment due to OS scheduling jitter itself.
>
> Please correct me as appropriate and pour in your thoughts.

So 50 ppb means 50 nanoseconds. That should be attainable with a more moderate Sync rate, but it depends on your hardware time stamping resolution, etc.

With SyncE, you need to hack ptp4l *not* to touch the frequency adjustment. I have some patches in the works for this, but you can change the code yourself in the meantime.

Using SyncE and the default 1 Hz Sync, I have seen two nodes synchronized to within 8 nanoseconds (the clock period of those systems). So, I really, truly don't see the need for 512 Syncs per second. But if you want to run that rate, then you are going to have to optimize your system, especially WRT real-time response, as I said before.

Thanks,
Richard
|
From: Dale S. <dal...@gm...> - 2015-03-17 21:04:11
|
Yep. I've seen less than 30 ppb with the gianfar driver, a Meinberg grandmaster and an Oregano TC at 1 Hz Sync.

-Dale
|
From: Richard C. <ric...@gm...> - 2015-03-17 18:05:53
|
On Tue, Mar 17, 2015 at 05:28:45PM +0000, Chandra Mallela wrote:
> As for 'what is that supposed to achieve?', in an ideal scenario, targeting 50 ppb for CDMA is what I look at. I am further trying to analyze SyncE requirements from a 1588 perspective, which seems too tough at this moment due to OS scheduling jitter itself.
>
> Please correct me as appropriate and pour in your thoughts.

So 50 ppb means 50 nanoseconds. That should be attainable with a more moderate Sync rate, but it depends on your hardware time stamping resolution, etc.

With SyncE, you need to hack ptp4l *not* to touch the frequency adjustment. I have some patches in the works for this, but you can change the code yourself in the meantime.

Using SyncE and the default 1 Hz Sync, I have seen two nodes synchronized to within 8 nanoseconds (the clock period of those systems). So, I really, truly don't see the need for 512 Syncs per second. But if you want to run that rate, then you are going to have to optimize your system, especially WRT real-time response, as I said before.

Thanks,
Richard
|
From: Keller, J. E <jac...@in...> - 2015-03-17 18:03:31
|
Hi,

The issue isn't the actual network, but how transmit timestamps are returned. The stack requests a particular packet to be timestamped, then the driver and device have to send it. Once sent, the driver and device determine the transmission time, usually via some set of registers, and then have to report this information back to userspace. Most devices only support timestamping a few packets at a time, some even only one. Thus, when you transmit too many Syncs too fast, you can lose transmit timestamps. Even if you don't lose them, they may not be returned to the ptp4l process in time.

LinuxPTP is very careful, and performs a full reset when it loses a timestamp. Theoretically you could have it keep trying until you got more failures, but it becomes difficult to say for sure why you lost a timestamp and whether resetting is better than not.

Regards,
Jake

On Tue, 2015-03-17 at 16:15 +0000, Chandra Mallela wrote:
> Hi Friends,
>
> Do we have any performance numbers for the ptp4l stack on a 10 Gbps link? When I increase the Sync packet frequency to 512 and the DelayReq packet frequency to 512, I see many synchronization faults, meaning that ptp4l runs for some time, faces synchronization faults, resumes the sync operation, faces the sync faults again, and resumes the operation, showing a lot of instability in the overall PTP stack operation.
>
> Is it due to a limitation of the stack performance, as the HW supports a 10G link? The 512 Sync packets and 512 packet sets of {DelayReq, DelayResp} do not occupy even 1% of the 10 Gbps bandwidth.
>
> Could you elaborate on your experience with ptp4l performance?
>
> Thanking you in anticipation,
> Regards,
> Chandra
|
From: Richard C. <ric...@gm...> - 2015-03-17 17:56:56
|
On Tue, Mar 17, 2015 at 05:21:47PM +0000, Chandra Mallela wrote:
> ***Ours is a typical telecom-networking profile. Could you point me to any info link that defines the parameters for different profiles?

First of all, the telecom profile is unicast only. We do not support unicast.

Secondly, from the telecom profile:

  A.3.4 REQUEST_UNICAST_TRANSMISSION TLV
  For requesting unicast Sync messages: The configurable range shall be one message every 16 seconds to 128 messages per second. No default rate is specified.
  For requesting unicast Delay_Resp messages: The configurable range shall be one message every 16 seconds to 128 messages per second. No default rate is specified.
  ITU-T G.8265.1 page 20

So, your 512 figure is *way* out of spec.

> ***I understand the 1-step part. However, reducing DelayReq might not make sense. The path delay should be measured at at least half the rate of the Sync packets to statistically arrive at better values of path delay. 1 Hz seems quite low, especially for telecom applications.

If the path is not changing, then it does not make sense to measure it so often. I would expect a telecom network to be engineered to have fixed delays.

> I will google about SO_SELECT_ERR_QUEUE. However, if you have any good link, please let me know.

This means using Linux kernel version 3.10 or newer.

Thanks,
Richard
|