Thread: [Linuxptp-users] Expected throughput of the ptp4l
From: Chandra M. <sma...@al...> - 2015-03-17 16:16:01
Hi Friends,

Do we have any performance numbers for ptp4l on a 10 Gbps link? When I increase the Sync packet rate to 512 per second and the DelayReq rate to 512 per second, I see many synchronization faults: ptp4l runs for some time, hits a fault, resumes synchronization, faults again, and resumes again, showing a lot of instability in the overall PTP stack operation.

Is this a limitation of the stack's performance, given that the hardware supports a 10G link? The 512 Sync packets plus 512 {DelayReq, DelayResp} pairs per second do not occupy even 1% of the 10 Gbps bandwidth.

Could you elaborate on your experience with ptp4l performance?

Thanking you in anticipation,
Regards,
Chandra
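For reference, the 512-per-second rates map onto linuxptp's log2 interval options as -9, since 2^-9 s is roughly 1.95 ms between messages. A minimal configuration sketch, assuming current ptp4l option names; the interface and file names are illustrative:

    [global]
    # 2^-9 s = ~1.95 ms -> 512 Sync messages per second (master side)
    logSyncInterval         -9
    # minimum DelayReq interval the master advertises in Delay_Resp
    logMinDelayReqInterval  -9
    [eth0]

Started as, e.g., "ptp4l -f high-rate.cfg -m" on the master; a slave follows the DelayReq interval the master advertises.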
From: Richard C. <ric...@gm...> - 2015-03-17 16:32:49
On Tue, Mar 17, 2015 at 04:15:46PM +0000, Chandra Mallela wrote:
> Could you elaborate on your experience with ptp4l performance?

I never tried 512 Syncs per second, nor do I see any reason to run such a high rate. What is that supposed to achieve?

In general, if you want to handle one event every 1.9 milliseconds, then you must be running RT_PREEMPT with ptp4l at a sufficiently high priority. Standard Linux will exhibit scheduling latencies well above 2 milliseconds.

Thanks,
Richard
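For what it's worth, pinning ptp4l into a real-time scheduling class is a one-liner; a sketch, assuming an RT-patched (PREEMPT_RT) kernel and a purely illustrative priority value:

    # run ptp4l under SCHED_FIFO at priority 80 (value is illustrative)
    chrt -f 80 ptp4l -i eth0 -m

    # or raise the priority of an already-running instance
    chrt -f -p 80 $(pidof ptp4l)

Without the RT patches, even this only loosely bounds scheduling latency, which is Richard's point.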
From: Miroslav L. <mli...@re...> - 2015-03-17 16:44:09
On Tue, Mar 17, 2015 at 05:32:38PM +0100, Richard Cochran wrote:
> On Tue, Mar 17, 2015 at 04:15:46PM +0000, Chandra Mallela wrote:
> > Could you elaborate on your experience with ptp4l performance?
>
> I never tried 512 Syncs per second, nor do I see any reason to run
> such a high rate. What is that supposed to achieve?

I'm wondering that too. Even if we assume a very unstable clock, the clock resolution and jitter would have to be somewhere in the low picoseconds for a logSyncInterval of -9 (512 Syncs per second) to provide any theoretical improvement over -8 (256 per second).

--
Miroslav Lichvar
From: Chandra M. <sma...@al...> - 2015-03-17 17:44:13
Hi Miroslav & Richard,

I am not sure I have understood your concern here. High-quality PLLs offer stepping in picoseconds, and 512 Sync packets per second is a reasonable rate for frequency corrections. For ToD, the fractional hardware arithmetic (2^-16) gives you the capability to achieve the highest possible accuracy, if the stack can supply such fine-grained values.

As for 'what is that supposed to achieve?': in the ideal scenario, I am targeting 50 ppb for CDMA. I am also trying to analyze SyncE requirements from a 1588 perspective, which seems very tough at the moment due to OS scheduling jitter alone.

Please correct me as appropriate and pour in your thoughts.

Regards,
Chandra
From: Miroslav L. <mli...@re...> - 2015-03-18 08:24:50
On Tue, Mar 17, 2015 at 05:28:45PM +0000, Chandra Mallela wrote:
> 512 Sync packets per second is a reasonable rate for frequency corrections. For ToD, the fractional hardware arithmetic (2^-16) gives you the capability to achieve the highest possible accuracy, if the stack can supply such fine-grained values.

AFAIK the Linux time stamping and clock interfaces are limited to nanoseconds. As an exercise to see when things start to fall apart, a sync rate of -9 may be interesting, but I'm not sure there is any practical value in it.

> As for 'what is that supposed to achieve?': in the ideal scenario, I am targeting 50 ppb for CDMA. I am also trying to analyze SyncE requirements from a 1588 perspective, which seems very tough at the moment due to OS scheduling jitter alone.

Do you have any specifications on the clock stability and jitter? Unless you are dealing with some very unstable clocks or large jitters, with careful PI configuration keeping the frequency error below 50 ppb shouldn't be difficult even at the default 1 Hz sync rate.

--
Miroslav Lichvar
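The PI constants Miroslav refers to are exposed as ptp4l configuration options; a hedged sketch with purely illustrative gains (sensible values depend on the clock, the jitter, and the sync rate):

    [global]
    # proportional and integral gains of the PI servo; leaving them
    # at 0.0 (the default) lets ptp4l scale them with the sync interval
    pi_proportional_const   0.7
    pi_integral_const       0.3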
From: Keller, J. E <jac...@in...> - 2015-03-17 18:03:31
Hi,

The issue isn't the actual network, but how transmit timestamps are returned. The stack requests that a particular packet be timestamped, then the driver and device have to send it. Once it is sent, the driver and device determine the transmission time, usually via some set of registers, and then have to report this information back to userspace. Most devices only support timestamping a few packets at a time; some support only one. Thus, when you transmit too many Syncs too fast, you can lose transmit timestamps. Even if you don't lose them, they may not be returned to the ptp4l process in time.

LinuxPTP is very careful and performs a full reset when it loses a timestamp. In theory it could keep trying until more failures accumulate, but it is difficult to say for sure why a timestamp was lost and whether resetting is better than not.

Regards,
Jake
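A minimal sketch of the retrieval path Jake describes, using the SO_TIMESTAMPING socket API: the hardware timestamp for a sent packet is read back from the socket's error queue, and the bounded wait is exactly where high rates start losing timestamps. The poll budget is illustrative, and the interface-level SIOCSHWTSTAMP setup is omitted:

    #include <linux/net_tstamp.h>
    #include <linux/errqueue.h>
    #include <sys/socket.h>
    #include <poll.h>
    #include <time.h>

    /* Enable hardware TX timestamping on an already-bound socket.
     * The interface itself must also be configured via SIOCSHWTSTAMP
     * (omitted here). */
    static int enable_hw_tstamp(int fd)
    {
            int flags = SOF_TIMESTAMPING_TX_HARDWARE |
                        SOF_TIMESTAMPING_RAW_HARDWARE;
            return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                              &flags, sizeof(flags));
    }

    /* After sendto(), the TX timestamp comes back on the error queue.
     * If the device or driver drops it (too many packets in flight),
     * poll() times out -- the failure mode seen at high Sync rates. */
    static int read_tx_timestamp(int fd, struct timespec *ts)
    {
            char data[256], ctrl[512];
            struct iovec iov = { data, sizeof(data) };
            struct msghdr msg = {0};
            struct cmsghdr *cm;
            struct pollfd pfd = { fd, POLLPRI, 0 };

            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = ctrl;
            msg.msg_controllen = sizeof(ctrl);

            if (poll(&pfd, 1, 10) <= 0)   /* 10 ms budget, illustrative */
                    return -1;            /* timestamp lost or late */
            if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
                    return -1;
            for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
                    if (cm->cmsg_level == SOL_SOCKET &&
                        cm->cmsg_type == SCM_TIMESTAMPING) {
                            struct scm_timestamping *tss =
                                    (struct scm_timestamping *)CMSG_DATA(cm);
                            *ts = tss->ts[2]; /* [2] = raw hardware time */
                            return 0;
                    }
            }
            return -1;
    }

ptp4l's sk.c follows essentially this pattern; the reset Jake mentions is triggered when that wait for the error queue times out.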
From: Richard C. <ric...@gm...> - 2015-03-17 18:05:53
On Tue, Mar 17, 2015 at 05:28:45PM +0000, Chandra Mallela wrote:
> As for 'what is that supposed to achieve?': in the ideal scenario, I am targeting 50 ppb for CDMA. I am also trying to analyze SyncE requirements from a 1588 perspective, which seems very tough at the moment due to OS scheduling jitter alone.
>
> Please correct me as appropriate and pour in your thoughts.

So 50 ppb corresponds to 50 nanoseconds of drift per second. That should be attainable with a more moderate Sync rate, but it depends on your hardware time stamping resolution, etc.

With SyncE, you need to hack ptp4l *not* to touch the frequency adjustment. I have some patches in the works for this, but you can change the code yourself in the meantime. Using SyncE and the default 1 Hz Sync, I have seen two nodes synchronized to within 8 nanoseconds (the clock period of those systems).

So, I really, truly don't see the need for 512 Syncs per second. But if you want to run at that rate, then you are going to have to optimize your system, especially with respect to real-time response, as I said before.

Thanks,
Richard
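For the "don't touch the frequency" mode Richard mentions: later linuxptp releases grew a servo for exactly this case, so on a recent tree the hack is just configuration. A sketch, assuming a linuxptp version that includes the nullf servo (older trees need the manual patch Richard describes):

    [global]
    # nullf always returns a zero frequency adjustment; SyncE keeps
    # the clocks syntonized, PTP only corrects the time offset
    clock_servo   nullf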
From: Dale S. <dal...@gm...> - 2015-03-17 21:04:11
Yep. I've seen less than 30 ppb with the gianfar driver, a Meinberg grandmaster and an Oregano TC at 1 Hz Sync.

-Dale
From: Chandra M. <sma...@al...> - 2015-03-18 02:58:59
Hi Richard,

From my interactions with the customer as an architect, I can see the need for 512 Syncs per second for high-end frequency correction (think of a 1588 application competing with, or replacing, a SyncE application). In a pure telecom application, high-frequency Sync packets might encroach on data traffic; however, in backhauling (be it telecom or networking), where high throughput is already a given, high-frequency Sync packets are desirable for frequency corrections. On offset correction, I can take liberties with the DelayReq sets.

Do we have performance metrics for ptp4l on a standard Linux OS? At what packet rates does the stack break down? If we have any data, it would be good to know. If you have any ideas on how to arrive at the numbers, please let me know; I can try it out with the system at hand.

Thanking you in anticipation,
Regards,
Chandra
From: Richard C. <ric...@gm...> - 2015-03-18 08:29:14
On Wed, Mar 18, 2015 at 02:58:46AM +0000, Chandra Mallela wrote:
> From my interactions with the customer as an architect, I can see the need for 512 Syncs per second for high-end frequency correction (think of a 1588 application competing with, or replacing, a SyncE application). ... On offset correction, I can take liberties with the DelayReq sets.

If you have SyncE in place already, then you don't need the 512 rate. In fact, all you need is *one* offset measurement in order to correct the offset, once the clocks are locked.

If you are trying to replace a SyncE system with a non-SyncE PTP system, then you can forget trying to match the SyncE synchronization performance. It just is not feasible, to my knowledge.

> Do we have performance metrics for ptp4l on a standard Linux OS? At what packet rates does the stack break down? If we have any data, it would be good to know. If you have any ideas on how to arrive at the numbers, please let me know; I can try it out with the system at hand.

The answer is, "it depends." I have seen good results with 1, 2, 4, and 8 packets per second on a low-end embedded system. With 128, some time stamps are dropped due to hardware/driver constraints. Any performance metric is tied to the particular hardware used, so even if I had such metrics, they would not help you much.

I have already told you this more than once: if you really want to run 512 packets per second, then you are going to have to do some careful hardware selection and some serious system tuning. It won't be easy.

Good luck,
Richard
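One way to "arrive at the numbers" on a given system is simply to sweep the Sync rate and count the fault messages; a rough sketch, with illustrative interface names and durations, assuming a ptp4l build that accepts configuration options as long command-line options:

    # on the master, for each rate under test (here 2^-9 s => 512/s)
    ptp4l -i eth0 -m --logSyncInterval=-9

    # on the slave, run a fixed window and count lost timestamps
    timeout 300 ptp4l -i eth0 -m -s 2>&1 | tee ptp.log
    grep -c "timed out while polling for tx timestamp" ptp.log

The rate at which that count (and the fault/reset cycle in the log) becomes nonzero is the breakdown point for that particular NIC, driver, and kernel.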
From: Ledda W. E. <Wil...@it...> - 2015-03-18 09:20:46
Attachments:
ptp4l.stp.offset.jpg
I'm working on a project with very stringent requirements (50 ns RMS accuracy), and we are able to reach them using the "default" Sync interval of 1 second and a DelayReq interval of 8 seconds. I'm attaching a simple picture that is probably one of the best results I've observed after hundreds of tests. The results were collected with an Intel i350 network chipset on Red Hat 6.5 with the MRG-R kernel. Without a real-time kernel the observed results are quite close, just with a little more jitter.

The point is: which is your requirement? Is it the sync rate? (Why?) Is it the nanosecond accuracy? If it is the accuracy, why insist on 512 Sync packets per second if you can reach the same results at 1 Hz?

As Richard says...

> If you really want to run 512 packets per second, then you are going to have to do some careful hardware selection and some serious system tuning.
> It won't be easy.
> Good luck,

HTH
William
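For comparison with the rates being debated, William's intervals map onto the ptp4l options like this; a sketch (option names are standard ptp4l options, the file layout is illustrative):

    [global]
    logSyncInterval          0    # 2^0 s  -> 1 Sync per second
    logMinDelayReqInterval   3    # 2^3 s  -> one DelayReq every 8 seconds

    # started as, e.g.:  ptp4l -f default-rates.cfg -i eth0 -m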
From: Chandra M. <sma...@al...> - 2015-03-18 10:03:35
Hi Richard,

I am sorry for frustrating you; I hear you loud and clear! You have got me right: yes, the ability of 1588 should not stop at offset correction. I want to understand what it takes for 1588 to match SyncE. Yes, there are many roadblocks, and what you have mentioned is already under our scrutiny.

Thank you all for sharing your answers and opinions.

Thanking you in anticipation,
Regards,
Chandra
From: Richard C. <ric...@gm...> - 2015-03-19 16:29:43
On Wed, Mar 18, 2015 at 09:29:02AM +0100, Richard Cochran wrote:
> I have seen good results with 1, 2, 4, and 8 packets per second on a
> low-end embedded system. With 128, some time stamps are dropped due
> to hardware/driver constraints.

Here is a random metric from the boards on my desk. The CPUs are the TI AM335x, but using the DP83640 PHY as the PTP Hardware Clock. The slave is directly connected to the master with a 1 meter cable over a 100 Mbit link.

Measuring the edges of a 1 kHz output at various Sync rates, and with a DelayReq rate of 1 Hz, I see the following differences.

  Rate   Offset
  ----------------
  2^0    +/- 75 ns
  2^-3   +/- 25 ns
  2^-4   +/- 20 ns
  2^-5   +/- 20 ns

As expected, increasing the DelayReq rate to 2^-5 makes no difference.

So with this hardware, I have already reached the limit of synchronization performance. Increasing the Sync rate to 512 frames per second would not improve the picture.

In contrast, using SyncE on the exact same hardware immediately yields an offset of +/- 1 nanosecond. (Actually, it is probably smaller, but I can't measure it with my lousy scope.)

HTH,
Richard
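For anyone wanting to reproduce this kind of measurement: the 1 kHz test signal can be generated with the kernel's testptp tool, assuming the PHC hardware supports periodic output (the device path is illustrative):

    # emit a periodic output with a 1 ms period (1 kHz) on PHC 0,
    # then compare master and slave edges on a scope
    testptp -d /dev/ptp0 -p 1000000

The residual offset between the two edges, across Sync rates, is what the table above reports.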
From: Chandra M. <sma...@al...> - 2015-03-20 09:52:12
Hi Richard,

Thank you for the details. As soon as I am free from my project, I will share the details of the 10 Gbps 1588 system. My understanding is that looking at the offset, its RMS deviation, and the ppb deviation tells us clearly the efficacy of 1588. There is definitely saturation beyond a certain rate; however, in our experiments, 512 Syncs per second has looked very promising.

You are absolutely right: 1588 cannot beat SyncE, due to the latter operating at layer 1 as opposed to 1588's packet-based operation at layer 2 and above. However, there is an effort to dispel the myth (if I dare say so) that syntonization HAS TO BE achieved only with SyncE. Many requirements out there are not that stringent.

In any case, I thank you all for your patient and cordial responses.

Thanking you in anticipation,
Regards,
Chandra
From: Xavier B. <xav...@fr...> - 2015-09-04 08:49:57
On Thu, 19 Mar 2015 09:30:56 -0700, Richard Cochran wrote:
> Here is a random metric from the boards on my desk. The CPUs are the
> TI AM335x, but using the DP83640 PHY as the PTP Hardware Clock. The
> slave is directly connected to the master with a 1 meter cable over a
> 100 Mbit link.
>
> Measuring the edges of a 1 kHz output at various Sync rates, and with
> a DelayReq rate of 1 Hz, I see the following differences.
>
>   Rate   Offset
>   ----------------
>   2^0    +/- 75 ns
>   2^-3   +/- 25 ns
>   2^-4   +/- 20 ns
>   2^-5   +/- 20 ns
>
> As expected, increasing the DelayReq rate to 2^-5 makes no difference.
>
> So with this hardware, I have already reached the limit of
> synchronization performance. Increasing the Sync rate to 512 frames
> per second would not improve the picture.
>
> In contrast, using SyncE on the exact same hardware immediately yields
> an offset of +/- 1 nanosecond. (Actually, it is probably smaller, but
> I can't measure it with my lousy scope.)

Hi Richard,

Out of curiosity, how do you use SyncE on this hardware? I can't see the DP83640 doing SyncE.

Xav
From: Richard C. <ric...@gm...> - 2015-09-04 09:42:29
On Fri, Sep 04, 2015 at 10:49:40AM +0200, Xavier Bestel wrote:
> Out of curiosity, how do you use SyncE on this hardware?
> I can't see the DP83640 doing SyncE.

Set bit 13 in PHYCR2 (PAGE0).

HTH,
Richard
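For reference, one way to poke that bit from userspace; a sketch, assuming the DP83640's PHYCR2 sits at register 0x1C on page 0 (verify against your datasheet revision), a PHY address of 0, and the phytool utility:

    # read-modify-write PHYCR2 to set bit 13 (SyncE clock mode)
    val=$(phytool read eth0/0/0x1c)
    phytool write eth0/0/0x1c $(( val | (1 << 13) ))

Any MDIO access path (e.g. the SIOCGMIIREG/SIOCSMIIREG ioctls) would work equally well.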
From: Xavier B. <xav...@fr...> - 2015-09-04 10:36:14
Oh, that was a hidden feature! Thanks Richard.

Generally speaking, do you know what support is needed in the host CPU (or SoC) for SyncE? From what I gather, with PHYCR2:13 you have syntonized the Ethernet endpoints, but you still don't send the heartbeat and everything, do you?

Xav

Le 04/09/2015 11:42, Richard Cochran a écrit :
> On Fri, Sep 04, 2015 at 10:49:40AM +0200, Xavier Bestel wrote:
>> Out of curiosity, how do you use SyncE on this hardware?
>> I can't see the DP83640 doing SyncE.
> Set bit 13 in PHYCR2 (PAGE0).
From: Richard C. <ric...@gm...> - 2015-09-04 11:33:45
On Fri, Sep 04, 2015 at 12:35:59PM +0200, Xavier Bestel wrote:
> Generally speaking, do you know what support is needed in the host CPU (or
> SoC) for SyncE? From what I gather, with PHYCR2:13 you have syntonized the
> Ethernet endpoints, but you still don't send the heartbeat and everything,
> do you?

What do you mean by "heartbeat"?

Richard
From: Xavier B. <xav...@fr...> - 2015-09-04 11:42:17
Le 04/09/2015 13:33, Richard Cochran a écrit :
> On Fri, Sep 04, 2015 at 12:35:59PM +0200, Xavier Bestel wrote:
>> Generally speaking, do you know what support is needed in the host CPU (or
>> SoC) for SyncE?
> What do you mean by "heartbeat"?

The ESMC message defined here: https://en.wikipedia.org/wiki/Synchronous_Ethernet#Messaging_channel

Xav
From: Richard C. <ric...@gm...> - 2015-09-04 12:06:24
On Fri, Sep 04, 2015 at 01:42:01PM +0200, Xavier Bestel wrote:
> The ESMC message defined here: https://en.wikipedia.org/wiki/Synchronous_Ethernet#Messaging_channel

Oh, that. Yes, I did implement that in 400 LOC for a customer project. It is a very simple protocol.

What we really need is an ethtool command to enable and disable SyncE. I posted an idea to the netdev list, and there was a little bit of interest. Maybe if I get some SyncE HW, I'll work on that again.

Thanks,
Richard
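For the curious, the on-wire format behind those 400 lines is small; a sketch of the ESMC PDU layout per ITU-T G.8264 (field values taken from the recommendation, not from Richard's implementation, which is not public; verify against the current spec):

    #include <stdint.h>

    struct esmc_pdu {
            uint8_t  dst[6];       /* 01:80:C2:00:00:02, slow-protocol mcast */
            uint8_t  src[6];
            uint16_t ethertype;    /* 0x8809, slow protocols */
            uint8_t  subtype;      /* 0x0A, organization specific */
            uint8_t  itu_oui[3];   /* 00:19:A7 */
            uint16_t itu_subtype;  /* 0x0001 */
            uint8_t  ver_event;    /* version in the upper nibble + event flag */
            uint8_t  reserved[3];
            uint8_t  tlv_type;     /* 0x01, QL TLV */
            uint16_t tlv_length;   /* 0x0004 */
            uint8_t  ql;           /* SSM quality level code in the low nibble */
    } __attribute__((packed));

Multi-byte fields are big-endian on the wire; the heartbeat is one information PDU per second, plus event-driven PDUs when the quality level changes.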