From: Aditya V. <adi...@5g...> - 2023-02-20 11:00:39

Got it. I was using v2.0, where the message rates aren't specified. Switched to the 3.1 branch; it is working fine.

Thanks,
Aditya

On Mon, Feb 20, 2023 at 4:23 PM Miroslav Lichvar <mli...@re...> wrote:
> On Mon, Feb 20, 2023 at 04:14:46PM +0530, Aditya Venu via Linuxptp-users wrote:
> > I ran sudo ./ptp4l -f configs/G.8275.1 -i eth0 -2 -m
>
> It should be -f configs/G.8275.1.cfg. The file specifies
> the domain 24, the L2 transport, a sync interval of 1/16 s, and other
> G.8275.1 settings.
>
> --
> Miroslav Lichvar

--
Disclaimer: This footer text is to convey that this email is sent by one of the users of IITH. So, do not mark it as SPAM.
From: Miroslav L. <mli...@re...> - 2023-02-20 10:53:33

On Mon, Feb 20, 2023 at 04:14:46PM +0530, Aditya Venu via Linuxptp-users wrote:
> I ran sudo ./ptp4l -f configs/G.8275.1 -i eth0 -2 -m

It should be -f configs/G.8275.1.cfg. The file specifies the domain 24, the L2 transport, a sync interval of 1/16 s, and other G.8275.1 settings.

--
Miroslav Lichvar
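[For anyone landing here with the same problem: the profile's message rates are not command-line flags; they come from the config file passed with -f. From memory, the shipped configs/G.8275.1.cfg contains settings along these lines; the exact file may differ between linuxptp versions, so check your source tree rather than copying this sketch.]

```
[global]
dataset_comparison      G.8275.x
domainNumber            24
# 8 announce/s, 16 sync/s, 16 delay requests/s:
logAnnounceInterval     -3
logSyncInterval         -4
logMinDelayReqInterval  -4
network_transport       L2
```

Invoked as `sudo ptp4l -f configs/G.8275.1.cfg -i eth0 -2 -m`, this gives the rates Aditya was after (8/16/16 per second, domain 24).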
From: Aditya V. <adi...@5g...> - 2023-02-20 10:45:08

Hi Richard,

I ran sudo ./ptp4l -f configs/G.8275.1 -i eth0 -2 -m

Thanks,
Aditya

On Fri, Feb 17, 2023 at 8:36 PM Richard Cochran <ric...@gm...> wrote:
> On Fri, Feb 17, 2023 at 05:40:54PM +0530, Aditya Venu via Linuxptp-users wrote:
> > But I want domain number 24, announce/sync/delay to be 8/16/16 per second.
> > I've put these values in default.cfg but still it is not reflecting.
>
> Did you run: ptp4l -f default.cfg ?
>
> Thanks,
> Richard
From: Alexis Chatail--R. (S. at CentraleSupelec) <ale...@st...> - 2023-02-20 10:22:30

Hi,

I have achieved PTP sync with my two Linux devices, which each have one Intel XXV710 network adapter. However, I want to see a PPS output to check how well my signals are aligned. If my network adapter can provide a PPS output on its SMA pins, how can I enable it?

Best regards,
Alexis
From: Richard C. <ric...@gm...> - 2023-02-17 15:06:11

On Fri, Feb 17, 2023 at 05:40:54PM +0530, Aditya Venu via Linuxptp-users wrote:
> But I want domain number 24, announce/sync/delay to be 8/16/16 per second.
> I've put these values in default.cfg but still it is not reflecting.

Did you run: ptp4l -f default.cfg ?

Thanks,
Richard
From: Aditya V. <adi...@5g...> - 2023-02-17 12:11:12

Hi linuxptp users,

Is there a way to change the domain number and the announce, sync, and delay request message intervals with the G.8275.1 profile? When I run without any modifications, I get 1 sync, 1 delay request, and 1/2 announce per second, and a domain number of 0. But I want domain number 24 and announce/sync/delay rates of 8/16/16 per second. I've put these values in default.cfg, but it still doesn't take effect. Can you please help?

-Aditya
From: Prankur C. <pra...@gm...> - 2023-02-14 12:46:15

Dear Wojtek,

Thanks for your reply. I did not mention it, but I am aware of the REAL_TIME_CLOCK settings; thanks for pointing out how it relates to the PHC and the /dev/ptp files. For information, we are using ConnectX-6 Dx cards with REAL_TIME_CLOCK enabled.

What you pointed out about PCIe bus read latency is an interesting thought. As I mentioned, with my test software I can measure two clocks with clock_gettime(phcClockID, &ts) and show their difference. I observed the difference between mlnx1 and mlnx3 and it is close to 1450 +- 50 ns. The jitter is stable when there is no PCIe traffic (no IP input/output streams).

Still, my original question remains: is there some other/better way to synchronize the hardware clocks on multiple NICs than starting one phc2sys instance per NIC?

Another idea/question: is it possible for the cards to talk over PCIe, without intervention from the CPU, to sync their hardware clocks? E.g. Mellanox NIC 1 has PTP (hardware clock synced via ptp4l) and the other Mellanox NICs are synced over PCIe in the FPGA.

On Mon, Feb 13, 2023 at 1:23 PM Wojtek Wasko <ww...@nv...> wrote:
> > There is a caveat here : I assume both the ports on the same NIC (mlnx1 and mlnx2 for example)
> > have the same hardware clock source even though under /dev/ they are mapped to different ptp
> > files i.e /dev/ptp8 and /dev/ptp9 respectively.
>
> Depending on the configuration of the Mellanox NIC, the PHC exposed for different ports will
> either be the same PHC (in Real-Time Clock mode), though exposed through two /dev/ files,
> or (in non-Real-Time Clock mode) the driver will construct separate PHCs for each of the ports.
>
> You can find some instructions on how to configure it here:
> https://docs.nvidia.com/networking/display/NVIDIA5TTechnologyUserManualv10/Real-Time+Clock
> And some general documentation on the Real-Time Clock here:
> https://docs.nvidia.com/networking/display/NVIDIA5TTechnologyUserManualv10/Real+Time+Clock
>
> > There are some jitters / unexpected outliers which I attribute to measurement uncertainties / syscall jitter.
>
> 99% of what you're seeing is likely PCIe read latency jitter. This is the reason why phc2sys
> has the '-N' option to mitigate the problem:
>
>   -N phc-num
>       Specify the number of master clock readings per one slave clock update.
>       Only the fastest reading is used to update the slave clock; this is useful
>       to minimize the error caused by random delays in scheduling and bus utilization.
>
> W

--
Cheers
Prankur
From: Wojtek W. <ww...@nv...> - 2023-02-13 12:38:54

> There is a caveat here : I assume both the ports on the same NIC (mlnx1 and mlnx2 for example)
> have the same hardware clock source even though under /dev/ they are mapped to different ptp
> files i.e /dev/ptp8 and /dev/ptp9 respectively.

Depending on the configuration of the Mellanox NIC, the PHC exposed for different ports will either be the same PHC (in Real-Time Clock mode), though exposed through two /dev/ files, or (in non-Real-Time Clock mode) the driver will construct separate PHCs for each of the ports.

You can find some instructions on how to configure it here:
https://docs.nvidia.com/networking/display/NVIDIA5TTechnologyUserManualv10/Real-Time+Clock
And some general documentation on the Real-Time Clock here:
https://docs.nvidia.com/networking/display/NVIDIA5TTechnologyUserManualv10/Real+Time+Clock

> There are some jitters / unexpected outliers which I attribute to measurement uncertainties / syscall jitter.

99% of what you're seeing is likely PCIe read latency jitter. This is the reason why phc2sys has the '-N' option to mitigate the problem:

  -N phc-num
      Specify the number of master clock readings per one slave clock update.
      Only the fastest reading is used to update the slave clock; this is useful
      to minimize the error caused by random delays in scheduling and bus utilization.

W
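[The '-N' mitigation described above can be illustrated with a toy simulation: take several clock comparisons, each corrupted by a one-sided random read latency, and keep only the one with the shortest observed read duration. This is a sketch with made-up numbers, not phc2sys code; in the real tool the latency is estimated by bracketing the PHC read between two reads of the other clock.]

```python
import random

random.seed(1)

TRUE_OFFSET_NS = 1_000  # pretend true offset between the two clocks

def read_offset_once():
    """One simulated clock comparison: a one-sided random read latency
    (PCIe + scheduling) is added on top of the true offset. Returns the
    measured offset and the observed read duration."""
    latency = random.expovariate(1 / 500)  # mean 500 ns, made up
    return TRUE_OFFSET_NS + latency, latency

def read_offset_best_of(n):
    """phc2sys '-N n' style filter: of n readings, trust the one whose
    observed read duration is smallest."""
    return min((read_offset_once() for _ in range(n)),
               key=lambda s: s[1])[0]

print("single reading error:", round(read_offset_once()[0] - TRUE_OFFSET_NS), "ns")
print("best-of-10 error:    ", round(read_offset_best_of(10) - TRUE_OFFSET_NS), "ns")
```

The minimum is the right statistic because the latency error is one-sided: a reading can only be late, never early, so the fastest read is the least corrupted one.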
From: Prankur C. <pra...@gm...> - 2023-02-13 09:54:46

Dear fellow engineers,

I have a server with 1 x Intel 1G NIC and 3 x Mellanox 100G NICs. The NIC ports are named as follows:

eno1, eno2 (Intel NIC) (/dev/ptp0 and /dev/ptp1 resp.)
mlnx1, mlnx2 (Mellanox NIC 1) (/dev/ptp8 and /dev/ptp9 resp.)
mlnx3, mlnx4 (Mellanox NIC 2) (/dev/ptp2 and /dev/ptp3 resp.)
mlnx5, mlnx6 (Mellanox NIC 3) (/dev/ptp5 and /dev/ptp6 resp.)

My goal is to have PTP running on any one of the NIC ports and to sync the other NICs to the same hardware clock (all NICs support hardware timestamping). This is my current solution:

1. Run ptp4l on the Intel NIC (eno2) (it can be any NIC port, but at least one port must be connected to the GMC somehow) and sync the system clock to the PTP hardware clock of that port:

$ ptp4l -f config_file -i eno2
$ phc2sys -w -s eno2 -O 0 -n 127

2. Sync the other NIC ports to the system clock using multiple instances of phc2sys:

$ phc2sys -s CLOCK_REALTIME -c mlnx1 -O 0
$ phc2sys -s CLOCK_REALTIME -c mlnx3 -O 0
$ phc2sys -s CLOCK_REALTIME -c mlnx5 -O 0

There is a caveat here: I assume both ports on the same NIC (mlnx1 and mlnx2, for example) have the same hardware clock source even though under /dev/ they are mapped to different ptp files, i.e. /dev/ptp8 and /dev/ptp9 respectively.

Is there another/better approach to solve my problem?

INFO: I measure the time between two clocks using clock_gettime(phcClock1ID, &ts1) and clock_gettime(phcClock2ID, &ts2) and print the difference ts2 - ts1 every second. The measurement thread runs on an isolated core with FIFO 90 priority. There are some jitters / unexpected outliers which I attribute to measurement uncertainties / syscall jitter. Does anyone have experience with this API, and how much jitter in nanoseconds is expected?

--
Cheers
Prankur
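[Prankur's measurement loop can be sketched as below. Real PHC clock IDs come from an open /dev/ptpN descriptor through the kernel's dynamic posix-clock encoding, (~fd << 3) | 3, the same thing the FD_TO_CLOCKID() macro does in C. Since a PHC may not be present or readable, this sketch falls back to CLOCK_MONOTONIC as a stand-in so the mechanics can be tried anywhere. Note that the latency of the second syscall lands directly in the measured difference, which is one source of the outliers he describes.]

```python
import os
import time

def fd_to_clockid(fd):
    # Linux dynamic posix-clock encoding, equivalent to the
    # FD_TO_CLOCKID() macro used in C programs.
    return (~fd << 3) | 3

def open_phc(path):
    """Return a dynamic clock id for a PHC device, or None if the
    device cannot be opened (missing hardware, no permission)."""
    try:
        return fd_to_clockid(os.open(path, os.O_RDONLY))
    except OSError:
        return None

def offset_ns(clk_a, clk_b):
    """Back-to-back reads of two clocks; the time spent between the
    two syscalls shows up directly as measurement error."""
    return time.clock_gettime_ns(clk_b) - time.clock_gettime_ns(clk_a)

# Compare the system clock against a PHC if one is available,
# else against CLOCK_MONOTONIC as a harmless stand-in.
clk = open_phc("/dev/ptp0")
if clk is None:
    clk = time.CLOCK_MONOTONIC
for _ in range(3):
    try:
        print("difference:", offset_ns(time.CLOCK_REALTIME, clk), "ns")
    except OSError:
        print("PHC read failed; try running as root")
        break
```

On a real system you would open /dev/ptpX for each NIC and print pairwise differences, similar in spirit to what `phc_ctl <dev> cmp` reports.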
From: Miroslav L. <mli...@re...> - 2023-02-13 09:41:17

On Thu, Feb 09, 2023 at 11:37:52AM -0800, Richard Cochran wrote:
> On Thu, Feb 09, 2023 at 01:32:14PM +0100, Miroslav Lichvar wrote:
> > Yes, but if you know the length of the chain and characteristics of all
> > clocks and their timestamping, you can tune the servos to minimize
> > their gain peaking for the synchronization of the last clock. This can
> > be done with the pi servo in ptp4l.
>
> Cool.
>
> Have you ever published examples of this? That would interest me.

No, I don't remember doing that. I'd expect there to be a thick book written on this subject, likely from someone working in telco, with a conclusion that it's more practical to require all PLLs to have their gain peaking very small (e.g. 0.1 or 0.2 dB), assuming all chains of clocks are long.

--
Miroslav Lichvar
From: Giammarco Z. <g.z...@gm...> - 2023-02-12 02:53:55

Hi all,

I noticed a strange behavior when using ptp4l with MTU > 1500 on an i225-based adapter. Namely, when ptp4l is running (I was trying automotive-slave or -master configurations), incoming frames larger than 1500 bytes are truncated at the end of the payload.

I have attached a pcap trace where I was sending 4000-byte ping requests; once I started ptp4l in slave mode on the same interface, the received replies were missing the last 16 bytes and ping would no longer receive them properly. The only way to restore the correct behavior on the board is to stop ptp4l, set the MTU back to 1500, then again to the previous value.

It's probably an issue with the igc driver, but I also wanted to raise it on this channel. Please let me know if you have any suggestions on how to debug this issue.

Best,
Giammarco
From: Richard C. <ric...@gm...> - 2023-02-09 19:38:06

On Thu, Feb 09, 2023 at 01:32:14PM +0100, Miroslav Lichvar wrote:
> Yes, but if you know the length of the chain and characteristics of all
> clocks and their timestamping, you can tune the servos to minimize
> their gain peaking for the synchronization of the last clock. This can
> be done with the pi servo in ptp4l.

Cool.

Have you ever published examples of this? That would interest me.

Thanks,
Richard
From: Miroslav L. <mli...@re...> - 2023-02-09 12:32:27

On Wed, Feb 08, 2023 at 07:18:17AM -0800, Richard Cochran wrote:
> > BTW, synchronization with BCs can work better than TCs if the PLLs are
> > well implemented and tuned. TCs are the simpler and safer approach.
>
> There was a simulation study showing "gain peaking" from a long chain
> of servos.

Yes, but if you know the length of the chain and characteristics of all clocks and their timestamping, you can tune the servos to minimize their gain peaking for the synchronization of the last clock. This can be done with the pi servo in ptp4l. The linreg servo has significant gain peaking and is not configurable, i.e. it is unsuitable for longer chains.

The way I think about the BC vs TC performance is that with BCs the noise is filtered on each link, and what passes to the end is smaller than the sum of the noise on all links, which is what you get with TCs. This assumes the sync interval is sufficiently short for the instability of the clocks not to dominate the errors.

--
Miroslav Lichvar
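[The gain-peaking effect discussed above can be demonstrated with a toy simulation: each hop is a discrete-time PI tracker slaved to the previous clock, and a unit-amplitude sinusoidal phase disturbance injected at the head of the chain comes out amplified after several hops when the servos are lightly damped, while a heavily damped gain choice keeps the chain nearly flat. This is a rough sketch, not ptp4l's servo code; the kp/ki values are arbitrary per-sample gains and only loosely analogous to ptp4l's pi_proportional_const/pi_integral_const options.]

```python
import math

def run_chain(kp, ki, hops=5, n=6000, omega=0.2236):
    """Drive a chain of `hops` discrete PI trackers with a unit-amplitude
    sinusoidal phase disturbance at angular frequency `omega` (rad/sample)
    and return the steady-state output amplitude at the last hop."""
    x = [math.sin(omega * t) for t in range(n)]
    for _ in range(hops):
        y = 0.0      # this hop's clock phase
        integ = 0.0  # accumulated frequency correction (integral term)
        out = []
        for xt in x:
            e = xt - y           # measured offset to the upstream clock
            integ += ki * e      # integral (frequency) term
            y += kp * e + integ  # proportional step plus accumulated freq
            out.append(y)
        x = out                  # this hop's clock is the next hop's master
    # amplitude over the tail, after transients have decayed
    return max(abs(v) for v in x[-1000:])

# omega is chosen near the resonance of the lightly damped servo,
# where per-hop gain exceeds 1 and compounds multiplicatively.
print("lightly damped (kp=0.1, ki=0.05), 5 hops:", round(run_chain(0.1, 0.05), 1))
print("heavily damped (kp=1.0, ki=0.05), 5 hops:", round(run_chain(1.0, 0.05), 2))
```

Each hop multiplies the disturbance by the servo's closed-loop gain at that frequency, so a per-hop gain even slightly above 1 grows exponentially with chain length; this is why Miroslav notes that telco practice bounds per-PLL gain peaking to a fraction of a dB.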
From: Richard C. <ric...@gm...> - 2023-02-09 04:58:09

On Wed, Feb 08, 2023 at 08:28:54PM +0000, Nemo Crypto wrote:
> What does NIH mean? Not Invented Here?

Yes.
From: Nemo C. <nem...@gm...> - 2023-02-08 20:29:03

What does NIH mean? Not Invented Here?

On Wednesday, 8 February, 2023 at 10:18:21 am GMT-5, Richard Cochran <ric...@gm...> wrote:

On Wed, Feb 08, 2023 at 09:33:49AM +0100, Miroslav Lichvar wrote:
> On Tue, Feb 07, 2023 at 07:39:48PM -0800, Richard Cochran wrote:
> > Sure, if you have a chain topology of 15 hops, then you would start to
> > see benefits from using TAB over BC. But who has that kind of network?
> >
> > Even then, would an 802.1as TAB outperform an ieee 1588 TC?
>
> The draft I saw claimed that the synchronization performance of the
> network is identical to 1588 using P2P TCs. It's not clear to me what
> advantages their approach has.

gPTP is full of NIH, IMO.

> BTW, synchronization with BCs can work better than TCs if the PLLs are
> well implemented and tuned. TCs are the simpler and safer approach.

There was a simulation study showing "gain peaking" from a long chain of servos.

Thanks,
Richard
From: Nemo C. <nem...@gm...> - 2023-02-08 20:26:20

Hi Richard,

Thanks for your valuable inputs. 802.1AS was chosen not for better performance in this particular case, but because we should be using the Automotive Profile and be compliant with the AVnu gPTP standard. But I agree with you; I am waiting for the prototype HW to see how the ptp4l BC functions with other gPTP automotive-grade devices, and we will see how things work.

Thanks,
Nemo

On Tuesday, 7 February, 2023 at 10:39:51 pm GMT-5, Richard Cochran <ric...@gm...> wrote:

On Tue, Feb 07, 2023 at 03:59:23PM +0000, Nemo Crypto wrote:
> >As a practical matter, I don't see why you can't simply use linuxptp's
> >BC mode, configuring the two ports with 802.1as settings.
>
> >Who could tell the difference?
>
> Would the linuxptp's BC send all the TLVs required for 802.1AS?

What TLVs do you mean? The ptp4l BC will append follow_up_info and PATH_TRACE, if you enable them. See the default gPTP.cfg.

In my view, the whole TAB thing in 802.1as is of questionable value. If you have a single hop, as in your use case, I doubt you could even measure the difference in synchronization quality between TAB and BC.

Sure, if you have a chain topology of 15 hops, then you would start to see benefits from using TAB over BC. But who has that kind of network?

Even then, would an 802.1as TAB outperform an ieee 1588 TC? Color me skeptical.

HTH,
Richard
From: Richard C. <ric...@gm...> - 2023-02-08 15:18:27

On Wed, Feb 08, 2023 at 09:33:49AM +0100, Miroslav Lichvar wrote:
> On Tue, Feb 07, 2023 at 07:39:48PM -0800, Richard Cochran wrote:
> > Sure, if you have a chain topology of 15 hops, then you would start to
> > see benefits from using TAB over BC. But who has that kind of network?
> >
> > Even then, would an 802.1as TAB outperform an ieee 1588 TC?
>
> The draft I saw claimed that the synchronization performance of the
> network is identical to 1588 using P2P TCs. It's not clear to me what
> advantages their approach has.

gPTP is full of NIH, IMO.

> BTW, synchronization with BCs can work better than TCs if the PLLs are
> well implemented and tuned. TCs are the simpler and safer approach.

There was a simulation study showing "gain peaking" from a long chain of servos.

Thanks,
Richard
From: Miroslav L. <mli...@re...> - 2023-02-08 08:34:05

On Tue, Feb 07, 2023 at 07:39:48PM -0800, Richard Cochran wrote:
> Sure, if you have a chain topology of 15 hops, then you would start to
> see benefits from using TAB over BC. But who has that kind of network?
>
> Even then, would an 802.1as TAB outperform an ieee 1588 TC?

The draft I saw claimed that the synchronization performance of the network is identical to 1588 using P2P TCs. It's not clear to me what advantages their approach has.

BTW, synchronization with BCs can work better than TCs if the PLLs are well implemented and tuned. TCs are the simpler and safer approach.

--
Miroslav Lichvar
From: Richard C. <ric...@gm...> - 2023-02-08 03:40:01

On Tue, Feb 07, 2023 at 03:59:23PM +0000, Nemo Crypto wrote:
> >As a practical matter, I don't see why you can't simply use linuxptp's
> >BC mode, configuring the two ports with 802.1as settings.
>
> >Who could tell the difference?
>
> Would the linuxptp's BC send all the TLVs required for 802.1AS?

What TLVs do you mean? The ptp4l BC will append follow_up_info and PATH_TRACE, if you enable them. See the default gPTP.cfg.

In my view, the whole TAB thing in 802.1as is of questionable value. If you have a single hop, as in your use case, I doubt you could even measure the difference in synchronization quality between TAB and BC.

Sure, if you have a chain topology of 15 hops, then you would start to see benefits from using TAB over BC. But who has that kind of network?

Even then, would an 802.1as TAB outperform an ieee 1588 TC? Color me skeptical.

HTH,
Richard
From: Nemo C. <nem...@gm...> - 2023-02-07 15:59:37

> As a practical matter, I don't see why you can't simply use linuxptp's
> BC mode, configuring the two ports with 802.1as settings.
>
> Who could tell the difference?

Would the linuxptp's BC send all the TLVs required for 802.1AS? I mean, the Time Follower that receives Sync/Followup from linuxptp's BC should never see any difference in the frames when compared to the frames it would receive if the sender were a native 802.1AS Time Leader end point. Please advise.

Thanks,
Nemo

On Tuesday, 7 February, 2023 at 10:53:11 am GMT-5, Richard Cochran <ric...@gm...> wrote:

On Tue, Feb 07, 2023 at 01:49:42PM +0000, Nemo Crypto wrote:
> Then the actual question becomes, whether linuxptp has support for IEEE802.1AS Time Aware Bridge (Time Relay)?

No, TAB/Time Relay is not supported. There was some initial work done (check the archives), but that was never finished.

As a practical matter, I don't see why you can't simply use linuxptp's BC mode, configuring the two ports with 802.1as settings.

Who could tell the difference?

Thanks,
Richard
From: Richard C. <ric...@gm...> - 2023-02-07 15:53:15

On Tue, Feb 07, 2023 at 01:49:42PM +0000, Nemo Crypto wrote:
> Then the actual question becomes, whether linuxptp has support for IEEE802.1AS Time Aware Bridge (Time Relay)?

No, TAB/Time Relay is not supported. There was some initial work done (check the archives), but that was never finished.

As a practical matter, I don't see why you can't simply use linuxptp's BC mode, configuring the two ports with 802.1as settings.

Who could tell the difference?

Thanks,
Richard
From: Nemo C. <nem...@gm...> - 2023-02-07 13:49:56

Thanks Miroslav!

Then the actual question becomes: does linuxptp have support for an IEEE 802.1AS Time Aware Bridge (Time Relay)? Can any other expert comment, please?

On Tuesday, 7 February, 2023 at 03:29:48 am GMT-5, Miroslav Lichvar <mli...@re...> wrote:

On Mon, Feb 06, 2023 at 04:17:30PM +0000, Nemo Crypto wrote:
> Hi Miroslav,
> Thanks!
> In my understanding of "Time Aware Bridge", it doesn't correct/adjust/tune the PHC. Is that not correct?

Looking at some freely available drafts of 802.1AS, yes, it seems the clocks are supposed to be running free and correct timestamps with the accumulated offsets from the follow up info TLV. However, the example gPTP config doesn't have the free_running option enabled, so I'm not sure if the support is complete, or maybe it's only for the end instances and not the bridges/relays. I have no experience with gPTP.

--
Miroslav Lichvar
From: Miroslav L. <mli...@re...> - 2023-02-07 08:29:52

On Mon, Feb 06, 2023 at 04:17:30PM +0000, Nemo Crypto wrote:
> Hi Miroslav,
> Thanks!
> In my understanding of "Time Aware Bridge", it doesn't correct/adjust/tune the PHC. Is that not correct?

Looking at some freely available drafts of 802.1AS, yes, it seems the clocks are supposed to be running free and correct timestamps with the accumulated offsets from the follow up info TLV. However, the example gPTP config doesn't have the free_running option enabled, so I'm not sure if the support is complete, or maybe it's only for the end instances and not the bridges/relays. I have no experience with gPTP.

--
Miroslav Lichvar
From: Nemo C. <nem...@gm...> - 2023-02-06 16:17:43

Hi Miroslav,

Thanks! In my understanding of a "Time Aware Bridge", it doesn't correct/adjust/tune the PHC. Is that not correct?

Nemo

On Monday, 6 February, 2023 at 11:01:51 am GMT-5, Miroslav Lichvar <mli...@re...> wrote:

On Mon, Feb 06, 2023 at 03:21:25PM +0000, Nemo Crypto wrote:
> Hi Linuxptp-users,
> I am using the gPTP 802.1AS profile for my network. My simplified network topology looks like this:
>
> TimeLeader --> Eth Switch (802.1as Time Aware Bridge) --> Processor (BC?) --> TimeFollower
>
> In the above topology, the processor running linuxptp (ptp4l & phc2sys) has 2 interfaces. One interface should act as an 802.1AS TimeFollower and the other should act as an 802.1AS TimeLeader. I know that the IEEE 1588 Boundary Clock has this feature, but I am not sure if 802.1AS (gPTP) has a similar feature.
>
> Can you please share the details? If this is supported by LinuxPTP, can you please help me with how the configuration file would look?

IIRC gPTP has time-aware bridges which are equivalent to PTP boundary clocks. ptp4l as a boundary clock doesn't require any special configuration. For a gPTP example see configs/gPTP.cfg in the linuxptp tarball/repository.

If your interfaces don't share a PTP clock, you will need to enable the boundary_clock_jbod option and run phc2sys to keep the two PHCs synchronized. However, that might not meet the gPTP requirements on accuracy.

--
Miroslav Lichvar
From: Miroslav L. <mli...@re...> - 2023-02-06 16:01:59

On Mon, Feb 06, 2023 at 03:21:25PM +0000, Nemo Crypto wrote:
> Hi Linuxptp-users,
> I am using the gPTP 802.1AS profile for my network. My simplified network topology looks like this:
>
> TimeLeader --> Eth Switch (802.1as Time Aware Bridge) --> Processor (BC?) --> TimeFollower
>
> In the above topology, the processor running linuxptp (ptp4l & phc2sys) has 2 interfaces. One interface should act as an 802.1AS TimeFollower and the other should act as an 802.1AS TimeLeader. I know that the IEEE 1588 Boundary Clock has this feature, but I am not sure if 802.1AS (gPTP) has a similar feature.
>
> Can you please share the details? If this is supported by LinuxPTP, can you please help me with how the configuration file would look?

IIRC gPTP has time-aware bridges which are equivalent to PTP boundary clocks. ptp4l as a boundary clock doesn't require any special configuration. For a gPTP example see configs/gPTP.cfg in the linuxptp tarball/repository.

If your interfaces don't share a PTP clock, you will need to enable the boundary_clock_jbod option and run phc2sys to keep the two PHCs synchronized. However, that might not meet the gPTP requirements on accuracy.

--
Miroslav Lichvar
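[To make the answer above concrete: turning ptp4l into a boundary clock is just a matter of listing two port sections, and the gPTP flavour comes from the 802.1AS options. The following is a sketch reconstructed from memory of configs/gPTP.cfg; verify option names and values against the file shipped with your linuxptp version before using it.]

```
[global]
# 802.1AS / gPTP settings (compare with configs/gPTP.cfg)
transportSpecific       0x1
network_transport       L2
delay_mechanism         P2P
follow_up_info          1
path_trace_enabled      1
neighborPropDelayThresh 800
logSyncInterval         -3
# only needed when the two ports do not share one PHC:
boundary_clock_jbod     1

# two port sections make ptp4l a boundary clock
[eth0]
[eth1]
```

With separate PHCs you would additionally run phc2sys (e.g. `phc2sys -a -r`) so the follower-side PHC tracks the leader-side one, as Miroslav notes, with the caveat that the extra hop through software may not meet gPTP accuracy requirements.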