Thread: [RTnet-developers] some suggestions for the next TDMA version
From: Karl R. <Kar...@gm...> - 2007-08-03 08:26:19
Hello,

I'm currently working on getting RTnet running on an Infineon TriCore 1130 and found some issues that could be solved in the next major TDMA version coming with RTnet. I would like to discuss these issues here to hear others' opinions.

The TriCore1 architecture is a little bit tricky. When you want to send a frame over Ethernet, the host (the "CPU") has to write it into a shared memory (reachable via the FPI bus). Then the host writes a special register to tell the DMUT (Data Management Unit Transmit) to fetch this frame from shared memory and write it into the Transmit Buffer (TB). The TX block of the MAC takes the frame from the TB and passes it to the MII, which puts it on the wire. All this is described in detail in [1], chapter 31.

Here comes the problem: when sending request calibration frames, one has to provide a transmission time stamp as close as possible to the real transmission time. As the TriCore does not support changing the frame after the host has written it into shared memory, there is no way to provide the real transmission time stamp. The only option is to put the scheduled transmission time there, which has the effect that the transmission time on the wire, t_trans, is always calculated too long, because the real transmission happens later.

To fix this issue there are a few possibilities; here is one: one could change TDMA.spec and allow slaves to send a two-part request calibration frame. The first one carries a don't-care transmission time stamp (marked as don't care by a bit or something similar). Slaves then record their actual transmission time stamp in their TX ISR and send it in the second ReqCalFrm. This would make the whole process much more precise, but of course requires changes to TDMA.spec.

I would really appreciate your opinions and comments on this idea!
Karl

[1] http://www.infineon.com/cms/en/services/download.html?filename=%2fdgdl%2ftc1130_um_v1.3_2004_11_per.pdf%3ffolderId%3ddb3a304412b407950112b41b37c12c2b%26fileId%3ddb3a304412b407950112b41b38162c2c&location=Products.Microcontrollers.32-Bit.TC1130_Family__TC1130__TC1115__TC1100_.TC1130.DOCUMENTS.tc1130_um_v1.3_2004_11_per.pdf
From: Jan K. <jan...@we...> - 2007-08-03 08:43:53
Karl Reichert wrote:
> Here comes the problem: When sending request calibration frames, one
> has to provide a transmission time stamp as close as possible to the
> real transmission time. [...] Only option is to put the scheduled
> transmission time there which will have the effect, that transmission
> time on the wire t_trans is always calculated too long because the
> real transmission is done later.

Unless you have to make the steps of writing to the shared mem +
triggering the transmission preemptible, the solution is simple: just
push out the local time taken right _before_ you start writing the
frame. For your arch, that is the point "as close as possible" to the
real tx date.

> To fix this issue there will be a few possibilities, one comes here:
> One could change TDMA.spec and allow slaves to send a two-part
> request calibration frame. [...] This would make the whole process
> much more precise, but of course requires changes in TDMA.spec.

Yeah, that's how NTP and PTP work as well (follow-up message carrying
the timestamp of the first one), IIRC. It would be possible to extend
the specs, but there should be a real hard need for it.

Again, as long as the tx timestamp is taken at a fixed offset before
the real tx, TDMA can handle this (maybe one should clarify the spec in
this regard).

Jan
From: Karl R. <Kar...@gm...> - 2007-08-03 09:06:55
Jan Kiszka wrote:
> Again, as long as the tx timestamp is taken at a fixed offset before
> the real tx, TDMA can handle this (maybe one should clarify the spec
> in this regard).

Well, the problem is that the FPI bus is a shared bus, which has the
effect that in some cases it may take longer until the frame is on the
wire and in some cases less. As a result, the tx timestamp is _not_
taken at a fixed offset before the real tx.

Karl
From: Jan K. <jan...@we...> - 2007-08-03 09:02:12
Karl Reichert wrote:
> Well, the problem is that the FPI bus is a shared bus, which has the
> effect that in some cases it may take longer until the frame is on
> the wire and in some cases less. As a result, the tx timestamp is
> _not_ taken at a fixed offset before the real tx.

OK, that sounds almost like a "hard need". What are the other users on
this bus, and who is controlling the access (host or devices)? And what
dimension would the jitter then have (tens of microseconds, more, or
less)?

Jan
From: Jan K. <jan...@we...> - 2007-08-03 09:20:17
Karl Reichert wrote:
> Jan Kiszka wrote:
>> OK, that sounds almost like a "hard need". What are the other users
>> on this bus, and who is controlling the access (host or devices)?
>> And what dimension would the jitter then have (tens of microseconds,
>> more, or less)?
>
> Other users can in general be every device that is also connected to
> the FPI bus. In my case, some memory is also connected to FPI, which
> has the effect that the host and all devices using this memory access
> the bus. As those devices may read/write asynchronously from/into the
> memory, there is no way to ensure stable bus access times.
>
> The dimension of the jitter depends very much on the situation. In my
> case, using only the host and the Ethernet controller, this value is
> always around 11 µs, and the jitter is less than 2 µs. But this
> really depends on the use case. If other devices were using the bus
> (USB, serial, ...), the jitter would be much higher for sure. It
> would be worth doing some tests on that, but I'm lacking time ATM,
> I'm sorry. But the values would be high, for sure.

Well, then let's draft a straightforward protocol extension: a new
frame type shall be introduced, let's call it "Prologue Frame". It
shall look like this:

    Version:  0x0202
    Frame ID: 0x0020
    <no further fields>

That frame is intended to serve as the timestamping reference for the
follow-up frames (sync, calibration request/reply). This means that,
instead of the RX timestamp of the succeeding frame, the RX time of the
prologue frame will be considered by the receiver. This, of course,
also means that the frame will _always_ be paired with a standard TDMA
frame.

It shall be an optional frame, i.e. both the sender and the receiver
don't need to issue/interpret it. If the receiver ignores it, the error
of the succeeding original frame would rise, but TDMA should still work
safely (this claim needs more thought, though). If the sender doesn't
issue it, everything would work as before anyway.

I can't provide a helping hand on the implementation right now. So if
you are interested in drafting such an extension, only review can be
provided. The best test case would be on your side anyway :).

Jan
From: Karl R. <Kar...@gm...> - 2007-08-03 09:39:00
Jan Kiszka wrote:
> Well, then let's draft a straightforward protocol extension: a new
> frame type shall be introduced, let's call it "Prologue Frame". [...]
> I can't provide a helping hand on the implementation right now. So if
> you are interested in drafting such an extension, only review can be
> provided. The best test case would be on your side anyway :).

Hmm ... sounds logical to me, it should work. I don't have any time for
an implementation right now either, but we should keep this in mind for
the future. I could create a draft of TDMA.spec and maybe also of the
RTnet implementation, but not now. I am very busy at the moment, sorry.
But we should keep this idea in mind and work on it later.

Karl
From: Jan K. <jan...@we...> - 2007-08-03 10:46:33
Karl Reichert wrote:
> The dimension of the jitter depends very much on the situation. In my
> case, using only the host and the Ethernet controller, this value is
> always around 11 µs, and the jitter is less than 2 µs. But this
> really depends on the use case. If other devices were using the bus
> (USB, serial, ...), the jitter would be much higher for sure. [...]

Thinking about this a bit further: when you can face such high jitter
on the FPI bus, won't your RX timestamps be impacted by it as well? And
will you compensate for the TX jitter also by uploading the final
frames early enough before the scheduled slot time during normal
operation?

If your FPI bus is really as indeterministic as you fear, the whole
system would be fairly unsuited for RT applications. I think I heard
that TriCore is used heavily in industrial and automotive applications,
isn't it? Are there better peripheral interfaces then? Well, at least
for your scenario, those 2 µs of jitter would be _far_ better than what
you get on any RTOS-managed platform, so no need to worry. Still, this
requires some more analysis to be sure - and that means it may cost
time... :)

Jan
From: Karl R. <Kar...@gm...> - 2007-08-03 13:05:06
Jan Kiszka wrote: > Karl Reichert wrote: > > Jan Kiszka wrote: > >> Karl Reichert wrote: > >>> Jan Kiszka wrote: > >>>> Karl Reichert wrote: > >>>>> Hello, > >>>>> > >>>>> I'm currently working on getting RTnet working on an Infineon > TriCore > >>>>> 1130 and found some issues, which could be solved in the next major > >>>>> TDMA version comming with RTnet. Would like to discuss these issues > >>>>> to see other's opinion about that. > >>>>> > >>>>> The TriCore1 architecture is a little bit tricky. When you want to > >>>>> send a frame over Ethernet, the host (the "cpu") has to write it > into > >>>>> a shared memory (available via FPI bus). Then the host writes a > >>>>> special register to tell the DMUT (Data Management Unit Transmit) to > >>>>> get this Frame from shared memory and write it into Transmit Buffer > >>>>> TB. The TX block of the MAC takes the frame from TB and sends it to > >>>>> MII, which will put it on the wire. All this can be found in detail > >>>>> in [1], chapter 31. > >>>>> > >>>>> Here comes the problem: When sending request calibration frames, one > >>>>> has to provide a transmission time stamp as close as possible to the > >>>>> real transmission time. As the TriCore does not support changing the > >>>>> frame after writing it into shared memory (by host), there is no way > >>>>> for providing a real transmission time stamp. Only option is to put > >>>>> the scheduled transmission time there which will have the effect, > >>>>> that transmission time on the wire t_trans is always calculated to > >>>>> long because the real transmission is done later. > >>>> Unless you have to make the steps of writing to the shared mem + > >>>> triggering the transmission preemptible, the solution is simple: just > >>>> push out the local time taken right _before_ you start writing the > >>>> frame. For your arch, that is the point "as close as possible" to the > >>>> real tx date. 
> >>>> > >>>>> To fix this issue there will be a few possibilities, one comes here: > >>>>> One could change TDMA.spec and allow slaves to send a two-parted > >>>>> request calibration frame. First one has a don't care Transmission > >>>>> Time stamp (marked as don't care by a bit or sth similiar). Slaves > >>>>> will notice their transmission time stamp in their TXISR, and send > it > >>>>> in the second ReqCalFrm. This would make the whole process much more > >>>>> precise, but of course requires changes in TDMA.spec. > >>>> Yeah, that's how NTP and PTP work as well (follow-up message carrying > >>>> the timestamp of the first one), IIRC. Would be possible to extend > the > >>>> specs, but there should be real hard needs for it. > >>>> > >>>> Again, as long as the tx timestamp is taken at a fixed date before > real > >>>> tx, TDMA can handle this (may one should clarify the spec in this > >> regard). > >>>> Jan > >>>> > >>> Well, the problem is that FPI bus is a shared bus, which will have the > >> effect that in some cases it may take longer until the frame is on the > wire > >> and in some cases it may take less time. As a result, the tx timestamp > is > >> _not_ taken at a fixed date before real tx. > >> > >> OK, that sounds almost like a "hard need". What are the other users on > >> this bus, and who is controlling the access (host or devices)? And what > >> dimension would the jitter then have (10 of microseconds, more, or > less)? > >> > >> Jan > >> > > Other users can be in general every device also connected to FPI bus. In > my case, some memory is also connected to FPI, which has the effect, that > host and all devices using this memory accessing the bus. As those devices > may read/write async from/into the memory, there is no way to ensure stable > times accessing this bus. > > The dimension of the jitter is very depending on the situation. 
In my > case, using only host and Ethernet-Controller, this value is always around 11 > µs, jitter is less then 2 µs. But this is really depending on the use > case. If there would be other devices using the bus (USB, serial ...), the > jitter will be much higher for sure. Would be worth to do some tests for > that, but I'm lacking time ATM, I'm sorry. But they will be high, for sure. > > Thinking about this a bit further: When you can face such high jitters > on the FPI bus, won't your RX timestamps be impacted by them as well? No, they don't because rx timestamp is taken by host in MACRXISR and there is no connection to FPI bus cause the interrupt isn't affected in any way with FPI. > And will you compensate for the TX jittery also by uploading the final > frames early enough before the scheduled slot time during normal > operation? > Yes, that's my strategy: Putting the frame already so close to MACTX unit that only transmit enable signal is set when slot starts. > If you FPI bus is really that indeterministic as you fear, the whole > system would be fairly unsuited for RT applications. I think I heard > that Tricore is used heavily in industrial and automotive, isn't it? Are > there better peripheral interfaces then? Well, at least for you > scenario, those 2 us jitter would be _far_ better than what you get on > any RTOS-managed platform, so no need to worry. Still, this requires > some more analysis to be sure - and that means it may cost time... :) > > Jan It is deterministic, but only in special cases. You can asign priorities to units, so the unit with highest FPI priority will get the bus no matter what and then of course the jitter is very low and deterministic. Problems occur whenever you need more then one unit communication over FPi because in some cases it is necessary to not give highest priority to ETH unit. 
So, to say it more clearly: in my case, in my current situation, using only
the ETH unit and therefore having a very small jitter of only 2 µs, a new
TDMA draft is _not_ needed. But when one thinks a bit further, those
problems have to be faced. Using more units with the need for real-time
behavior will cause higher latency and let this problem grow. So my idea
was not really an urgent case, more something to discuss when looking to
the future ...

Karl
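The two-part request calibration frame discussed above can be sketched in C. This is only an illustration of the idea, not code from RTnet: the struct layout, the field names, and the don't-care flag bit are all hypothetical, since TDMA.spec does not (yet) define such a frame.

```c
#include <stdint.h>

/* Hypothetical two-part ReqCalFrm layout -- field and flag names are
 * illustrative, not taken from TDMA.spec. */
struct req_cal_frm {
    uint16_t flags;       /* bit 0: transmission time stamp is don't-care */
    uint64_t tx_time_ns;  /* transmission time stamp */
};

#define RCF_TS_DONT_CARE  (1u << 0)

static uint64_t last_tx_time_ns;  /* real tx time, captured in the TX ISR */

/* First frame: uploaded to shared memory ahead of time; the time stamp
 * field is marked don't-care because the real tx time is not yet known. */
void build_first_req_cal(struct req_cal_frm *f, uint64_t scheduled_ns)
{
    f->flags = RCF_TS_DONT_CARE;
    f->tx_time_ns = scheduled_ns;  /* receiver must ignore this value */
}

/* TX ISR: note the real transmission time for the follow-up frame. */
void tx_isr(uint64_t now_ns)
{
    last_tx_time_ns = now_ns;
}

/* Second frame: carries the time stamp taken in the TX ISR. */
void build_second_req_cal(struct req_cal_frm *f)
{
    f->flags = 0;
    f->tx_time_ns = last_tx_time_ns;
}
```

This mirrors the two-step clock scheme of PTP, where a Follow_Up message carries the precise timestamp of the preceding Sync message.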
From: Karl R. <Kar...@gm...> - 2007-08-03 09:01:17
Jan Kiszka wrote:
> Karl Reichert wrote:
>> Jan Kiszka wrote:
>>> Karl Reichert wrote:
>>>> Hello,
>>>>
>>>> I'm currently working on getting RTnet working on an Infineon TriCore
>>>> 1130 and found some issues which could be solved in the next major
>>>> TDMA version coming with RTnet. I would like to discuss these issues
>>>> to hear others' opinions about them.
>>>>
>>>> The TriCore1 architecture is a little bit tricky. When you want to
>>>> send a frame over Ethernet, the host (the "CPU") has to write it into
>>>> a shared memory (available via the FPI bus). Then the host writes a
>>>> special register to tell the DMUT (Data Management Unit Transmit) to
>>>> get this frame from shared memory and write it into the Transmit
>>>> Buffer TB. The TX block of the MAC takes the frame from TB and sends
>>>> it to the MII, which puts it on the wire. All this can be found in
>>>> detail in [1], chapter 31.
>>>>
>>>> Here comes the problem: When sending request calibration frames, one
>>>> has to provide a transmission time stamp as close as possible to the
>>>> real transmission time. As the TriCore does not support changing the
>>>> frame after writing it into shared memory (by the host), there is no
>>>> way to provide a real transmission time stamp. The only option is to
>>>> put the scheduled transmission time there, which has the effect that
>>>> the transmission time on the wire t_trans is always calculated too
>>>> long, because the real transmission is done later.
>>> Unless you have to make the steps of writing to the shared mem +
>>> triggering the transmission preemptible, the solution is simple: just
>>> push out the local time taken right _before_ you start writing the
>>> frame. For your arch, that is the point "as close as possible" to the
>>> real tx date.
>>>
>>>> To fix this issue there would be a few possibilities; here comes one:
>>>> One could change TDMA.spec and allow slaves to send a two-part
>>>> request calibration frame. The first one has a don't-care transmission
>>>> time stamp (marked as don't care by a bit or something similar).
>>>> Slaves will note their transmission time stamp in their TX ISR and
>>>> send it in the second ReqCalFrm. This would make the whole process
>>>> much more precise, but of course requires changes to TDMA.spec.
>>> Yeah, that's how NTP and PTP work as well (follow-up message carrying
>>> the timestamp of the first one), IIRC. It would be possible to extend
>>> the specs, but there should be real hard needs for it.
>>>
>>> Again, as long as the tx timestamp is taken at a fixed date before real
>>> tx, TDMA can handle this (maybe one should clarify the spec in this
>>> regard).
>>>
>>> Jan
>>>
>> Well, the problem is that the FPI bus is a shared bus, which has the
>> effect that in some cases it may take longer until the frame is on the
>> wire and in some cases it may take less time. As a result, the tx
>> timestamp is _not_ taken at a fixed date before real tx.
>
> OK, that sounds almost like a "hard need". What are the other users on
> this bus, and who is controlling the access (host or devices)? And what
> dimension would the jitter then have (tens of microseconds, more, or
> less)?
>
> Jan
>
Other users can in general be every device also connected to the FPI bus.
In my case, some memory is also connected to the FPI, with the effect that
the host and all devices using this memory access the bus. As those devices
may read/write asynchronously from/into the memory, there is no way to
ensure stable access times on this bus.

The dimension of the jitter depends very much on the situation. In my case,
using only the host and the Ethernet controller, this value is always
around 11 µs, and the jitter is less than 2 µs. But this really depends on
the use case. If other devices were using the bus (USB, serial, ...), the
jitter would be much higher for sure. It would be worth doing some tests on
that, but I'm lacking time ATM, I'm sorry. But the values will be high, for
sure.
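Jan's point that a fixed offset between the time stamp and the real transmission is harmless, while a jittery one is not, can be illustrated numerically. The formula below uses PTP-style notation as a stand-in; it is not the exact calculation from TDMA.spec.

```c
#include <stdint.h>

/* PTP-style round-trip path delay estimate (illustrative notation):
 * t1 = slave tx stamp, t2 = master rx stamp,
 * t3 = master tx stamp, t4 = slave rx stamp. */
int64_t path_delay_ns(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
{
    return ((t4 - t1) - (t3 - t2)) / 2;
}

/* If the slave's t1 is taken a fixed d ns before the real transmission,
 * every estimate comes out exactly d/2 too large -- a constant that
 * calibration can absorb. If d jitters (the shared-FPI-bus case), the
 * estimate jitters by half that amount, which is precisely what the
 * two-part ReqCalFrm would eliminate. */
```

With a symmetric 11 µs path, an accurate t1 yields 11000 ns; a t1 taken 11000 ns early yields 16500 ns, i.e. the estimate is inflated by the constant d/2 = 5500 ns.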
Karl