From: David C N. <dc...@ad...> - 2004-04-27 16:26:26
The first thing I'm discovering is that the diff from 0.2.00 to the current CVS version is HUGE, and it is messy due to such things as the "core" directory tree being renamed to "base".

It would be a great help to have a new baseline version like 0.2.10, even if it is just a snapshot of CVS, so I have something to work with that other people could reliably reproduce. Is there any possibility of making a CVS snapshot available as a tarball in the near future? Or if I make one, could it be available on SourceForge so that I can refer to it in the RPM spec file?

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
From: William J. M. <wm...@es...> - 2004-04-21 21:05:34
I'll check them in. I wanted something more than my own testing before I did.

Agreed on the latency changes. I also think a tuning setting on a channel for minimum fragment size might be easily implemented. That would give you what you were looking for a while ago of setting that to >4k, which means your frames would always go whole.

Regards,

-bill

On Wed, Apr 21, 2004 at 04:36:46PM -0400, David C Niemi wrote:
> On Wed, 21 Apr 2004, William J. Mills wrote:
> > I am pretty sure the answer is no. Go for it!
> >
> > How did the silly window changes work out?
>
> They worked, and I'd like to see some form of them go into the main tree. The latency changes need to be generalized. I'd like to put together a patch with all my changes for your review -- probably next week.
>
> DCN
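A minimal sketch of the per-channel "minimum fragment size" tuning described above. The struct layout and names here are illustrative assumptions, not the actual beepcore-c API:

    /* Sketch only: a per-channel minimum-fragment-size check of the kind
     * discussed above.  The names (min_frame_size, the open-window
     * arithmetic) are assumptions, not beepcore-c definitions. */
    struct channel_tuning {
        long min_frame_size;    /* never emit a fragment smaller than this */
    };

    static int fragment_ok(long frame_size, long open_window,
                           const struct channel_tuning *tune)
    {
        if (frame_size <= open_window)
            return 1;           /* the whole frame fits: always fine */
        /* Otherwise only fragment once the window has opened far enough. */
        return open_window >= tune->min_frame_size;
    }

Setting min_frame_size above 4k would then reproduce the "frames always go whole" behavior mentioned above for 4k payloads.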
From: David C N. <dc...@ad...> - 2004-04-21 20:36:59
On Wed, 21 Apr 2004, William J. Mills wrote:
> I am pretty sure the answer is no. Go for it!
>
> How did the silly window changes work out?

They worked, and I'd like to see some form of them go into the main tree. The latency changes need to be generalized. I'd like to put together a patch with all my changes for your review -- probably next week.

DCN

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
From: William J. M. <wm...@es...> - 2004-04-21 16:36:34
I am pretty sure the answer is no. Go for it!

How did the silly window changes work out?

-bill

On Wed, Apr 21, 2004 at 09:59:17AM -0400, David C Niemi wrote:
> In preparation for releasing the project I have been working on, I'd like to put together an RPM of Beepcore-C. Does anyone know if this has been done before?
From: David C N. <dc...@ad...> - 2004-04-21 13:59:32
In preparation for releasing the project I have been working on, I'd like to put together an RPM of Beepcore-C. Does anyone know if this has been done before?

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
From: David C N. <dc...@ad...> - 2004-04-07 22:24:36
I think basing it on the actual number of sessions would make some sense, but I have anywhere from 1-4 sessions per profile instance right now, and the optimum timeout is consistently near 10 msec, mostly independent of the number of sessions.

It might also make sense to only look through the sessions that actually exist instead of the theoretical maximum supported, but it is not quite clear to me how to do that.

DCN

On Wed, 7 Apr 2004, William J. Mills wrote:
> Mark Richardson wrote most of that.
>
> I suspect the optimization you are doing is optimizing for a single connection. I am wondering what the behavior on the "server" side will be when it accepts more than one connection.
>
> I think, if I remember correctly, Mark was somewhat concerned with portability and so made some least-common-denominator type choices in the poll code. I can well believe that tuning may be needed. I think the timeouts as set are pessimistic, and probably should be set based on the actual number of sessions rather than the max.

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
From: William J. M. <wm...@es...> - 2004-04-07 21:39:39
Mark Richardson wrote most of that.

I suspect the optimization you are doing is optimizing for a single connection. I am wondering what the behavior on the "server" side will be when it accepts more than one connection.

I think, if I remember correctly, Mark was somewhat concerned with portability and so made some least-common-denominator type choices in the poll code. I can well believe that tuning may be needed. I think the timeouts as set are pessimistic, and probably should be set based on the actual number of sessions rather than the max.

-bill

On Wed, Apr 07, 2004 at 05:30:18PM -0400, David C Niemi wrote:
> On Tue, 6 Apr 2004, Lei Zhang wrote:
> > So you have 768 / 3 = 256 sessions in a single process? Can you make your application host a smaller number of sessions? I played a little with the timeout value of poll() a while ago; my purpose then was to see if that would help reduce the CPU usage of my beepcore-c based application -- and noticed the problem you described in this thread.
>
> I have done some more real testing, and find that subtle changes in the vicinity of this poll() call have a rather large impact on performance (latency in particular). It looks like for my current setup about a 10 msec timeout to poll() is optimal; reducing it further does not help and seems to slightly increase CPU consumption. But going up to even 20 msec makes things noticeably slower.
>
> Who wrote this code? It is obviously rather intricate and in many ways hard to improve upon, but it seems to me some sort of change may be called for (if nothing else, to always set the timeout to 1/HZ).
From: David C N. <dc...@ad...> - 2004-04-07 21:30:47
On Tue, 6 Apr 2004, Lei Zhang wrote:
> So you have 768 / 3 = 256 sessions in a single process? Can you make your application host a smaller number of sessions? I played a little with the timeout value of poll() a while ago; my purpose then was to see if that would help reduce the CPU usage of my beepcore-c based application -- and noticed the problem you described in this thread.

I have done some more real testing, and find that subtle changes in the vicinity of this poll() call have a rather large impact on performance (latency in particular). It looks like for my current setup about a 10 msec timeout to poll() is optimal; reducing it further does not help and seems to slightly increase CPU consumption. But going up to even 20 msec makes things noticeably slower.

Who wrote this code? It is obviously rather intricate and in many ways hard to improve upon, but it seems to me some sort of change may be called for (if nothing else, to always set the timeout to 1/HZ).

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
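A minimal sketch of the kind of change suggested above: keep the session-scaled heuristic but clamp it to a small ceiling near the 10 msec sweet spot reported in this thread. The function and constant names are assumptions for illustration; only the 3-per-session scaling (pn->size) comes from the discussion:

    /* Sketch only: clamp the poll() timeout instead of letting it grow
     * without bound with the session count.  "nsessions" stands in for
     * pn->size; the clamp bounds are illustrative. */
    #define FPOLL_TIMEOUT_MIN_MS  1
    #define FPOLL_TIMEOUT_MAX_MS 10   /* ~10 msec reported as optimal above */

    static int fpoll_timeout_ms(int nsessions)
    {
        int ms = 3 * nsessions;               /* the original heuristic */
        if (ms < FPOLL_TIMEOUT_MIN_MS)
            ms = FPOLL_TIMEOUT_MIN_MS;
        if (ms > FPOLL_TIMEOUT_MAX_MS)
            ms = FPOLL_TIMEOUT_MAX_MS;
        return ms;                            /* passed as poll()'s timeout */
    }

With 256 sessions the unclamped 3 * pn->size works out to the 768 msec latency described earlier in the thread; the clamp keeps the wait bounded regardless of session count.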
From: David C N. <dc...@ad...> - 2004-04-06 19:27:44
On Tue, 6 Apr 2004, Lei Zhang wrote:
> So you have 768 / 3 = 256 sessions in a single process? Can you make your application host a smaller number of sessions? I played a little with the timeout value of poll() a while ago; my purpose then was to see if that would help reduce the CPU usage of my beepcore-c based application -- and noticed the problem you described in this thread.

There are only a few sessions active, though the number will vary somewhat. It would be nice if only active sessions were actually impacting performance, but even better for the number of sessions to not seriously affect performance.

> Assuming I understand your question, here is an answer: this poll() usually does not have the WRITE mask set; some other thread will set the WRITE mask. However, the mask change won't take effect until poll() times out, hence any outgoing message may have to wait as long as the poll timeout value before it gets sent out. "3 * pn->size" seems to be a good timeout value: not too much latency, not too hard on CPU usage. Otherwise, we need a way to let poll() return whenever a mask for any fd is changed.

Returning right away when a mask is changed might be preferable to having CPU consumption raised by a very short timeout, or to having multi-millisecond latencies added every time a message is sent. Any idea of how to do it?

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
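One common answer to "how do we make poll() return as soon as a mask changes" is the classic self-pipe trick, sketched below. This is only an illustration of the idea, not beepcore-c code; it assumes the poller adds the pipe's read end to its pollfd set:

    /* Self-pipe wakeup sketch: the poller includes wake_pipe[0] in its
     * pollfd array; any thread that changes an fd's event mask writes one
     * byte to wake_pipe[1], forcing poll() to return immediately instead
     * of waiting out its timeout. */
    #include <unistd.h>
    #include <fcntl.h>

    static int wake_pipe[2];            /* [0] read end, [1] write end */

    int wakeup_init(void)
    {
        if (pipe(wake_pipe) < 0)
            return -1;
        /* Non-blocking so signaling and draining can never stall. */
        fcntl(wake_pipe[0], F_SETFL, O_NONBLOCK);
        fcntl(wake_pipe[1], F_SETFL, O_NONBLOCK);
        return 0;
    }

    /* Called by the thread that just turned on a WRITE mask. */
    void wakeup_poller(void)
    {
        char c = 0;
        (void)write(wake_pipe[1], &c, 1);
    }

    /* Called by the poller when wake_pipe[0] polls readable: drain it,
     * rebuild the masks, and call poll() again. */
    void wakeup_drain(void)
    {
        char buf[64];
        while (read(wake_pipe[0], buf, sizeof buf) > 0)
            ;
    }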
From: Lei Z. <lz...@ju...> - 2004-04-06 19:18:15
David C Niemi wrote:
> I also noticed that I was seeing rather long latencies, up to 700+ msec, as if everything was getting batched up and sent once about every 3/4 sec. I noticed a poll() call in threaded_os/transport/bp_fpollmgr.c:IW_fpollmgr_fds() with a number of milliseconds that was set to "3 * pn->size" (pn is the POLL NODE pointer). This seems to be a rather strange assumption, and I found that it was translating to 768 milliseconds.

So you have 768 / 3 = 256 sessions in a single process? Can you make your application host a smaller number of sessions? I played a little with the timeout value of poll() a while ago; my purpose then was to see if that would help reduce the CPU usage of my beepcore-c based application -- and noticed the problem you described in this thread.

> When I cut this down to just a few milliseconds, the latency of small messages drops by more than an order of magnitude, down to a few tens of milliseconds. It seems to me something is going wrong here, or the poll() call would not even need to reach its timeout. Any ideas? Also oddly, my silly frame cure combined with a short poll() interval sometimes causes stalls.
>
> I wonder whether perhaps something else is waiting to happen and cannot because the pn->lock is locked during the poll() call. Like something that is trying to set some pollfds, perhaps, as a new message is being worked with.

Assuming I understand your question, here is an answer: this poll() usually does not have the WRITE mask set; some other thread will set the WRITE mask. However, the mask change won't take effect until poll() times out, hence any outgoing message may have to wait as long as the poll timeout value before it gets sent out. "3 * pn->size" seems to be a good timeout value: not too much latency, not too hard on CPU usage. Otherwise, we need a way to let poll() return whenever a mask for any fd is changed.

Lei
From: David C N. <dc...@ad...> - 2004-04-05 19:04:49
I have done some benchmarking, and found that silly frames sap 40% of the throughput I get in transferring large messages (in other words, eliminating silly frames improves throughput by 60%). So eliminating silly frames is not just an academic exercise.

I also noticed that I was seeing rather long latencies, up to 700+ msec, as if everything was getting batched up and sent once about every 3/4 sec. I noticed a poll() call in threaded_os/transport/bp_fpollmgr.c:IW_fpollmgr_fds() with a number of milliseconds that was set to "3 * pn->size" (pn is the POLL NODE pointer). This seems to be a rather strange assumption, and I found that it was translating to 768 milliseconds. When I cut this down to just a few milliseconds, the latency of small messages drops by more than an order of magnitude, down to a few tens of milliseconds. It seems to me something is going wrong here, or the poll() call would not even need to reach its timeout. Any ideas? Also oddly, my silly frame cure combined with a short poll() interval sometimes causes stalls.

I wonder whether perhaps something else is waiting to happen and cannot because the pn->lock is locked during the poll() call. Like something that is trying to set some pollfds, perhaps, as a new message is being worked with.

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
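The lock-contention concern in the last paragraph above is usually addressed by not holding the poll manager's lock across the blocking call: snapshot the pollfd set under the lock, then release it for the duration of poll(). The sketch below only illustrates that pattern; the struct and field names (lock, pollfds, size) merely echo the thread's naming and are not the actual bp_fpollmgr.c code:

    /* Pattern sketch (not bp_fpollmgr.c): copy the fd set while holding
     * the lock, drop the lock for the blocking poll(), then re-take it to
     * apply the results, so other threads can update masks during the wait. */
    #include <poll.h>
    #include <pthread.h>
    #include <string.h>

    struct poll_node_sketch {
        pthread_mutex_t lock;
        struct pollfd  *pollfds;    /* assumed layout, for illustration */
        int             size;
    };

    int poll_once(struct poll_node_sketch *pn, struct pollfd *scratch,
                  int timeout_ms)
    {
        int n, rc;

        pthread_mutex_lock(&pn->lock);
        n = pn->size;
        memcpy(scratch, pn->pollfds, n * sizeof(*scratch));
        pthread_mutex_unlock(&pn->lock);

        rc = poll(scratch, n, timeout_ms);   /* lock NOT held while blocked */

        pthread_mutex_lock(&pn->lock);
        /* ... merge revents from scratch back into pn's bookkeeping ... */
        pthread_mutex_unlock(&pn->lock);
        return rc;
    }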
From: David C N. <dc...@ad...> - 2004-03-30 21:31:46
On Tue, 30 Mar 2004, William J. Mills wrote:
> > It may be avoiding some of the REALLY silly windows like 100 byte fragments of a 4K frame. But at apparent_out_window/4, it can break a 4K frame up into 4 framelets, and frequently breaks one up into 2 or 3.
>
> So what is the right behavior? That's the question: should we wait for the window to open completely so we are sending the max we can in one chunk? This works if the window is in fact larger than the frame, but what if the frame is larger than the window? Then we have to fragment.

apparent_out_window is evidently not updated when one's peer increases its window size. That is the problem I am seeing. For a crude test, I just eliminated that part of your test, and the silly windows are gone, with no stalls that I can detect (and I HAD seen stalls for some of my previous attempts...). So I think the right behavior is to keep your test as you proposed it, but find some way to really find out what the peer's current input buffer size is.

> Possible that apparent_out_window is not getting maintained...

Indeed, it is set to 4K and never changed, as far as I can tell. I suppose there is some way to propagate this change to one's peer.

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
From: William J. M. <wm...@es...> - 2004-03-30 20:14:54
> It may be avoiding some of the REALLY silly windows like 100 byte fragments of a 4K frame. But at apparent_out_window/4, it can break a 4K frame up into 4 framelets, and frequently breaks one up into 2 or 3.

So what is the right behavior? That's the question: should we wait for the window to open completely so we are sending the max we can in one chunk? This works if the window is in fact larger than the frame, but what if the frame is larger than the window? Then we have to fragment.

My off-the-cuff window/4 is to reduce "silliness" while still letting us have a number of fragments in flight; I think it's advantageous to keep the flow going. I am almost tempted to do something like sending chunks of window size/4 if the frame size exceeds the window size, so that we can have frames and SEQs in flight and not be stuck in a cycle of filling the whole window and having to wait for the SEQ in return before sending more.

> What would happen if I refused to accept anything less than c->commit_frame->size? Would that cause a stall of some sort?

I think so, certainly for c->commit_frame->size > c->apparent_out_window.

> I note that c->apparent_out_window is only 4K, despite seemingly having set the window size to be 64K at both ends using bpc_set_channel_window() in pro_start_indication() and my _start() function. Am I setting the wrong thing, or in the wrong places?

Possible that apparent_out_window is not getting maintained...

> BTW, I only have one active channel right now, so various algorithms for choosing the channel won't make much difference except to the extent they disqualify our only channel from sending at the moment.

True.

-bill
From: David C N. <dc...@ad...> - 2004-03-30 19:56:09
On Tue, 30 Mar 2004, William J. Mills wrote:
> I am trying to implement the spirit of "Nagle's algorithm" here, one part of which is not to send more than one tiny frame in sequence. The major difference is that it applies to TCP and uses the concepts of MSS and window size, and we don't have MSS in BEEP.
>
> It compiles and ran (with c-> rather than b-> as you note) last night, using beepng. With a window size of 4K and sending 4K payloads you do still end up with tiny frames sent, but they are much less frequent. With the restriction of opening the window at least 50%, it's reduced the silliness a lot already, right? Sending a small frame that completes the message is not "silly".
>
> Thoughts?

It may be avoiding some of the REALLY silly windows like 100 byte fragments of a 4K frame. But at apparent_out_window/4, it can break a 4K frame up into 4 framelets, and frequently breaks one up into 2 or 3.

What would happen if I refused to accept anything less than c->commit_frame->size? Would that cause a stall of some sort?

I note that c->apparent_out_window is only 4K, despite seemingly having set the window size to be 64K at both ends using bpc_set_channel_window() in pro_start_indication() and my _start() function. Am I setting the wrong thing, or in the wrong places?

BTW, I only have one active channel right now, so various algorithms for choosing the channel won't make much difference except to the extent they disqualify our only channel from sending at the moment.

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
From: William J. M. <wm...@es...> - 2004-03-30 19:33:00
I am trying to implement the spirit of "Nagle's algorithm" here, one part of which is not to send more than one tiny frame in sequence. The major difference is that it applies to TCP and uses the concepts of MSS and window size, and we don't have MSS in BEEP.

It compiles and ran (with c-> rather than b-> as you note) last night, using beepng. With a window size of 4K and sending 4K payloads you do still end up with tiny frames sent, but they are much less frequent. With the restriction of opening the window at least 50%, it's reduced the silliness a lot already, right? Sending a small frame that completes the message is not "silly".

Thoughts?

-bill

On Tue, Mar 30, 2004 at 12:50:05PM -0500, David C Niemi wrote:
> While this looks kind of promising, it doesn't seem to work.
>
> First, I presume you mean c-> rather than b-> in each of the tests below, and that you want <= and >= for the two new tests. But I think the problem is that all this does is to skip certain operations on the channel (in the "if (b) {" structure), and silliness still happens. I feel we have peeled the onion several layers only to find we might have the wrong onion.
>
> How does the (c->max_out_seq - c->cur_out_seq) situation ever get improved? If we punt, is some other process going to help out and clean things up?
>
> DCN
>
> On Mon, 29 Mar 2004, William J. Mills wrote:
> > David,
> >
> > I am playing with the following change to the below...
> >
> >     while (c) {
> >         if (c->commit_frame != NULL && ul_lt(c->cur_out_seq, c->max_out_seq) &&
> >             (s->tuning == -1 || c->channel_number == s->tuning) &&
> >             ((b->commit_frame->size < (b->max_out_seq - b->cur_out_seq)) ||
> >              ((b->max_out_seq - b->cur_out_seq) > (b->apparent_out_window / 4)))) {
> >             b = c;
> >             break;
> >         }
> >         c = c->next;
> >     }
> >
> > which adds logic to check if the current frame is smaller than the open window or the current window is greater than 1/4 the largest apparent window. I think this will reduce silliness. I will see what results I get.
> >
> > -bill
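For reference, a standalone sketch of the send-eligibility test the thread converges on, with the c->/b-> mix-up corrected and the comparisons made inclusive as suggested in the quoted reply. The struct below only mirrors the field names quoted in the thread; it is not beepcore-c's channel type:

    /* Sketch of the anti-silly-window eligibility test discussed above,
     * using c-> throughout and <= / >= as suggested.  The struct is a
     * stand-in for illustration only. */
    struct channel_sketch {
        struct frame_sketch { long size; } *commit_frame;
        unsigned long cur_out_seq;
        unsigned long max_out_seq;
        long apparent_out_window;
    };

    static int may_send_now(const struct channel_sketch *c)
    {
        long open_window = (long)(c->max_out_seq - c->cur_out_seq);

        if (c->commit_frame == NULL || open_window <= 0)
            return 0;
        /* Send if the whole frame fits, or the window has opened to at
         * least a quarter of the peer's advertised buffer. */
        return (c->commit_frame->size <= open_window) ||
               (open_window >= c->apparent_out_window / 4);
    }

As David observes elsewhere in the thread, this test only behaves well if apparent_out_window actually tracks the peer's advertised window; if it stays pinned at 4K, the window/4 branch keeps producing fragments.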
From: William J. M. <wm...@es...> - 2004-03-30 18:50:52
Eduard,

Looking for it in my current checkout, I don't see it! I have not done a Windows compilation, so I have never encountered this before. The first thing is to ask the list and see whether anyone remembers this. List??? We'll try that and then I'll see what else I can hunt up.

-bill

On Tue, Mar 30, 2004 at 02:21:09AM -0800, Eduard Huber wrote:
> Dear William,
>
> Currently I am developing my own application based on the B.E.E.P. Framework (C/C++) downloaded from www.beepcore.org.
>
> I am going to use the C++ library for developing, but it seems not so easy to do. I had some problems during compilation of Peer.cpp: the file cbtcphelper.h (#include "cbtcphelper.h") could not be found. I could not find this file in the B.E.E.P. project. Is it a special file from another project or library?
>
> Can you give me advice on where I can find this file or how I can eliminate this problem?
From: Eduard H. <Edu...@in...> - 2004-03-30 11:21:42
Currently I am developing my own application based on the B.E.E.P. Framework (C/C++) downloaded from sourceforge.net or using CVS.

I am going to use the C++ library for developing, but it seems not so easy to do.

I had some problems during compilation of Peer.cpp: the file cbtcphelper.h (#include "cbtcphelper.h") could not be found. I could not find this file in the BEEP project. Is it a special file from another project or library?

Can you give me advice on where I can find this file or how I can eliminate this problem?
From: David C N. <dc...@ad...> - 2004-03-26 03:58:58
On Thu, 25 Mar 2004, Darren New wrote:
> However, an even more efficient way of doing this is to go
>
>     X <--- msg  --- Y
>     X ---- ans1 --> Y
>     X ---- ans2 --> Y
>     X ---- ans3 --> Y
>     ...
>     X ---- NUL  --> Y

OK, so once in a while a NUL would be sent which could trigger a fresh MSG to allow a fresh set of ANSes. I guess that would work, as long as there can be many frames in an answer that have the same answer number, but it sounds like a rather strange way of using the protocol.

> > However, the reason I don't want to do RPYs is that (per the end-to-end principle) I want to acknowledge things at a higher application layer, far above BEEP, so the BEEP-level RPY is not really needed.
>
> If you want to do any sort of acks at all, they have to come back *somewhere*. The end-to-end principle doesn't mean you shouldn't do low-level acks. It means that low-level acks aren't sufficient for high-level acks. If Y never sends a frame back to X, then you're not doing the end-to-end principle either. :-)

Y eventually sends a frame back, based on the MSG going up to the high-level app and generating a high-level reply which is carried in a separate BEEP message. There's no practical way to make this a reply to the original message number because it could come from another machine entirely (due to load-balancing).

> > Is the requirement to do RPYs or ANSes from the BEEP spec itself, or from the Beepcore-C implementation? It would have made sense to me to have a message type that did not expect a reply at the BEEP level.
>
> From the spec. And no, it wouldn't really, because then you couldn't reuse message numbers and you'd eventually run out.

I'd like to just forget message numbers once the final frame is sent. I think. Just like answer numbers are presumably forgotten right away once all the frames in the answer are sent.

> > Anyway, the message numbering does seem to have been the issue at hand, and I am taking care of it now. Thanks to both William and Darren for pointing it out.
>
> Glad I could remember what that message meant. ;-)

By the way, I was also wondering about this comment in CBEEP.c. Do you think it is still a problem?

    /* BUGGY! THIS ASSUMES NO 2^32-1 WRAPPING! Use "long long" here? */
    static void find_best_seq_to_send(struct session * s)
    {
        struct channel * best;   /* Best */
        struct channel * test;   /* test */
        long b_size;
        long t_size;
        ...

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
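On the 2^32 wrapping question at the end of that message: the usual fix is a wrap-safe comparison of unsigned 32-bit sequence numbers rather than switching to "long long". The helper below is a generic sketch of that idea, in the spirit of the ul_lt() test quoted elsewhere in the thread, not the actual beepcore-c code:

    /* Generic wrap-safe sequence comparison sketch: treats the numbers as
     * points on a modulo-2^32 circle, so it stays correct across the wrap
     * as long as the two values are within 2^31 of each other. */
    #include <stdint.h>

    static int seq_lt(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) < 0;   /* a comes "before" b on the circle */
    }

    static uint32_t seq_distance(uint32_t from, uint32_t to)
    {
        return to - from;              /* unsigned subtraction also wraps */
    }

With comparisons like these, window arithmetic such as max_out_seq - cur_out_seq stays well-defined across the 2^32 wrap.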
From: William J. M. <wm...@es...> - 2004-03-25 22:02:32
The protocol specifies that there will always be a response to a MSG. I don't think we are quite on the same page about the number of messages needed for things. If you wanted to deliver, say, 10 1K messages that require no app-level response, you can:

- send 10 MSG frames and get back 10 NUL frames: 20 messages; or
- get the receiver to *request* the data with a message and send 10 ANS and a NUL, so that's 1 MSG, 10 ANS, and a NUL, for a total of 12 messages.

If you needed to trigger that from the sender side, you add one MSG/RPY to that. ANS messages have an answer number which can be used to determine sequence/count and such.

The use of the NUL without ANS just to "ACK" the MSG was so that the sender could know the profile on the other end in fact received the MSG.

Clearer?

-bill

On Thu, Mar 25, 2004 at 04:33:21PM -0500, David C Niemi wrote:
> On Thu, 25 Mar 2004, William J. Mills wrote:
> > If you don't need replies back, you can have the end getting all the traffic send a MSG and send ANS frames back, as many as you want, ending the stream with a NUL.
>
> Sending multiple messages in one stream would require me to do my own message numbering, which is a lot more work. And if I do this switcheroo per-message, it is just as much overhead (actually one packet more, thanks to the NUL) as if I just sent a RPY back. So for now sending a RPY seems the thing to do, and serves my purposes for the time being.
>
> However, the reason I don't want to do RPYs is that (per the end-to-end principle) I want to acknowledge things at a higher application layer, far above BEEP, so the BEEP-level RPY is not really needed. Is the requirement to do RPYs or ANSes from the BEEP spec itself, or from the Beepcore-C implementation? It would have made sense to me to have a message type that did not expect a reply at the BEEP level.
>
> Anyway, the message numbering does seem to have been the issue at hand, and I am taking care of it now. Thanks to both William and Darren for pointing it out.
From: Darren N. <dn...@sa...> - 2004-03-25 21:51:46
David C Niemi wrote:
> Sending multiple messages in one stream would require me to do my own message numbering, which is a lot more work. And if I do this switcheroo per-message, it is just as much overhead (actually one packet more, thanks to the NUL) as if I just sent a RPY back. So for now sending a RPY seems the thing to do, and serves my purposes for the time being.

I think you're misunderstanding. What Bill is describing is this. Right now, you have endpoint X sending a bunch of messages to endpoint Y and getting no answers:

    X --- msg1 ---> Y
    X --- msg2 ---> Y
    X --- msg3 ---> Y
    X --- msg4 ---> Y

BEEP doesn't work that way, because X doesn't know when to free the resources, so you'd normally go

    X --- msg1 ---> Y
    X <-- rpy1 ---- Y
    X --- msg2 ---> Y
    X <-- rpy2 ---- Y
    X --- msg3 ---> Y
    X <-- rpy3 ---- Y
    X --- msg4 ---> Y
    X <-- rpy4 ---- Y

If the RPY has no actual data in it, you could modify this slightly to go

    X --- msg1 ---> Y
    X <-- NUL  ---- Y
    X --- msg2 ---> Y
    X <-- NUL  ---- Y
    X --- msg3 ---> Y
    X <-- NUL  ---- Y
    X --- msg4 ---> Y
    X <-- NUL  ---- Y

However, an even more efficient way of doing this is to go

    X <--- msg  --- Y
    X ---- ans1 --> Y
    X ---- ans2 --> Y
    X ---- ans3 --> Y
    ...
    X ---- NUL  --> Y

That way, you don't need any acks coming back from Y at all. Of course, you eventually run out of answer numbers. (Mind, "eventually" at 2^32 is a pretty big number, depending on what each message represents.)

> However, the reason I don't want to do RPYs is that (per the end-to-end principle) I want to acknowledge things at a higher application layer, far above BEEP, so the BEEP-level RPY is not really needed.

If you want to do any sort of acks at all, they have to come back *somewhere*. The end-to-end principle doesn't mean you shouldn't do low-level acks. It means that low-level acks aren't sufficient for high-level acks. If Y never sends a frame back to X, then you're not doing the end-to-end principle either. :-)

> Is the requirement to do RPYs or ANSes from the BEEP spec itself, or from the Beepcore-C implementation? It would have made sense to me to have a message type that did not expect a reply at the BEEP level.

From the spec. And no, it wouldn't really, because then you couldn't reuse message numbers and you'd eventually run out.

> Anyway, the message numbering does seem to have been the issue at hand, and I am taking care of it now. Thanks to both William and Darren for pointing it out.

Glad I could remember what that message meant. ;-)

--
Darren New, San Diego CA USA (PST)
I am in geosynchronous orbit, supported by a quantum photon exchange drive....
From: David C N. <dc...@ad...> - 2004-03-25 21:33:33
On Thu, 25 Mar 2004, William J. Mills wrote:
> If you don't need replies back, you can have the end getting all the traffic send a MSG and send ANS frames back, as many as you want, ending the stream with a NUL.

Sending multiple messages in one stream would require me to do my own message numbering, which is a lot more work. And if I do this switcheroo per-message, it is just as much overhead (actually one packet more, thanks to the NUL) as if I just sent a RPY back. So for now sending a RPY seems the thing to do, and serves my purposes for the time being.

However, the reason I don't want to do RPYs is that (per the end-to-end principle) I want to acknowledge things at a higher application layer, far above BEEP, so the BEEP-level RPY is not really needed. Is the requirement to do RPYs or ANSes from the BEEP spec itself, or from the Beepcore-C implementation? It would have made sense to me to have a message type that did not expect a reply at the BEEP level.

Anyway, the message numbering does seem to have been the issue at hand, and I am taking care of it now. Thanks to both William and Darren for pointing it out.

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
From: David C N. <dc...@ad...> - 2004-03-25 19:29:09
On Thu, 25 Mar 2004, Darren New wrote:
> David C Niemi wrote:
> > Is there any implicit assumption that MSGs will be acked with an ANS or RPY?
>
> Yes. You can't close the channel before all messages have been replied to. In theory, you shouldn't be processing the second incoming MSG until the first incoming MSG has had a reply of some sort queued for it.

Ah, that does sound like what's happening. Thanks.

> I don't recall exactly, but grepping the sources for it would probably turn up exactly what's happening. It might be that you're reusing MSG numbers, if you're not getting any answers, in which case you'll get an error when you queue the second MSG with the same number before the first MSG has been answered and retired. How are you generating your message numbers?

They're supposed to be sequential, but it's possible I have some sort of error there that can occasionally permit duplication. Or perhaps messages arriving from two different places to the same profile instance are using the same message numbers at times. Is it only necessary to keep message numbers unique within a session, or, when I have a listener with multiple sessions, is it necessary for them to be globally unique across multiple sessions?

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------
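A minimal sketch of the sequential message numbering being discussed: hand out numbers from a per-channel counter and do not retire or reuse a number until its reply has been received. Message numbers in BEEP are scoped to a channel, so independent sessions do not need globally unique values; the struct, the bounded pending list, and the limit below are illustrative assumptions, not beepcore-c types:

    /* Illustrative sketch only (not beepcore-c types): sequential MSG
     * numbering that refuses to reuse a number still awaiting its reply. */
    #include <stdint.h>

    #define MAX_OUTSTANDING 64

    struct msgno_alloc {
        uint32_t next;                        /* next msgno to hand out */
        uint32_t pending[MAX_OUTSTANDING];    /* msgnos awaiting a reply */
        int      npending;
    };

    /* Fills *out with the msgno for the next MSG, or returns -1 if too
     * many MSGs are still unanswered (the condition this thread suspects
     * is triggering the error). */
    static int msgno_next(struct msgno_alloc *a, uint32_t *out)
    {
        if (a->npending >= MAX_OUTSTANDING)
            return -1;
        *out = a->next++;
        a->pending[a->npending++] = *out;
        return 0;
    }

    /* Called when the matching RPY, ERR, or final ANS/NUL has arrived. */
    static void msgno_retire(struct msgno_alloc *a, uint32_t msgno)
    {
        for (int i = 0; i < a->npending; i++) {
            if (a->pending[i] == msgno) {
                a->pending[i] = a->pending[--a->npending];
                return;
            }
        }
    }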
From: William J. M. <wm...@es...> - 2004-03-25 19:09:43
The expectation is that each MSG gets a response, yes. So it's possible, I guess, that you are blowing up the outstanding message list somehow. A reply to a MSG can be either an RPY or ANS*/NUL (that is, any number of ANS frames followed by a NUL).

If you don't need replies back, you can have the end getting all the traffic send a MSG and send ANS frames back, as many as you want, ending the stream with a NUL.

-bill

On Thu, Mar 25, 2004 at 12:41:51PM -0500, David C Niemi wrote:
> On Wed, 24 Mar 2004, David C Niemi wrote:
> > On Tue, 23 Mar 2004, William J. Mills wrote:
> > > Hmmm. What's the action when it throws the error? Sending a MSG or an ANS/RPY/NUL?
>
> Hmm, come to think of it, it would probably be a MSG, as I do not use ANS or RPY for the type of messages that are most involved here, though there is one type of message for which a RPY is sent.
>
> Is there any implicit assumption that MSGs will be acked with an ANS or RPY? And what is a NUL frame? It almost looks like something is tracking the state of already-sent messages, and when, say, 256 of them are outstanding (or something in that ballpark) this message pops up. What is the message supposed to mean?
From: Darren N. <dn...@sa...> - 2004-03-25 19:01:01
David C Niemi wrote:
> Is there any implicit assumption that MSGs will be acked with an ANS or RPY?

Yes. You can't close the channel before all messages have been replied to. In theory, you shouldn't be processing the second incoming MSG until the first incoming MSG has had a reply of some sort queued for it.

> And what is a NUL frame?

IIRC, it's what you send at the end of a bunch of ANS frames to say that you're done sending ANSs for that MSG.

> It almost looks like something is tracking the state of already-sent messages, and when, say, 256 of them are outstanding (or something in that ballpark) this message pops up. What is the message supposed to mean?

I don't recall exactly, but grepping the sources for it would probably turn up exactly what's happening. It might be that you're reusing MSG numbers, if you're not getting any answers, in which case you'll get an error when you queue the second MSG with the same number before the first MSG has been answered and retired. How are you generating your message numbers?

--
Darren New, San Diego CA USA (PST)
I am in geosynchronous orbit, supported by a quantum photon exchange drive....
From: David C N. <dc...@ad...> - 2004-03-25 17:41:59
On Wed, 24 Mar 2004, David C Niemi wrote:
> On Tue, 23 Mar 2004, William J. Mills wrote:
> > Hmmm. What's the action when it throws the error? Sending a MSG or an ANS/RPY/NUL?

Hmm, come to think of it, it would probably be a MSG, as I do not use ANS or RPY for the type of messages that are most involved here, though there is one type of message for which a RPY is sent.

Is there any implicit assumption that MSGs will be acked with an ANS or RPY? And what is a NUL frame? It almost looks like something is tracking the state of already-sent messages, and when, say, 256 of them are outstanding (or something in that ballpark) this message pops up. What is the message supposed to mean?

-------------------------------------------------------
-- David C. Niemi            Adeptech Systems, Inc. --
-- Reston, Virginia, USA   http://www.adeptech.com/ --
-------------------------------------------------------