From: Darren N. <dn...@sa...> - 2003-09-16 07:10:42
David C Niemi wrote:
> Neither is the case. From what William Mills is saying, this is a known
> flaw -- as frames pile up waiting to be read, the fragmentation results in
> yet more frames piling up because the 4K frames that were "sent" are
> broken up, perhaps at the BEEP.c level.
AFAIK, the frames are broken up at the lowest level, just before being
sent, yes.
> As you say, it would make sense
> that if frames are removed from the buffer 4K at a time, they'd free up 4K
> for another 4K frame. But once a smaller frame is sent, only that much
> will be freed up when it is received, and it never gets quite back to
> full-sized frames again. Perhaps if I carefully choose the window
> size...but a multiple of 4096 bytes doesn't seem to do it.
Ah, OK. It becomes clear now. You *do* send *some* small frames. Are you
getting acknowledgements of these frames? Perhaps after sending a small
frame, you can either wait until a big frame comes back, or design the
profile to open two channels and send big frames.
Or BEEP.c could be modified to wait until it has accumulated some amount
of available SEQ window before it starts transmitting. I'm pretty sure
there's also a way to query what the receiver's window size currently
is. (And a way of querying the largest recent window size was planned,
so you could guess what the receiver had set the buffer size to.)
>>Sure. Any time you have more than one channel, and the receiver is only
>>reading a little at a time, say.
>
> Ah, so when you have many channels receiving small messages via a single
> shared receive buffer, my idea would be rather bad.
Not so much a shared receive buffer, but different channels processing
at different speeds. Think of something like "expedited data" or some
such. If you had, for example, small control messages interspersed with
data messages ("10% done!"), you wouldn't want to block after queuing
the progress message before sending the next frame.
Or if you have an IM client that supports file downloads, and you don't
want chat messages to wait for a file to finish downloading before they
show up. Or multiple file downloads going at the same time, in which
case I'm pretty sure I remember designing BEEP.c to round-robin when
all the window sizes are the same.
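The round-robin idea can be sketched like this (a toy model with invented
structures, not BEEP.c's actual internals):

```c
#include <assert.h>

#define NCHAN 3

/* Hypothetical multiplexer state: per-channel queue depths plus a
 * cursor so each pick resumes after the last channel served. */
struct mux {
    int pending[NCHAN];  /* frames queued per channel */
    int next;            /* channel to try first */
};

/* Pick the next channel with pending data, cycling so no channel
 * starves: a chat channel still gets its turn between file frames. */
int pick_channel(struct mux *m)
{
    for (int i = 0; i < NCHAN; i++) {
        int c = (m->next + i) % NCHAN;
        if (m->pending[c] > 0) {
            m->next = (c + 1) % NCHAN;  /* resume after this one */
            m->pending[c]--;
            return c;
        }
    }
    return -1;  /* nothing to send */
}
```

With equal window sizes, each channel drains at the same rate, which
matches the behavior described above.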
> And I imagine the
> receiver has no idea whether it's about to receive a tiny fragment or a
> whole tiny message when it says it's got some small amount of space.
Generally, the receiver wouldn't know this, no.
> Would it make any sense to be able to have separate receive buffers for
> each channel, or is that contrary to the protocol or otherwise untenable?
Technically, there are different buffers, sure. The library sucks up as
much memory as you set all your receive windows to. If you have three
channels, each set to a 10K window, the library will happily suck up 30K
of buffer space (plus overhead).
> I suppose it is also conceivable that I could speed up the receiver
> somewhat, but as its ultimate job is to write data any buffering I do
> there will eventually end up waiting for disk I/O.
The problem isn't the speed of the receiver. It's that you read a
small chunk, which generates a SEQ right away; the sender then uses it
to open the window, sees things pending, and sends them off. Having
the sender wait for an acknowledgement after sending a short frame
would help with this.
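The feedback loop can be shown with a toy simulation (again, a model of
the behavior, not BEEP.c code): the sender always fills whatever window
the receiver advertises, and the receiver reads small chunks and SEQs
each one immediately, so after the first shortfall the frames collapse to
the read-chunk size and never recover.

```c
#include <assert.h>
#include <stddef.h>

/* Simulate sending `total` bytes in frames of at most `full` bytes,
 * where the receiver reads `read_chunk` bytes at a time (read_chunk
 * must be > 0) and SEQs each read immediately.  Returns the size of
 * the last frame sent. */
size_t simulate_last_frame(size_t total, size_t full, size_t read_chunk)
{
    size_t window = full;   /* receiver starts with a full window */
    size_t last = 0;
    while (total > 0) {
        /* sender fills whatever window is open, up to a full frame */
        size_t frame = window < full ? window : full;
        if (frame > total)
            frame = total;
        last = frame;
        total -= frame;
        window -= frame;
        /* receiver reads one small chunk and SEQs it right away,
         * reopening only that much window */
        if (window < full)
            window += read_chunk;
    }
    return last;
}
```

If the receiver drains a full 4K at a time, frames stay at 4K; once it
acks in 512-byte chunks, every frame after the first is 512 bytes.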
--
Darren New, San Diego CA USA (PST)
Don't take home left-over tripe.
It'll just digest itself before
you get around to eating it.