From: Jon M. <jm...@re...> - 2020-03-18 14:47:34
On 3/18/20 12:50 AM, Tuong Lien Tong wrote:
> Hi Jon,
>
> Ok, that makes sense (but we should have covered the case where a broadcast packet is released too...).
> However, I have another concern about the logic here:
>
>> +	/* Enter fast recovery */
>> +	if (unlikely(retransmitted)) {
>> +		l->ssthresh = max_t(u16, l->window / 2, 300);
>> +		l->window = l->ssthresh;
>> +		return;
>> +	}
> What will happen if we have a retransmission while the link is still in the slow-start phase? For example:
> l->ssthresh = 300
> l->window = 60
> ==> retransmitted = true, then: l->ssthresh = 300; l->window = 300???
>
> This doesn't look correct.
> Should it be:
>
>> +	/* Enter fast recovery */
>> +	if (unlikely(retransmitted)) {
>> +		l->ssthresh = max_t(u16, l->window / 2, 300);
>> -		l->window = l->ssthresh;
>> +		l->window = min_t(u16, l->window, l->ssthresh);
>> +		return;
>> +	}
> So this will fix the issue with the broadcast case as well?
Yes, this would fix both issues, so I think it is a good suggestion.
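For illustration, here is a quick userspace model of that branch with your min_t()
change, plugged with the numbers from the two cases in this thread. It is just a
sketch with made-up helpers (max_u16/min_u16, struct model), not the kernel code:

#include <stdio.h>

typedef unsigned short u16;

static u16 max_u16(u16 a, u16 b) { return a > b ? a : b; }
static u16 min_u16(u16 a, u16 b) { return a < b ? a : b; }

struct model { u16 window; u16 ssthresh; };

static void enter_fast_recovery(struct model *l)
{
	l->ssthresh = max_u16(l->window / 2, 300);
	l->window = min_u16(l->window, l->ssthresh);  /* was: l->window = l->ssthresh */
}

int main(void)
{
	struct model uc = { .window = 60, .ssthresh = 300 };  /* unicast, still in slow start */
	struct model bc = { .window = 50, .ssthresh = 50 };   /* broadcast, min_win == max_win == 50 */

	enter_fast_recovery(&uc);
	enter_fast_recovery(&bc);

	printf("unicast:   window=%hu ssthresh=%hu\n", uc.window, uc.ssthresh);  /* 60  300 */
	printf("broadcast: window=%hu ssthresh=%hu\n", bc.window, bc.ssthresh);  /* 50  300 */
	return 0;
}

With the old assignment both windows would have jumped to 300, which is exactly
what broke the broadcast link.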
To my surprise I have realized that this code has not been released yet
(I only find it in 5.6-rc1 and later versions) but maybe that is just as
well ;-)
///jon
>
> BR/Tuong
>
> -----Original Message-----
> From: Jon Maloy <jm...@re...>
> Sent: Wednesday, March 18, 2020 1:38 AM
> To: Tuong Lien Tong <tuo...@de...>; 'Jon Maloy' <jon...@er...>; 'Jon Maloy' <ma...@do...>
> Cc: tip...@li...; moh...@er...
> Subject: Re: [tipc-discussion] [net-next 3/3] tipc: introduce variable window congestion control
>
>
>
> On 3/17/20 6:55 AM, Tuong Lien Tong wrote:
>> Hi Jon,
>>
>> For the "variable window congestion control" patch, if I remember correctly,
>> it was intended for unicast links only? Why did you apply it to the broadcast
>> link, a mistake or ...?
> I did it so the code would be the same everywhere. Then, by setting both
> min_win and max_win to the same value BC_LINK_WIN_DEFAULT (==50)
> in the broadcast send link, the window should never change.
>
>> It now causes user messages to be delivered out of order on the receiving
>> side, because on the sending side the broadcast link's window is suddenly
>> increased to 300 (i.e. max_t(u16, l->window / 2, 300)) at a packet
>> retransmission, unexpectedly leaving gaps between the link's 'transmq' &
>> 'backlogq'... Will we fix this by removing it?
> That is clearly a bug that breaks the above stated limitation.
> It should be sufficient to also ensure that l->ssthresh never exceeds
> l->max_win to remedy this.
>
> ///jon
>
>> @@ -1160,7 +1224,6 @@ static int tipc_link_bc_retrans(struct tipc_link *l, struct tipc_link *r,
>>  			continue;
>>  		if (more(msg_seqno(hdr), to))
>>  			break;
>> -
>>  		if (time_before(jiffies, TIPC_SKB_CB(skb)->nxt_retr))
>>  			continue;
>>  		TIPC_SKB_CB(skb)->nxt_retr = TIPC_BC_RETR_LIM;
>> @@ -1173,11 +1236,12 @@ static int tipc_link_bc_retrans(struct tipc_link *l, struct tipc_link *r,
>>  		_skb->priority = TC_PRIO_CONTROL;
>>  		__skb_queue_tail(xmitq, _skb);
>>  		l->stats.retransmitted++;
>> -
>> +		retransmitted++;
>>  		/* Increase actual retrans counter & mark first time */
>>  		if (!TIPC_SKB_CB(skb)->retr_cnt++)
>>  			TIPC_SKB_CB(skb)->retr_stamp = jiffies;
>>  	}
>> +	tipc_link_update_cwin(l, 0, retransmitted); // ???
>>  	return 0;
>>  }
>>
>> +static void tipc_link_update_cwin(struct tipc_link *l, int released,
>> +				   bool retransmitted)
>> +{
>> +	int bklog_len = skb_queue_len(&l->backlogq);
>> +	struct sk_buff_head *txq = &l->transmq;
>> +	int txq_len = skb_queue_len(txq);
>> +	u16 cwin = l->window;
>> +
>> +	/* Enter fast recovery */
>> +	if (unlikely(retransmitted)) {
>> +		l->ssthresh = max_t(u16, l->window / 2, 300);
>> +		l->window = l->ssthresh;
>> +		return;
>> +	}
>>
>> BR/Tuong
>>
>> -----Original Message-----
>> From: Jon Maloy <jon...@er...>
>> Sent: Monday, December 2, 2019 7:33 AM
>> To: Jon Maloy <jon...@er...>; Jon Maloy <ma...@do...>
>> Cc: moh...@er...;
>> par...@gm...; tun...@de...;
>> hoa...@de...; tuo...@de...;
>> gor...@de...; yin...@wi...;
>> tip...@li...
>> Subject: [net-next 3/3] tipc: introduce variable window congestion control
>>
>> We introduce a simple variable window congestion control for links.
>> The algorithm is inspired by the Reno algorithm, covering the 'slow
>> start', 'congestion avoidance', and 'fast recovery' modes.
>>
>> - We introduce hard lower and upper window limits per link, still
>> different and configurable per bearer type.
>>
>> - We introduce a 'slow start threshold' variable, initially set to
>> the maximum window size.
>>
>> - We let a link start at the minimum congestion window, i.e. in slow
>> start mode, and then let it grow rapidly (+1 per received ACK) until
>> it reaches the slow start threshold and enters congestion avoidance
>> mode.
>>
>> - In congestion avoidance mode we increment the congestion window for
>> each window_size number of acked packets, up to a possible maximum
>> equal to the configured maximum window.
>>
>> - For each non-duplicate NACK received, we drop back to fast recovery
>> mode, by setting both the slow start threshold and the congestion
>> window to (current_congestion_window / 2).
>>
>> - If the timeout handler finds that the transmit queue has not moved
>> since the previous timeout, it drops the link back to slow start and
>> forces a probe containing the last sent sequence number to be sent to
>> the peer.
>>
>> In reality, this change only has an effect on the unicast Ethernet
>> transport, as we have seen that there is no room whatsoever for
>> increasing the maximum window size for the UDP bearer.
>> For now, we also choose to keep the limits for the broadcast link
>> unchanged and equal.
>>
>> This algorithm seems to give a 50-100% throughput improvement for
>> messages larger than MTU.
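[For reference, below is a simplified userspace model of the scheme as described
above. It is only a sketch: the names, the window limits and the ack bookkeeping
are made up for illustration and do not match the actual net/tipc/link.c code.]

#include <stdio.h>

struct model_link {
	unsigned int min_win;	/* hard lower window limit (per bearer)    */
	unsigned int max_win;	/* hard upper window limit (per bearer)    */
	unsigned int ssthresh;	/* slow start threshold                    */
	unsigned int window;	/* current congestion window               */
	unsigned int acked;	/* acks counted in congestion avoidance    */
};

static void model_on_ack(struct model_link *l, unsigned int newly_acked)
{
	if (l->window < l->ssthresh) {
		/* Slow start: +1 per received ACK */
		l->window++;
	} else {
		/* Congestion avoidance: +1 per full window of acked packets */
		l->acked += newly_acked;
		if (l->acked >= l->window) {
			l->acked = 0;
			l->window++;
		}
	}
	if (l->window > l->max_win)
		l->window = l->max_win;
}

static void model_on_nack(struct model_link *l)
{
	/* Fast recovery: threshold and window drop to half the window
	 * (clamped to the hard lower limit in this model)
	 */
	l->ssthresh = l->window / 2;
	if (l->ssthresh < l->min_win)
		l->ssthresh = l->min_win;
	l->window = l->ssthresh;
}

static void model_on_stale_timeout(struct model_link *l)
{
	/* Transmit queue has not moved: drop back to slow start */
	l->window = l->min_win;
}

int main(void)
{
	struct model_link l = { .min_win = 50, .max_win = 2000,	/* illustrative limits */
				.ssthresh = 2000, .window = 50 };
	int i;

	for (i = 0; i < 200; i++)	/* slow start */
		model_on_ack(&l, 1);
	printf("after 200 acks: window=%u\n", l.window);	/* 250 */
	model_on_nack(&l);		/* fast recovery */
	printf("after nack:     window=%u ssthresh=%u\n", l.window, l.ssthresh);
	model_on_stale_timeout(&l);	/* stale timeout */
	printf("after timeout:  window=%u\n", l.window);
	return 0;
}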
>>
>> Suggested-by: Xin Long <luc...@gm...>
>> Acked-by: Xin Long <luc...@gm...>
>> Signed-off-by: Jon Maloy <jon...@er...>
>> ---
>> net/tipc/bcast.c | 11 ++--
>> net/tipc/bearer.c | 11 ++--
>> net/tipc/bearer.h | 6 +-
>> net/tipc/eth_media.c | 3 +-
>> net/tipc/ib_media.c | 5 +-
>> net/tipc/link.c | 175 +++++++++++++++++++++++++++++++++++----------------
>> net/tipc/link.h | 9 +--
>> net/tipc/node.c | 16 ++---
>> net/tipc/udp_media.c | 3 +-
>> 9 files changed, 160 insertions(+), 79 deletions(-)
>>
>> _______________________________________________
>> tipc-discussion mailing list
>> tip...@li...
>> https://lists.sourceforge.net/lists/listinfo/tipc-discussion
>>
>