From: prakash b. <ps1...@gm...> - 2024-02-19 13:41:20
Hi John/Xin/Tung,

I hope you guys are doing well. I need some suggestions about using a TIPC socket of the "SOCK_SEQPACKET" type over a UDP bearer. We have a use case where the incoming traffic is intermittently very high for short periods.

1. TIPC creates one UDP socket per bearer, so multiple TIPC sockets are effectively multiplexed onto a single UDP socket. Can this hurt performance when the incoming traffic is high?

2. Can we increase the buffer size of the TIPC UDP socket alone, without changing the system-level default receive buffer size? I would like to increase the socket buffer size to reduce packet drops, but I don't want to change the default value at the system level.

Please advise what is recommended in this case.

Kernel version: 4.19.81
Socket type: SOCK_SEQPACKET

Would appreciate any help.

Thanks,
Prakash
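For context on question 2: the system-level defaults in play are the net.core.rmem_default and net.core.rmem_max sysctls. TIPC's bearer UDP socket is created inside the kernel, so an application cannot reach it with setsockopt() directly; the sketch below only illustrates the per-socket versus system-wide distinction on an ordinary user-space UDP socket, and the 4 MiB figure is an arbitrary example value.

/* Minimal sketch (not TIPC-specific): enlarging the receive buffer of a
 * single UDP socket with SO_RCVBUF instead of raising the system-wide
 * default. The kernel caps the request at net.core.rmem_max unless
 * SO_RCVBUFFORCE is used by a process with CAP_NET_ADMIN. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        int want = 4 * 1024 * 1024;  /* example: request a 4 MiB receive buffer */
        int got = 0;
        socklen_t len = sizeof(got);

        if (sock < 0) {
                perror("socket");
                return 1;
        }
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want)) < 0)
                perror("setsockopt(SO_RCVBUF)");

        /* Read the value back: the kernel reports the (doubled) effective size. */
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &got, &len);
        printf("effective receive buffer: %d bytes\n", got);
        return 0;
}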
From: Tung Q. N. <tun...@de...> - 2024-02-16 06:13:10
>As part of the shutdown, we terminate all the processes using SIGKILL. We expect the TIPC sockets to be closed
>automatically by the kernel after some time.
>
>But sometimes 'tipc socket list' is still showing a few sockets as alive.
>
>Now when we restart the application, the system has two sockets with the same TIPC address.
>
Did you check Gary Duzan's comment?

"If the program is forking off processes, perhaps the child processes aren't closing the socket file descriptor. Using fcntl() to set FD_CLOEXEC on the descriptor in the parent may help."

Another thing is that you need to make sure your processes are actually killed (not hung or turned into zombies).

>
>Shouldn't the TIPC sockets have been closed automatically by the kernel once the application is killed?
>
They are closed if the application is actually killed.
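For reference, a minimal sketch of the fcntl() call Gary Duzan describes (the helper name is illustrative). Note that FD_CLOEXEC only takes effect across execve(); a forked child that never execs still shares the descriptor and must close() it itself, and passing SOCK_CLOEXEC to socket() achieves the same thing atomically at creation time.

/* Mark a TIPC socket close-on-exec so that children spawned via
 * fork()+exec() do not inherit it and keep its address bound after
 * the parent dies. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <linux/tipc.h>

static int set_cloexec(int fd)  /* illustrative helper */
{
        int flags = fcntl(fd, F_GETFD);

        if (flags < 0)
                return -1;
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

int main(void)
{
        int sock = socket(AF_TIPC, SOCK_SEQPACKET, 0);

        if (sock < 0) {
                perror("socket");
                return 1;
        }
        if (set_cloexec(sock) < 0)
                perror("fcntl(FD_CLOEXEC)");
        /* ... bind()/connect() and normal use as before ... */
        return 0;
}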
From: prakash b. <ps1...@gm...> - 2024-02-15 14:53:08
Hi John/Xin/Tung,

We are facing an issue while closing the TIPC server socket. We are running multiple applications which in turn create TIPC sockets. As part of the shutdown, we terminate all the processes using SIGKILL. We expect the TIPC sockets to be closed automatically by the kernel after some time.

But sometimes 'tipc socket list' is still showing a few sockets as alive. Now when we restart the application, the system has two sockets with the same TIPC address.

Shouldn't the TIPC sockets have been closed automatically by the kernel once the application is killed?

Kernel version: 4.19.81
Socket type: SOCK_SEQPACKET

Would appreciate any help.

Thanks,
Prakash
From: Tung Q. N. <tun...@de...> - 2024-02-04 15:34:30
> net/tipc/Makefile | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/net/tipc/Makefile b/net/tipc/Makefile
>index ee49a9f1dd4f..18e1636aa036 100644
>--- a/net/tipc/Makefile
>+++ b/net/tipc/Makefile
>@@ -18,5 +18,5 @@ tipc-$(CONFIG_TIPC_MEDIA_IB) += ib_media.o
> tipc-$(CONFIG_SYSCTL) += sysctl.o
> tipc-$(CONFIG_TIPC_CRYPTO) += crypto.o
>
>-
>-obj-$(CONFIG_TIPC_DIAG) += diag.o
>+obj-$(CONFIG_TIPC_DIAG) += tipc_diag.o
>+tipc_diag-y += diag.o
>--
>2.39.1
>

Reviewed-by: Tung Nguyen <tun...@de...>
From: Xin L. <luc...@gm...> - 2024-02-02 20:11:17
It is not appropriate for TIPC to use "diag" as its diag module name while the other protocols use "$(protoname)_diag", like tcp_diag, udp_diag, sctp_diag etc. So this patch renames diag.ko to tipc_diag.ko in tipc's Makefile.

Signed-off-by: Xin Long <luc...@gm...>
---
 net/tipc/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/tipc/Makefile b/net/tipc/Makefile
index ee49a9f1dd4f..18e1636aa036 100644
--- a/net/tipc/Makefile
+++ b/net/tipc/Makefile
@@ -18,5 +18,5 @@ tipc-$(CONFIG_TIPC_MEDIA_IB) += ib_media.o
 tipc-$(CONFIG_SYSCTL) += sysctl.o
 tipc-$(CONFIG_TIPC_CRYPTO) += crypto.o
 
-
-obj-$(CONFIG_TIPC_DIAG) += diag.o
+obj-$(CONFIG_TIPC_DIAG) += tipc_diag.o
+tipc_diag-y += diag.o
-- 
2.39.1
From: Tung Q. N. <tun...@de...> - 2023-11-23 07:47:56
> net/tipc/node.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
>diff --git a/net/tipc/node.c b/net/tipc/node.c
>index 3105abe97bb9..2a036b8a7da3 100644
>--- a/net/tipc/node.c
>+++ b/net/tipc/node.c
>@@ -2154,14 +2154,15 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
> 	/* Receive packet directly if conditions permit */
> 	tipc_node_read_lock(n);
> 	if (likely((n->state == SELF_UP_PEER_UP) && (usr != TUNNEL_PROTOCOL))) {
>+		tipc_node_read_unlock(n);
> 		spin_lock_bh(&le->lock);
> 		if (le->link) {
> 			rc = tipc_link_rcv(le->link, skb, &xmitq);
> 			skb = NULL;
> 		}
> 		spin_unlock_bh(&le->lock);
>-	}
>-	tipc_node_read_unlock(n);
>+	} else
>+		tipc_node_read_unlock(n);
> 
> 	/* Check/update node state before receiving */
> 	if (unlikely(skb)) {
>@@ -2169,12 +2170,13 @@
> 			goto out_node_put;
> 		tipc_node_write_lock(n);
> 		if (tipc_node_check_state(n, skb, bearer_id, &xmitq)) {
>+			tipc_node_write_unlock(n);
> 			if (le->link) {
> 				rc = tipc_link_rcv(le->link, skb, &xmitq);
> 				skb = NULL;
> 			}
>-		}
>-		tipc_node_write_unlock(n);
>+		} else
>+			tipc_node_write_unlock(n);
> 	}
> 
> 	if (unlikely(rc & TIPC_LINK_UP_EVT))
>--
>2.15.2
>

This patch is wrong. le->link and the link status must be protected by the node lock. See what happens if tipc_node_timeout() is called and the link goes down:

tipc_node_timeout()
  tipc_node_link_down()
  {
	struct tipc_link *l = le->link;
	...
	if (delete) {
		kfree(l);
		le->link = NULL;
	}
	...
  }
From: Tung Q. N. <tun...@de...> - 2023-11-23 07:08:51
>>This patch is wrong. le->link and the link status must be protected by the node lock. See what happens if tipc_node_timeout() is called and the link goes down:
>>
>>tipc_node_timeout()
>>  tipc_node_link_down()
>>  {
>>	struct tipc_link *l = le->link;
>>	...
>>	if (delete) {
>>		kfree(l);
>>		le->link = NULL;
>>	}
>>	...
>>  }
>
>Happy to see your reply. But why? 'delete' is false from tipc_node_timeout(). Refer to:
>https://elixir.bootlin.com/linux/v6.7-rc2/source/net/tipc/node.c#L844

I should have explained it clearly:

1/ The link status must be protected:

tipc_node_timeout()
  tipc_node_link_down()
  {
	struct tipc_link *l = le->link;
	...
	__tipc_node_link_down(); <-- link status is referred to.
	...
	if (delete) {
		kfree(l);
		le->link = NULL;
	}
	...
  }

__tipc_node_link_down()
{
	...
	if (!l || tipc_link_is_reset(l)) <-- reads link status
	...
	tipc_link_reset(l); <-- this function resets everything related to the link.
}

2/ le->link must be protected:

bearer_disable()
{
	...
	tipc_node_delete_links(net, bearer_id); <-- this deletes all links.
	...
}

tipc_node_delete_links()
{
	...
	tipc_node_link_down(n, bearer_id, true);
	...
}
From: Jon M. <jm...@re...> - 2023-10-02 20:30:50
On 2023-09-27 14:14, Chengfeng Ye wrote:
> It seems that tipc_crypto_key_revoke() could be invoked by the
> workqueue tipc_crypto_work_rx() under process context and by a
> timer/rx callback under softirq context, thus the lock acquisition
> on &tx->lock seems better to use spin_lock_bh() to prevent a possible
> deadlock.
>
> This flaw was found by an experimental static analysis tool I am
> developing for irq-related deadlock.
>
> tipc_crypto_work_rx() <workqueue>
> --> tipc_crypto_key_distr()
> --> tipc_bcast_xmit()
> --> tipc_bcbase_xmit()
> --> tipc_bearer_bc_xmit()
> --> tipc_crypto_xmit()
> --> tipc_ehdr_build()
> --> tipc_crypto_key_revoke()
> --> spin_lock(&tx->lock)
> <timer interrupt>
> --> tipc_disc_timeout()
> --> tipc_bearer_xmit_skb()
> --> tipc_crypto_xmit()
> --> tipc_ehdr_build()
> --> tipc_crypto_key_revoke()
> --> spin_lock(&tx->lock) <deadlock here>
>
> Signed-off-by: Chengfeng Ye <dg5...@gm...>
> ---
>  net/tipc/crypto.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
> index 302fd749c424..43c3f1c971b8 100644
> --- a/net/tipc/crypto.c
> +++ b/net/tipc/crypto.c
> @@ -1441,14 +1441,14 @@ static int tipc_crypto_key_revoke(struct net *net, u8 tx_key)
>  	struct tipc_crypto *tx = tipc_net(net)->crypto_tx;
>  	struct tipc_key key;
>  
> -	spin_lock(&tx->lock);
> +	spin_lock_bh(&tx->lock);
>  	key = tx->key;
>  	WARN_ON(!key.active || tx_key != key.active);
>  
>  	/* Free the active key */
>  	tipc_crypto_key_set_state(tx, key.passive, 0, key.pending);
>  	tipc_crypto_key_detach(tx->aead[key.active], &tx->lock);
> -	spin_unlock(&tx->lock);
> +	spin_unlock_bh(&tx->lock);
>  
>  	pr_warn("%s: key is revoked\n", tx->name);
>  	return -EKEYREVOKED;

Acked-by: Jon Maloy <jm...@re...>
From: Jon M. <jm...@re...> - 2023-09-26 11:19:35
On 2023-09-24 02:03, Shigeru Yoshida wrote:
> syzbot reported the following uninit-value access issue:
>
> =====================================================
> BUG: KMSAN: uninit-value in strlen lib/string.c:418 [inline]
> BUG: KMSAN: uninit-value in strstr+0xb8/0x2f0 lib/string.c:756
>  strlen lib/string.c:418 [inline]
>  strstr+0xb8/0x2f0 lib/string.c:756
>  tipc_nl_node_reset_link_stats+0x3ea/0xb50 net/tipc/node.c:2595
>  genl_family_rcv_msg_doit net/netlink/genetlink.c:971 [inline]
>  genl_family_rcv_msg net/netlink/genetlink.c:1051 [inline]
>  genl_rcv_msg+0x11ec/0x1290 net/netlink/genetlink.c:1066
>  netlink_rcv_skb+0x371/0x650 net/netlink/af_netlink.c:2545
>  genl_rcv+0x40/0x60 net/netlink/genetlink.c:1075
>  netlink_unicast_kernel net/netlink/af_netlink.c:1342 [inline]
>  netlink_unicast+0xf47/0x1250 net/netlink/af_netlink.c:1368
>  netlink_sendmsg+0x1238/0x13d0 net/netlink/af_netlink.c:1910
>  sock_sendmsg_nosec net/socket.c:730 [inline]
>  sock_sendmsg net/socket.c:753 [inline]
>  ____sys_sendmsg+0x9c2/0xd60 net/socket.c:2541
>  ___sys_sendmsg+0x28d/0x3c0 net/socket.c:2595
>  __sys_sendmsg net/socket.c:2624 [inline]
>  __do_sys_sendmsg net/socket.c:2633 [inline]
>  __se_sys_sendmsg net/socket.c:2631 [inline]
>  __x64_sys_sendmsg+0x307/0x490 net/socket.c:2631
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> Uninit was created at:
>  slab_post_alloc_hook+0x12f/0xb70 mm/slab.h:767
>  slab_alloc_node mm/slub.c:3478 [inline]
>  kmem_cache_alloc_node+0x577/0xa80 mm/slub.c:3523
>  kmalloc_reserve+0x13d/0x4a0 net/core/skbuff.c:559
>  __alloc_skb+0x318/0x740 net/core/skbuff.c:650
>  alloc_skb include/linux/skbuff.h:1286 [inline]
>  netlink_alloc_large_skb net/netlink/af_netlink.c:1214 [inline]
>  netlink_sendmsg+0xb34/0x13d0 net/netlink/af_netlink.c:1885
>  sock_sendmsg_nosec net/socket.c:730 [inline]
>  sock_sendmsg net/socket.c:753 [inline]
>  ____sys_sendmsg+0x9c2/0xd60 net/socket.c:2541
>  ___sys_sendmsg+0x28d/0x3c0 net/socket.c:2595
>  __sys_sendmsg net/socket.c:2624 [inline]
>  __do_sys_sendmsg net/socket.c:2633 [inline]
>  __se_sys_sendmsg net/socket.c:2631 [inline]
>  __x64_sys_sendmsg+0x307/0x490 net/socket.c:2631
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> Link names must be null-terminated strings. If a link name which is not
> null-terminated is passed through netlink, strstr() and similar functions
> can cause buffer overrun. This causes the above issue.
>
> This patch fixes this issue by returning -EINVAL if a non-null-terminated
> link name is passed.
>
> Fixes: ae36342b50a9 ("tipc: add link stat reset to new netlink api")
> Reported-and-tested-by: syz...@sy...
> Closes: https://syzkaller.appspot.com/bug?extid=5138ca807af9d2b42574
> Signed-off-by: Shigeru Yoshida <syo...@re...>
> ---
>  net/tipc/node.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/net/tipc/node.c b/net/tipc/node.c
> index 3105abe97bb9..f167bdafc034 100644
> --- a/net/tipc/node.c
> +++ b/net/tipc/node.c
> @@ -2586,6 +2586,10 @@ int tipc_nl_node_reset_link_stats(struct sk_buff *skb, struct genl_info *info)
>  
>  	link_name = nla_data(attrs[TIPC_NLA_LINK_NAME]);
>  
> +	if (link_name[strnlen(link_name,
> +		      nla_len(attrs[TIPC_NLA_LINK_NAME]))] != '\0')
> +		return -EINVAL;
> +
>  	err = -EINVAL;
>  	if (!strcmp(link_name, tipc_bclink_name)) {
>  		err = tipc_bclink_reset_stats(net, tipc_bc_sndlink(net));

Acked-by: Jon Maloy <jm...@re...>
From: Tung Q. N. <tun...@de...> - 2023-06-26 09:45:13
> I think there was also added a command to force the name table updates
> back to the unicast channel, but I don't remember
> from the top of my head how it was done. Maybe Tung can fill in here?

It is "tipc link set broadcast REPLICAST".
From: Duzan, G. D <Gar...@fi...> - 2023-06-23 20:58:02
I see in your sample code that you are using recvfrom(), but with a NULL address. If you instead pass a pointer to a struct sockaddr_tipc and an address_len of sizeof(struct sockaddr_tipc), a successful recvfrom() will set it to the address of the sender, which you can turn around and pass to sendto() in order to reply. I'm using this in my own code, and so far it has worked fine.

Gary Duzan
IT Architect Senior, GT.M Core Team

________________________________
From: Rune Torgersen <ru...@in...>
Sent: Friday, June 23, 2023 4:07 PM
To: Jon Maloy <jm...@re...>; tip...@li...
Subject: Re: [tipc-discussion] TIPC out-of-order publish message

> -----Original Message-----
> From: Jon Maloy <jm...@re...>
> On 2023-06-22 09:30, Rune Torgersen wrote:
> > I can easily make it happen with known service addresses too.
> >
> > We have short-lived processes that do a query:
> >
> > Open 226 1
> > Send query to 226 2.
> > 226 2 sends response back to 226 1. - Message gets dropped.
>
> Is there any reason why you don't use the received message's original
> socket address instead of its service address?

Basically because we did not know you could do that. Will have to look into that.

Is there any example code for that?
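A sketch of what Gary describes, adapted from the server loop in Rune's test program quoted later in the thread (the function name is illustrative): capture the sender's socket address with recvfrom() and reply to that address directly, instead of replying via a service name that has to be looked up in the name table. Replying to the port identity avoids the race with the client's publication.

#include <stdio.h>
#include <sys/socket.h>
#include <linux/tipc.h>

static void serve(int sock)  /* illustrative helper */
{
        char buf[65535];

        for (;;) {
                struct sockaddr_tipc client;
                socklen_t alen = sizeof(client);
                /* A successful recvfrom() fills in the sender's address. */
                int ret = recvfrom(sock, buf, sizeof(buf), 0,
                                   (struct sockaddr *)&client, &alen);

                if (ret < 0) {
                        perror("recvfrom");
                        continue;
                }
                /* Reply straight to the originating socket. */
                if (sendto(sock, buf, ret, 0,
                           (struct sockaddr *)&client, sizeof(client)) < 0)
                        perror("sendto");
        }
}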
From: Rune T. <ru...@in...> - 2023-06-23 20:32:37
Thank you. Learning something new every day (used TIPC for close to 20 years, and never realized you could do that...)

From: Duzan, Gary D <Gar...@fi...>
Sent: Friday, June 23, 2023 3:24 PM
To: Rune Torgersen <ru...@in...>
Cc: Jon Maloy <jm...@re...>; tip...@li...
Subject: Re: [tipc-discussion] TIPC out-of-order publish message

I see in your sample code that you are using recvfrom(), but with a NULL address. If you instead pass a pointer to a struct sockaddr_tipc and an address_len of sizeof(struct sockaddr_tipc), a successful recvfrom() will set it to the address of the sender, which you can turn around and pass to sendto() in order to reply. I'm using this in my own code, and so far it has worked fine.

Gary Duzan
IT Architect Senior, GT.M Core Team
From: Rune T. <ru...@in...> - 2023-06-23 20:08:03
> -----Original Message-----
> From: Jon Maloy <jm...@re...>
> On 2023-06-22 09:30, Rune Torgersen wrote:
> > I can easily make it happen with known service addresses too.
> >
> > We have short-lived processes that do a query:
> >
> > Open 226 1
> > Send query to 226 2.
> > 226 2 sends response back to 226 1. - Message gets dropped.
>
> Is there any reason why you don't use the received message's original
> socket address instead of its service address?

Basically because we did not know you could do that. Will have to look into that.

Is there any example code for that?
From: Jon M. <jm...@re...> - 2023-06-23 19:41:29
On 2023-06-22 09:30, Rune Torgersen wrote:
> I can easily make it happen with known service addresses too.
>
> We have short-lived processes that do a query:
>
> Open 226 1
> Send query to 226 2.
> 226 2 sends response back to 226 1. - Message gets dropped.

It is obviously a timing issue: an unintended side effect of moving the name table updates over to the multicast channel.

Is there any reason why you don't use the received message's original socket address instead of its service address? If you do that, at least for the first response message, you should be good.

I think there was also added a command to force the name table updates back to the unicast channel, but I don't remember off the top of my head how it was done. Maybe Tung can fill in here?

///jon

> And again, from kernels 2.6.32 (TIPC 1.7.4) to kernel 4.15, this never failed. Now I can make it fail every 2-5 queries.
>
> Adding a call to the topology server to check that a socket you _know_ is open (because you just got a message from it) before sending a message back adds a lot of unnecessary overhead.
>
> From: Tung Quang Nguyen <tun...@de...>
> Sent: Thursday, June 22, 2023 4:39 AM
> To: Rune Torgersen <ru...@in...>; tip...@li...
> Subject: Re: TIPC out-of-order publish message
>
>> Also, since the publish and unicast is not guaranteed in order, should not reception of a unicast data message before a publish update the publish table on the receiving end, so you can expect to reply back immediately?
> No, receiving of user data messages does not update the naming table. It is done via the protocol's internal published messages.
>
>> Take UDP as the other datagram protocol. You are expected to be able to send back a reply to the sending socket immediately upon reception of a message, as by receiving it you know the far end is up.
> The same thing for TIPC if you send back your messages using a known service.
From: Rune T. <ru...@in...> - 2023-06-22 13:30:57
I can easily make it happen with known service addresses too.

We have short-lived processes that do a query:

Open 226 1
Send query to 226 2.
226 2 sends response back to 226 1. - Message gets dropped.

And again, from kernels 2.6.32 (TIPC 1.7.4) to kernel 4.15, this never failed. Now I can make it fail every 2-5 queries.

Adding a call to the topology server to check that a socket you _know_ is open (because you just got a message from it) before sending a message back adds a lot of unnecessary overhead.

From: Tung Quang Nguyen <tun...@de...>
Sent: Thursday, June 22, 2023 4:39 AM
To: Rune Torgersen <ru...@in...>; tip...@li...
Subject: Re: TIPC out-of-order publish message

> Also, since the publish and unicast is not guaranteed in order, should not reception of a unicast data message before a publish update the publish table on the receiving end, so you can expect to reply back immediately?
No, receiving of user data messages does not update the naming table. It is done via the protocol's internal published messages.

> Take UDP as the other datagram protocol. You are expected to be able to send back a reply to the sending socket immediately upon reception of a message, as by receiving it you know the far end is up.
The same thing for TIPC if you send back your messages using a known service.
From: Tung Q. N. <tun...@de...> - 2023-06-22 09:53:24
> Also, since the publish and unicast is not guaranteed in order, should not reception of a unicast data message before a publish update the publish table on the receiving end, so you can expect to reply back immediately?
No, receiving of user data messages does not update the naming table. It is done via the protocol's internal published messages.

> Take UDP as the other datagram protocol. You are expected to be able to send back a reply to the sending socket immediately upon reception of a message, as by receiving it you know the far end is up.
The same thing for TIPC if you send back your messages using a known service.
From: Tung Q. N. <tun...@de...> - 2023-06-22 09:33:19
> if we send a message from a newly opened tipc socket to a different node, we cannot send back a reply immediately, as the tipc stack will silently throw away the message because the publish has not yet been received.
You can always send the message back if communication is being performed on the known service (in your example: type 226, instance 2). If you send a message back using a new service (type 226, instance addr) which is not yet known by the sender, of course TIPC will drop this message.

> We have a workaround right now by querying the topology server before each send, but that slows down everything by several orders of magnitude.
This is the right thing to do.

From: Tung Quang Nguyen <tun...@de...>
Sent: Wednesday, June 21, 2023 1:41 AM
To: Rune Torgersen <ru...@in...>; tip...@li...
Subject: Re: TIPC out-of-order publish message

> if (-1 == bind(sock, (struct sockaddr*)&listen_addr, sizeof(struct sockaddr_tipc)))
>     perror("Error opening TIPC socket");
> *(int*)buf = addr;
> int rc = sendto(sock, buf, sendsize, 0, (struct sockaddr*)&to_addr, sizeof(to_addr));
You are not recommended to design your application this way. Published messages are TIPC internal messages. There is no guarantee that they and user data messages are sent/received in correct order. Especially, since kernel 5.10, published messages are sent on the broadcast link whereas user data messages are sent on the unicast link. These links have different send queues, sequence numbering engines etc. So, what you showed in the tcpdump is expected behavior.
From: Rune T. <ru...@in...> - 2023-06-21 13:48:25
Also, since the publish and unicast are not guaranteed in order, should not reception of a unicast data message before a publish update the publish table on the receiving end, so you can expect to reply back immediately?

Take UDP as the other datagram protocol. You are expected to be able to send back a reply to the sending socket immediately upon reception of a message, as by receiving it you know the far end is up.

-----Original Message-----
From: Rune Torgersen <ru...@in...>
Sent: Wednesday, June 21, 2023 8:18 AM
To: Tung Quang Nguyen <tun...@de...>; tip...@li...
Subject: Re: [tipc-discussion] TIPC out-of-order publish message

Example is an extremely pared-down version of the problem. What it means in reality for us is that if we send a message from a newly opened TIPC socket to a different node, we cannot send back a reply immediately, as the TIPC stack will silently throw away the message because the publish has not yet been received.

Problem is, this is in use in applications we've been selling for years (originally released under Ubuntu 16), and it has always worked correctly until now. We've used TIPC back to 2.6 kernels and never had this issue before. It is basically not feasible to rewire it all.

We have a workaround right now by querying the topology server before each send, but that slows down everything by several orders of magnitude.

From: Tung Quang Nguyen <tun...@de...>
Sent: Wednesday, June 21, 2023 1:41 AM
To: Rune Torgersen <ru...@in...>; tip...@li...
Subject: Re: TIPC out-of-order publish message

> if (-1 == bind(sock, (struct sockaddr*)&listen_addr, sizeof(struct sockaddr_tipc)))
>     perror("Error opening TIPC socket");
> *(int*)buf = addr;
> int rc = sendto(sock, buf, sendsize, 0, (struct sockaddr*)&to_addr, sizeof(to_addr));
You are not recommended to design your application this way. Published messages are TIPC internal messages. There is no guarantee that they and user data messages are sent/received in correct order. Especially, since kernel 5.10, published messages are sent on the broadcast link whereas user data messages are sent on the unicast link. These links have different send queues, sequence numbering engines etc. So, what you showed in the tcpdump is expected behavior.
From: Rune T. <ru...@in...> - 2023-06-21 13:18:42
Example is an extremely pared-down version of the problem. What it means in reality for us is that if we send a message from a newly opened TIPC socket to a different node, we cannot send back a reply immediately, as the TIPC stack will silently throw away the message because the publish has not yet been received.

Problem is, this is in use in applications we've been selling for years (originally released under Ubuntu 16), and it has always worked correctly until now. We've used TIPC back to 2.6 kernels and never had this issue before. It is basically not feasible to rewire it all.

We have a workaround right now by querying the topology server before each send, but that slows down everything by several orders of magnitude.

From: Tung Quang Nguyen <tun...@de...>
Sent: Wednesday, June 21, 2023 1:41 AM
To: Rune Torgersen <ru...@in...>; tip...@li...
Subject: Re: TIPC out-of-order publish message

> if (-1 == bind(sock, (struct sockaddr*)&listen_addr, sizeof(struct sockaddr_tipc)))
>     perror("Error opening TIPC socket");
> *(int*)buf = addr;
> int rc = sendto(sock, buf, sendsize, 0, (struct sockaddr*)&to_addr, sizeof(to_addr));
You are not recommended to design your application this way. Published messages are TIPC internal messages. There is no guarantee that they and user data messages are sent/received in correct order. Especially, since kernel 5.10, published messages are sent on the broadcast link whereas user data messages are sent on the unicast link. These links have different send queues, sequence numbering engines etc. So, what you showed in the tcpdump is expected behavior.
From: Tung Q. N. <tun...@de...> - 2023-06-21 08:14:34
> if (-1 == bind(sock, (struct sockaddr*)&listen_addr, sizeof(struct sockaddr_tipc)))
>     perror("Error opening TIPC socket");
> *(int*)buf = addr;
> int rc = sendto(sock, buf, sendsize, 0, (struct sockaddr*)&to_addr, sizeof(to_addr));
You are not recommended to design your application this way. Published messages are TIPC internal messages. There is no guarantee that they and user data messages are sent/received in correct order. Especially, since kernel 5.10, published messages are sent on the broadcast link whereas user data messages are sent on the unicast link. These links have different send queues, sequence numbering engines etc. So, what you showed in the tcpdump is expected behavior.
From: Rune T. <ru...@in...> - 2023-06-20 22:07:13
Hi.

We recently upgraded an appliance of ours from Ubuntu 18.04 to 22.04 and are now seeing intermittent issues with RDM messages (kernel 5.15.0-75). This issue was never seen on Ubuntu 18.04 and older (Ubuntu kernel 4.15.0).

I have a minimal pair of test programs that reproduce the issue. Basically we have a client side that opens a semi-random TIPC instance port and sends a message to a server on a known address. Part of the message is the client instance address, so the server can send back a reply. Occasionally the server reply is lost.

We've tracked it down to the fact that occasionally the publish message trails the actual message on the server side. When that happens, the server TIPC stack apparently drops the reply. This ONLY happens when client and server are on two different pieces of hardware (fairly beefy Supermicro servers with 2 Xeon 5110 or Silver).

Here is a tcpdump of the sequence where it fails:

1  2023-06-20 16:23:09.761711  1.1.1  0.0.0  TIPC  74   Name Dist Publication type:226 inst:19712
2  2023-06-20 16:23:09.761758  1.1.1  1.1.2  TIPC  105  Payld:Low NamedMsg type:226 inst:2
3  2023-06-20 16:23:09.761770  1.1.2  1.1.1  TIPC  105  Payld:Low NamedMsg type:226 inst:19712
4  2023-06-20 16:23:09.761906  1.1.1  0.0.0  TIPC  74   Name Dist Withdrawal type:226 inst:19712
5  2023-06-20 16:23:09.761906  1.1.1  0.0.0  TIPC  74   Name Dist Publication type:226 inst:19968
6  2023-06-20 16:23:09.761954  1.1.1  1.1.2  TIPC  105  Payld:Low NamedMsg type:226 inst:2
7  2023-06-20 16:23:09.761965  1.1.2  1.1.1  TIPC  105  Payld:Low NamedMsg type:226 inst:19968
8  2023-06-20 16:23:09.762054  1.1.1  0.0.0  TIPC  74   Name Dist Withdrawal type:226 inst:19968
9  2023-06-20 16:23:09.762054  1.1.1  0.0.0  TIPC  74   Name Dist Publication type:226 inst:20224
10 2023-06-20 16:23:09.762101  1.1.1  1.1.2  TIPC  105  Payld:Low NamedMsg type:226 inst:2
11 2023-06-20 16:23:09.762112  1.1.2  1.1.1  TIPC  105  Payld:Low NamedMsg type:226 inst:20224
12 2023-06-20 16:23:09.762250  1.1.1  0.0.0  TIPC  74   Name Dist Withdrawal type:226 inst:20224
13 2023-06-20 16:23:09.762250  1.1.1  0.0.0  TIPC  74   Name Dist Publication type:226 inst:20480
14 2023-06-20 16:23:09.762250  1.1.1  1.1.2  TIPC  105  Payld:Low NamedMsg type:226 inst:2
15 2023-06-20 16:23:09.762254  1.1.2  1.1.1  TIPC  58   Link State State
16 2023-06-20 16:23:09.762267  1.1.2  1.1.1  TIPC  105  Payld:Low NamedMsg type:226 inst:20480
17 2023-06-20 16:23:09.762396  1.1.1  0.0.0  TIPC  74   Name Dist Withdrawal type:226 inst:20480
18 2023-06-20 16:23:09.762396  1.1.1  0.0.0  TIPC  74   Name Dist Publication type:226 inst:20736
19 2023-06-20 16:23:09.762397  1.1.1  1.1.2  TIPC  105  Payld:Low NamedMsg type:226 inst:2
20 2023-06-20 16:23:09.762410  1.1.2  1.1.1  TIPC  105  Payld:Low NamedMsg type:226 inst:20736
21 2023-06-20 16:23:09.762592  1.1.1  1.1.2  TIPC  105  Payld:Low NamedMsg type:226 inst:2
22 2023-06-20 16:23:09.762656  1.1.1  0.0.0  TIPC  74   Name Dist Withdrawal type:226 inst:20736
23 2023-06-20 16:23:09.762656  1.1.1  0.0.0  TIPC  74   Name Dist Publication type:226 inst:20992

Packet 23 is the publish, while 21 is the payload. I would have assumed (and I think the older TIPC driver did so) that a datagram received from a source that is not yet published would update the name table too?

The test programs will usually reproduce the problem within 50-100 iterations.

Here is the client side that will reproduce it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/tipc.h>

int main(int argc, char *argv[])
{
	int sendsize = 51;
	char buf[65535];
	int c = 0;
	struct sockaddr_tipc to_addr;

	memset(buf, 0, sendsize);
	to_addr.family = AF_TIPC;
	to_addr.scope = 0;
	to_addr.addrtype = TIPC_ADDR_NAME;
	to_addr.addr.name.name.type = 226;
	to_addr.addr.name.name.instance = 2;
	to_addr.addr.name.domain = 0;

	for (int i = 0; i < 1000; i++) {
		int addr = (i & 0xff) << 8;
		int sock = socket(AF_TIPC, SOCK_RDM | SOCK_CLOEXEC, 0);

		if (sock == -1)
			perror("opening socket");

		struct sockaddr_tipc listen_addr;
		listen_addr.family = AF_TIPC;
		listen_addr.addrtype = TIPC_ADDR_NAMESEQ;
		listen_addr.addr.nameseq.type = 226;
		listen_addr.addr.nameseq.lower = addr;
		listen_addr.addr.nameseq.upper = addr;
		listen_addr.scope = TIPC_CLUSTER_SCOPE;
		if (-1 == bind(sock, (struct sockaddr *)&listen_addr, sizeof(struct sockaddr_tipc)))
			perror("Error opening TIPC socket");

		*(int *)buf = addr;
		int rc = sendto(sock, buf, sendsize, 0, (struct sockaddr *)&to_addr, sizeof(to_addr));
		printf("tipc send rc = %d\n", rc);
		if (rc < 0)
			perror("send err");

		rc = recvfrom(sock, buf, 65535, 0, NULL, 0);
		c++;
		if (rc < 0)
			perror("recv err");
		printf("Received %d\n", c);
		close(sock);
	}
}

Here is the server side:

#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <linux/tipc.h>

int main(int argc, char *argv[])
{
	int addr = 2;
	int sock = socket(AF_TIPC, SOCK_RDM | SOCK_CLOEXEC, 0);

	if (sock == -1)
		perror("opening socket");

	struct sockaddr_tipc listen_addr;
	listen_addr.family = AF_TIPC;
	listen_addr.addrtype = TIPC_ADDR_NAMESEQ;
	listen_addr.addr.nameseq.type = 226;
	listen_addr.addr.nameseq.lower = 2;
	listen_addr.addr.nameseq.upper = 2;
	listen_addr.scope = TIPC_CLUSTER_SCOPE;
	if (-1 == bind(sock, (struct sockaddr *)&listen_addr, sizeof(struct sockaddr_tipc)))
		perror("Error opening TIPC socket");

	char buf[65535];
	int c = 0;

	while (1) {
		int ret = recvfrom(sock, buf, 65535, 0, NULL, 0);

		c++;
		if (ret > 0) {
			printf("Received %d\n", c);
			/* get return instance */
			addr = *(int *)buf;
			struct sockaddr_tipc to_addr;
			to_addr.family = AF_TIPC;
			to_addr.scope = 0;
			to_addr.addrtype = TIPC_ADDR_NAME;
			to_addr.addr.name.name.type = 226;
			to_addr.addr.name.name.instance = addr;
			to_addr.addr.name.domain = 0;
			sendto(sock, buf, ret, 0, (struct sockaddr *)&to_addr, sizeof(to_addr));
		}
	}
}
From: Tung Q. N. <tun...@de...> - 2023-06-15 02:45:15
>Subject: [PATCH v2] net: tipc: resize nlattr array to correct size
>
>According to nla_parse_nested_deprecated(), the tb[] is supposed to be the
>destination array with maxtype+1 elements. In the current
>tipc_nl_media_get() and __tipc_nl_media_set(), a larger array is used,
>which is unnecessary. This patch resizes them to the proper size.
>
>Fixes: 1e55417d8fc6 ("tipc: add media set to new netlink api")
>Fixes: 46f15c6794fb ("tipc: add media get/dump to new netlink api")
>Signed-off-by: Lin Ma <li...@zj...>
>---
>V1 -> V2: add net in title, also add Fixes tag

Reviewed-by: Tung Nguyen <tun...@de...>

> net/tipc/bearer.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
>index 53881406e200..cdcd2731860b 100644
>--- a/net/tipc/bearer.c
>+++ b/net/tipc/bearer.c
>@@ -1258,7 +1258,7 @@ int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info)
> 	struct tipc_nl_msg msg;
> 	struct tipc_media *media;
> 	struct sk_buff *rep;
>-	struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
>+	struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
> 
> 	if (!info->attrs[TIPC_NLA_MEDIA])
> 		return -EINVAL;
>@@ -1307,7 +1307,7 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info)
> 	int err;
> 	char *name;
> 	struct tipc_media *m;
>-	struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
>+	struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
> 
> 	if (!info->attrs[TIPC_NLA_MEDIA])
> 		return -EINVAL;
>-- 
>2.17.1
From: Tung Q. N. <tun...@de...> - 2023-06-14 11:03:49
>I don't really know the difference :D
>
>Since this is not a new feature patch but just fixes a typo-like bug, I guess
>it can go to the (net) branch instead of (net-next)?
>
>Regards
>Lin

It is OK to go for net. Please:
- append "net" to your patch title.
- add a Fixes tag to the changelog.
From: Tung Q. N. <tun...@de...> - 2023-06-14 10:00:41
>Subject: [PATCH v1] tipc: resize nlattr array to correct size
>
>According to nla_parse_nested_deprecated(), the tb[] is supposed to be the
>destination array with maxtype+1 elements. In the current
>tipc_nl_media_get() and __tipc_nl_media_set(), a larger array is used,
>which is unnecessary. This patch resizes them to the proper size.
>
>Signed-off-by: Lin Ma <li...@zj...>
>---

Which branch (net or net-next) do you want to apply this change to?

> net/tipc/bearer.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
>index 53881406e200..cdcd2731860b 100644
>--- a/net/tipc/bearer.c
>+++ b/net/tipc/bearer.c
>@@ -1258,7 +1258,7 @@ int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info)
> 	struct tipc_nl_msg msg;
> 	struct tipc_media *media;
> 	struct sk_buff *rep;
>-	struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
>+	struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
> 
> 	if (!info->attrs[TIPC_NLA_MEDIA])
> 		return -EINVAL;
>@@ -1307,7 +1307,7 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info)
> 	int err;
> 	char *name;
> 	struct tipc_media *m;
>-	struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
>+	struct nlattr *attrs[TIPC_NLA_MEDIA_MAX + 1];
> 
> 	if (!info->attrs[TIPC_NLA_MEDIA])
> 		return -EINVAL;
>-- 
>2.17.1
From: Tung Q. N. <tun...@de...> - 2023-06-06 09:35:40
>Subject: [PATCH net-next] tipc: replace open-code bearer rcu_dereference access in bearer.c
>
>Replace these open-code bearer rcu_dereference access with bearer_get(),
>like other places in bearer.c. While at it, also use tipc_net() instead
>of net_generic(net, tipc_net_id) to get "tn" in bearer.c.
>
>Signed-off-by: Xin Long <luc...@gm...>
>---

Reviewed-by: Tung Nguyen <tun...@de...>

> net/tipc/bearer.c | 14 ++++++--------
> 1 file changed, 6 insertions(+), 8 deletions(-)
>
>diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
>index 114140c49108..1d5d3677bdaf 100644
>--- a/net/tipc/bearer.c
>+++ b/net/tipc/bearer.c
>@@ -176,7 +176,7 @@ static int bearer_name_validate(const char *name,
>  */
> struct tipc_bearer *tipc_bearer_find(struct net *net, const char *name)
> {
>-	struct tipc_net *tn = net_generic(net, tipc_net_id);
>+	struct tipc_net *tn = tipc_net(net);
> 	struct tipc_bearer *b;
> 	u32 i;
> 
>@@ -211,11 +211,10 @@ int tipc_bearer_get_name(struct net *net, char *name, u32 bearer_id)
> 
> void tipc_bearer_add_dest(struct net *net, u32 bearer_id, u32 dest)
> {
>-	struct tipc_net *tn = net_generic(net, tipc_net_id);
> 	struct tipc_bearer *b;
> 
> 	rcu_read_lock();
>-	b = rcu_dereference(tn->bearer_list[bearer_id]);
>+	b = bearer_get(net, bearer_id);
> 	if (b)
> 		tipc_disc_add_dest(b->disc);
> 	rcu_read_unlock();
>@@ -223,11 +222,10 @@ void tipc_bearer_add_dest(struct net *net, u32 bearer_id, u32 dest)
> 
> void tipc_bearer_remove_dest(struct net *net, u32 bearer_id, u32 dest)
> {
>-	struct tipc_net *tn = net_generic(net, tipc_net_id);
> 	struct tipc_bearer *b;
> 
> 	rcu_read_lock();
>-	b = rcu_dereference(tn->bearer_list[bearer_id]);
>+	b = bearer_get(net, bearer_id);
> 	if (b)
> 		tipc_disc_remove_dest(b->disc);
> 	rcu_read_unlock();
>@@ -534,7 +532,7 @@ int tipc_bearer_mtu(struct net *net, u32 bearer_id)
> 	struct tipc_bearer *b;
> 
> 	rcu_read_lock();
>-	b = rcu_dereference(tipc_net(net)->bearer_list[bearer_id]);
>+	b = bearer_get(net, bearer_id);
> 	if (b)
> 		mtu = b->mtu;
> 	rcu_read_unlock();
>@@ -745,7 +743,7 @@ void tipc_bearer_cleanup(void)
> 
> void tipc_bearer_stop(struct net *net)
> {
>-	struct tipc_net *tn = net_generic(net, tipc_net_id);
>+	struct tipc_net *tn = tipc_net(net);
> 	struct tipc_bearer *b;
> 	u32 i;
> 
>@@ -881,7 +879,7 @@ int tipc_nl_bearer_dump(struct sk_buff *skb, struct netlink_callback *cb)
> 	struct tipc_bearer *bearer;
> 	struct tipc_nl_msg msg;
> 	struct net *net = sock_net(skb->sk);
>-	struct tipc_net *tn = net_generic(net, tipc_net_id);
>+	struct tipc_net *tn = tipc_net(net);
> 
> 	if (i == MAX_BEARERS)
> 		return 0;
>-- 
>2.39.1