From: Jon M. <jon...@er...> - 2004-05-10 21:04:08
|
I guess that is about all that can be converted without too much redesign. I don't think it is a good idea to try it with the publication lists the way you describe; it will just add complexity, not reduce it. After all, these lists are supposed to be a support, not an end in themselves.

Thank you for a great job.

/Jon

Mark Haverkamp wrote:
>Jon,
>
>Here are updates for the node subscription structures.
>
>Are there any that are left to be done? I had considered the
>publication lists in the sub_seq struct, but since they are copied
>around when sub sequences come and go, their list_head next/prev
>pointers become invalid. One way around this would be to change the
>*sseqs pointer in name_seq to a **sseqs and allocate an array of
>pointers. Then the pointers could be sorted and the sub_seq structures'
>addresses wouldn't change.
>
>Mark.
>
>[snip: patch, quoted in full in Mark's original message below] |
From: Mark H. <ma...@os...> - 2004-05-10 18:22:29
|
Jon,

Here are updates for the node subscription structures.

Are there any that are left to be done? I had considered the publication lists in the sub_seq struct, but since they are copied around when sub sequences come and go, their list_head next/prev pointers become invalid. One way around this would be to change the *sseqs pointer in name_seq to a **sseqs and allocate an array of pointers. Then the pointers could be sorted and the sub_seq structures' addresses wouldn't change.

Mark.

cvs diff -u node.h node_subscr.h name_table.h node_subscr.c name_table.c port.c node.c
Index: node.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/node.h,v
retrieving revision 1.8
diff -u -r1.8 node.h
--- node.h	7 May 2004 22:17:26 -0000	1.8
+++ node.h	10 May 2004 17:34:37 -0000
@@ -97,7 +97,7 @@
 	int last_router;
 	struct cluster* owner;
 	tipc_net_addr_t addr;
-	struct node_subscr* subscr;
+	struct list_head nsub;
 	struct node* next;
 	uint tag;
 	uint last_in_bcast;
Index: node_subscr.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/node_subscr.h,v
retrieving revision 1.2
diff -u -r1.2 node_subscr.h
--- node_subscr.h	3 Feb 2004 23:27:35 -0000	1.2
+++ node_subscr.h	10 May 2004 17:34:37 -0000
@@ -69,8 +69,7 @@
 	void *usr_handle;
 	net_ev_handler handle_node_down;
 	struct node* node;
-	struct node_subscr *next;
-	struct node_subscr *prev;
+	struct list_head nodesub_list;
 };
 
 void nodesub_subscribe(struct node_subscr *,
Index: node_subscr.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/node_subscr.c,v
retrieving revision 1.2
diff -u -r1.2 node_subscr.c
--- node_subscr.c	3 Feb 2004 23:27:35 -0000	1.2
+++ node_subscr.c	10 May 2004 17:34:38 -0000
@@ -104,11 +104,7 @@
 	this->usr_handle = usr_handle;
 	assert(this->node);
 	spin_lock_bh(&this->node->lock);
-	this->prev = 0;
-	this->next = this->node->subscr;
-	if (this->next)
-		this->next->prev = this;
-	this->node->subscr = this;
+	list_add_tail(&this->nodesub_list, &this->node->nsub);
 	spin_unlock_bh(&this->node->lock);
 }
 
@@ -118,11 +114,6 @@
 	if (!this->node)
 		return;
 	spin_lock_bh(&this->node->lock);
-	if (this->next)
-		this->next->prev = this->prev;
-	if (this->prev)
-		this->prev->next = this->next;
-	else
-		this->node->subscr = this->next;
+	list_del_init(&this->nodesub_list);
 	spin_unlock_bh(&this->node->lock);
 }
Index: name_table.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
retrieving revision 1.17
diff -u -r1.17 name_table.c
--- name_table.c	6 May 2004 22:31:06 -0000	1.17
+++ name_table.c	10 May 2004 17:34:38 -0000
@@ -168,6 +168,7 @@
 	this->key = key;
 	INIT_LIST_HEAD(&this->local_list);
 	INIT_LIST_HEAD(&this->pport_list);
+	INIT_LIST_HEAD(&this->subscr.nodesub_list);
 	if (node != tipc_own_addr) {
 		nodesub_subscribe(&this->subscr, node, this,
 				  (net_ev_handler) publ_handle_node_down);
Index: port.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v
retrieving revision 1.18
diff -u -r1.18 port.c
--- port.c	6 May 2004 22:31:06 -0000	1.18
+++ port.c	10 May 2004 17:34:38 -0000
@@ -303,6 +303,7 @@
 	this->sent = 1;
 	this->publ.usr_handle = usr_handle;
 	INIT_LIST_HEAD(&this->wait_list);
+	INIT_LIST_HEAD(&this->subscription.nodesub_list);
 	this->congested_link = 0;
 	this->max_pkt = 1404; /* Ethernet, adjust at connect */
 	this->dispatcher = dispatcher;
Index: node.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/node.c,v
retrieving revision 1.17
diff -u -r1.17 node.c
--- node.c	7 May 2004 22:17:26 -0000	1.17
+++ node.c	10 May 2004 17:34:38 -0000
@@ -129,6 +129,7 @@
 	this->lock = SPIN_LOCK_UNLOCKED;
 	this->acked_bcast = 0;
 	this->last_in_bcast = 0;
+	INIT_LIST_HEAD(&this->nsub);
 
 
 	if (!c)
@@ -436,7 +437,7 @@
 node_lost_contact(struct node *this)
 {
 	uint i;
-	struct node_subscr *crs = this->subscr;
+	struct node_subscr *ns, *tns;
 	struct cluster *c;
 
 	if (is_slave(tipc_own_addr)) {
@@ -476,14 +477,11 @@
 		link_reset_fragments(l);
 	}
 
-	this->subscr = 0; /* One-shot subscribers */
-
 	/* Notify subscribers: */
-	while (crs){
-		struct node_subscr *next = crs->next;
-		crs->node = 0;
-		crs->handle_node_down(crs->usr_handle);
-		crs = next;
+	list_for_each_entry_safe(ns, tns, &this->nsub, nodesub_list) {
+		ns->node = 0;
+		list_del_init(&ns->nodesub_list);
+		ns->handle_node_down(ns->usr_handle);
 	}
 }

-- 
Mark Haverkamp <ma...@os...> |
From: Jon M. <jon...@er...> - 2004-05-10 13:56:52
|
I changed the standalone 'make' support again. I found a less disgusting solution to this, so we don't need a Makefile.standalone in the source code directory any more. This file is now directly under the top directory: unstable/Makefile /Jon |
From: Jon M. <jon...@er...> - 2004-05-07 23:55:29
|
I just checked in the following changes:

- Enabled some configuration options in Kconfig: TIPC_UDP, TIPC_DEBUG, TIPC_ZONES, TIPC_NODES, etc.
- Changed code and Makefile so that these options really take effect on what is compiled.
- Changed Makefile to standard. It is now possible to integrate, configure and build TIPC from inside a Linux kernel tree, just like any other kernel code.
- Added a "Makefile.standalone" as an alternative. This makes it possible to build TIPC completely outside the kernel tree, as before. This file may not have to go into the final kernel source, since it is a little hackish, but I still find it convenient to use.
- Changed debug macros in tipc_adapt.h according to Steven H.'s comments:
  o If TIPC_DEBUG is OFF, we use standard kernel macros.
  o If TIPC_DEBUG is ON, we use our own macros.
- Ensured that there are no compiler warnings left.

Next:

- Change the module parameter settings, so that the hard-coded interface names disappear.
- More complete run-time configuration support, making the module parameters redundant.
- Add support for interface up/down in eth_media.c.
- Trouble-shooting on parallel links.
- ...and more.

Have a nice weekend.

/Jon |
From: Jon M. <jon...@er...> - 2004-05-07 19:45:40
|
Hi,

It is not a bug, but you are right that this is less than optimal, and it should probably be done the way you suggest. buf_prepend() and the pre-allocated header space were introduced at a late stage, to make it possible to send complete buffers (in the tipc_send_bufXX() calls) directly from the user without copying. I did not realize then that this would be useful even in the "ordinary" tipc_sendXX() calls.

There will not be a buffer overrun, though, because the permitted buffer maximum is adjusted down internally (in media.c) for the pre-allocated headroom. The maximum buffer size as seen from the core code is hence only 1404 bytes, which is an obvious waste of space, but not a bug.

This is something I would like to change, but I cannot prioritize it right now. This is the kind of change that easily goes wrong, and it requires some analysis and testing. I will add it to our TODO list.

/Jon

all...@wi... wrote:

Hi Jon: I'm not sure if there is a latent bug in the msg_build() routine in msg.h. Shouldn't the message header be placed into the empty buffer using buf_copy_prepend() rather than buf_copy_append()? My understanding is that buf_acquire() returns a buffer that has reserved space at the front for the TIPC header (and any bearer header too). However, using buf_copy_append() to insert the message header won't take advantage of this reserved space, and could result in the ensuing buf_safe_append() operations overwriting the end of the buffer. Can you confirm this bug? My impression from past experience is that Linux is fairly forgiving of buffer overruns of this sort, but it still seems wrong. Regards, Al |
From: <all...@wi...> - 2004-05-07 18:58:39
|
Hi Jon:

I'm not sure if there is a latent bug in the msg_build() routine in msg.h. Shouldn't the message header be placed into the empty buffer using buf_copy_prepend() rather than buf_copy_append()?

My understanding is that buf_acquire() returns a buffer that has reserved space at the front for the TIPC header (and any bearer header too). However, using buf_copy_append() to insert the message header won't take advantage of this reserved space, and could result in the ensuing buf_safe_append() operations overwriting the end of the buffer.

Can you confirm this bug? My impression from past experience is that Linux is fairly forgiving of buffer overruns of this sort, but it still seems wrong.

Regards,
Al |
From: Jon M. <jon...@er...> - 2004-05-06 23:37:40
|
I think your analysis is correct, but I don't know where this omission happens. The problem I am trying to solve may be related: when I run parallel links between two nodes, communication on one link sometimes seems to stop under heavy load. With a little luck we are looking for the same bug.

/Jon

Mark Haverkamp wrote:
>Jon,
>
>Daniel and I have been seeing a tipc hang for 3 or 4 weeks when we kill
>a running application in a certain order.
>
>While running the tipc benchmark program we can get tipc to hang the
>computer by killing the client while it has the 32 processes running.
>Although, to get the hang, I have to have tried to run some management
>port accesses, which are stalled due to congestion. After doing some
>tracing, I have narrowed it down to an exiting process spinning while
>trying to get the node lock. Our assumption is that some other process
>hasn't released the lock by accident, although it's not obvious where.
>I have included the stack dump from the sysrq P console command.
>
>[snip: stack dump, included in full in Mark's original message below]
>
>You can see that the process is trying to exit. I have traced the EIP to
>the spin_lock_bh(&node->lock) in link_lock_select from a disassembly of
>link.o.
>
>Any ideas on this?
>
>Mark. |
From: Jon M. <jon...@er...> - 2004-05-06 23:32:22
|
Single BSD licensing would of course be preferable, but I don't see anybody else doing that in the kernel. On the other hand, I see that some keep the BSD disclaimer (those who bother to have any at all) unaltered, so I guess that is regarded as acceptable. I think I will go for that.

Thanks
/jon

Timothy D. Witham wrote:

On Thu, 2004-05-06 at 14:16, Jon Maloy wrote:

It looks ok, just go ahead and check it in.

An unrelated issue, which I hope you or somebody else on the list can answer: the TIPC code is currently distributed under a dual BSD/GPL licence, but the disclaimer we keep in the code is the BSD one. I had a short glance into other dual-licence modules in the kernel, and see that they also keep the BSD disclaimer, but they have added an extra clause:

"ALTERNATIVELY, provided that this notice is retained in full, this product may be distributed under the terms of the GNU General Public License (GPL), in which case the provisions of the GPL apply INSTEAD OF those given above."

So, which one is valid? Is it right to think that the code is distributed with GPL when it goes with the Linux kernel, and with BSD when downloaded from SourceForge? Must the latter case be stated explicitly in some way?

I'm not really sure what the advantage of having this sort of dual licensing is. It is well known that the current BSD licensing is compatible when linked into GPL-licensed code. So I'm not sure what the addition of the GPL license adds, other than making the licensing process more complex.

Tim

Regards /jon

Mark Haverkamp wrote:

Jon,

Here are more conversions to list_head macros. I think that there are just a few more to go.

Mark.

cvs diff -u name_distr.c port.h name_table.c name_table.h port.c
Index: name_distr.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_distr.c,v
retrieving revision 1.7
diff -u -r1.7 name_distr.c
--- name_distr.c	5 May 2004 15:41:28 -0000	1.7
+++ name_distr.c	6 May 2004 20:11:32 -0000
@@ -159,7 +159,6 @@
 {
 	struct sk_buff* buf = named_prepare_buf(WITHDRAWAL,ITEM_SIZE,0);
 	struct distr_item *item = (struct distr_item*) msg_data(buf_msg(buf));
-	list_del_init(&publ->local_list);
 	publ_cnt--;
 	publ_to_item(item,publ);
 	cluster_broadcast(buf);
Index: port.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.h,v
retrieving revision 1.8
diff -u -r1.8 port.h
--- port.h	5 May 2004 15:41:28 -0000	1.8
+++ port.h	6 May 2004 20:11:32 -0000
@@ -152,7 +152,7 @@
 	uint waiting_pkts;
 	uint sent;
 	uint acked;
-	struct publication *publications;
+	struct list_head publications;
 	uint max_pkt; /* hint */
 	uint probing_state;
 	uint last_in_seqno;
Index: name_table.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
retrieving revision 1.16
diff -u -r1.16 name_table.c
--- name_table.c	5 May 2004 19:09:03 -0000	1.16
+++ name_table.c	6 May 2004 20:11:32 -0000
@@ -162,6 +162,8 @@
 	this->node = node;
 	this->scope = scope;
 	this->key = key;
+	INIT_LIST_HEAD(&this->local_list);
+	INIT_LIST_HEAD(&this->pport_list);
 	if (node != tipc_own_addr) {
 		nodesub_subscribe(&this->subscr, node, this,
 				  (net_ev_handler) publ_handle_node_down);
@@ -888,6 +890,8 @@
 	if (publ->scope != TIPC_NODE_SCOPE)
 		named_withdraw(publ);
 	write_unlock_bh(&nametbl_lock);
+	list_del_init(&publ->local_list);
+	list_del_init(&publ->pport_list);
 	kfree(publ);
 	return 1;
 }
Index: name_table.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.h,v
retrieving revision 1.3
diff -u -r1.3 name_table.h
--- name_table.h	5 May 2004 15:41:28 -0000	1.3
+++ name_table.h	6 May 2004 20:11:32 -0000
@@ -86,9 +86,7 @@
 	uint key;
 	uint scope;
 	struct list_head local_list;
-	struct {
-		struct publication *next;
-	}port_list;
+	struct list_head pport_list;
 	struct {
 		struct publication *next;
 	}node_list;
Index: port.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v
retrieving revision 1.17
diff -u -r1.17 port.c
--- port.c	5 May 2004 15:41:28 -0000	1.17
+++ port.c	6 May 2004 20:11:33 -0000
@@ -305,6 +305,7 @@
 	this->wakeup = wakeup;
 	this->user_port = 0;
 	spin_lock_bh(&port_lock);
+	INIT_LIST_HEAD(&this->publications);
 	INIT_LIST_HEAD(&this->port_list);
 	list_add_tail(&this->port_list, &ports);
 	spin_unlock_bh(&port_lock);
@@ -637,9 +638,10 @@
 void
 port_print(struct port * this, struct print_buf * buf, const char *str)
 {
-	struct publication *publ = this->publications;
+	struct publication *publ;
 	tipc_printf(buf, str);
 	tipc_printf(buf, "Port: %u \n",this->publ.ref);
+
 	if (this->publ.connected) {
 		uint dport = port_peerport(this);
 		uint destnode = port_peernode(this);
@@ -647,10 +649,10 @@
 			    tipc_zone(destnode), tipc_cluster(destnode),
 			    tipc_node(destnode), dport);
 	}
-	while (publ) {
+
+	list_for_each_entry(publ, &this->publications, pport_list) {
 		tipc_printf(buf, " bound to: <%u,%u,%u>\n",
 			    publ->type, publ->lower, publ->upper);
-		publ = publ->port_list.next;
 	}
 }
@@ -1001,10 +1003,9 @@
 	p = nametbl_publish(seq->type,
 			    seq->lower,seq->upper,
 			    this->publ.ref,scope);
-	if (p){
+	if (p) {
 		res = TIPC_OK;
-		p->port_list.next = this->publications;
-		this->publications = p;
+		list_add(&p->pport_list, &this->publications);
 		this->publ.published = 1;
 	}
 exit:
@@ -1029,40 +1030,29 @@
 tipc_withdraw(tipc_ref_t ref,struct tipc_name_seq const * seq)
 {
 	struct port* this = port_lock_deref(ref);
-	struct publication *publ;
-	struct publication *prev = 0;
+	struct publication *publ, *tpub;
 	uint key = 0;
 	int res = TIPC_FAILURE;
 	if (!this)
-		return TIPC_FAILURE;
+		return res;
 	if (!this->publ.published)
 		goto exit;
-	publ = this->publications;
-	if (!seq){
-		while (this->publications) {
-			struct publication *next =
-				this->publications->port_list.next;
-			nametbl_withdraw(this->publications->type,
					 this->publications->lower,
					 this->publications->key);
-			this->publications = next;
-		}
-		this->publ.published = 0;
-	}else {
-		while (publ && (publ->lower != seq->lower)){
-			prev = publ;
-			publ = publ->port_list.next;
+	if (!seq) {
+		list_for_each_entry_safe(publ, tpub,
+					 &this->publications, pport_list) {
+			nametbl_withdraw(publ->type, publ->lower, publ->key);
 		}
-		if (!publ)
-			goto exit;
-		key = publ->key;
-		if (prev)
-			prev->port_list.next = publ->port_list.next;
-		else
-			this->publications = publ->port_list.next;
-		nametbl_withdraw(seq->type,seq->lower,key);
+	} else {
+		list_for_each_entry_safe(publ, tpub,
+					 &this->publications, pport_list) {
+			if (publ->lower != seq->lower)
+				continue;
+			key = publ->key;
+			nametbl_withdraw(seq->type, seq->lower, key);
+			res = TIPC_OK;
+			goto exit;
+		}
 	}
-	res = TIPC_OK;
 exit:
 	spin_unlock_bh(this->publ.lock);
 	return res;
|
From: Mark H. <ma...@os...> - 2004-05-06 22:41:27
|
Jon,

Daniel and I have been seeing a tipc hang for 3 or 4 weeks when we kill a running application in a certain order.

While running the tipc benchmark program we can get tipc to hang the computer by killing the client while it has the 32 processes running. Although, to get the hang, I have to have tried to run some management port accesses, which are stalled due to congestion. After doing some tracing, I have narrowed it down to an exiting process spinning while trying to get the node lock. Our assumption is that some other process hasn't released the lock by accident, although it's not obvious where. I have included the stack dump from the sysrq P console command.

SysRq : Show Regs

Pid: 2001, comm: client_tipc_tp
EIP: 0060:[<f8a913d9>] CPU: 0
EIP is at .text.lock.link+0xd7/0x3ce [tipc]
 EFLAGS: 00000286 Not tainted (2.6.6-rc3)
EAX: f7c8ef6c EBX: 00000000 ECX: 01001011 EDX: 00000013
ESI: f7c8eee0 EDI: f359a000 EBP: f359bcf8 DS: 007b ES: 007b
CR0: 8005003b CR2: 080e2ce8 CR3: 0053d000 CR4: 000006d0
Call Trace:
 [<c0126a38>] __do_softirq+0xb8/0xc0
 [<f8a9818b>] net_route_msg+0x48b/0x4ad [tipc]
 [<c015b3a1>] __pte_chain_free+0x81/0x90
 [<f8a99e6e>] port_send_proto_msg+0x1ae/0x2d0 [tipc]
 [<f8a9af73>] port_abort_peer+0x83/0x90 [tipc]
 [<f8a999a1>] tipc_deleteport+0x181/0x2a0 [tipc]
 [<f8aa7ae2>] release+0x72/0x130 [tipc]
 [<c0378ff9>] sock_release+0x99/0xf0
 [<c0379a16>] sock_close+0x36/0x50
 [<c016740d>] __fput+0x12d/0x140
 [<c0165857>] filp_close+0x57/0x90
 [<c0123adc>] put_files_struct+0x7c/0xf0
 [<c0124b1c>] do_exit+0x26c/0x600
 [<c012cc05>] __dequeue_signal+0xf5/0x1b0
 [<c0125057>] do_group_exit+0x107/0x190
 [<c012cced>] dequeue_signal+0x2d/0x90
 [<c012f14c>] get_signal_to_deliver+0x28c/0x590
 [<c0105286>] do_signal+0xb6/0xf0
 [<c037a736>] sys_send+0x36/0x40
 [<c037af8e>] sys_socketcall+0x12e/0x240
 [<c010531b>] do_notify_resume+0x5b/0x5d
 [<c010554a>] work_notifysig+0x13/0x15

You can see that the process is trying to exit. I have traced the EIP to the spin_lock_bh(&node->lock) in link_lock_select from a disassembly of link.o.

Any ideas on this?

Mark.

-- 
Mark Haverkamp <ma...@os...> |
From: Timothy D. W. <wo...@os...> - 2004-05-06 21:47:55
|
On Thu, 2004-05-06 at 14:16, Jon Maloy wrote:
> It looks ok, just go ahead and check it in.
>
> An unrelated issue, which I hope you or somebody else on the list
> can answer: the TIPC code is currently distributed under a dual
> BSD/GPL licence, but the disclaimer we keep in the code is the BSD
> one. I had a short glance into other dual-licence modules in the
> kernel, and see that they also keep the BSD disclaimer, but they
> have added an extra clause:
>
> "ALTERNATIVELY, provided that this notice is retained in full, this
> product may be distributed under the terms of the GNU General Public
> License (GPL), in which case the provisions of the GPL apply INSTEAD
> OF those given above."
>
> So, which one is valid? Is it right to think that the code is
> distributed with GPL when it goes with the Linux kernel, and with
> BSD when downloaded from SourceForge? Must the latter case be stated
> explicitly in some way?

I'm not really sure what the advantage of having this sort of dual licensing is. It is well known that the current BSD licensing is compatible when linked into GPL-licensed code. So I'm not sure what the addition of the GPL license adds, other than making the licensing process more complex.

Tim

> Regards /jon
>
> Mark Haverkamp wrote:
> > Jon,
> >
> > Here are more conversions to list_head macros. I think that there
> > are just a few more to go.
> >
> > Mark.
> >
> > [snip: patch, quoted in full in Jon's reply above] |
______________________________________________________________________ > _______________________________________________ > Osdlcluster mailing list > Osd...@li... > http://lists.osdl.org/mailman/listinfo/osdlcluster -- Timothy D. Witham - Lab Director - wo...@os... Open Source Development Lab Inc - A non-profit corporation 12725 SW Millikan Way - Suite 400 - Beaverton OR, 97005 (503)-626-2455 x11 (office) (503)-702-2871 (cell) (503)-626-2436 (fax) |
From: Jon M. <jon...@er...> - 2004-05-06 21:17:05
|
It looks ok, just go ahead and check it in.

An unrelated issue, which I hope you or somebody else on the list can
answer: The TIPC code is currently distributed under a dual BSD/GPL
licence, but the disclaimer we keep in the code is the BSD one. I had a
short glance into other dual-licence modules in the kernel, and see that
they also keep the BSD disclaimer, but they have added an extra clause:

"ALTERNATIVELY, provided that this notice is retained in full, this
product may be distributed under the terms of the GNU General Public
License (GPL), in which case the provisions of the GPL apply INSTEAD OF
those given above."

So, which one is valid? Is it right to think that the code is
distributed under the GPL when it goes with the Linux kernel, and under
BSD when downloaded from SourceForge? Must the latter case be stated
explicitly in some way?

Regards /jon

Mark Haverkamp wrote:
> Jon,
>
> Here are more conversions to list_head macros. I think that there are
> just a few more to go.
>
> Mark.
|
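For the dual-licence question: the convention in other dual BSD/GPL kernel modules is to keep the BSD disclaimer, append the ALTERNATIVELY paragraph Jon quotes, and additionally declare both licences to the module loader with MODULE_LICENSE(). A hypothetical header sketch (the file name and the elided disclaimer text are placeholders, not taken from TIPC):

```c
/*
 * example.c: hypothetical dual-licensed source file
 *
 * [ ... full BSD disclaimer text kept here, unchanged ... ]
 *
 * ALTERNATIVELY, provided that this notice is retained in full, this
 * product may be distributed under the terms of the GNU General Public
 * License (GPL), in which case the provisions of the GPL apply INSTEAD OF
 * those given above.
 */
#include <linux/module.h>

/* "Dual BSD/GPL" is one of the licence strings <linux/module.h>
 * recognizes as GPL-compatible, so loading does not taint the kernel. */
MODULE_LICENSE("Dual BSD/GPL");
```

Which licence a given recipient may rely on is then stated by the header text itself; the MODULE_LICENSE() string only informs the kernel's symbol-export and taint machinery.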
From: Mark H. <ma...@os...> - 2004-05-06 20:22:57
|
Jon,

Here are more conversions to list_head macros. I think that there are
just a few more to go.

Mark.

cvs diff -u name_distr.c port.h name_table.c name_table.h port.c
Index: name_distr.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_distr.c,v
retrieving revision 1.7
diff -u -r1.7 name_distr.c
--- name_distr.c    5 May 2004 15:41:28 -0000    1.7
+++ name_distr.c    6 May 2004 20:11:32 -0000
@@ -159,7 +159,6 @@
 {
     struct sk_buff* buf = named_prepare_buf(WITHDRAWAL,ITEM_SIZE,0);
     struct distr_item *item = (struct distr_item*) msg_data(buf_msg(buf));
-    list_del_init(&publ->local_list);
     publ_cnt--;
     publ_to_item(item,publ);
     cluster_broadcast(buf);
Index: port.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.h,v
retrieving revision 1.8
diff -u -r1.8 port.h
--- port.h    5 May 2004 15:41:28 -0000    1.8
+++ port.h    6 May 2004 20:11:32 -0000
@@ -152,7 +152,7 @@
     uint waiting_pkts;
     uint sent;
     uint acked;
-    struct publication *publications;
+    struct list_head publications;
     uint max_pkt;        /* hint */
     uint probing_state;
     uint last_in_seqno;
Index: name_table.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
retrieving revision 1.16
diff -u -r1.16 name_table.c
--- name_table.c    5 May 2004 19:09:03 -0000    1.16
+++ name_table.c    6 May 2004 20:11:32 -0000
@@ -162,6 +162,8 @@
     this->node = node;
     this->scope = scope;
     this->key = key;
+    INIT_LIST_HEAD(&this->local_list);
+    INIT_LIST_HEAD(&this->pport_list);
     if (node != tipc_own_addr) {
         nodesub_subscribe(&this->subscr, node, this,
                           (net_ev_handler) publ_handle_node_down);
@@ -888,6 +890,8 @@
     if (publ->scope != TIPC_NODE_SCOPE)
         named_withdraw(publ);
     write_unlock_bh(&nametbl_lock);
+    list_del_init(&publ->local_list);
+    list_del_init(&publ->pport_list);
     kfree(publ);
     return 1;
 }
Index: name_table.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.h,v
retrieving revision 1.3
diff -u -r1.3 name_table.h
--- name_table.h    5 May 2004 15:41:28 -0000    1.3
+++ name_table.h    6 May 2004 20:11:32 -0000
@@ -86,9 +86,7 @@
     uint key;
     uint scope;
     struct list_head local_list;
-    struct {
-        struct publication *next;
-    }port_list;
+    struct list_head pport_list;
     struct {
         struct publication *next;
     }node_list;
Index: port.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v
retrieving revision 1.17
diff -u -r1.17 port.c
--- port.c    5 May 2004 15:41:28 -0000    1.17
+++ port.c    6 May 2004 20:11:33 -0000
@@ -305,6 +305,7 @@
     this->wakeup = wakeup;
     this->user_port = 0;
     spin_lock_bh(&port_lock);
+    INIT_LIST_HEAD(&this->publications);
     INIT_LIST_HEAD(&this->port_list);
     list_add_tail(&this->port_list, &ports);
     spin_unlock_bh(&port_lock);
@@ -637,9 +638,10 @@
 void
 port_print(struct port * this, struct print_buf * buf, const char *str)
 {
-    struct publication *publ = this->publications;
+    struct publication *publ;
     tipc_printf(buf, str);
     tipc_printf(buf, "Port: %u \n",this->publ.ref);
+
     if (this->publ.connected) {
         uint dport = port_peerport(this);
         uint destnode = port_peernode(this);
@@ -647,10 +649,10 @@
                     tipc_zone(destnode), tipc_cluster(destnode),
                     tipc_node(destnode), dport);
     }
-    while (publ) {
+
+    list_for_each_entry(publ, &this->publications, pport_list) {
         tipc_printf(buf, " bound to: <%u,%u,%u>\n",
                     publ->type, publ->lower, publ->upper);
-        publ = publ->port_list.next;
     }
 }

@@ -1001,10 +1003,9 @@
     p = nametbl_publish(seq->type,
                         seq->lower,seq->upper,
                         this->publ.ref,scope);
-    if (p){
+    if (p) {
         res = TIPC_OK;
-        p->port_list.next = this->publications;
-        this->publications = p;
+        list_add(&p->pport_list, &this->publications);
         this->publ.published = 1;
     }
 exit:
@@ -1029,40 +1030,29 @@
 tipc_withdraw(tipc_ref_t ref,struct tipc_name_seq const * seq)
 {
     struct port* this = port_lock_deref(ref);
-    struct publication *publ;
-    struct publication *prev = 0;
+    struct publication *publ, *tpub;
     uint key = 0;
     int res = TIPC_FAILURE;
     if (!this)
-        return TIPC_FAILURE;
+        return res;
     if (!this->publ.published)
         goto exit;
-    publ = this->publications;
-    if (!seq){
-        while (this->publications) {
-            struct publication *next =
-                this->publications->port_list.next;
-            nametbl_withdraw(this->publications->type,
-                             this->publications->lower,
-                             this->publications->key);
-            this->publications = next;
-        }
-        this->publ.published = 0;
-    }else {
-        while (publ && (publ->lower != seq->lower)){
-            prev = publ;
-            publ = publ->port_list.next;
+    if (!seq) {
+        list_for_each_entry_safe(publ, tpub,
+                                 &this->publications, pport_list) {
+            nametbl_withdraw(publ->type, publ->lower, publ->key);
         }
-        if (!publ)
-            goto exit;
-        key = publ->key;
-        if (prev)
-            prev->port_list.next = publ->port_list.next;
-        else
-            this->publications = publ->port_list.next;
-        nametbl_withdraw(seq->type,seq->lower,key);
+    } else {
+        list_for_each_entry_safe(publ, tpub,
+                                 &this->publications, pport_list) {
+            if (publ->lower != seq->lower)
+                continue;
+            key = publ->key;
+            nametbl_withdraw(seq->type, seq->lower, key);
+            res = TIPC_OK;
+            goto exit;
+        }
     }
-    res = TIPC_OK;
 exit:
     spin_unlock_bh(this->publ.lock);
     return res;

-- 
Mark Haverkamp <ma...@os...>
|
From: Mark H. <ma...@os...> - 2004-05-05 23:12:33
|
On Wed, 2004-05-05 at 15:55, Jon Maloy wrote:
> Hi,
> It looks ok at a quick glance. You will have to merge
> with some changes I just made and checked in, but I don't
> think there will be much overlap.

I updated my tree with your changes. cvs ran into a few conflicts. I'll
resolve those tomorrow morning and check in after testing.

Mark.

> I also worked to get rid of warnings, but mainly it was
> changes in the structure, like removing the linux-2.X
> subdirectories and removing placeholder code and #ifdefs.
>
> I also made changes in the makefile, so if you want
> to build as a standalone module you must run
> "make standalone" from now on.
>
> /Jon
-- 
Mark Haverkamp <ma...@os...>
|
From: Jon M. <jon...@er...> - 2004-05-05 22:55:20
|
Hi,
It looks ok at a quick glance. You will have to merge
with some changes I just made and checked in, but I don't
think there will be much overlap.

I also worked to get rid of warnings, but mainly it was
changes in the structure, like removing the linux-2.X
subdirectories and removing placeholder code and #ifdefs.

I also made changes in the makefile, so if you want
to build as a standalone module you must run
"make standalone" from now on.

/Jon

Mark Haverkamp wrote:

>Jon,
>
>Attached are diffs for more list fixing up. I also got rid of a few
>compiler warnings in this.
>
>Mark.
>
>cvs diff -u media.h bcast.h link.c sendbcast.c media.c link.h bcast.c manager.c
>Index: media.h
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/media.h,v
>retrieving revision 1.5
>diff -u -r1.5 media.h
>--- media.h    11 Feb 2004 20:12:07 -0000    1.5
>+++ media.h    5 May 2004 22:36:56 -0000
>@@ -90,17 +90,19 @@
> struct bearer {
>     struct tipc_bearer publ;
>     struct media *media;
>-    struct link *cong_links;
>+    struct list_head cong_links;
>     uint max_packet;    /* Adjusted */
>     uint priority;
>     uint bcast_interval;
>     uint identity;
>-    struct link *links;
>+    struct list_head links;
>     uint continue_count;
>     int active;
>     char net_plane;
> };
>
>+struct link;
>+
> uint bearer_get_media(char* raw,uint size);
> void bearer_schedule(struct bearer *this, struct link *);
> void bearer_stop(void);
>@@ -132,7 +134,7 @@
> static inline int
> bearer_congested(struct bearer *this, struct link * link)
> {
>-    if (!this->cong_links)
>+    if (list_empty(&this->cong_links))
>         return 0;
>     return !bearer_resolve_congestion(this,link);
> }
>Index: bcast.h
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.h,v
>retrieving revision 1.11
>diff -u -r1.11 bcast.h
>--- bcast.h    5 May 2004 15:41:28 -0000    1.11
>+++ bcast.h    5 May 2004 22:36:56 -0000
>@@ -166,12 +166,14 @@
> void tipc_bcast_stop(void);
>
> void bnode_outqueue_release(int ackno);
>+void blink_outqueue_release(struct link *ln, int ackno);
> int in_list_node(struct list_head *list, tipc_net_addr_t destnode);
> int in_list(struct list_head *list, uint destport, tipc_net_addr_t destnode);
>
> struct bcastlinkset* blink_select(int selector);
>
>
>+
> extern struct broadcast_outqueue bcast_outqueue;
> extern unsigned long int send_timer_ref;
> extern struct bcastlinkset *linkset[MAX_BEARERS];
>Index: link.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.c,v
>retrieving revision 1.20
>diff -u -r1.20 link.c
>--- link.c    30 Apr 2004 21:23:17 -0000    1.20
>+++ link.c    5 May 2004 22:36:56 -0000
>@@ -210,7 +210,7 @@
>  */
>
> static void link_state_event(struct link *this, uint event);
>-static void link_dump(struct link *);
>+void link_dump(struct link *);
>
>
> /*
>@@ -554,17 +554,17 @@
>     struct tipc_msg *msg;
>     this = (struct link*) k_malloc(sizeof(*this));
>     memset(this,0,sizeof(*this));
>+    INIT_LIST_HEAD(&this->waiting_ports);
>+    INIT_LIST_HEAD(&this->link_list);
>     this->addr = peer;
>     this->priority = b->priority;
>     link_set_queue_limits(this, b->media->window);
>     this->bearer = b;
>-    this->next_on_bearer = this->bearer->links;
>-    this->bearer->links = this;
>+    list_add_tail(&this->link_list, &b->links);
>     this->checkpoint = 1;
>     this->state = RESET_UNKNOWN;
>     this->next_out_no = 1;
>     this->bcastlink = NULL;
>-    INIT_LIST_HEAD(&this->waiting_ports);
>     if (LINK_LOG_BUF_SIZE) {
>         char *buf = k_malloc(LINK_LOG_BUF_SIZE);
>         printbuf_init(&this->print_buf, buf, LINK_LOG_BUF_SIZE);
>@@ -621,6 +621,7 @@
> }
> #define DBG_OUTPUT 0
>
>+#ifdef INTER_CLUSTER_COMM
> static void
> link_setup_timer(struct link *this)
> {
>@@ -634,6 +635,7 @@
>     link_delete(this);
>     /* cfg_init_link_setup(addr); */
> }
>+#endif
>
> void
> link_start(struct link *this)
>@@ -1126,7 +1128,7 @@
>     int res = msg_data_sz(msg);
>     if (likely(!link_congested(this))) {
>         if (likely(msg_size(msg) <= link_max_pkt(this))) {
>-            if (likely(!this->bearer->cong_links)) {
>+            if (likely(list_empty(&this->bearer->cong_links))) {
>                 uint ack = mod(this->next_in_no - 1);
>                 uint seqno = mod(this->next_out_no++);
>                 msg_set_word(msg, 2, ((ack << 16) | seqno));
>@@ -1220,7 +1222,8 @@
>     if (unlikely(res < 0))
>         goto exit;
>
>-    if (link_congested(this) || this->bearer->cong_links){
>+    if (link_congested(this) ||
>+        !list_empty(&this->bearer->cong_links)) {
>         res = link_schedule_port(this,sender->publ.ref,res);
>         goto exit;
>     }
>@@ -2421,7 +2424,7 @@
> }
> #define DBG_OUTPUT 0
>
>-static void
>+void
> link_dump(struct link *this)
> {
>     tipc_dump(&this->print_buf,"\n\nDumping link %s:\n",this->name);
>Index: sendbcast.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/sendbcast.c,v
>retrieving revision 1.13
>diff -u -r1.13 sendbcast.c
>--- sendbcast.c    22 Apr 2004 07:06:59 -0000    1.13
>+++ sendbcast.c    5 May 2004 22:36:56 -0000
>@@ -236,7 +236,10 @@
>     count = count_mc_member(&mc_head);
>
>
>-    #define REPLICA_NODES 0
>+#ifdef REPLICA_NODES
>+#undef REPLICA_NODES
>+#endif
>+#define REPLICA_NODES 0
>     if (count <= REPLICA_NODES){
>         res = tipc_forward_buf2nameseq(ref,(struct tipc_name_seq*)seq,
>                                        b,&orig,res,17,&mc_head);
>@@ -485,7 +488,7 @@
>     int res = TIPC_OK;
>     if (likely(!link_congested(this))) {
>         if (likely(msg_size(msg) <= link_max_pkt(this))) {
>-            if (likely(!this->bearer->cong_links)) {
>+            if (likely(list_empty(&this->bearer->cong_links))) {
>                 msg_set_ack(msg,0);
>                 blink_add_to_outqueue(this, buf);
>                 if (this->first_out)
>Index: media.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/media.c,v
>retrieving revision 1.12
>diff -u -r1.12 media.c
>--- media.c    16 Apr 2004 18:15:04 -0000    1.12
>+++ media.c    5 May 2004 22:36:56 -0000
>@@ -113,13 +113,7 @@
>  */
> void bearer_detach_link(struct bearer* this,struct link* l)
> {
>-    struct link **crs = &this->links;
>-    for (; (*crs); crs = &((*crs)->next)) {
>-        if ((*crs) == l) {
>-            *crs = l->next;
>-            break;
>-        }
>-    }
>+    list_del_init(&l->link_list);
> }
>
> /*
>@@ -133,24 +127,19 @@
> bearer_push(struct bearer *this)
> {
>     uint res = TIPC_OK;
>+    struct link *ln, *tln;
>+
>     if (this->publ.blocked)
>         return 0;
>
>-    while (this->cong_links && (res != PUSH_FAILED)) {
>-        struct link *link = this->cong_links;
>-        res = link_push_packet(this->cong_links);
>-        if (res == TIPC_OK) {
>-            this->cong_links = this->cong_links->next;
>-            continue;
>-        } else if (res == PUSH_FINISHED) {
>-            this->cong_links =
>-                (link != link->next) ? link->next : 0;
>-            link->next->prev = link->prev;
>-            link->prev->next = link->next;
>-            link->next = link->prev = 0;
>-        }
>+    list_for_each_entry_safe(ln, tln, &this->cong_links, link_list) {
>+        res = link_push_packet(ln);
>+        if (res == PUSH_FAILED)
>+            break;
>+        if (res == PUSH_FINISHED)
>+            list_move_tail(&ln->link_list, &this->links);
>     }
>-    return (this->cong_links == 0);
>+    return list_empty(&this->cong_links);
> }
>
> static int
>@@ -175,7 +164,7 @@
>     spin_lock_bh(&this->publ.lock);
>     this->continue_count++;
>     assert(this->publ.blocked);
>-    if (this->cong_links)
>+    if (!list_empty(&this->cong_links))
>         k_signal((Handler) bearer_lock_push, (void *) this);
>     this->publ.blocked = 0;
>     spin_unlock_bh(&this->publ.lock);
>@@ -192,22 +181,7 @@
> static void
> bearer_schedule_unlocked(struct bearer *this, struct link * link)
> {
>-    if (link->next)
>-        return;
>-    assert(link->prev == 0);
>-    assert(this->cong_links != link);
>-    if (!this->cong_links) {
>-        this->cong_links = link;
>-        link->next = link;
>-        link->prev = link;
>-    } else {    /* Link in last in list */
>-
>-        struct link *last = this->cong_links->prev;
>-        link->next = this->cong_links;
>-        link->prev = last;
>-        last->next = link;
>-        this->cong_links->prev = link;
>-    }
>+    list_move_tail(&link->link_list, &this->cong_links);
> }
>
> /*
>@@ -221,8 +195,6 @@
> void
> bearer_schedule(struct bearer *this, struct link * link)
> {
>-    if (link->next)
>-        return;
>     spin_lock_bh(&this->publ.lock);
>     bearer_schedule_unlocked(this,link);
>     spin_unlock_bh(&this->publ.lock);
>@@ -238,7 +210,7 @@
> bearer_resolve_congestion(struct bearer *this, struct link * link)
> {
>     int res = 1;
>-    if (!this->cong_links)
>+    if (list_empty(&this->cong_links))
>         return 1;
>     spin_lock_bh(&this->publ.lock);
>     if (!bearer_push(this)) {
>@@ -430,6 +402,8 @@
>     this->media = media;
>     this->net_plane = bearer_id + 'A';
>     this->active = 1;
>+    INIT_LIST_HEAD(&this->cong_links);
>+    INIT_LIST_HEAD(&this->links);
>     if (media->broadcast) {
>         cfg_init_link_req(bcast_domain,2,this,
>                           &media->bcast_addr);
>@@ -454,6 +428,8 @@
> {
>     uint i = 0;
>     struct bearer *this = 0;
>+    struct link *ln, *tln;
>+
>     if (!bearer_name_valid(name))
>         return TIPC_FAILURE;
>     write_lock_bh(&net_lock);
>@@ -475,13 +451,11 @@
>         write_lock_bh(&net_lock);
>         spin_lock_bh(&this->publ.lock);
>     }
>-    while (this->links) {
>-        struct link *next = this->links->next;
>-        struct node *owner = this->links->owner;
>+    list_for_each_entry_safe(ln, tln, &this->links, link_list) {
>+        struct node *owner = ln->owner;
>         spin_lock_bh(&owner->lock);
>-        link_delete(this->links);
>+        link_delete(ln);
>         spin_unlock_bh(&owner->lock);
>-        this->links = next;
>     }
>     spin_unlock_bh(&this->publ.lock);
>     memset(this, 0, sizeof (struct bearer));
>Index: link.h
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.h,v
>retrieving revision 1.11
>diff -u -r1.11 link.h
>--- link.h    30 Apr 2004 21:23:17 -0000    1.11
>+++ link.h    5 May 2004 22:36:56 -0000
>@@ -117,9 +117,7 @@
>     struct tipc_media_addr media_addr;
>     unsigned long int timer_ref;
>     struct node *owner;
>-    struct link *next;
>-    struct link *prev;
>-    struct link *next_on_bearer;
>+    struct list_head link_list;
>     struct link *bcastlink;
>
>     /* Management and link supervision data: */
>Index: bcast.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.c,v
>retrieving revision 1.19
>diff -u -r1.19 bcast.c
>--- bcast.c    22 Apr 2004 07:06:59 -0000    1.19
>+++ bcast.c    5 May 2004 22:36:57 -0000
>@@ -309,9 +309,7 @@
>
>     blink = (struct bcastlink*)k_malloc(sizeof(struct bcastlink));
>     memcpy(&blink->link,this,sizeof(struct link));
>-    blink->link.next = NULL;
>-    blink->link.prev = NULL;
>-    blink->link.next_on_bearer = NULL;
>+    INIT_LIST_HEAD(&blink->link.link_list);
>     blink->link.out_queue_size = 0;
>     blink->link.first_out = NULL;
>     blink->link.last_out = NULL;
>@@ -573,12 +571,11 @@
>
> void free_mclist(struct list_head *list_head)
> {
>-    struct list_head *tmp;
>+    struct mc_identity *mci, *tmci;
>
>-    while(!list_empty(list_head)) {
>-        tmp = list_head->next;
>-        list_del(tmp);
>-        kfree(list_entry(tmp, struct mc_identity, list));
>+    list_for_each_entry_safe(mci, tmci, list_head, list) {
>+        list_del(&mci->list);
>+        kfree(mci);
>     }
>
> }
>Index: manager.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/manager.c,v
>retrieving revision 1.8
>diff -u -r1.8 manager.c
>--- manager.c    16 Apr 2004 18:15:04 -0000    1.8
>+++ manager.c    5 May 2004 22:36:57 -0000
>@@ -110,25 +110,24 @@
>
> int mng_enabled = 0;
>
>-struct subscr_data{
>+struct subscr_data {
>     void* usr_handle;
>     uint domain;
>     tipc_ref_t subscr_ref;
>     tipc_ref_t port_ref;
>     struct tipc_subscr sub;
>-    struct subscr_data* next;
>-    struct subscr_data* prev;
>+    struct list_head subd_list;
> };
>
>-struct manager{
>+struct manager {
>     tipc_ref_t user_ref;
>     tipc_ref_t port_ref;
>     tipc_ref_t subscr_ref;
>     tipc_ref_t conn_port_ref;
>     uint name_subscriptions;
>-    struct subscr_data* subscribers;
>+    struct list_head subscribers;
>     uint link_subscriptions;
>-    struct subscr_data* link_subscribers;
>+    struct list_head link_subscribers;
> };
>
> static struct manager mng;
>@@ -188,12 +187,7 @@
>     tipc_disconnect(sd->port_ref);
>     tipc_deleteport(sd->port_ref);
>     tipc_unsubscribe(sd->subscr_ref);
>-    if (sd->prev)
>-        sd->prev->next = sd->next;
>-    else
>-        mng.subscribers = sd->next;
>-    if (sd->next)
>-        sd->next->prev = sd->prev;
>+    list_del(&sd->subd_list);
>     kfree(sd);
>     mng.name_subscriptions--;
> }
>@@ -226,12 +220,13 @@
> void
> mng_link_event(tipc_net_addr_t addr,char* name, int up)
> {
>-    struct subscr_data* sub = mng.link_subscribers;
>+    struct subscr_data* sub;
>     struct tipc_cmd_result_msg rmsg;
>     struct tipc_msg_section sct[3];
>     struct tipc_link_info* l = &rmsg.result.links[0];
>-    if (!sub)
>+    if (list_empty(&mng.link_subscribers))
>         return;
>+    sub = list_entry(&mng.link_subscribers, struct subscr_data, subd_list);
>     mng_prepare_res_msg(TIPC_LINK_SUBSCRIBE,
>                         sub->usr_handle,
>                         TIPC_OK,
>@@ -243,13 +238,12 @@
>     l->up = up;
>     l->dest = htonl(addr);
>     rmsg.result_len = htonl(sct[1].size + sct[2].size);
>-    while (sub){
>+    list_for_each_entry(sub, &mng.link_subscribers, subd_list) {
>         if (in_scope(sub->domain,addr)){
>-            memcpy(rmsg.usr_handle,sub->usr_handle,
>+            memcpy(rmsg.usr_handle, sub->usr_handle,
>                    sizeof(rmsg.usr_handle));
>-            tipc_send(sub->port_ref,3u,sct);
>+            tipc_send(sub->port_ref, 3u, sct);
>         }
>-        sub = sub->next;
>     }
> }
>
>@@ -262,12 +256,7 @@
>     if (connected)
>         tipc_disconnect(sub->port_ref);
>     tipc_deleteport(sub->port_ref);
>-    if (sub->prev)
>-        sub->prev->next = sub->next;
>-    else
>-        mng.link_subscribers = sub->next;
>-    if (sub->next)
>-        sub->next->prev = sub->prev;
>+    list_del(&sub->subd_list);
>     kfree(sub);
>     mng.link_subscriptions--;
> }
>@@ -321,6 +310,7 @@
>         memcpy(&sub->usr_handle,msg->usr_handle,
>                sizeof(sub->usr_handle));
>         memcpy(&sub->sub,a,sizeof(*a));
>+        INIT_LIST_HEAD(&sub->subd_list);
>
>         /* Create a port for subscription */
>         tipc_createport(mng.user_ref,
>@@ -353,10 +343,7 @@
>             break;
>         }
>
>-        sub->next = mng.subscribers;
>-        mng.subscribers = sub;
>-        if (sub->next) sub->next->prev = sub;
>-        sub->prev = 0;
>+        list_add_tail(&sub->subd_list, &mng.subscribers);
>
>         /* Establish connection */
>         tipc_connect2port(sub->port_ref,orig);
>@@ -370,6 +357,7 @@
>         if (mng.link_subscriptions > 64)
>             break;
>         sub = (struct subscr_data*)k_malloc(sizeof(*sub));
>+        INIT_LIST_HEAD(&sub->subd_list);
>         tipc_createport(mng.user_ref,
>                         (void*)sub,
>                         TIPC_HIGH_IMPORTANCE,
>@@ -384,11 +372,7 @@
>         memcpy(&sub->usr_handle,msg->usr_handle,
>                sizeof(sub->usr_handle));
>         sub->domain = msg->argv.domain;
>-        sub->prev = 0;
>-        sub->next = mng.link_subscribers;
>-        if (sub->next)
>-            sub->next->prev = sub;
>-        mng.link_subscribers = sub;
>+        list_add_tail(&sub->subd_list, &mng.link_subscribers);
>         tipc_connect2port(sub->port_ref,orig);
>         rmsg.retval = TIPC_OK;
>         tipc_send(sub->port_ref, 2u, sct);
>@@ -540,6 +524,8 @@
> {
>     struct tipc_name_seq seq;
>     memset(&mng, 0, sizeof (mng));
>+    INIT_LIST_HEAD(&mng.subscribers);
>+    INIT_LIST_HEAD(&mng.link_subscribers);
>     tipc_attach(&mng.user_ref, 0, 0);
>     tipc_createport(mng.user_ref,
>                     0,
>
>
|
From: Mark H. <ma...@os...> - 2004-05-05 22:44:25
|
Jon,

Attached are diffs for more list fixing up. I also got rid of a few
compiler warnings in this.

Mark.

cvs diff -u media.h bcast.h link.c sendbcast.c media.c link.h bcast.c manager.c
Index: media.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/media.h,v
retrieving revision 1.5
diff -u -r1.5 media.h
--- media.h 11 Feb 2004 20:12:07 -0000 1.5
+++ media.h 5 May 2004 22:36:56 -0000
@@ -90,17 +90,19 @@
 struct bearer {
 struct tipc_bearer publ;
 struct media *media;
- struct link *cong_links;
+ struct list_head cong_links;
 uint max_packet; /* Adjusted */
 uint priority;
 uint bcast_interval;
 uint identity;
- struct link *links;
+ struct list_head links;
 uint continue_count;
 int active;
 char net_plane;
 };
 
+struct link;
+
 uint bearer_get_media(char* raw,uint size);
 void bearer_schedule(struct bearer *this, struct link *);
 void bearer_stop(void);
@@ -132,7 +134,7 @@
 static inline int bearer_congested(struct bearer *this, struct link * link)
 {
- if (!this->cong_links)
+ if (list_empty(&this->cong_links))
 return 0;
 return !bearer_resolve_congestion(this,link);
 }
Index: bcast.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.h,v
retrieving revision 1.11
diff -u -r1.11 bcast.h
--- bcast.h 5 May 2004 15:41:28 -0000 1.11
+++ bcast.h 5 May 2004 22:36:56 -0000
@@ -166,12 +166,14 @@
 void tipc_bcast_stop(void);
 
 void bnode_outqueue_release(int ackno);
+void blink_outqueue_release(struct link *ln, int ackno);
 int in_list_node(struct list_head *list, tipc_net_addr_t destnode);
 int in_list(struct list_head *list, uint destport, tipc_net_addr_t destnode);
 
 struct bcastlinkset* blink_select(int selector);
 
+
 extern struct broadcast_outqueue bcast_outqueue;
 extern unsigned long int send_timer_ref;
 extern struct bcastlinkset *linkset[MAX_BEARERS];
Index: link.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.c,v
retrieving revision 1.20
diff -u -r1.20 link.c
--- link.c 30 Apr 2004 21:23:17 -0000 1.20
+++ link.c 5 May 2004 22:36:56 -0000
@@ -210,7 +210,7 @@
 */
 
 static void link_state_event(struct link *this, uint event);
-static void link_dump(struct link *);
+void link_dump(struct link *);
 
 /*
@@ -554,17 +554,17 @@
 struct tipc_msg *msg;
 this = (struct link*) k_malloc(sizeof(*this));
 memset(this,0,sizeof(*this));
+ INIT_LIST_HEAD(&this->waiting_ports);
+ INIT_LIST_HEAD(&this->link_list);
 this->addr = peer;
 this->priority = b->priority;
 link_set_queue_limits(this, b->media->window);
 this->bearer = b;
- this->next_on_bearer = this->bearer->links;
- this->bearer->links = this;
+ list_add_tail(&this->link_list, &b->links);
 this->checkpoint = 1;
 this->state = RESET_UNKNOWN;
 this->next_out_no = 1;
 this->bcastlink = NULL;
- INIT_LIST_HEAD(&this->waiting_ports);
 if (LINK_LOG_BUF_SIZE) {
 char *buf = k_malloc(LINK_LOG_BUF_SIZE);
 printbuf_init(&this->print_buf, buf, LINK_LOG_BUF_SIZE);
@@ -621,6 +621,7 @@
 }
 
 #define DBG_OUTPUT 0
+#ifdef INTER_CLUSTER_COMM
 static void
 link_setup_timer(struct link *this)
 {
@@ -634,6 +635,7 @@
 link_delete(this);
 /* cfg_init_link_setup(addr); */
 }
+#endif
 
 void
 link_start(struct link *this)
@@ -1126,7 +1128,7 @@
 int res = msg_data_sz(msg);
 if (likely(!link_congested(this))) {
 if (likely(msg_size(msg) <= link_max_pkt(this))) {
- if (likely(!this->bearer->cong_links)) {
+ if (likely(list_empty(&this->bearer->cong_links))) {
 uint ack = mod(this->next_in_no - 1);
 uint seqno = mod(this->next_out_no++);
 msg_set_word(msg, 2, ((ack << 16) | seqno));
@@ -1220,7 +1222,8 @@
 if (unlikely(res < 0))
 goto exit;
 
- if (link_congested(this) || this->bearer->cong_links){
+ if (link_congested(this) ||
+ !list_empty(&this->bearer->cong_links)) {
 res = link_schedule_port(this,sender->publ.ref,res);
 goto exit;
 }
@@ -2421,7 +2424,7 @@
 }
 
 #define DBG_OUTPUT 0
-static void
+void
 link_dump(struct link *this)
 {
 tipc_dump(&this->print_buf,"\n\nDumping link %s:\n",this->name);
Index: sendbcast.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/sendbcast.c,v
retrieving revision 1.13
diff -u -r1.13 sendbcast.c
--- sendbcast.c 22 Apr 2004 07:06:59 -0000 1.13
+++ sendbcast.c 5 May 2004 22:36:56 -0000
@@ -236,7 +236,10 @@
 
 count = count_mc_member(&mc_head);
 
- #define REPLICA_NODES 0
+#ifdef REPLICA_NODES
+#undef REPLICA_NODES
+#endif
+#define REPLICA_NODES 0
 if (count <= REPLICA_NODES){
 res = tipc_forward_buf2nameseq(ref,(struct tipc_name_seq*)seq,
 b,&orig,res,17,&mc_head);
@@ -485,7 +488,7 @@
 int res = TIPC_OK;
 if (likely(!link_congested(this))) {
 if (likely(msg_size(msg) <= link_max_pkt(this))) {
- if (likely(!this->bearer->cong_links)) {
+ if (likely(list_empty(&this->bearer->cong_links))) {
 msg_set_ack(msg,0);
 blink_add_to_outqueue(this, buf);
 if (this->first_out)
Index: media.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/media.c,v
retrieving revision 1.12
diff -u -r1.12 media.c
--- media.c 16 Apr 2004 18:15:04 -0000 1.12
+++ media.c 5 May 2004 22:36:56 -0000
@@ -113,13 +113,7 @@
 */
 void bearer_detach_link(struct bearer* this,struct link* l)
 {
- struct link **crs = &this->links;
- for (; (*crs); crs = &((*crs)->next)) {
- if ((*crs) == l) {
- *crs = l->next;
- break;
- }
- }
+ list_del_init(&l->link_list);
 }
 
 /*
@@ -133,24 +127,19 @@
 bearer_push(struct bearer *this)
 {
 uint res = TIPC_OK;
+ struct link *ln, *tln;
+
 if (this->publ.blocked)
 return 0;
- while (this->cong_links && (res != PUSH_FAILED)) {
- struct link *link = this->cong_links;
- res = link_push_packet(this->cong_links);
- if (res == TIPC_OK) {
- this->cong_links = this->cong_links->next;
- continue;
- } else if (res == PUSH_FINISHED) {
- this->cong_links =
- (link != link->next) ? link->next : 0;
- link->next->prev = link->prev;
- link->prev->next = link->next;
- link->next = link->prev = 0;
- }
+ list_for_each_entry_safe(ln, tln, &this->cong_links, link_list) {
+ res = link_push_packet(ln);
+ if (res == PUSH_FAILED)
+ break;
+ if (res == PUSH_FINISHED)
+ list_move_tail(&ln->link_list, &this->links);
 }
- return (this->cong_links == 0);
+ return list_empty(&this->cong_links);
 }
 
 static int
@@ -175,7 +164,7 @@
 spin_lock_bh(&this->publ.lock);
 this->continue_count++;
 assert(this->publ.blocked);
- if (this->cong_links)
+ if (!list_empty(&this->cong_links))
 k_signal((Handler) bearer_lock_push, (void *) this);
 this->publ.blocked = 0;
 spin_unlock_bh(&this->publ.lock);
@@ -192,22 +181,7 @@
 static void
 bearer_schedule_unlocked(struct bearer *this, struct link * link)
 {
- if (link->next)
- return;
- assert(link->prev == 0);
- assert(this->cong_links != link);
- if (!this->cong_links) {
- this->cong_links = link;
- link->next = link;
- link->prev = link;
- } else { /* Link in last in list */
-
- struct link *last = this->cong_links->prev;
- link->next = this->cong_links;
- link->prev = last;
- last->next = link;
- this->cong_links->prev = link;
- }
+ list_move_tail(&link->link_list, &this->cong_links);
 }
 
 /*
@@ -221,8 +195,6 @@
 void
 bearer_schedule(struct bearer *this, struct link * link)
 {
- if (link->next)
- return;
 spin_lock_bh(&this->publ.lock);
 bearer_schedule_unlocked(this,link);
 spin_unlock_bh(&this->publ.lock);
@@ -238,7 +210,7 @@
 bearer_resolve_congestion(struct bearer *this, struct link * link)
 {
 int res = 1;
- if (!this->cong_links)
+ if (list_empty(&this->cong_links))
 return 1;
 spin_lock_bh(&this->publ.lock);
 if (!bearer_push(this)) {
@@ -430,6 +402,8 @@
 this->media = media;
 this->net_plane = bearer_id + 'A';
 this->active = 1;
+ INIT_LIST_HEAD(&this->cong_links);
+ INIT_LIST_HEAD(&this->links);
 if (media->broadcast) {
 cfg_init_link_req(bcast_domain,2,this,
 &media->bcast_addr);
@@ -454,6 +428,8 @@
 {
 uint i = 0;
 struct bearer *this = 0;
+ struct link *ln, *tln;
+
 if (!bearer_name_valid(name))
 return TIPC_FAILURE;
 write_lock_bh(&net_lock);
@@ -475,13 +451,11 @@
 write_lock_bh(&net_lock);
 spin_lock_bh(&this->publ.lock);
 }
- while (this->links) {
- struct link *next = this->links->next;
- struct node *owner = this->links->owner;
+ list_for_each_entry_safe(ln, tln, &this->links, link_list) {
+ struct node *owner = ln->owner;
 spin_lock_bh(&owner->lock);
- link_delete(this->links);
+ link_delete(ln);
 spin_unlock_bh(&owner->lock);
- this->links = next;
 }
 spin_unlock_bh(&this->publ.lock);
 memset(this, 0, sizeof (struct bearer));
Index: link.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.h,v
retrieving revision 1.11
diff -u -r1.11 link.h
--- link.h 30 Apr 2004 21:23:17 -0000 1.11
+++ link.h 5 May 2004 22:36:56 -0000
@@ -117,9 +117,7 @@
 struct tipc_media_addr media_addr;
 unsigned long int timer_ref;
 struct node *owner;
- struct link *next;
- struct link *prev;
- struct link *next_on_bearer;
+ struct list_head link_list;
 struct link *bcastlink;
 
 /* Management and link supervision data: */
Index: bcast.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.c,v
retrieving revision 1.19
diff -u -r1.19 bcast.c
--- bcast.c 22 Apr 2004 07:06:59 -0000 1.19
+++ bcast.c 5 May 2004 22:36:57 -0000
@@ -309,9 +309,7 @@
 
 blink = (struct bcastlink*)k_malloc(sizeof(struct bcastlink));
 memcpy(&blink->link,this,sizeof(struct link));
- blink->link.next = NULL;
- blink->link.prev = NULL;
- blink->link.next_on_bearer = NULL;
+ INIT_LIST_HEAD(&blink->link.link_list);
 blink->link.out_queue_size = 0;
 blink->link.first_out = NULL;
 blink->link.last_out = NULL;
@@ -573,12 +571,11 @@
 void
 free_mclist(struct list_head *list_head)
 {
- struct list_head *tmp;
+ struct mc_identity *mci, *tmci;
 
- while(!list_empty(list_head)) {
- tmp = list_head->next;
- list_del(tmp);
- kfree(list_entry(tmp, struct mc_identity, list));
+ list_for_each_entry_safe(mci, tmci, list_head, list) {
+ list_del(&mci->list);
+ kfree(mci);
 }
 }
Index: manager.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/manager.c,v
retrieving revision 1.8
diff -u -r1.8 manager.c
--- manager.c 16 Apr 2004 18:15:04 -0000 1.8
+++ manager.c 5 May 2004 22:36:57 -0000
@@ -110,25 +110,24 @@
 
 int mng_enabled = 0;
 
-struct subscr_data{
+struct subscr_data {
 void* usr_handle;
 uint domain;
 tipc_ref_t subscr_ref;
 tipc_ref_t port_ref;
 struct tipc_subscr sub;
- struct subscr_data* next;
- struct subscr_data* prev;
+ struct list_head subd_list;
 };
 
-struct manager{
+struct manager {
 tipc_ref_t user_ref;
 tipc_ref_t port_ref;
 tipc_ref_t subscr_ref;
 tipc_ref_t conn_port_ref;
 uint name_subscriptions;
- struct subscr_data* subscribers;
+ struct list_head subscribers;
 uint link_subscriptions;
- struct subscr_data* link_subscribers;
+ struct list_head link_subscribers;
 };
 
 static struct manager mng;
@@ -188,12 +187,7 @@
 tipc_disconnect(sd->port_ref);
 tipc_deleteport(sd->port_ref);
 tipc_unsubscribe(sd->subscr_ref);
- if (sd->prev)
- sd->prev->next = sd->next;
- else
- mng.subscribers = sd->next;
- if (sd->next)
- sd->next->prev = sd->prev;
+ list_del(&sd->subd_list);
 kfree(sd);
 mng.name_subscriptions--;
 }
@@ -226,12 +220,13 @@
 void
 mng_link_event(tipc_net_addr_t addr,char* name, int up)
 {
- struct subscr_data* sub = mng.link_subscribers;
+ struct subscr_data* sub;
 struct tipc_cmd_result_msg rmsg;
 struct tipc_msg_section sct[3];
 struct tipc_link_info* l = &rmsg.result.links[0];
- if (!sub)
+ if (list_empty(&mng.link_subscribers))
 return;
+ sub = list_entry(&mng.link_subscribers, struct subscr_data, subd_list);
 mng_prepare_res_msg(TIPC_LINK_SUBSCRIBE,
 sub->usr_handle,
 TIPC_OK,
@@ -243,13 +238,12 @@
 l->up = up;
 l->dest = htonl(addr);
 rmsg.result_len = htonl(sct[1].size + sct[2].size);
- while (sub){
+ list_for_each_entry(sub, &mng.link_subscribers, subd_list) {
 if (in_scope(sub->domain,addr)){
- memcpy(rmsg.usr_handle,sub->usr_handle,
+ memcpy(rmsg.usr_handle, sub->usr_handle,
 sizeof(rmsg.usr_handle));
- tipc_send(sub->port_ref,3u,sct);
+ tipc_send(sub->port_ref, 3u, sct);
 }
- sub = sub->next;
 }
 }
@@ -262,12 +256,7 @@
 if (connected)
 tipc_disconnect(sub->port_ref);
 tipc_deleteport(sub->port_ref);
- if (sub->prev)
- sub->prev->next = sub->next;
- else
- mng.link_subscribers = sub->next;
- if (sub->next)
- sub->next->prev = sub->prev;
+ list_del(&sub->subd_list);
 kfree(sub);
 mng.link_subscriptions--;
 }
@@ -321,6 +310,7 @@
 memcpy(&sub->usr_handle,msg->usr_handle,
 sizeof(sub->usr_handle));
 memcpy(&sub->sub,a,sizeof(*a));
+ INIT_LIST_HEAD(&sub->subd_list);
 
 /* Create a port for subscription */
 tipc_createport(mng.user_ref,
@@ -353,10 +343,7 @@
 break;
 }
 
- sub->next = mng.subscribers;
- mng.subscribers = sub;
- if (sub->next) sub->next->prev = sub;
- sub->prev = 0;
+ list_add_tail(&sub->subd_list, &mng.subscribers);
 
 /* Establish connection */
 tipc_connect2port(sub->port_ref,orig);
@@ -370,6 +357,7 @@
 if (mng.link_subscriptions > 64)
 break;
 sub = (struct subscr_data*)k_malloc(sizeof(*sub));
+ INIT_LIST_HEAD(&sub->subd_list);
 tipc_createport(mng.user_ref,
 (void*)sub,
 TIPC_HIGH_IMPORTANCE,
@@ -384,11 +372,7 @@
 memcpy(&sub->usr_handle,msg->usr_handle,
 sizeof(sub->usr_handle));
 sub->domain = msg->argv.domain;
- sub->prev = 0;
- sub->next = mng.link_subscribers;
- if (sub->next)
- sub->next->prev = sub;
- mng.link_subscribers = sub;
+ list_add_tail(&sub->subd_list, &mng.link_subscribers);
 tipc_connect2port(sub->port_ref,orig);
 rmsg.retval = TIPC_OK;
 tipc_send(sub->port_ref, 2u, sct);
@@ -540,6 +524,8 @@
 {
 struct tipc_name_seq seq;
 memset(&mng, 0, sizeof (mng));
+ INIT_LIST_HEAD(&mng.subscribers);
+ INIT_LIST_HEAD(&mng.link_subscribers);
 tipc_attach(&mng.user_ref, 0, 0);
 tipc_createport(mng.user_ref,
 0,

-- 
Mark Haverkamp <ma...@os...>
From: Mark H. <ma...@os...> - 2004-05-05 16:03:22
On Tue, 2004-05-04 at 16:52, Jon Maloy wrote:
> Hi Mark,
> Sorry for my late answer, I ended up with some different work than
> planned yesterday and today.
> Your changes look correct to me, and you are right: the loops where
> I release subscription and ports in reg.c were very wrong. Good
> that you detected that one.
>

OK, I checked in the files.

Mark.

-- 
Mark Haverkamp <ma...@os...>
From: Jon M. <jon...@er...> - 2004-05-04 23:52:37
Hi Mark,

Sorry for my late answer, I ended up with some different work than
planned yesterday and today.
Your changes look correct to me, and you are right: the loops where
I release subscription and ports in reg.c were very wrong. Good
that you detected that one.

Thanks
/jon

Mark Haverkamp wrote:

>Jon,
>Since I didn't hear from you about my last patch yet, I have added to
>it. Take a look at tipc_detach in particular. The original code didn't
>make sense to me.
>
>cvs diff -u bcast.h name_table.c name_table.h name_subscr.c port.c name_subscr.h port.h name_distr.c reg.c
>Index: bcast.h
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.h,v
>retrieving revision 1.10
>diff -u -r1.10 bcast.h
>--- bcast.h 16 Apr 2004 04:03:46 -0000 1.10
>+++ bcast.h 4 May 2004 22:07:46 -0000
>@@ -165,6 +165,10 @@
> void tipc_bcast_start(void);
> void tipc_bcast_stop(void);
> 
>+void bnode_outqueue_release(int ackno);
>+int in_list_node(struct list_head *list, tipc_net_addr_t destnode);
>+int in_list(struct list_head *list, uint destport, tipc_net_addr_t destnode);
>+
> struct bcastlinkset* blink_select(int selector);
> 
> 
>Index: name_table.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
>retrieving revision 1.14
>diff -u -r1.14 name_table.c
>--- name_table.c 28 Apr 2004 21:55:26 -0000 1.14
>+++ name_table.c 4 May 2004 22:07:46 -0000
>@@ -118,8 +118,8 @@
> #include "bcast.h"
> 
> 
>-static void nametbl_dump(void);
>-static void nametbl_print(struct print_buf *buf, const char *str);
>+void nametbl_dump(void);
>+void nametbl_print(struct print_buf *buf, const char *str);
> rwlock_t nametbl_lock = RW_LOCK_UNLOCKED;
> 
> /*
>@@ -196,14 +196,13 @@
> struct sub_seq* sseqs;
> uint alloc;
> uint first_free;
>- struct name_seq *prev;
>- struct name_seq *next;
>- struct name_subscr *subscriptions;
>+ struct hlist_node ns_list;
>+ struct list_head subscriptions;
> spinlock_t lock;
> };
> 
> struct name_seq *
>-nameseq_create(uint type, struct name_seq *next)
>+nameseq_create(uint type, struct hlist_head *seq_head)
> {
> struct name_seq *this =
> (struct name_seq *)k_malloc(sizeof(*this));
>@@ -211,12 +210,12 @@
> this->lock = SPIN_LOCK_UNLOCKED;
> this->type = type;
> this->sseqs = subseq_alloc(1);
>- dbg("nameseq_create() this = %x type %u, next =%x, ssseqs %x, ff: %u\n",
>- this,type, next,this->sseqs,this->first_free);
>+ dbg("nameseq_create() this = %x type %u, ssseqs %x, ff: %u\n",
>+ this,type, this->sseqs,this->first_free);
> this->alloc = 1;
>- this->next = next;
>- if (next)
>- next->prev = this;
>+ INIT_HLIST_NODE(&this->ns_list);
>+ INIT_LIST_HEAD(&this->subscriptions);
>+ hlist_add_head(&this->ns_list, seq_head);
> return this;
> }
> 
>@@ -288,7 +287,6 @@
> uint scope,
> uint key)
> {
>- struct name_subscr *s = this->subscriptions;
> struct publication *publ;
> struct sub_seq *sseq;
> int created_subseq = 0;
>@@ -365,18 +363,21 @@
> }
> }
> 
>- if (!created_subseq)
>- return publ;
>+ /*
>+ * Any subscriptions waiting for notification?
>+ */
>+ if (created_subseq) {
>+ struct name_subscr *s, *st;
>+ list_for_each_entry_safe(s, st,
>+ &this->subscriptions, nsub_list) {
> 
>- /* Any subscriptions waiting ? */
>- while (s) {
>- namesub_report_overlap(s,
>+ namesub_report_overlap(s,
> publ->lower,
> publ->upper,
> TIPC_PUBLISHED,
> publ->ref,
> publ->node);
>- s = s->next;
>+ }
> }
> return publ;
> }
>@@ -388,11 +389,11 @@
> tipc_net_addr_t node,
> uint key)
> {
>- struct name_subscr *s = this->subscriptions;
> struct publication *publ;
> struct publication *prev = 0;
> struct sub_seq *sseq = nameseq_find_subseq(this,inst);
> struct sub_seq *free;
>+ struct name_subscr *s, *st;
> assert(this);
> if (!sseq) {
> int i;
>@@ -473,20 +474,22 @@
> if (sseq->node_list || sseq->cluster_list || sseq->zone_list)
> return publ;
> 
>- /* No more publications,contract subseq list: */
>-
>+ /*
>+ * No more publications,contract subseq list:
>+ */
> free = &this->sseqs[this->first_free--];
> memmove(sseq,sseq+1,(free-(sseq+1))*sizeof(*sseq));
> 
>- /* Any subscriptions waiting ? */
>- while (s) {
>+ /*
>+ * Any subscriptions waiting ?
>+ */
>+ list_for_each_entry_safe(s, st, &this->subscriptions, nsub_list) {
> namesub_report_overlap(s,
> publ->lower,
> publ->upper,
> TIPC_WITHDRAWN,
> publ->ref,
> publ->node);
>- s = s->next;
> }
> return publ;
> }
>@@ -529,8 +532,7 @@
> {
> struct sub_seq *sseq = this->sseqs;
> 
>- s->next = this->subscriptions;
>- this->subscriptions = s;
>+ list_add(&s->nsub_list, &this->subscriptions);
> 
> if (!sseq)
> return;
>@@ -545,10 +547,6 @@
> }
> 
> 
>-
>-
>-
>-
> /*
> * struct name_table: translation table containing all existing
> * port name publications. Consists of 'name_seq' objects
>@@ -560,7 +558,7 @@
> #define TABLE_SIZE (1<<14)
> 
> struct name_table {
>- struct name_seq *types[TABLE_SIZE];
>+ struct hlist_head types[TABLE_SIZE];
> uint key;
> uint local_publ_count;
> };
>@@ -576,15 +574,22 @@
> 
> static struct name_seq* nametbl_find_seq(uint type)
> {
>- struct name_seq* seq;
>- dbg("find_seq %u,(%u,0x%x) table = %x, hash[type] = %u\n",
>- type,ntohl(type),type,table,hash(type));
>- seq = table->types[hash(type)];
>- dbg("found %x\n",seq);
>- while (seq && (seq->type != type)) {
>- seq = seq->next;
>+ struct hlist_head *seq_head;
>+ struct hlist_node *seq_node;
>+ struct name_seq *ns;
>+
>+ dbg("find_seq %u,(%u,0x%x) table = %p, hash[type] = %u\n",
>+ type, ntohl(type), type, table, hash(type));
>+
>+ seq_head = &table->types[hash(type)];
>+ hlist_for_each_entry(ns, seq_node, seq_head, ns_list) {
>+ if (ns->type == type) {
>+ dbg("found %x\n", ns);
>+ return ns;
>+ }
> }
>- return seq;
>+
>+ return 0;
> };
> 
> 
>@@ -606,8 +611,7 @@
> }
> dbg("Publishing <%u,%u,%u> from %x\n", type, lower, upper, node);
> if (!seq) {
>- struct name_seq *head = table->types[hash(type)];
>- seq = table->types[hash(type)] = nameseq_create(type, head);
>+ seq = nameseq_create(type, &table->types[hash(type)]);
> dbg("nametbl_insert_publ: created %x\n",seq);
> }
> assert(seq->type == type);
>@@ -626,13 +630,9 @@
> return 0;
> dbg("Withdrawing <%u,%u> from %x\n", type, lower, node);
> publ = nameseq_remove_publ(seq, lower, node, key);
>- if (!seq->first_free && !seq->subscriptions) {
>- if (seq->prev)
>- seq->prev->next = seq->next;
>- else
>- table->types[hash(seq->type)] = seq->next;
>- if (seq->next)
>- seq->next->prev = seq->prev;
>+
>+ if (!seq->first_free && list_empty(&seq->subscriptions)) {
>+ hlist_del_init(&seq->ns_list);
> kfree(seq->sseqs);
> kfree(seq);
> }
>@@ -778,7 +778,6 @@
> spin_unlock_bh(&seq->lock);
> goto not_found;
> }
>- found:
> spin_unlock_bh(&seq->lock);
> read_unlock_bh(&nametbl_lock);
> return true;
>@@ -797,7 +796,6 @@
> struct name_seq* seq;
> int i = 0;
> struct publication* publ;
>- struct publication *publhead;
> struct sub_seq *sseq;
> int low_seq, high_seq;
> uint destport;
>@@ -921,8 +919,7 @@
> struct name_seq *seq;
> seq = nametbl_find_seq(type);
> if (!seq) {
>- struct name_seq *head = table->types[hash(type)];
>- seq = table->types[hash(type)] = nameseq_create(type, head);
>+ seq = nameseq_create(type, &table->types[hash(type)]);
> }
> spin_lock_bh(&seq->lock);
> dbg("nametbl_subscribe:found %x for <%u,%u,%u>\n",
>@@ -936,22 +933,12 @@
> nametbl_unsubscribe(struct name_subscr *s)
> {
> uint type = s->publ.s.seq.type;
>- struct name_subscr *prev = 0;
> struct name_subscr *crs;
> struct name_seq *seq = nametbl_find_seq(type);
> assert(seq);
>+
> spin_lock_bh(&seq->lock);
>- crs = seq->subscriptions;
>- while ((crs != 0) && (crs != s)) {
>- prev = crs;
>- crs = crs->next;
>- }
>- if (crs){
>- if (!prev)
>- seq->subscriptions = crs->next;
>- else
>- prev->next = crs->next;
>- }
>+ list_del_init(&s->nsub_list);
> spin_unlock_bh(&seq->lock);
> }
>@@ -1026,13 +1013,15 @@
> static void
> nametbl_list(struct print_buf *buf, uint type, uint depth)
> {
>- struct name_seq *seq = 0;
>+ struct hlist_head *seq_head;
>+ struct hlist_node *seq_node;
>+ struct name_seq *seq;
>+
> uint i;
> for (i = 0; i < TABLE_SIZE; i++) {
>- seq = table->types[i];
>- while (seq) {
>- nameseq_list(seq,buf,type,depth,i);
>- seq = seq->next;
>+ seq_head = &table->types[i];
>+ hlist_for_each_entry(seq, seq_node, seq_head, ns_list) {
>+ nameseq_list(seq, buf, type, depth, i);
> }
> }
> }
>@@ -1081,11 +1070,15 @@
> nametbl_stop(void)
> {
> uint i;
>+ struct hlist_head *seq_head;
>+ struct hlist_node *seq_node, *tmp;
>+ struct name_seq *seq;
>+
> write_lock_bh(&nametbl_lock);
> for (i = 0; i < TABLE_SIZE; i++) {
>- struct name_seq *seq = table->types[i];
>- while (seq) {
>- struct name_seq *next_seq = seq->next;
>+ seq_head = &table->types[i];
>+ hlist_for_each_entry_safe(seq, seq_node,
>+ tmp, seq_head, ns_list) {
> struct sub_seq *sseq = seq->sseqs;
> 
> for(;sseq != &seq->sseqs[seq->first_free];sseq++){
>@@ -1098,9 +1091,9 @@
> }
> while(publ != sseq->zone_list);
> }
>- seq = next_seq;
> }
> }
> kfree(table);
>+ table = 0;
> write_unlock_bh(&nametbl_lock);
> }
>Index: name_table.h
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.h,v
>retrieving revision 1.2
>diff -u -r1.2 name_table.h
>--- name_table.h 3 Feb 2004 23:27:35 -0000 1.2
>+++ name_table.h 4 May 2004 22:07:46 -0000
>@@ -81,10 +81,7 @@
> tipc_net_addr_t node;
> uint key;
> uint scope;
>- struct {
>- struct publication *prev;
>- struct publication *next;
>- }local_list;
>+ struct list_head local_list;
> struct {
> struct publication *next;
> }port_list;
>Index: name_subscr.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_subscr.c,v
>retrieving revision 1.6
>diff -u -r1.6 name_subscr.c
>--- name_subscr.c 11 Mar 2004 01:27:14 -0000 1.6
>+++ name_subscr.c 4 May 2004 22:07:46 -0000
>@@ -238,6 +238,8 @@
> }
> this = (struct name_subscr*)k_malloc(sizeof(*this));
> memset(this,0,sizeof(*this));
>+ INIT_LIST_HEAD(&this->nsub_list);
>+ INIT_LIST_HEAD(&this->reg.reg_list);
> s->ref = ref_lock_acquire(this,&this->lock);
> if (!s->ref){
> warn("Obtained no ref for subsription\n");
>Index: port.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v
>retrieving revision 1.16
>diff -u -r1.16 port.c
>--- port.c 30 Apr 2004 21:23:17 -0000 1.16
>+++ port.c 4 May 2004 22:07:46 -0000
>@@ -935,6 +935,7 @@
> p->named_msg_cb = named_msg_cb;
> p->conn_msg_cb = conn_msg_cb;
> p->continue_event_cb = continue_event_cb;
>+ INIT_LIST_HEAD(&p->uport_list);
> reg_add_port(user_ref,p);
> *portref = this->publ.ref;
> dbg(" tipc_createport: %x with ref %u\n",this,this->publ.ref);
>Index: name_subscr.h
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_subscr.h,v
>retrieving revision 1.3
>diff -u -r1.3 name_subscr.h
>--- name_subscr.h 16 Feb 2004 23:00:02 -0000 1.3
>+++ name_subscr.h 4 May 2004 22:07:46 -0000
>@@ -89,14 +89,13 @@
> 
> struct name_subscr {
> struct tipc_name_subscr publ;
>- struct name_subscr *next;
>+ struct list_head nsub_list;
> void (*event_handler) (struct tipc_name_event *);
> long unsigned int timer_ref;
> int expired;
> spinlock_t *lock;
> struct {
>- struct name_subscr *next;
>- struct name_subscr *prev;
>+ struct list_head reg_list;
> tipc_ref_t ref;
> }reg;
> };
>Index: port.h
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.h,v
>retrieving revision 1.7
>diff -u -r1.7 port.h
>--- port.h 30 Apr 2004 21:23:17 -0000 1.7
>+++ port.h 4 May 2004 22:07:46 -0000
>@@ -135,8 +135,7 @@
> tipc_named_msg_event named_msg_cb;
> tipc_conn_msg_event conn_msg_cb;
> tipc_continue_event continue_event_cb;
>- struct user_port* next;
>- struct user_port* prev;
>+ struct list_head uport_list;
> };
> 
> struct port {
>Index: name_distr.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_distr.c,v
>retrieving revision 1.6
>diff -u -r1.6 name_distr.c
>--- name_distr.c 23 Apr 2004 15:12:06 -0000 1.6
>+++ name_distr.c 4 May 2004 22:07:46 -0000
>@@ -77,7 +77,7 @@
> 
> #define ITEM_SIZE sizeof(struct distr_item)
> 
>-static struct publication *publ_root = 0;
>+static LIST_HEAD(publ_root);
> static uint publ_cnt = 0;
> 
> struct distr_item {
>@@ -109,30 +109,33 @@
> 
> void named_node_up(tipc_net_addr_t node)
> {
>- uint rest;
> struct publication* publ;
>+ struct distr_item* item = 0;
>+ struct sk_buff* buf = 0;
>+ uint left = 0;
>+ uint rest;
>+ uint max_item_buf;
>+
> assert(in_own_cluster(node));
> read_lock_bh(&nametbl_lock);
>- publ = publ_root;
>+ max_item_buf = TIPC_MAX_MSG_SIZE / ITEM_SIZE;
>+ max_item_buf *= ITEM_SIZE;
> rest = publ_cnt*ITEM_SIZE;
>- if (publ_cnt) {
>- while (rest > 0){
>- struct sk_buff* buf;
>- struct distr_item* item;
>- uint size = rest <= TIPC_MAX_MSG_SIZE?rest:TIPC_MAX_MSG_SIZE;
>- uint left = size;
>- buf = named_prepare_buf(PUBLICATION,size,node);
>+
>+ list_for_each_entry(publ, &publ_root, local_list) {
>+ if (!buf) {
>+ left = (rest <= max_item_buf) ?rest :max_item_buf;
>+ rest -= left;
>+ buf = named_prepare_buf(PUBLICATION, left, node);
> item = (struct distr_item*)msg_data(buf_msg(buf));
>- do {
>- publ_to_item(item,publ);
>- publ = publ->local_list.next;
>- item++;
>- left -= ITEM_SIZE;
>- }
>- while(left > 0);
>- msg_set_link_selector(buf_msg(buf),node);
>+ }
>+ publ_to_item(item,publ);
>+ item++;
>+ left -= ITEM_SIZE;
>+ if (!left) {
>+ msg_set_link_selector(buf_msg(buf),node);
> link_send(buf,node,node);
>- rest-=size;
>+ buf = 0;
> }
> }
> read_unlock_bh(&nametbl_lock);
>@@ -142,11 +145,7 @@
> {
> struct sk_buff* buf = named_prepare_buf(PUBLICATION,ITEM_SIZE,0);
> struct distr_item *item = (struct distr_item*) msg_data(buf_msg(buf));
>- if (publ_root)
>- publ_root->local_list.prev = publ;
>- publ->local_list.next = publ_root;
>- publ->local_list.prev = 0;
>- publ_root = publ;
>+ list_add(&publ->local_list, &publ_root);
> publ_cnt++;
> publ_to_item(item,publ);
> cluster_broadcast(buf);
>@@ -156,12 +155,7 @@
> {
> struct sk_buff* buf = named_prepare_buf(WITHDRAWAL,ITEM_SIZE,0);
> struct distr_item *item = (struct distr_item*) msg_data(buf_msg(buf));
>- if (publ_root == publ)
>- publ_root = publ->local_list.next;
>- if (publ->local_list.prev)
>- publ->local_list.prev->local_list.next = publ->local_list.next;
>- if (publ->local_list.next)
>- publ->local_list.next->local_list.prev = publ->local_list.prev;
>+ list_del_init(&publ->local_list);
> publ_cnt--;
> publ_to_item(item,publ);
> cluster_broadcast(buf);
>Index: reg.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/reg.c,v
>retrieving revision 1.6
>diff -u -r1.6 reg.c
>--- reg.c 16 Feb 2004 23:00:02 -0000 1.6
>+++ reg.c 4 May 2004 22:07:46 -0000
>@@ -182,8 +182,8 @@
> void *usr_handle;
> uint next;
> tipc_started_event callback;
>- struct user_port *ports;
>- struct name_subscr *subs;
>+ struct list_head ports;
>+ struct list_head subs;
> };
> 
> #define MAX_USERID 64
>@@ -204,11 +204,7 @@
> spin_lock_bh(&reg_lock);
> user = &users[ref];
> p->reg_ref = ref;
>- p->prev = 0;
>- p->next = user->ports;
>- if (p->next)
>- p->next->prev = p;
>- user->ports = p;
>+ list_add(&p->uport_list, &user->ports);
> spin_unlock_bh(&reg_lock);
> return TIPC_OK;
> }
>@@ -224,12 +220,7 @@
> return TIPC_OK;
> spin_lock_bh(&reg_lock);
> user = &users[p->reg_ref];
>- if (p->prev)
>- p->prev->next = p->next;
>- else
>- user->ports = p->next;
>- if (p->next)
>- p->next->prev = p->prev;
>+ list_del_init(&p->uport_list);
> spin_unlock_bh(&reg_lock);
> return TIPC_OK;
> }
>@@ -247,18 +238,13 @@
> spin_lock_bh(&reg_lock);
> user = &users[s->reg.ref];
> s->reg.ref = ref;
>- s->reg.prev = 0;
>- s->reg.next = user->subs;
>- if (s->reg.next)
>- s->reg.next->reg.prev = s;
>- user->subs = s;
>+ list_add(&s->reg.reg_list, &user->subs);
> spin_unlock_bh(&reg_lock);
> return TIPC_OK;
> }
> 
> int reg_remove_subscr(struct name_subscr* s)
> {
>- struct tipc_user *user;
> if (!tipc_started || !users)
> return TIPC_FAILURE;
> if (s->reg.ref > MAX_USERID)
>@@ -266,27 +252,24 @@
> if (s->reg.ref == 0)
> return TIPC_OK;
> spin_lock_bh(&reg_lock);
>- user = &users[s->reg.ref];
>- if (s->reg.prev)
>- s->reg.prev->reg.next = s->reg.next;
>- else
>- user->subs = s->reg.next;
>- if (s->next)
>- s->reg.next->reg.prev = s->reg.prev;
>+ list_del_init(&s->reg.reg_list);
> spin_unlock_bh(&reg_lock);
> return TIPC_OK;
> }
> 
> static void reg_init(void)
> {
>- uint i = 1;
>+ uint i;
>+
> if (users)
> return;
> spin_lock_bh(&reg_lock);
> users = (struct tipc_user*)k_malloc(USER_LIST_SIZE);
>- memset(users, 0,USER_LIST_SIZE);
>+ memset(users, 0, USER_LIST_SIZE);
> for (i = 1; i <= MAX_USERID; i++) {
> users[i].next = i - 1;
>+ INIT_LIST_HEAD(&users[i].subs);
>+ INIT_LIST_HEAD(&users[i].ports);
> }
> next_free_user = MAX_USERID;
> spin_unlock_bh(&reg_lock);
>@@ -324,7 +307,7 @@
> reg_stop(void)
> {
> uint id;
>- for (id = 0; id <= MAX_USERID; id++) {
>+ for (id = 1; id <= MAX_USERID; id++) {
> if (users[id].next == 0)
> tipc_detach(id);
> }
>@@ -359,24 +342,20 @@
> void
> tipc_detach(tipc_ref_t userid)
> {
>- struct name_subscr* s = 0;
>- struct user_port* p = 0;
>+ struct name_subscr *s, *st;
>+ struct user_port *p, *pt;
> struct tipc_user *user;
> spin_lock_bh(&reg_lock);
> user = &users[userid];
> if (!user->next){
>- s = user->subs;
>- p = user->ports;
>- user->subs = 0;
>- user->ports = 0;
> user->next = next_free_user;
> next_free_user = userid;
>- spin_unlock_bh(&reg_lock);
> }
>- while (s) {
>+ spin_unlock_bh(&reg_lock);
>+ list_for_each_entry_safe(s, st, &user->subs, reg.reg_list) {
> tipc_unsubscribe(s->publ.s.ref);
> }
>- while (user->ports) {
>+ list_for_each_entry_safe(p, pt, &user->ports, uport_list) {
> tipc_deleteport(p->ref);
> }
> }
>
>
>
From: Mark H. <ma...@os...> - 2004-05-04 23:16:36
Jon,

Since I didn't hear from you about my last patch yet, I have added to
it. Take a look at tipc_detach in particular. The original code didn't
make sense to me.

cvs diff -u bcast.h name_table.c name_table.h name_subscr.c port.c name_subscr.h port.h name_distr.c reg.c
Index: bcast.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.h,v
retrieving revision 1.10
diff -u -r1.10 bcast.h
--- bcast.h 16 Apr 2004 04:03:46 -0000 1.10
+++ bcast.h 4 May 2004 22:07:46 -0000
@@ -165,6 +165,10 @@
 void tipc_bcast_start(void);
 void tipc_bcast_stop(void);
 
+void bnode_outqueue_release(int ackno);
+int in_list_node(struct list_head *list, tipc_net_addr_t destnode);
+int in_list(struct list_head *list, uint destport, tipc_net_addr_t destnode);
+
 struct bcastlinkset* blink_select(int selector);
 
 
Index: name_table.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
retrieving revision 1.14
diff -u -r1.14 name_table.c
--- name_table.c 28 Apr 2004 21:55:26 -0000 1.14
+++ name_table.c 4 May 2004 22:07:46 -0000
@@ -118,8 +118,8 @@
 #include "bcast.h"
 
 
-static void nametbl_dump(void);
-static void nametbl_print(struct print_buf *buf, const char *str);
+void nametbl_dump(void);
+void nametbl_print(struct print_buf *buf, const char *str);
 rwlock_t nametbl_lock = RW_LOCK_UNLOCKED;
 
 /*
@@ -196,14 +196,13 @@
 struct sub_seq* sseqs;
 uint alloc;
 uint first_free;
- struct name_seq *prev;
- struct name_seq *next;
- struct name_subscr *subscriptions;
+ struct hlist_node ns_list;
+ struct list_head subscriptions;
 spinlock_t lock;
 };
 
 struct name_seq *
-nameseq_create(uint type, struct name_seq *next)
+nameseq_create(uint type, struct hlist_head *seq_head)
 {
 struct name_seq *this =
 (struct name_seq *)k_malloc(sizeof(*this));
@@ -211,12 +210,12 @@
 this->lock = SPIN_LOCK_UNLOCKED;
 this->type = type;
 this->sseqs = subseq_alloc(1);
- dbg("nameseq_create() this = %x type %u, next =%x, ssseqs %x, ff: %u\n",
- this,type, next,this->sseqs,this->first_free);
+ dbg("nameseq_create() this = %x type %u, ssseqs %x, ff: %u\n",
+ this,type, this->sseqs,this->first_free);
 this->alloc = 1;
- this->next = next;
- if (next)
- next->prev = this;
+ INIT_HLIST_NODE(&this->ns_list);
+ INIT_LIST_HEAD(&this->subscriptions);
+ hlist_add_head(&this->ns_list, seq_head);
 return this;
 }
 
@@ -288,7 +287,6 @@
 uint scope,
 uint key)
 {
- struct name_subscr *s = this->subscriptions;
 struct publication *publ;
 struct sub_seq *sseq;
 int created_subseq = 0;
@@ -365,18 +363,21 @@
 }
 }
 
- if (!created_subseq)
- return publ;
+ /*
+ * Any subscriptions waiting for notification?
+ */
+ if (created_subseq) {
+ struct name_subscr *s, *st;
+ list_for_each_entry_safe(s, st,
+ &this->subscriptions, nsub_list) {
 
- /* Any subscriptions waiting ? */
- while (s) {
- namesub_report_overlap(s,
+ namesub_report_overlap(s,
 publ->lower,
 publ->upper,
 TIPC_PUBLISHED,
 publ->ref,
 publ->node);
- s = s->next;
+ }
 }
 return publ;
 }
@@ -388,11 +389,11 @@
 tipc_net_addr_t node,
 uint key)
 {
- struct name_subscr *s = this->subscriptions;
 struct publication *publ;
 struct publication *prev = 0;
 struct sub_seq *sseq = nameseq_find_subseq(this,inst);
 struct sub_seq *free;
+ struct name_subscr *s, *st;
 assert(this);
 if (!sseq) {
 int i;
@@ -473,20 +474,22 @@
 if (sseq->node_list || sseq->cluster_list || sseq->zone_list)
 return publ;
 
- /* No more publications,contract subseq list: */
-
+ /*
+ * No more publications,contract subseq list:
+ */
 free = &this->sseqs[this->first_free--];
 memmove(sseq,sseq+1,(free-(sseq+1))*sizeof(*sseq));
 
- /* Any subscriptions waiting ? */
- while (s) {
+ /*
+ * Any subscriptions waiting ?
+ */
+ list_for_each_entry_safe(s, st, &this->subscriptions, nsub_list) {
 namesub_report_overlap(s,
 publ->lower,
 publ->upper,
 TIPC_WITHDRAWN,
 publ->ref,
 publ->node);
- s = s->next;
 }
 return publ;
 }
@@ -529,8 +532,7 @@
 {
 struct sub_seq *sseq = this->sseqs;
 
- s->next = this->subscriptions;
- this->subscriptions = s;
+ list_add(&s->nsub_list, &this->subscriptions);
 
 if (!sseq)
 return;
@@ -545,10 +547,6 @@
 }
 
 
-
-
-
-
 /*
 * struct name_table: translation table containing all existing
 * port name publications. Consists of 'name_seq' objects
@@ -560,7 +558,7 @@
 #define TABLE_SIZE (1<<14)
 
 struct name_table {
- struct name_seq *types[TABLE_SIZE];
+ struct hlist_head types[TABLE_SIZE];
 uint key;
 uint local_publ_count;
 };
@@ -576,15 +574,22 @@
 
 static struct name_seq* nametbl_find_seq(uint type)
 {
- struct name_seq* seq;
- dbg("find_seq %u,(%u,0x%x) table = %x, hash[type] = %u\n",
- type,ntohl(type),type,table,hash(type));
- seq = table->types[hash(type)];
- dbg("found %x\n",seq);
- while (seq && (seq->type != type)) {
- seq = seq->next;
+ struct hlist_head *seq_head;
+ struct hlist_node *seq_node;
+ struct name_seq *ns;
+
+ dbg("find_seq %u,(%u,0x%x) table = %p, hash[type] = %u\n",
+ type, ntohl(type), type, table, hash(type));
+
+ seq_head = &table->types[hash(type)];
+ hlist_for_each_entry(ns, seq_node, seq_head, ns_list) {
+ if (ns->type == type) {
+ dbg("found %x\n", ns);
+ return ns;
+ }
 }
- return seq;
+
+ return 0;
 };
 
 
@@ -606,8 +611,7 @@
 }
 dbg("Publishing <%u,%u,%u> from %x\n", type, lower, upper, node);
 if (!seq) {
- struct name_seq *head = table->types[hash(type)];
- seq = table->types[hash(type)] = nameseq_create(type, head);
+ seq = nameseq_create(type, &table->types[hash(type)]);
 dbg("nametbl_insert_publ: created %x\n",seq);
 }
 assert(seq->type == type);
@@ -626,13 +630,9 @@
 return 0;
 dbg("Withdrawing <%u,%u> from %x\n", type, lower, node);
 publ = nameseq_remove_publ(seq, lower, node, key);
- if (!seq->first_free && !seq->subscriptions) {
- if (seq->prev)
-
seq->prev->next = seq->next; - else - table->types[hash(seq->type)] = seq->next; - if (seq->next) - seq->next->prev = seq->prev; + + if (!seq->first_free && list_empty(&seq->subscriptions)) { + hlist_del_init(&seq->ns_list); kfree(seq->sseqs); kfree(seq); } @@ -778,7 +778,6 @@ spin_unlock_bh(&seq->lock); goto not_found; } - found: spin_unlock_bh(&seq->lock); read_unlock_bh(&nametbl_lock); return true; @@ -797,7 +796,6 @@ struct name_seq* seq; int i = 0; struct publication* publ; - struct publication *publhead; struct sub_seq *sseq; int low_seq, high_seq; uint destport; @@ -921,8 +919,7 @@ struct name_seq *seq; seq = nametbl_find_seq(type); if (!seq) { - struct name_seq *head = table->types[hash(type)]; - seq = table->types[hash(type)] = nameseq_create(type, head); + seq = nameseq_create(type, &table->types[hash(type)]); } spin_lock_bh(&seq->lock); dbg("nametbl_subscribe:found %x for <%u,%u,%u>\n", @@ -936,22 +933,12 @@ nametbl_unsubscribe(struct name_subscr *s) { uint type = s->publ.s.seq.type; - struct name_subscr *prev = 0; struct name_subscr *crs; struct name_seq *seq = nametbl_find_seq(type); assert(seq); + spin_lock_bh(&seq->lock); - crs = seq->subscriptions; - while ((crs != 0) && (crs != s)) { - prev = crs; - crs = crs->next; - } - if (crs){ - if (!prev) - seq->subscriptions = crs->next; - else - prev->next = crs->next; - } + list_del_init(&s->nsub_list); spin_unlock_bh(&seq->lock); } @@ -1026,13 +1013,15 @@ static void nametbl_list(struct print_buf *buf, uint type, uint depth) { - struct name_seq *seq = 0; + struct hlist_head *seq_head; + struct hlist_node *seq_node; + struct name_seq *seq; + uint i; for (i = 0; i < TABLE_SIZE; i++) { - seq = table->types[i]; - while (seq) { - nameseq_list(seq,buf,type,depth,i); - seq = seq->next; + seq_head = &table->types[i]; + hlist_for_each_entry(seq, seq_node, seq_head, ns_list) { + nameseq_list(seq, buf, type, depth, i); } } } @@ -1081,11 +1070,15 @@ nametbl_stop(void) { uint i; + struct hlist_head *seq_head; + struct 
hlist_node *seq_node, *tmp; + struct name_seq *seq; + write_lock_bh(&nametbl_lock); for (i = 0; i < TABLE_SIZE; i++) { - struct name_seq *seq = table->types[i]; - while (seq) { - struct name_seq *next_seq = seq->next; + seq_head = &table->types[i]; + hlist_for_each_entry_safe(seq, seq_node, + tmp, seq_head, ns_list) { struct sub_seq *sseq = seq->sseqs; for(;sseq != &seq->sseqs[seq->first_free];sseq++){ @@ -1098,9 +1091,9 @@ } while(publ != sseq->zone_list); } - seq = next_seq; } } kfree(table); + table = 0; write_unlock_bh(&nametbl_lock); } Index: name_table.h =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.h,v retrieving revision 1.2 diff -u -r1.2 name_table.h --- name_table.h 3 Feb 2004 23:27:35 -0000 1.2 +++ name_table.h 4 May 2004 22:07:46 -0000 @@ -81,10 +81,7 @@ tipc_net_addr_t node; uint key; uint scope; - struct { - struct publication *prev; - struct publication *next; - }local_list; + struct list_head local_list; struct { struct publication *next; }port_list; Index: name_subscr.c =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_subscr.c,v retrieving revision 1.6 diff -u -r1.6 name_subscr.c --- name_subscr.c 11 Mar 2004 01:27:14 -0000 1.6 +++ name_subscr.c 4 May 2004 22:07:46 -0000 @@ -238,6 +238,8 @@ } this = (struct name_subscr*)k_malloc(sizeof(*this)); memset(this,0,sizeof(*this)); + INIT_LIST_HEAD(&this->nsub_list); + INIT_LIST_HEAD(&this->reg.reg_list); s->ref = ref_lock_acquire(this,&this->lock); if (!s->ref){ warn("Obtained no ref for subsription\n"); Index: port.c =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v retrieving revision 1.16 diff -u -r1.16 port.c --- port.c 30 Apr 2004 21:23:17 -0000 1.16 +++ port.c 4 May 2004 22:07:46 -0000 @@ -935,6 +935,7 @@ p->named_msg_cb = named_msg_cb; p->conn_msg_cb = conn_msg_cb; 
p->continue_event_cb = continue_event_cb; + INIT_LIST_HEAD(&p->uport_list); reg_add_port(user_ref,p); *portref = this->publ.ref; dbg(" tipc_createport: %x with ref %u\n",this,this->publ.ref); Index: name_subscr.h =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_subscr.h,v retrieving revision 1.3 diff -u -r1.3 name_subscr.h --- name_subscr.h 16 Feb 2004 23:00:02 -0000 1.3 +++ name_subscr.h 4 May 2004 22:07:46 -0000 @@ -89,14 +89,13 @@ struct name_subscr { struct tipc_name_subscr publ; - struct name_subscr *next; + struct list_head nsub_list; void (*event_handler) (struct tipc_name_event *); long unsigned int timer_ref; int expired; spinlock_t *lock; struct { - struct name_subscr *next; - struct name_subscr *prev; + struct list_head reg_list; tipc_ref_t ref; }reg; }; Index: port.h =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.h,v retrieving revision 1.7 diff -u -r1.7 port.h --- port.h 30 Apr 2004 21:23:17 -0000 1.7 +++ port.h 4 May 2004 22:07:46 -0000 @@ -135,8 +135,7 @@ tipc_named_msg_event named_msg_cb; tipc_conn_msg_event conn_msg_cb; tipc_continue_event continue_event_cb; - struct user_port* next; - struct user_port* prev; + struct list_head uport_list; }; struct port { Index: name_distr.c =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_distr.c,v retrieving revision 1.6 diff -u -r1.6 name_distr.c --- name_distr.c 23 Apr 2004 15:12:06 -0000 1.6 +++ name_distr.c 4 May 2004 22:07:46 -0000 @@ -77,7 +77,7 @@ #define ITEM_SIZE sizeof(struct distr_item) -static struct publication *publ_root = 0; +static LIST_HEAD(publ_root); static uint publ_cnt = 0; struct distr_item { @@ -109,30 +109,33 @@ void named_node_up(tipc_net_addr_t node) { - uint rest; struct publication* publ; + struct distr_item* item = 0; + struct sk_buff* buf = 0; + uint left = 0; + 
uint rest; + uint max_item_buf; + assert(in_own_cluster(node)); read_lock_bh(&nametbl_lock); - publ = publ_root; + max_item_buf = TIPC_MAX_MSG_SIZE / ITEM_SIZE; + max_item_buf *= ITEM_SIZE; rest = publ_cnt*ITEM_SIZE; - if (publ_cnt) { - while (rest > 0){ - struct sk_buff* buf; - struct distr_item* item; - uint size = rest <= TIPC_MAX_MSG_SIZE?rest:TIPC_MAX_MSG_SIZE; - uint left = size; - buf = named_prepare_buf(PUBLICATION,size,node); + + list_for_each_entry(publ, &publ_root, local_list) { + if (!buf) { + left = (rest <= max_item_buf) ?rest :max_item_buf; + rest -= left; + buf = named_prepare_buf(PUBLICATION, left, node); item = (struct distr_item*)msg_data(buf_msg(buf)); - do { - publ_to_item(item,publ); - publ = publ->local_list.next; - item++; - left -= ITEM_SIZE; - } - while(left > 0); - msg_set_link_selector(buf_msg(buf),node); + } + publ_to_item(item,publ); + item++; + left -= ITEM_SIZE; + if (!left) { + msg_set_link_selector(buf_msg(buf),node); link_send(buf,node,node); - rest-=size; + buf = 0; } } read_unlock_bh(&nametbl_lock); @@ -142,11 +145,7 @@ { struct sk_buff* buf = named_prepare_buf(PUBLICATION,ITEM_SIZE,0); struct distr_item *item = (struct distr_item*) msg_data(buf_msg(buf)); - if (publ_root) - publ_root->local_list.prev = publ; - publ->local_list.next = publ_root; - publ->local_list.prev = 0; - publ_root = publ; + list_add(&publ->local_list, &publ_root); publ_cnt++; publ_to_item(item,publ); cluster_broadcast(buf); @@ -156,12 +155,7 @@ { struct sk_buff* buf = named_prepare_buf(WITHDRAWAL,ITEM_SIZE,0); struct distr_item *item = (struct distr_item*) msg_data(buf_msg(buf)); - if (publ_root == publ) - publ_root = publ->local_list.next; - if (publ->local_list.prev) - publ->local_list.prev->local_list.next = publ->local_list.next; - if (publ->local_list.next) - publ->local_list.next->local_list.prev = publ->local_list.prev; + list_del_init(&publ->local_list); publ_cnt--; publ_to_item(item,publ); cluster_broadcast(buf); Index: reg.c 
=================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/reg.c,v retrieving revision 1.6 diff -u -r1.6 reg.c --- reg.c 16 Feb 2004 23:00:02 -0000 1.6 +++ reg.c 4 May 2004 22:07:46 -0000 @@ -182,8 +182,8 @@ void *usr_handle; uint next; tipc_started_event callback; - struct user_port *ports; - struct name_subscr *subs; + struct list_head ports; + struct list_head subs; }; #define MAX_USERID 64 @@ -204,11 +204,7 @@ spin_lock_bh(®_lock); user = &users[ref]; p->reg_ref = ref; - p->prev = 0; - p->next = user->ports; - if (p->next) - p->next->prev = p; - user->ports = p; + list_add(&p->uport_list, &user->ports); spin_unlock_bh(®_lock); return TIPC_OK; } @@ -224,12 +220,7 @@ return TIPC_OK; spin_lock_bh(®_lock); user = &users[p->reg_ref]; - if (p->prev) - p->prev->next = p->next; - else - user->ports = p->next; - if (p->next) - p->next->prev = p->prev; + list_del_init(&p->uport_list); spin_unlock_bh(®_lock); return TIPC_OK; } @@ -247,18 +238,13 @@ spin_lock_bh(®_lock); user = &users[s->reg.ref]; s->reg.ref = ref; - s->reg.prev = 0; - s->reg.next = user->subs; - if (s->reg.next) - s->reg.next->reg.prev = s; - user->subs = s; + list_add(&s->reg.reg_list, &user->subs); spin_unlock_bh(®_lock); return TIPC_OK; } int reg_remove_subscr(struct name_subscr* s) { - struct tipc_user *user; if (!tipc_started || !users) return TIPC_FAILURE; if (s->reg.ref > MAX_USERID) @@ -266,27 +252,24 @@ if (s->reg.ref == 0) return TIPC_OK; spin_lock_bh(®_lock); - user = &users[s->reg.ref]; - if (s->reg.prev) - s->reg.prev->reg.next = s->reg.next; - else - user->subs = s->reg.next; - if (s->next) - s->reg.next->reg.prev = s->reg.prev; + list_del_init(&s->reg.reg_list); spin_unlock_bh(®_lock); return TIPC_OK; } static void reg_init(void) { - uint i = 1; + uint i; + if (users) return; spin_lock_bh(®_lock); users = (struct tipc_user*)k_malloc(USER_LIST_SIZE); - memset(users, 0,USER_LIST_SIZE); + memset(users, 0, USER_LIST_SIZE); for (i = 1; i 
<= MAX_USERID; i++) { users[i].next = i - 1; + INIT_LIST_HEAD(&users[i].subs); + INIT_LIST_HEAD(&users[i].ports); } next_free_user = MAX_USERID; spin_unlock_bh(®_lock); @@ -324,7 +307,7 @@ reg_stop(void) { uint id; - for (id = 0; id <= MAX_USERID; id++) { + for (id = 1; id <= MAX_USERID; id++) { if (users[id].next == 0) tipc_detach(id); } @@ -359,24 +342,20 @@ void tipc_detach(tipc_ref_t userid) { - struct name_subscr* s = 0; - struct user_port* p = 0; + struct name_subscr *s, *st; + struct user_port *p, *pt; struct tipc_user *user; spin_lock_bh(®_lock); user = &users[userid]; if (!user->next){ - s = user->subs; - p = user->ports; - user->subs = 0; - user->ports = 0; user->next = next_free_user; next_free_user = userid; - spin_unlock_bh(®_lock); } - while (s) { + spin_unlock_bh(®_lock); + list_for_each_entry_safe(s, st, &user->subs, reg.reg_list) { tipc_unsubscribe(s->publ.s.ref); } - while (user->ports) { + list_for_each_entry_safe(p, pt, &user->ports, uport_list) { tipc_deleteport(p->ref); } } -- Mark Haverkamp <ma...@os...> |
From: Mark H. <ma...@os...> - 2004-05-03 20:17:28
Jon, Here are the next set of list_head changes. These update the name sequence and subscription structures. The name sequence list is an hlist_head/hlist_node instead of a list_head. It's head has only one pointer so that the hash table didn't increase in size. Mark. cvs diff -u name_subscr.c name_subscr.h reg.c name_table.c bcast.h Index: name_subscr.c =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_subscr.c,v retrieving revision 1.6 diff -u -r1.6 name_subscr.c --- name_subscr.c 11 Mar 2004 01:27:14 -0000 1.6 +++ name_subscr.c 3 May 2004 19:48:46 -0000 @@ -238,6 +238,7 @@ } this = (struct name_subscr*)k_malloc(sizeof(*this)); memset(this,0,sizeof(*this)); + INIT_LIST_HEAD(&this->nsub_list); s->ref = ref_lock_acquire(this,&this->lock); if (!s->ref){ warn("Obtained no ref for subsription\n"); Index: name_subscr.h =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_subscr.h,v retrieving revision 1.3 diff -u -r1.3 name_subscr.h --- name_subscr.h 16 Feb 2004 23:00:02 -0000 1.3 +++ name_subscr.h 3 May 2004 19:48:46 -0000 @@ -89,7 +89,7 @@ struct name_subscr { struct tipc_name_subscr publ; - struct name_subscr *next; + struct list_head nsub_list; void (*event_handler) (struct tipc_name_event *); long unsigned int timer_ref; int expired; Index: reg.c =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/reg.c,v retrieving revision 1.6 diff -u -r1.6 reg.c --- reg.c 16 Feb 2004 23:00:02 -0000 1.6 +++ reg.c 3 May 2004 19:48:46 -0000 @@ -267,12 +267,7 @@ return TIPC_OK; spin_lock_bh(®_lock); user = &users[s->reg.ref]; - if (s->reg.prev) - s->reg.prev->reg.next = s->reg.next; - else - user->subs = s->reg.next; - if (s->next) - s->reg.next->reg.prev = s->reg.prev; + list_del_init(&s->nsub_list); spin_unlock_bh(®_lock); return TIPC_OK; } Index: name_table.c 
=================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v retrieving revision 1.14 diff -u -r1.14 name_table.c --- name_table.c 28 Apr 2004 21:55:26 -0000 1.14 +++ name_table.c 3 May 2004 19:48:46 -0000 @@ -118,8 +118,8 @@ #include "bcast.h" -static void nametbl_dump(void); -static void nametbl_print(struct print_buf *buf, const char *str); +void nametbl_dump(void); +void nametbl_print(struct print_buf *buf, const char *str); rwlock_t nametbl_lock = RW_LOCK_UNLOCKED; /* @@ -196,14 +196,13 @@ struct sub_seq* sseqs; uint alloc; uint first_free; - struct name_seq *prev; - struct name_seq *next; - struct name_subscr *subscriptions; + struct hlist_node ns_list; + struct list_head subscriptions; spinlock_t lock; }; struct name_seq * -nameseq_create(uint type, struct name_seq *next) +nameseq_create(uint type, struct hlist_head *seq_head) { struct name_seq *this = (struct name_seq *)k_malloc(sizeof(*this)); @@ -211,12 +210,12 @@ this->lock = SPIN_LOCK_UNLOCKED; this->type = type; this->sseqs = subseq_alloc(1); - dbg("nameseq_create() this = %x type %u, next =%x, ssseqs %x, ff: %u\n", - this,type, next,this->sseqs,this->first_free); + dbg("nameseq_create() this = %x type %u, ssseqs %x, ff: %u\n", + this,type, this->sseqs,this->first_free); this->alloc = 1; - this->next = next; - if (next) - next->prev = this; + INIT_HLIST_NODE(&this->ns_list); + INIT_LIST_HEAD(&this->subscriptions); + hlist_add_head(&this->ns_list, seq_head); return this; } @@ -288,7 +287,6 @@ uint scope, uint key) { - struct name_subscr *s = this->subscriptions; struct publication *publ; struct sub_seq *sseq; int created_subseq = 0; @@ -365,18 +363,21 @@ } } - if (!created_subseq) - return publ; + /* + * Any subscriptions waiting for notification? + */ + if (created_subseq) { + struct name_subscr *s, *st; + list_for_each_entry_safe(s, st, + &this->subscriptions, nsub_list) { - /* Any subscriptions waiting ? 
*/ - while (s) { - namesub_report_overlap(s, + namesub_report_overlap(s, publ->lower, publ->upper, TIPC_PUBLISHED, publ->ref, publ->node); - s = s->next; + } } return publ; } @@ -388,11 +389,11 @@ tipc_net_addr_t node, uint key) { - struct name_subscr *s = this->subscriptions; struct publication *publ; struct publication *prev = 0; struct sub_seq *sseq = nameseq_find_subseq(this,inst); struct sub_seq *free; + struct name_subscr *s, *st; assert(this); if (!sseq) { int i; @@ -473,20 +474,22 @@ if (sseq->node_list || sseq->cluster_list || sseq->zone_list) return publ; - /* No more publications,contract subseq list: */ - + /* + * No more publications,contract subseq list: + */ free = &this->sseqs[this->first_free--]; memmove(sseq,sseq+1,(free-(sseq+1))*sizeof(*sseq)); - /* Any subscriptions waiting ? */ - while (s) { + /* + * Any subscriptions waiting ? + */ + list_for_each_entry_safe(s, st, &this->subscriptions, nsub_list) { namesub_report_overlap(s, publ->lower, publ->upper, TIPC_WITHDRAWN, publ->ref, publ->node); - s = s->next; } return publ; } @@ -529,8 +532,7 @@ { struct sub_seq *sseq = this->sseqs; - s->next = this->subscriptions; - this->subscriptions = s; + list_move(&s->nsub_list, &this->subscriptions); if (!sseq) return; @@ -545,10 +547,6 @@ } - - - - /* * struct name_table: translation table containing all existing * port name publications. 
Consists of 'name_seq' objects @@ -560,7 +558,7 @@ #define TABLE_SIZE (1<<14) struct name_table { - struct name_seq *types[TABLE_SIZE]; + struct hlist_head types[TABLE_SIZE]; uint key; uint local_publ_count; }; @@ -576,15 +574,22 @@ static struct name_seq* nametbl_find_seq(uint type) { - struct name_seq* seq; - dbg("find_seq %u,(%u,0x%x) table = %x, hash[type] = %u\n", - type,ntohl(type),type,table,hash(type)); - seq = table->types[hash(type)]; - dbg("found %x\n",seq); - while (seq && (seq->type != type)) { - seq = seq->next; + struct hlist_head *seq_head; + struct hlist_node *seq_node; + struct name_seq *ns; + + dbg("find_seq %u,(%u,0x%x) table = %p, hash[type] = %u\n", + type, ntohl(type), type, table, hash(type)); + + seq_head = &table->types[hash(type)]; + hlist_for_each_entry(ns, seq_node, seq_head, ns_list) { + if (ns->type == type) { + dbg("found %x\n", ns); + return ns; + } } - return seq; + + return 0; }; @@ -606,8 +611,7 @@ } dbg("Publishing <%u,%u,%u> from %x\n", type, lower, upper, node); if (!seq) { - struct name_seq *head = table->types[hash(type)]; - seq = table->types[hash(type)] = nameseq_create(type, head); + seq = nameseq_create(type, &table->types[hash(type)]); dbg("nametbl_insert_publ: created %x\n",seq); } assert(seq->type == type); @@ -626,13 +630,9 @@ return 0; dbg("Withdrawing <%u,%u> from %x\n", type, lower, node); publ = nameseq_remove_publ(seq, lower, node, key); - if (!seq->first_free && !seq->subscriptions) { - if (seq->prev) - seq->prev->next = seq->next; - else - table->types[hash(seq->type)] = seq->next; - if (seq->next) - seq->next->prev = seq->prev; + + if (!seq->first_free && list_empty(&seq->subscriptions)) { + hlist_del_init(&seq->ns_list); kfree(seq->sseqs); kfree(seq); } @@ -778,7 +778,6 @@ spin_unlock_bh(&seq->lock); goto not_found; } - found: spin_unlock_bh(&seq->lock); read_unlock_bh(&nametbl_lock); return true; @@ -797,7 +796,6 @@ struct name_seq* seq; int i = 0; struct publication* publ; - struct publication *publhead; 
struct sub_seq *sseq; int low_seq, high_seq; uint destport; @@ -921,8 +919,7 @@ struct name_seq *seq; seq = nametbl_find_seq(type); if (!seq) { - struct name_seq *head = table->types[hash(type)]; - seq = table->types[hash(type)] = nameseq_create(type, head); + seq = nameseq_create(type, &table->types[hash(type)]); } spin_lock_bh(&seq->lock); dbg("nametbl_subscribe:found %x for <%u,%u,%u>\n", @@ -936,21 +933,16 @@ nametbl_unsubscribe(struct name_subscr *s) { uint type = s->publ.s.seq.type; - struct name_subscr *prev = 0; struct name_subscr *crs; struct name_seq *seq = nametbl_find_seq(type); assert(seq); + spin_lock_bh(&seq->lock); - crs = seq->subscriptions; - while ((crs != 0) && (crs != s)) { - prev = crs; - crs = crs->next; - } - if (crs){ - if (!prev) - seq->subscriptions = crs->next; - else - prev->next = crs->next; + list_for_each_entry(crs, &seq->subscriptions, nsub_list) { + if (crs == s) { + list_del_init(&s->nsub_list); + break; + } } spin_unlock_bh(&seq->lock); } @@ -1026,13 +1018,15 @@ static void nametbl_list(struct print_buf *buf, uint type, uint depth) { - struct name_seq *seq = 0; + struct hlist_head *seq_head; + struct hlist_node *seq_node; + struct name_seq *seq; + uint i; for (i = 0; i < TABLE_SIZE; i++) { - seq = table->types[i]; - while (seq) { - nameseq_list(seq,buf,type,depth,i); - seq = seq->next; + seq_head = &table->types[i]; + hlist_for_each_entry(seq, seq_node, seq_head, ns_list) { + nameseq_list(seq, buf, type, depth, i); } } } @@ -1081,11 +1075,15 @@ nametbl_stop(void) { uint i; + struct hlist_head *seq_head; + struct hlist_node *seq_node, *tmp; + struct name_seq *seq; + write_lock_bh(&nametbl_lock); for (i = 0; i < TABLE_SIZE; i++) { - struct name_seq *seq = table->types[i]; - while (seq) { - struct name_seq *next_seq = seq->next; + seq_head = &table->types[i]; + hlist_for_each_entry_safe(seq, seq_node, + tmp, seq_head, ns_list) { struct sub_seq *sseq = seq->sseqs; for(;sseq != &seq->sseqs[seq->first_free];sseq++){ @@ -1098,9 +1096,9 
@@ } while(publ != sseq->zone_list); } - seq = next_seq; } } kfree(table); + table = 0; write_unlock_bh(&nametbl_lock); } Index: bcast.h =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.h,v retrieving revision 1.10 diff -u -r1.10 bcast.h --- bcast.h 16 Apr 2004 04:03:46 -0000 1.10 +++ bcast.h 3 May 2004 19:48:46 -0000 @@ -165,6 +165,10 @@ void tipc_bcast_start(void); void tipc_bcast_stop(void); +void bnode_outqueue_release(int ackno); +int in_list_node(struct list_head *list, tipc_net_addr_t destnode); +int in_list(struct list_head *list, uint destport, tipc_net_addr_t destnode); + struct bcastlinkset* blink_select(int selector); -- Mark Haverkamp <ma...@os...> |
From: John C. <ch...@os...> - 2004-04-30 22:48:21
I wanted to let this mailing list know about the availability of an early public draft of the Carrier Grade Linux v3.0 Clusters specification, available at http://www.osdl.org/docs/cgl_clustering_requirements_definition___v30.pdf We acknowledge that the clustering requirements in this draft are being implemented in a variety of ways and many of the requirements in this document exist in current clustering implementations. We understand from our usage model study that there can be NO one clustering model that fits and meets the needs of all carrier applications. The clustering requirements in this document are aimed at supporting clustered applications in a carrier-grade environment as an effective way to achieve highly available services inside a network element. We specifically have not addressed the use of clustering to maximize performance, such as in the Beowulf and Mosix types of solutions. Again, this is an early draft document of the v3.0 clustering requirements spec. Past OSDL Carrier Grade Linux technical documents have contained all requirements in a single document. For OSDL CGL v3.0 draft releases, we are releasing them as more granular sections, roughly split on functional boundaries. These boundaries are Standards, Availability, Clustering (this document), Hardware, Performance, Security, and Serviceability. More information on Carrier Grade Linux and the Carrier Grade Linux Working Group can be found at http://osdl.org/lab_activities/carrier_grade_linux/. Feel free to direct any comments on the spec to me directly at ch...@os... or to this mailing list at cgl...@os.... John |
From: Jon M. <jon...@er...> - 2004-04-30 21:43:13
See below Have a nice weekend /Jon Daniel McNeil wrote: >Hi Jon, > >I looking over the changes to the topology events and had some >questions: > >old way was: > >ioctl(tipc_fd, NETWORK_SUBSCRIBE, 0); > >poll() looking for NETWORK_EVENT >ioctl(tipc_fd, GET_NETWORK_EVENT, &netw_subscr) > > which return a node_id and up/down indication > >New way is: > >ioctl(tipc_fd, TIPC_SUBSCRIBE, &tipc_subscr) > > with > > tipc_subscr.seq.type = 0 > tipc_subscr.seq.lower = tipc_addr(zone, cluster, 0); > tipc_subscr.seq.upper = tipc_addr(zone, cluster, 0xfff); > tipc_subscr.timeout = TIPC_WAIT_FOREVER; > tipc_subscr.usr_handle = "abc"; > >poll() looking for TIPC_TOPOLOGY_EVENT >ioctl(tipc_fd, TIPC_GET_EVENT, &tipc_event); > > on return: > > tipc_event.event = TIPC_PUBLISHED (node joined?) > TIPC_WITHDRAWN (node died/left?) > TIPC_SUBSCR_TIMEOUT > > tipc_event.port.node = node_who_joined_or_left > >Do I have this correct? Do the other fields mean anything? > Yes, it is correct. The other fields are also meaningful: found_lower == found_upper == also the node who joined/left, because this is the semantics of subsriptions in general. tipc_event.port is the valid port identity of the publishing port, a "hint" that has turned out to be useful in some cases. Also, tipc_event.s is an aggregated complete copy of the original subscription. If you want the to recover the original usr_handle, or if you have several subscriptions pending and want to know to which the event pertains, you can look here. > >Also in manager.c, the mng_cmd_event() function has > > case TIPC_SUBSCRIBE: > >How is this used? Can you subscribe to events on the management port? >How are results returned? > Yes, you can. If you e.g. want to kkep track of which nodes can bee seen from a node in a different cluster, or simply to verify the consistensy of the assumed fully meshed network in the own cluster, this is a useful feature. 
When you order such a subscription you must do this on a dedicated socket, because the TIPC manager will conecct back to that socket and keep the connection until the subscription times out or is cancelled. Events are returned on that connection in a tipc_cmd_msg, in the member result.event, which is just like any other event. > >Thanks, > >Daniel > > |
From: Daniel M. <da...@os...> - 2004-04-30 20:43:15
Hi Jon, I looking over the changes to the topology events and had some questions: old way was: ioctl(tipc_fd, NETWORK_SUBSCRIBE, 0); poll() looking for NETWORK_EVENT ioctl(tipc_fd, GET_NETWORK_EVENT, &netw_subscr) which return a node_id and up/down indication New way is: ioctl(tipc_fd, TIPC_SUBSCRIBE, &tipc_subscr) with tipc_subscr.seq.type = 0 tipc_subscr.seq.lower = tipc_addr(zone, cluster, 0); tipc_subscr.seq.upper = tipc_addr(zone, cluster, 0xfff); tipc_subscr.timeout = TIPC_WAIT_FOREVER; tipc_subscr.usr_handle = "abc"; poll() looking for TIPC_TOPOLOGY_EVENT ioctl(tipc_fd, TIPC_GET_EVENT, &tipc_event); on return: tipc_event.event = TIPC_PUBLISHED (node joined?) TIPC_WITHDRAWN (node died/left?) TIPC_SUBSCR_TIMEOUT tipc_event.port.node = node_who_joined_or_left Do I have this correct? Do the other fields mean anything? Also in manager.c, the mng_cmd_event() function has case TIPC_SUBSCRIBE: How is this used? Can you subscribe to events on the management port? How are results returned? Thanks, Daniel |
From: Jon M. <jon...@er...> - 2004-04-30 20:43:12
Excellent, I think there is ~10-15 linked lists needing conversion, so it is a doable task. I have not been able to do any coding the last days, but I will be back on track on Monday. /Jon Mark Haverkamp wrote: >Jon, > >Here is the first set of linked lists converted to use list head. I ran >the new code using the tipc benchmark with comparable results to the >current version. > >Mark. > >cvs diff -u link.h port.h link.c port.c >Index: link.h >=================================================================== >RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.h,v >retrieving revision 1.10 >diff -u -r1.10 link.h >--- link.h 23 Apr 2004 15:12:05 -0000 1.10 >+++ link.h 30 Apr 2004 15:26:02 -0000 >@@ -105,6 +105,8 @@ > #define PUSH_FINISHED 2 > #define STARTING_EVT 856384768 > >+struct port; >+ > struct link { > tipc_net_addr_t addr; > char name[MAX_LINK_NAME]; >@@ -162,8 +164,7 @@ > uint retransm_queue_head; > uint retransm_queue_size; > struct sk_buff *next_out; >- struct port *first_waiting_port; >- struct port *last_waiting_port; >+ struct list_head waiting_ports; > > /* Fragmentation/defragmentation: */ > uint long_msg_seq_no; >Index: port.h >=================================================================== >RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.h,v >retrieving revision 1.6 >diff -u -r1.6 port.h >--- port.h 11 Mar 2004 01:27:14 -0000 1.6 >+++ port.h 30 Apr 2004 15:26:03 -0000 >@@ -137,12 +137,10 @@ > > struct port { > struct tipc_port publ; >- struct port *next; >- struct port *prev; >+ struct list_head port_list; > uint (*dispatcher) (struct tipc_port*,struct sk_buff *); > void (*wakeup) (struct tipc_port *); >- struct port *next_waiting; >- struct port *prev_waiting; >+ struct list_head wait_list; > struct link *congested_link; > uint waiting_pkts; > uint sent; >@@ -207,8 +205,9 @@ > goto error; > } > err = this->dispatcher(&this->publ,buf); >- if (unlikely(err)) >+ if (unlikely(err)) { > return tipc_reject_msg(buf,err); >+ } > return err; 
> } > >Index: link.c >=================================================================== >RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.c,v >retrieving revision 1.19 >diff -u -r1.19 link.c >--- link.c 28 Apr 2004 21:56:57 -0000 1.19 >+++ link.c 30 Apr 2004 15:26:03 -0000 >@@ -429,23 +429,12 @@ > spin_lock_bh(&port_lock); > if (!port->wakeup) > goto exit; >- if (port->next_waiting) >- goto exit; >- if (port->prev_waiting) >- goto exit; >- if (this->first_waiting_port == port) >+ if (!list_empty(&port->wait_list)) > goto exit; > port->congested_link = this; > port->publ.congested = 1; > port->waiting_pkts = 1 + sz/link_max_pkt(this); >- if (!this->first_waiting_port) { >- this->first_waiting_port = this->last_waiting_port = port; >- } else { /* Append port to queue */ >- >- port->prev_waiting = this->last_waiting_port; >- this->last_waiting_port->next_waiting = port; >- this->last_waiting_port = port; >- } >+ list_add_tail(&port->wait_list, &this->waiting_ports); > exit: > spin_unlock_bh(&port_lock); > spin_unlock_bh(port->publ.lock); >@@ -455,36 +444,27 @@ > static void > link_wakeup_ports(struct link *this,int all) > { >- struct port *port; >+ struct port *port, *tp; > int win = this->queue_limit[0] - this->out_queue_size; > if (all) > win = 100000; > if (win <= 0) > return; > spin_lock_bh(&port_lock); >- port = this->first_waiting_port; > if (link_congested(this)) > goto exit; >- while (port && (win > 0)) { >- struct port *next = port->next_waiting; >+ list_for_each_entry_safe(port, tp, &this->waiting_ports, wait_list) { >+ if (win <= 0) >+ break; >+ list_del_init(&port->wait_list); > port->congested_link = 0; >- port->prev_waiting = port->next_waiting = 0; > assert(port->wakeup); > spin_lock_bh(port->publ.lock); > port->publ.congested = 0; > port->wakeup(&port->publ); > win =- port->waiting_pkts; >- port = next; > } >- this->first_waiting_port = port; > >- /* >- * Make sure that this port isn't pointing at >- * any port just removed from congestion >- */ 
>- if (port) { >- port->prev_waiting = 0; >- } > exit: > spin_unlock_bh(&port_lock); > } >@@ -580,6 +560,7 @@ > this->state = RESET_UNKNOWN; > this->next_out_no = 1; > this->bcastlink = NULL; >+ INIT_LIST_HEAD(&this->waiting_ports); > if (LINK_LOG_BUF_SIZE) { > char *buf = k_malloc(LINK_LOG_BUF_SIZE); > printbuf_init(&this->print_buf, buf, LINK_LOG_BUF_SIZE); >@@ -740,7 +721,7 @@ > buf_discard(iter); > iter = next; > } >- if (this->first_waiting_port) >+ if (!list_empty(&this->waiting_ports)) > link_wakeup_ports(this,1); > this->retransm_queue_head = 0; > this->retransm_queue_size = 0; >@@ -1633,7 +1614,7 @@ > } > if (unlikely(this->next_out)) > link_push_queue(this); >- if (unlikely(this->first_waiting_port)) >+ if (unlikely(!list_empty(&this->waiting_ports))) > link_wakeup_ports(this,0); > if (unlikely(++this->unacked_window >= 10)) { > this->stats.sent_acks++; >Index: port.c >=================================================================== >RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v >retrieving revision 1.15 >diff -u -r1.15 port.c >--- port.c 22 Apr 2004 23:28:12 -0000 1.15 >+++ port.c 30 Apr 2004 15:26:03 -0000 >@@ -202,7 +202,7 @@ > spinlock_t port_lock = SPIN_LOCK_UNLOCKED; > static spinlock_t queue_lock = SPIN_LOCK_UNLOCKED; > >-static struct port* ports = 0; >+LIST_HEAD(ports); > static void port_handle_node_down(struct port * this, tipc_net_addr_t); > static void port_abort_self(struct port *,uint err); > static void port_abort_peer(struct port *,uint err); >@@ -290,18 +290,15 @@ > this->last_in_seqno = 41; > this->sent = 1; > this->publ.usr_handle = usr_handle; >- this->next_waiting = this->prev_waiting = 0; >+ INIT_LIST_HEAD(&this->wait_list); > this->congested_link = 0; > this->max_pkt = 1404; /* Ethernet, adjust at connect */ > this->dispatcher = dispatcher; > this->wakeup = wakeup; > this->user_port = 0; > spin_lock_bh(&port_lock); >- this->prev = 0; >- this->next = ports; >- if (this->next) >- this->next->prev = this; >- ports = 
this; >+ INIT_LIST_HEAD(&this->port_list); >+ list_add_tail(&this->port_list, &ports); > spin_unlock_bh(&port_lock); > return &this->publ; > } >@@ -310,7 +307,6 @@ > tipc_deleteport(const uint ref) > { > struct port *this; >- struct link* cong; > > tipc_withdraw(ref,0); > spin_lock_bh(&port_lock); >@@ -323,34 +319,16 @@ > port_cancel_timer(this); > } > ref_unlock_discard(ref); >- dbg("up = %x, nx = %x, pr = %x, cong %x,nw = %x,lw %x\n", >- this->user_port,this->next,this->prev,this->congested_link, >- this->next_waiting,this->prev_waiting); >- if (this->user_port){ >+ dbg("up = %p, cong %p\n", this->user_port, this->congested_link); >+ if (this->user_port) { > reg_remove_port(this->user_port); > kfree(this->user_port); > } > /* Unlink from port list: */ >- if (this->next) >- this->next->prev = this->prev; >- if (this->prev) >- this->prev->next = this->next; >- else >- ports = this->next; >+ list_del(&this->port_list); > > /* Unlink from link congestion queue, if any: */ >- cong = this->congested_link; >- if (cong){ >- if (this->next_waiting) >- this->next_waiting->prev_waiting = this->prev_waiting; >- else >- cong->last_waiting_port = this->prev_waiting; >- >- if (this->prev_waiting) >- this->prev_waiting->next_waiting = this->next_waiting; >- else >- cong->first_waiting_port = this->next_waiting; >- } >+ list_del(&this->wait_list); > spin_unlock_bh(&port_lock); > kfree(this); > dbg("Deleted port %u\n", ref); >@@ -693,8 +671,7 @@ > struct print_buf b; > printbuf_init(&b,raw,sz); > spin_lock_bh(&port_lock); >- p = ports; >- for (; p; p = p->next) { >+ list_for_each_entry(p, &ports, port_list) { > port_print(p,&b,""); > } > spin_unlock_bh(&port_lock); >@@ -1558,7 +1535,7 @@ > void > port_stop(void) > { >- struct port* p = ports; >+ struct port *p, *tp; > struct sk_buff* b; > spin_lock_bh(&queue_lock); > b = msg_queue_head; >@@ -1569,9 +1546,7 @@ > buf_discard(b); > b = next; > } >- while (p) { >- struct port *next = p->next; >+ list_for_each_entry_safe(p, tp, &ports, 
port_list) { > tipc_deleteport(p->publ.ref); >- p = next; > } > } > > > |
From: Mark H. <ma...@os...> - 2004-04-30 16:21:37
|
Jon, Here is the first set of linked lists converted to use list head. I ran the new code using the tipc benchmark with comparable results to the current version. Mark. cvs diff -u link.h port.h link.c port.c Index: link.h =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.h,v retrieving revision 1.10 diff -u -r1.10 link.h --- link.h 23 Apr 2004 15:12:05 -0000 1.10 +++ link.h 30 Apr 2004 15:26:02 -0000 @@ -105,6 +105,8 @@ #define PUSH_FINISHED 2 #define STARTING_EVT 856384768 +struct port; + struct link { tipc_net_addr_t addr; char name[MAX_LINK_NAME]; @@ -162,8 +164,7 @@ uint retransm_queue_head; uint retransm_queue_size; struct sk_buff *next_out; - struct port *first_waiting_port; - struct port *last_waiting_port; + struct list_head waiting_ports; /* Fragmentation/defragmentation: */ uint long_msg_seq_no; Index: port.h =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.h,v retrieving revision 1.6 diff -u -r1.6 port.h --- port.h 11 Mar 2004 01:27:14 -0000 1.6 +++ port.h 30 Apr 2004 15:26:03 -0000 @@ -137,12 +137,10 @@ struct port { struct tipc_port publ; - struct port *next; - struct port *prev; + struct list_head port_list; uint (*dispatcher) (struct tipc_port*,struct sk_buff *); void (*wakeup) (struct tipc_port *); - struct port *next_waiting; - struct port *prev_waiting; + struct list_head wait_list; struct link *congested_link; uint waiting_pkts; uint sent; @@ -207,8 +205,9 @@ goto error; } err = this->dispatcher(&this->publ,buf); - if (unlikely(err)) + if (unlikely(err)) { return tipc_reject_msg(buf,err); + } return err; } Index: link.c =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.c,v retrieving revision 1.19 diff -u -r1.19 link.c --- link.c 28 Apr 2004 21:56:57 -0000 1.19 +++ link.c 30 Apr 2004 15:26:03 -0000 @@ -429,23 +429,12 @@ 
spin_lock_bh(&port_lock); if (!port->wakeup) goto exit; - if (port->next_waiting) - goto exit; - if (port->prev_waiting) - goto exit; - if (this->first_waiting_port == port) + if (!list_empty(&port->wait_list)) goto exit; port->congested_link = this; port->publ.congested = 1; port->waiting_pkts = 1 + sz/link_max_pkt(this); - if (!this->first_waiting_port) { - this->first_waiting_port = this->last_waiting_port = port; - } else { /* Append port to queue */ - - port->prev_waiting = this->last_waiting_port; - this->last_waiting_port->next_waiting = port; - this->last_waiting_port = port; - } + list_add_tail(&port->wait_list, &this->waiting_ports); exit: spin_unlock_bh(&port_lock); spin_unlock_bh(port->publ.lock); @@ -455,36 +444,27 @@ static void link_wakeup_ports(struct link *this,int all) { - struct port *port; + struct port *port, *tp; int win = this->queue_limit[0] - this->out_queue_size; if (all) win = 100000; if (win <= 0) return; spin_lock_bh(&port_lock); - port = this->first_waiting_port; if (link_congested(this)) goto exit; - while (port && (win > 0)) { - struct port *next = port->next_waiting; + list_for_each_entry_safe(port, tp, &this->waiting_ports, wait_list) { + if (win <= 0) + break; + list_del_init(&port->wait_list); port->congested_link = 0; - port->prev_waiting = port->next_waiting = 0; assert(port->wakeup); spin_lock_bh(port->publ.lock); port->publ.congested = 0; port->wakeup(&port->publ); win =- port->waiting_pkts; - port = next; } - this->first_waiting_port = port; - /* - * Make sure that this port isn't pointing at - * any port just removed from congestion - */ - if (port) { - port->prev_waiting = 0; - } exit: spin_unlock_bh(&port_lock); } @@ -580,6 +560,7 @@ this->state = RESET_UNKNOWN; this->next_out_no = 1; this->bcastlink = NULL; + INIT_LIST_HEAD(&this->waiting_ports); if (LINK_LOG_BUF_SIZE) { char *buf = k_malloc(LINK_LOG_BUF_SIZE); printbuf_init(&this->print_buf, buf, LINK_LOG_BUF_SIZE); @@ -740,7 +721,7 @@ buf_discard(iter); iter = next; } 
- if (this->first_waiting_port) + if (!list_empty(&this->waiting_ports)) link_wakeup_ports(this,1); this->retransm_queue_head = 0; this->retransm_queue_size = 0; @@ -1633,7 +1614,7 @@ } if (unlikely(this->next_out)) link_push_queue(this); - if (unlikely(this->first_waiting_port)) + if (unlikely(!list_empty(&this->waiting_ports))) link_wakeup_ports(this,0); if (unlikely(++this->unacked_window >= 10)) { this->stats.sent_acks++; Index: port.c =================================================================== RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v retrieving revision 1.15 diff -u -r1.15 port.c --- port.c 22 Apr 2004 23:28:12 -0000 1.15 +++ port.c 30 Apr 2004 15:26:03 -0000 @@ -202,7 +202,7 @@ spinlock_t port_lock = SPIN_LOCK_UNLOCKED; static spinlock_t queue_lock = SPIN_LOCK_UNLOCKED; -static struct port* ports = 0; +LIST_HEAD(ports); static void port_handle_node_down(struct port * this, tipc_net_addr_t); static void port_abort_self(struct port *,uint err); static void port_abort_peer(struct port *,uint err); @@ -290,18 +290,15 @@ this->last_in_seqno = 41; this->sent = 1; this->publ.usr_handle = usr_handle; - this->next_waiting = this->prev_waiting = 0; + INIT_LIST_HEAD(&this->wait_list); this->congested_link = 0; this->max_pkt = 1404; /* Ethernet, adjust at connect */ this->dispatcher = dispatcher; this->wakeup = wakeup; this->user_port = 0; spin_lock_bh(&port_lock); - this->prev = 0; - this->next = ports; - if (this->next) - this->next->prev = this; - ports = this; + INIT_LIST_HEAD(&this->port_list); + list_add_tail(&this->port_list, &ports); spin_unlock_bh(&port_lock); return &this->publ; } @@ -310,7 +307,6 @@ tipc_deleteport(const uint ref) { struct port *this; - struct link* cong; tipc_withdraw(ref,0); spin_lock_bh(&port_lock); @@ -323,34 +319,16 @@ port_cancel_timer(this); } ref_unlock_discard(ref); - dbg("up = %x, nx = %x, pr = %x, cong %x,nw = %x,lw %x\n", - this->user_port,this->next,this->prev,this->congested_link, - 
this->next_waiting,this->prev_waiting); - if (this->user_port){ + dbg("up = %p, cong %p\n", this->user_port, this->congested_link); + if (this->user_port) { reg_remove_port(this->user_port); kfree(this->user_port); } /* Unlink from port list: */ - if (this->next) - this->next->prev = this->prev; - if (this->prev) - this->prev->next = this->next; - else - ports = this->next; + list_del(&this->port_list); /* Unlink from link congestion queue, if any: */ - cong = this->congested_link; - if (cong){ - if (this->next_waiting) - this->next_waiting->prev_waiting = this->prev_waiting; - else - cong->last_waiting_port = this->prev_waiting; - - if (this->prev_waiting) - this->prev_waiting->next_waiting = this->next_waiting; - else - cong->first_waiting_port = this->next_waiting; - } + list_del(&this->wait_list); spin_unlock_bh(&port_lock); kfree(this); dbg("Deleted port %u\n", ref); @@ -693,8 +671,7 @@ struct print_buf b; printbuf_init(&b,raw,sz); spin_lock_bh(&port_lock); - p = ports; - for (; p; p = p->next) { + list_for_each_entry(p, &ports, port_list) { port_print(p,&b,""); } spin_unlock_bh(&port_lock); @@ -1558,7 +1535,7 @@ void port_stop(void) { - struct port* p = ports; + struct port *p, *tp; struct sk_buff* b; spin_lock_bh(&queue_lock); b = msg_queue_head; @@ -1569,9 +1546,7 @@ buf_discard(b); b = next; } - while (p) { - struct port *next = p->next; + list_for_each_entry_safe(p, tp, &ports, port_list) { tipc_deleteport(p->publ.ref); - p = next; } } -- Mark Haverkamp <ma...@os...> |
From: Jon M. <jon...@er...> - 2004-04-29 22:02:03
|
Thank you for the comments. Most of the problems you point out are omissions that are relatively easy to fix, as far as I can see, but some require more explanation, see below. Some constructive suggestions would also be appreciated, especially in the cases where you seem to have a particular dislike for the code. /jon Stephen Hemminger wrote: >Overall TIPC 1.3 comments: > >* README seems out of date > >* Configuration by module parameters is bad because in most > standard environments, module loading is done automatically. > What do you suggest ? With manual insmod loading this is very practical. > >* Need 2.6 only, no standalone build environment > No extra CFLAGS in Makefile please > It was never the intention that the current directory structure and make support at SF would be the one eventually integrated into the kernel. But we do need standalone build support, so I guess we need to keep the code in two different structures in the future. > >* Hard coding clusters, zones, nodes maximum makes it impossible > for distro vendors to ship a standard kernel. > >* having own caching allocator, if you insist on caching > then use kmem_cache_alloc. > The only caching we do is for ready-built sk_buffs. I don't see how this can be achieved with kmem_cache_alloc, but maybe I am missing something ? > >* it links to network devices, but does not handle up/down/unregister > notifications. THIS IS A SHOWSTOPPER > >* holds references to network devices without holding reference counts. > THIS IS A SHOWSTOPPER > No problem. Something I have missed. > >* why own version of wait_event_interruptible? > I guess you mean wait_event_interruptible_timeout. This did not exist in the versions of 2.4 that we support, so it is just legacy. This has been fixed in all files except proc.c, which is to be removed anyway; see previous mail. > >* excessive use of /proc to provide interface. /proc is okay > as a statistical interface, not a control channel! > See above. 
> >* your socket locking is a mess. What is wrong with lock_sock()? > > >* you invent your own socket queues, there are standard macros > to do that. > >* get rid of all the macros in tipc_adapt.h > err, warn, info, dbg, ... > use the standard macros in kernel.h > The Linux standard macros are not sufficient. Neither is Ethereal. This is the most important tool I have for troubleshooting. Try to trace a lost packet when the loss is detected one minute and 2 million packets later, and you will understand what I mean. > >* hardcoding device names 'eth0,eth1,eth2,...' in code and > configuration is unacceptable. What about other media types > and netdevice names can be anything the administrator wants. > Yes, I have been aware of that, but strings are no good either. Any suggestions? > >tipc.h >- cplusplus support not usually done in linux header files > >- get rid of typedef's for tipc_net_addr_t, tipc_ref_t > kernel standard is to not use typedef's if possible > >- tipc_addr,zone,cluster,node > using inline instead would gain type checking > >- no IPV6 support, would probably be a show stopper for many people > It is in the pipe. > >addr.c >* global variable, buffers waiting to be overwritten, ugh. > > >* k_time_ms == jiffies_to_clock_t > >* The whole TIPC_MAGIC in header crap looks like a trap waiting > to happen > I guess you refer to the management interface header. We need some minimum level of security, apart from disabling the whole functionality, which was meant to be configurable, and this was the simple solution I came up with. If you literally mean this may cause a trap, I don't understand what you mean. A wrong "magic" will cause a command rejection, and that is it. Please elaborate "crap". I realize that the management interface is not perfect, but the fact that you don't understand it in five seconds does not imply that it is all wrong. 
If Viro saw it, he wouldn't be as nice ;-) > > |