From: Ling, X. <xia...@in...> - 2004-04-20 00:40:32
-----Original Message-----
From: Jon Maloy [mailto:jon...@er...]
Sent: April 20, 2004 1:09
To: Ling, Xiaofeng
Cc: Daniel McNeil; Guo, Min; Mark Haverkamp; tipc
Subject: Re: [Tipc-discussion] Re: tipc multicast patch

My comments below.
/jon

>> In TIPCv1, what I understand is 2 processes on one node can not open
>> the same port name sequence,

> This has changed in TIPCv2. Now you can bind more than one socket to
> the same sequence, even on the same node. This may be useful if we
> want "weighted" load sharing, e.g. we may have 1 binding on one node
> and 2 bindings on a second, leading the second node to take twice the
> load of the first one for this particular function.

>> on two or more nodes, only one node will get a message sent to this
>> port name, so that can be treated as load balancing. As for
>> multicast, maybe this rule can also apply. Of course, this also
>> depends on the application mode.

> No. The semantics of multicast behaviour must be absolute and
> predictable. Statistical load sharing applies to unicast only.

So, in TIPCv2, any process binding to the same instance can take the
multicast receiver role? For example:

on node 1
A, bind(17777, 20)
B, bind(17777, 20)

on node 2
C, bind(17777, 20)
D, bind(17777, 20)

If sendto name (17777, 20, 0),
only one process will receive the message.
If sendto nameseq (17777, 20, 20),
all the processes will receive the message.

Is this description correct?
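For readers following the thread, Ling's A/B/C/D example translates into
socket calls roughly as below. The sketch uses the sockaddr_tipc layout
that later landed in mainline Linux <linux/tipc.h>; the 2004 userland
API differed, so treat this as an illustration of the name-versus-
nameseq distinction, not code from this patch set.

#include <string.h>
#include <sys/socket.h>
#include <linux/tipc.h>

int main(void)
{
        int sd = socket(AF_TIPC, SOCK_RDM, 0);

        /* Receiver (each of A, B, C, D): bind to instance 20 of name
         * type 17777.  Per Jon's answer, TIPCv2 allows several
         * sockets, even on one node, to bind to the same sequence. */
        struct sockaddr_tipc a;
        memset(&a, 0, sizeof(a));
        a.family = AF_TIPC;
        a.addrtype = TIPC_ADDR_NAMESEQ;
        a.addr.nameseq.type = 17777;
        a.addr.nameseq.lower = 20;
        a.addr.nameseq.upper = 20;
        a.scope = TIPC_CLUSTER_SCOPE;
        bind(sd, (struct sockaddr *)&a, sizeof(a));

        /* Sender: TIPC_ADDR_NAME corresponds to "sendto name
         * (17777, 20, 0)" -- exactly one matching socket receives the
         * message (load sharing).  TIPC_ADDR_NAMESEQ corresponds to
         * "sendto nameseq (17777, 20, 20)" -- every overlapping
         * socket receives one copy (multicast). */
        struct sockaddr_tipc d;
        memset(&d, 0, sizeof(d));
        d.family = AF_TIPC;
        d.addrtype = TIPC_ADDR_NAMESEQ;
        d.addr.nameseq.type = 17777;
        d.addr.nameseq.lower = 20;
        d.addr.nameseq.upper = 20;
        sendto(sd, "hello", 5, 0, (struct sockaddr *)&d, sizeof(d));
        return 0;
}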
From: Jon M. <jon...@er...> - 2004-04-20 00:32:29
It looks ok. You will find that it will collide with version 1.4 that I
just checked in, though, so a merge will be needed.

Cheers
/Jon

Mark Haverkamp wrote:

>I found a race using tipc_queue_size. Multiple processors can be
>accessing the variable at the same time, causing it to become
>corrupted. I was seeing places where tipc_queue_size was already zero
>but being decremented. This caused all kinds of strange things to
>happen. Anyway, I converted tipc_queue_size to an atomic_t to protect
>access to the variable. I can now run the tipc benchmark client/server
>test without hanging the network and/or computer.
>
>Index: socket.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/linux-2.6/socket.c,v
>retrieving revision 1.3
>diff -u -r1.3 socket.c
>--- socket.c    15 Apr 2004 18:06:37 -0000    1.3
>+++ socket.c    19 Apr 2004 21:50:19 -0000
>@@ -99,6 +99,7 @@
> #include <linux/version.h>
> #include <asm/semaphore.h>
> #include <asm/string.h>
>+#include <asm/atomic.h>
> #include <linux/fcntl.h>
> #include "tipc_adapt.h"
> #include <tipc_msg.h>
>@@ -133,7 +134,7 @@
>         int buf_rest;
> };
>
>-static uint tipc_queue_size = 0;
>+static atomic_t tipc_queue_size = ATOMIC_INIT(0);
> extern wait_queue_head_t tipc_signal_wq;
> static uint dispatch(struct tipc_port*,struct sk_buff *buf);
> static void wakeupdispatch(struct tipc_port *port);
>@@ -193,8 +194,8 @@
>         pmask |= pollmask(tsock->queue_head);
>         tsock->pollmask = pmask;
>         tsock->queue_size--;
>-        tipc_queue_size--;
>         spin_unlock_bh(tsock->p->lock);
>+        atomic_dec(&tipc_queue_size);
>         if (unlikely(pmask & POLLHUP))
>                 tipc_disconnect(tsock->p->ref);
> }
>@@ -284,7 +285,7 @@
>         struct sk_buff *next = buf_next(buf);
>         tipc_reject_msg(buf, TIPC_ERR_NO_PORT);
>         buf = next;
>-        tipc_queue_size--;
>+        atomic_dec(&tipc_queue_size);
>         }
>
>         while (tsock->ev_head) {
>@@ -371,7 +372,7 @@
>         /* Remove for next poll/read */
>         tsock->pollmask &= ~MSG_ERROR;
>         /* Empty error msg? */
>-        if (!(pmask & TIPC_MSG_PENDING)) 
>+        if (!(pmask & TIPC_MSG_PENDING))
>                 advance_queue(tsock);
>         }
>         return pmask;
>@@ -760,14 +761,16 @@
>         uint pmask = 0;
>         uint res = TIPC_OK;;
>
>-        if (unlikely(tipc_queue_size > OVERLOAD_LIMIT_BASE)) {
>-                if (overload(tipc_queue_size, OVERLOAD_LIMIT_BASE, msg)){
>+        if (unlikely((uint)atomic_read(&tipc_queue_size) >
>+                     OVERLOAD_LIMIT_BASE)) {
>+                if (overload(atomic_read(&tipc_queue_size),
>+                             OVERLOAD_LIMIT_BASE, msg)){
>                         res = TIPC_ERR_OVERLOAD;
>                         goto error;
>                 }
>         }
>         if (unlikely(tsock->queue_size > OVERLOAD_LIMIT_BASE / 2)) {
>-                if (overload(tsock->queue_size, OVERLOAD_LIMIT_BASE / 2, msg)){
>+                if (overload(tsock->queue_size, OVERLOAD_LIMIT_BASE / 2, msg)) {
>                         res = TIPC_ERR_OVERLOAD;
>                         goto error;
>                 }
>@@ -779,7 +782,7 @@
>                 }
>         }
>         tsock->queue_size += 1;
>-        tipc_queue_size += 1;
>+        atomic_inc(&tipc_queue_size);
>         msg_dbg(msg,"<DISP<: ");
>         buf_set_next(buf, 0);
>         if (tsock->queue_head) {
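The pattern in Mark's fix, distilled: a plain integer counter updated
from several CPUs is a read-modify-write race, while an atomic_t makes
each update indivisible. A minimal kernel-style sketch (names shortened
from the patch, not the TIPC code itself):

#include <linux/types.h>
#include <asm/atomic.h>         /* <linux/atomic.h> in later kernels */

/*
 * The racy original:
 *
 *      static uint tipc_queue_size = 0;
 *      ...
 *      tipc_queue_size += 1;   // load/modify/store: two CPUs can
 *      tipc_queue_size--;      // interleave, lose updates, and even
 *                              // drive the counter below zero
 *
 * The fix makes every update a single indivisible operation:
 */
static atomic_t tipc_queue_size = ATOMIC_INIT(0);

static void queue_grow(void)
{
        atomic_inc(&tipc_queue_size);
}

static void queue_shrink(void)
{
        atomic_dec(&tipc_queue_size);
}

static int queue_overloaded(uint limit)
{
        /* atomic_read() gives a coherent snapshot for the check */
        return (uint)atomic_read(&tipc_queue_size) > limit;
}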
From: Mark H. <ma...@os...> - 2004-04-19 22:37:19
I found a race using tipc_queue_size. Multiple processors can be
accessing the variable at the same time, causing it to become
corrupted. I was seeing places where tipc_queue_size was already zero
but being decremented. This caused all kinds of strange things to
happen. Anyway, I converted tipc_queue_size to an atomic_t to protect
access to the variable. I can now run the tipc benchmark client/server
test without hanging the network and/or computer.

Index: socket.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/linux-2.6/socket.c,v
retrieving revision 1.3
diff -u -r1.3 socket.c
--- socket.c    15 Apr 2004 18:06:37 -0000    1.3
+++ socket.c    19 Apr 2004 21:50:19 -0000
@@ -99,6 +99,7 @@
 #include <linux/version.h>
 #include <asm/semaphore.h>
 #include <asm/string.h>
+#include <asm/atomic.h>
 #include <linux/fcntl.h>
 #include "tipc_adapt.h"
 #include <tipc_msg.h>
@@ -133,7 +134,7 @@
         int buf_rest;
 };
 
-static uint tipc_queue_size = 0;
+static atomic_t tipc_queue_size = ATOMIC_INIT(0);
 extern wait_queue_head_t tipc_signal_wq;
 static uint dispatch(struct tipc_port*,struct sk_buff *buf);
 static void wakeupdispatch(struct tipc_port *port);
@@ -193,8 +194,8 @@
         pmask |= pollmask(tsock->queue_head);
         tsock->pollmask = pmask;
         tsock->queue_size--;
-        tipc_queue_size--;
         spin_unlock_bh(tsock->p->lock);
+        atomic_dec(&tipc_queue_size);
         if (unlikely(pmask & POLLHUP))
                 tipc_disconnect(tsock->p->ref);
 }
@@ -284,7 +285,7 @@
         struct sk_buff *next = buf_next(buf);
         tipc_reject_msg(buf, TIPC_ERR_NO_PORT);
         buf = next;
-        tipc_queue_size--;
+        atomic_dec(&tipc_queue_size);
         }
 
         while (tsock->ev_head) {
@@ -371,7 +372,7 @@
         /* Remove for next poll/read */
         tsock->pollmask &= ~MSG_ERROR;
         /* Empty error msg? */
-        if (!(pmask & TIPC_MSG_PENDING)) 
+        if (!(pmask & TIPC_MSG_PENDING))
                 advance_queue(tsock);
         }
         return pmask;
@@ -760,14 +761,16 @@
         uint pmask = 0;
         uint res = TIPC_OK;;
 
-        if (unlikely(tipc_queue_size > OVERLOAD_LIMIT_BASE)) {
-                if (overload(tipc_queue_size, OVERLOAD_LIMIT_BASE, msg)){
+        if (unlikely((uint)atomic_read(&tipc_queue_size) >
+                     OVERLOAD_LIMIT_BASE)) {
+                if (overload(atomic_read(&tipc_queue_size),
+                             OVERLOAD_LIMIT_BASE, msg)){
                         res = TIPC_ERR_OVERLOAD;
                         goto error;
                 }
         }
         if (unlikely(tsock->queue_size > OVERLOAD_LIMIT_BASE / 2)) {
-                if (overload(tsock->queue_size, OVERLOAD_LIMIT_BASE / 2, msg)){
+                if (overload(tsock->queue_size, OVERLOAD_LIMIT_BASE / 2, msg)) {
                         res = TIPC_ERR_OVERLOAD;
                         goto error;
                 }
@@ -779,7 +782,7 @@
                 }
         }
         tsock->queue_size += 1;
-        tipc_queue_size += 1;
+        atomic_inc(&tipc_queue_size);
         msg_dbg(msg,"<DISP<: ");
         buf_set_next(buf, 0);
         if (tsock->queue_head) {

-- 
Mark Haverkamp <ma...@os...>
From: Mark H. <ma...@os...> - 2004-04-19 21:24:35
I found a problem initializing a bcastlink bitmap. Too much memory was
being zeroed out, corrupting memory and causing strange behavior. I
changed the memset to only zero out the size of the bitmap array.
(Using BIT_NODES*sizeof(unsigned long) would have worked too, but with
this method you always zero the correct amount, even if the data type
changes someday.) I have included a patch for review.

Mark.

Index: bcast.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.c,v
retrieving revision 1.15
diff -u -r1.15 bcast.c
--- bcast.c    19 Apr 2004 16:12:28 -0000    1.15
+++ bcast.c    19 Apr 2004 21:16:18 -0000
@@ -317,7 +317,7 @@
         blink->bitmap[i] = 0;
     INIT_LIST_HEAD(&blink->list);
     blink->mask = (tipc_addr(tipc_zone(tipc_own_addr),tipc_cluster(tipc_own_addr),0));
-    memset(blink->bitmap, 0, MAX_NODES*sizeof(unsigned long));
+    memset(blink->bitmap, 0, sizeof(blink->bitmap));
     list_add_tail(&(blink->list),&blink_head);
 
     return blink;

-- 
Mark Haverkamp <ma...@os...>
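Why sizeof(blink->bitmap) is the robust form: applied to an array
member, sizeof yields the byte size of the whole array, so the memset
tracks the declaration automatically. A standalone sketch; the bitmap
declaration below (one bit per node, packed into longs) is an
assumption that matches the overrun Mark describes, not the actual
TIPC struct:

#include <string.h>

#define MAX_NODES 512
#define BITS_PER_ULONG (8 * sizeof(unsigned long))

struct bcastlink {
        unsigned long bitmap[MAX_NODES / BITS_PER_ULONG];
};

static void bitmap_clear(struct bcastlink *blink)
{
        /* sizeof(blink->bitmap) == MAX_NODES/8 bytes, always right.
         * MAX_NODES * sizeof(unsigned long) is BITS_PER_ULONG times
         * too large and writes far past the array -- the memory
         * corruption seen in the original code. */
        memset(blink->bitmap, 0, sizeof(blink->bitmap));
}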
From: Jon M. <jon...@er...> - 2004-04-19 17:09:44
My comments below.
/jon

Ling, Xiaofeng wrote:

> Thanks for your good suggestion.
> Some comments below.
>
>> -----Original Message-----
>> From: Daniel McNeil [mailto:da...@os...]
>> Sent: April 16, 2004 23:14
>> To: Ling, Xiaofeng; Guo, Min
>> Cc: Jon Maloy; Mark Haverkamp; tipc
>> Subject: RE: [Tipc-discussion] Re: tipc multicast patch
>>
>> Hi,
>>
>> We have not tested > 8 nodes yet. We could test that code by changing
>> the check (we currently have 4 nodes) to a lower number. Do you want
>> us to do this?
>>
>> How/why was the number '8' chosen for broadcast?

More or less at random. I think it is at the order of 10 nodes where
the use of broadcast starts to be beneficial. Below that, it adds no
significant bandwidth usage.
The use of broadcast has some drawbacks that should be avoided when
possible:
1: All nodes in the cluster will receive a copy of the packets, in many
cases just to throw them away.
2: Multicast implies a higher packet loss rate than unicast, and hence
more retransmissions.

> 8 is just a suggested number in the RFC; maybe the more feasible way
> is to make it a configurable module parameter, or a dynamic number
> adjusted to the total number of nodes in the cluster. That could be a
> TODO item.

If anything, the number should be increased. But let us not complicate
things. If a hard-coded value works, let us stick to that.

>> Also, Mark and I noticed some interesting behavior of the multicast:
>>
>> If 2 processes on the same node publish the same port name sequence,
>> a multicast only goes to 1 process on the local node (we have not
>> tried remote yet). Is this the intended behavior? Should all
>> processes on all nodes get it? (I do not know if your latest check-in
>> affects this behavior)

_All_ sockets overlapping with the send sequence, no matter on which
node, should receive exactly one copy of the message. The entities
joining a TIPC "multicast group" (if that is what we prefer to call it)
are sockets, not nodes. This is the only consistent behaviour: a
process moving from one node to another must expect the same behaviour
with regard to which messages it receives.

> In TIPCv1, what I understand is 2 processes on one node can not open
> the same port name sequence,

This has changed in TIPCv2. Now you can bind more than one socket to
the same sequence, even on the same node. This may be useful if we want
"weighted" load sharing, e.g. we may have 1 binding on one node and 2
bindings on a second, leading the second node to take twice the load of
the first one for this particular function.

> on two or more nodes, only one node will get a message sent to this
> port name, so that can be treated as load balancing. As for multicast,
> maybe this rule can also apply. Of course, this also depends on the
> application mode.

No. The semantics of multicast behaviour must be absolute and
predictable. Statistical load sharing applies to unicast only.

>> Thanks,
>>
>> Daniel
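Jon's delivery rule reduces to a range-overlap test: a bound socket
receives exactly one copy iff its published sequence overlaps the
destination sequence of the multicast. A minimal sketch; the types are
illustrative, not the TIPC source's own declarations:

#include <stdbool.h>
#include <stdint.h>

struct name_seq_range {
        uint32_t type;
        uint32_t lower;
        uint32_t upper;
};

/* True iff the name types match and the two closed ranges
 * [lower, upper] intersect -- the "overlapping with the send
 * sequence" condition from Jon's answer. */
static bool seq_overlaps(const struct name_seq_range *bound,
                         const struct name_seq_range *sent)
{
        return bound->type == sent->type &&
               bound->lower <= sent->upper &&
               bound->upper >= sent->lower;
}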
From: Mark H. <ma...@os...> - 2004-04-19 16:15:15
On Sat, 2004-04-17 at 10:42, Ling, Xiaofeng wrote:
> Ok, these are good patches, you can just check them in.

Done.

> > -----Original Message-----
> > From: Mark Haverkamp [mailto:ma...@os...]
> > Sent: April 17, 2004 1:41
> > To: Guo, Min; Ling, Xiaofeng; tipc
> > Subject: mcast patches
> >
> > Here are some patches for your consideration.
> >
> > The recvbcast.c patch allows tipc_bcast_stop access to
> > bnode_outqueue_release.
> >
> > The name_table.c patch fixed an occasional panic where the for loop
> > could run off the end of the seq table.
> >
> > The bcast.c patch moves prev_destnode outside the list_for_each so
> > it isn't continually being zeroed out.
> >
> > Thanks,
> > Mark.
> >
> > Index: recvbcast.c
> > ===================================================================
> > RCS file: /cvsroot/tipc/source/unstable/net/tipc/recvbcast.c,v
> > retrieving revision 1.11
> > diff -u -r1.11 recvbcast.c
> > --- recvbcast.c    16 Apr 2004 04:03:46 -0000    1.11
> > +++ recvbcast.c    16 Apr 2004 17:34:33 -0000
> > @@ -166,7 +166,7 @@
> >   * Input: ackno: the acked packet seqno
> >   * Return: void
> >   */
> > -static void bnode_outqueue_release(int ackno)
> > +void bnode_outqueue_release(int ackno)
> >  {
> >          struct sk_buff *buf;
> >  
> > Index: name_table.c
> > ===================================================================
> > RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
> > retrieving revision 1.8
> > diff -u -r1.8 name_table.c
> > --- name_table.c    16 Apr 2004 04:03:46 -0000    1.8
> > +++ name_table.c    16 Apr 2004 17:34:33 -0000
> > @@ -723,6 +723,9 @@
> >  
> >          if (high_seq < low_seq)
> >                  goto not_found;
> > +
> > +        if (high_seq >= seq->first_free)
> > +                high_seq = seq->first_free -1;
> >  
> >          spin_lock_bh(&seq->lock);
> >  
> > @@ -794,6 +797,9 @@
> >          high_seq = nameseq_find_insert_pos(seq,upper);
> >          if (high_seq < 0)
> >                  high_seq = high_seq < 0 ? ((~high_seq) -1): high_seq;
> > +
> > +        if (high_seq >= seq->first_free)
> > +                high_seq = seq->first_free -1;
> >  
> >          if (high_seq < low_seq)
> >                  goto not_found;
> > Index: bcast.c
> > ===================================================================
> > RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.c,v
> > retrieving revision 1.14
> > diff -u -r1.14 bcast.c
> > --- bcast.c    16 Apr 2004 04:03:46 -0000    1.14
> > +++ bcast.c    16 Apr 2004 17:34:33 -0000
> > @@ -427,8 +427,8 @@
> >          int i = 0,prev_destnode;
> >          struct mc_identity* mid;
> >  
> > +        prev_destnode = 0;
> >          list_for_each(pos,mc_head) {
> > -                prev_destnode = 0;
> >                  mid = list_entry(pos,struct mc_identity,list);
> >                  if (mid != NULL && (prev_destnode != mid->node)){
> >                          prev_destnode = mid->node;
> >
> > --
> > Mark Haverkamp <ma...@os...>

-- 
Mark Haverkamp <ma...@os...>
From: Mark H. <ma...@os...> - 2004-04-19 14:32:05
On Sat, 2004-04-17 at 10:38, Ling, Xiaofeng wrote:
> Thanks for your good suggestion.
> Some comments below.
>
> > -----Original Message-----
> > From: Daniel McNeil [mailto:da...@os...]
> > Sent: April 16, 2004 23:14
> > To: Ling, Xiaofeng; Guo, Min
> > Cc: Jon Maloy; Mark Haverkamp; tipc
> > Subject: RE: [Tipc-discussion] Re: tipc multicast patch
> >
> > Hi,
> >
> > We have not tested > 8 nodes yet. We could test that code by
> > changing the check (we currently have 4 nodes) to a lower number.
> > Do you want us to do this?
> >
> > How/why was the number '8' chosen for broadcast?
>
> 8 is just a suggested number in the RFC; maybe the more feasible way
> is to make it a configurable module parameter, or a dynamic number
> adjusted to the total number of nodes in the cluster. That could be a
> TODO item.
>
> > Also, Mark and I noticed some interesting behavior of the multicast:
> >
> > If 2 processes on the same node publish the same port name sequence,
> > a multicast only goes to 1 process on the local node (we have not
> > tried remote yet). Is this the intended behavior? Should all
> > processes on all nodes get it? (I do not know if your latest
> > check-in affects this behavior)
>
> In TIPCv1, what I understand is 2 processes on one node can not open
> the same port name sequence; on two or more nodes, only one node will
> get a message sent to this port name, so that can be treated as load
> balancing. As for multicast, maybe this rule can also apply. Of
> course, this also depends on the application mode.

I tried this out with your updated mcast code and it seems to work OK.
I published the same port name sequence from two processes on a node
and was able to receive a multicast message in each process. This seems
to me like the right thing to do.

Jon, I looked at your RFC and didn't see this kind of behavior
specified one way or the other. What do you think is the right thing
to do?

Thanks,
Mark.

-- 
Mark Haverkamp <ma...@os...>
From: Ling, X. <xia...@in...> - 2004-04-17 17:42:44
Ok, these are good patches, you can just check them in.

> -----Original Message-----
> From: Mark Haverkamp [mailto:ma...@os...]
> Sent: April 17, 2004 1:41
> To: Guo, Min; Ling, Xiaofeng; tipc
> Subject: mcast patches
>
> Here are some patches for your consideration.
>
> The recvbcast.c patch allows tipc_bcast_stop access to
> bnode_outqueue_release.
>
> The name_table.c patch fixed an occasional panic where the for loop
> could run off the end of the seq table.
>
> The bcast.c patch moves prev_destnode outside the list_for_each so it
> isn't continually being zeroed out.
>
> Thanks,
> Mark.
>
> Index: recvbcast.c
> ===================================================================
> RCS file: /cvsroot/tipc/source/unstable/net/tipc/recvbcast.c,v
> retrieving revision 1.11
> diff -u -r1.11 recvbcast.c
> --- recvbcast.c    16 Apr 2004 04:03:46 -0000    1.11
> +++ recvbcast.c    16 Apr 2004 17:34:33 -0000
> @@ -166,7 +166,7 @@
>   * Input: ackno: the acked packet seqno
>   * Return: void
>   */
> -static void bnode_outqueue_release(int ackno)
> +void bnode_outqueue_release(int ackno)
>  {
>          struct sk_buff *buf;
>  
> Index: name_table.c
> ===================================================================
> RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
> retrieving revision 1.8
> diff -u -r1.8 name_table.c
> --- name_table.c    16 Apr 2004 04:03:46 -0000    1.8
> +++ name_table.c    16 Apr 2004 17:34:33 -0000
> @@ -723,6 +723,9 @@
>  
>          if (high_seq < low_seq)
>                  goto not_found;
> +
> +        if (high_seq >= seq->first_free)
> +                high_seq = seq->first_free -1;
>  
>          spin_lock_bh(&seq->lock);
>  
> @@ -794,6 +797,9 @@
>          high_seq = nameseq_find_insert_pos(seq,upper);
>          if (high_seq < 0)
>                  high_seq = high_seq < 0 ? ((~high_seq) -1): high_seq;
> +
> +        if (high_seq >= seq->first_free)
> +                high_seq = seq->first_free -1;
>  
>          if (high_seq < low_seq)
>                  goto not_found;
> Index: bcast.c
> ===================================================================
> RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.c,v
> retrieving revision 1.14
> diff -u -r1.14 bcast.c
> --- bcast.c    16 Apr 2004 04:03:46 -0000    1.14
> +++ bcast.c    16 Apr 2004 17:34:33 -0000
> @@ -427,8 +427,8 @@
>          int i = 0,prev_destnode;
>          struct mc_identity* mid;
>  
> +        prev_destnode = 0;
>          list_for_each(pos,mc_head) {
> -                prev_destnode = 0;
>                  mid = list_entry(pos,struct mc_identity,list);
>                  if (mid != NULL && (prev_destnode != mid->node)){
>                          prev_destnode = mid->node;
>
> --
> Mark Haverkamp <ma...@os...>
From: Ling, X. <xia...@in...> - 2004-04-17 17:38:59
Thanks for your good suggestion.
Some comments below.

> -----Original Message-----
> From: Daniel McNeil [mailto:da...@os...]
> Sent: April 16, 2004 23:14
> To: Ling, Xiaofeng; Guo, Min
> Cc: Jon Maloy; Mark Haverkamp; tipc
> Subject: RE: [Tipc-discussion] Re: tipc multicast patch
>
> Hi,
>
> We have not tested > 8 nodes yet. We could test that code by changing
> the check (we currently have 4 nodes) to a lower number. Do you want
> us to do this?
>
> How/why was the number '8' chosen for broadcast?

8 is just a suggested number in the RFC; maybe the more feasible way is
to make it a configurable module parameter, or a dynamic number
adjusted to the total number of nodes in the cluster. That could be a
TODO item.

> Also, Mark and I noticed some interesting behavior of the multicast:
>
> If 2 processes on the same node publish the same port name sequence,
> a multicast only goes to 1 process on the local node (we have not
> tried remote yet). Is this the intended behavior? Should all
> processes on all nodes get it? (I do not know if your latest check-in
> affects this behavior)

In TIPCv1, what I understand is 2 processes on one node can not open
the same port name sequence; on two or more nodes, only one node will
get a message sent to this port name, so that can be treated as load
balancing. As for multicast, maybe this rule can also apply. Of course,
this also depends on the application mode.

> Thanks,
>
> Daniel
From: Jon M. <jon...@er...> - 2004-04-16 18:09:45
No problem with me. It seems like my own patches from yesterday somehow
have survived.

/jon

Mark Haverkamp wrote:

>I plan on checking in the re-fixed changes in this patch if there are
>no objections.
>
>Thanks,
>Mark.
>
>Index: manager.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/manager.c,v
>retrieving revision 1.7
>diff -u -r1.7 manager.c
>--- manager.c    16 Apr 2004 04:03:46 -0000    1.7
>+++ manager.c    16 Apr 2004 16:38:09 -0000
>@@ -161,12 +161,12 @@
>                           const char* data,
>                           const uint size)
> {
>-        memset(&rmsg, 0, sizeof (rmsg));
>+        memset(rmsg, 0, sizeof (*rmsg));
>         rmsg->cmd = htonl(cmd);
>         rmsg->retval = htonl(res);
>         memcpy(rmsg->usr_handle,usr_handle,sizeof(rmsg->usr_handle));
>         rmsg->result_len = htonl(sizeof(rmsg->result));
>-        sct[0].data = (const unchar *) &rmsg;
>+        sct[0].data = (const unchar *) rmsg;
>         sct[0].size = sizeof(*rmsg);
>         if (!data)
>                 return;
>Index: media.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/media.c,v
>retrieving revision 1.11
>diff -u -r1.11 media.c
>--- media.c    16 Apr 2004 04:03:46 -0000    1.11
>+++ media.c    16 Apr 2004 16:38:09 -0000
>@@ -525,7 +525,7 @@
>                 media++;
>         }
>         read_unlock_bh(&net_lock);
>-        return TIPC_OK;
>+        return buf.crs-raw;;
> }
>
>Index: node.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/node.c,v
>retrieving revision 1.13
>diff -u -r1.13 node.c
>--- node.c    16 Apr 2004 04:03:46 -0000    1.13
>+++ node.c    16 Apr 2004 16:38:09 -0000
>@@ -624,8 +624,8 @@
>         for(n = nodes;n;n = n->next) {
>                 if (!in_scope(scope,n->addr))
>                         continue;
>-                list[cnt].addr = n->addr;
>-                list[cnt].up = node_is_up(n);
>+                list[cnt].addr = htonl(n->addr);
>+                list[cnt].up = htonl(node_is_up(n));
>                 cnt++;
>         };
>         return (cnt * sizeof(struct tipc_node_info));
>Index: port.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v
>retrieving revision 1.13
>diff -u -r1.13 port.c
>--- port.c    16 Apr 2004 04:03:46 -0000    1.13
>+++ port.c    16 Apr 2004 16:38:09 -0000
>@@ -734,7 +734,7 @@
>         uport = dport->user_port;
>         usr_handle = uport->usr_handle;
>         connected = dport->publ.connected;
>-        published = dport->publ.connected;
>+        published = dport->publ.published;
>
>         if (unlikely(msg_errcode(msg)))
>                 goto err;
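The manager.c hunk is worth spelling out, since the bug pattern recurs:
memset was given the address and size of the pointer variable instead
of the struct it points to. A standalone sketch with an illustrative
struct (not the real tipc management message):

#include <string.h>

struct reply_msg {
        unsigned int cmd;
        unsigned int retval;
        char result[64];
};

static void fill_reply(struct reply_msg *rmsg)
{
        /* BUG: memset(&rmsg, 0, sizeof(rmsg)) zeroes the 4- or 8-byte
         * local pointer itself; the reply struct is never cleared.
         * FIX: pass the pointer value and the pointee's size: */
        memset(rmsg, 0, sizeof(*rmsg));
        rmsg->cmd = 1;          /* now safe to fill in fields */
}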
From: Mark H. <ma...@os...> - 2004-04-16 17:41:38
Here are some patches for your consideration.

The recvbcast.c patch allows tipc_bcast_stop access to
bnode_outqueue_release.

The name_table.c patch fixed an occasional panic where the for loop
could run off the end of the seq table.

The bcast.c patch moves prev_destnode outside the list_for_each so it
isn't continually being zeroed out.

Thanks,
Mark.

Index: recvbcast.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/recvbcast.c,v
retrieving revision 1.11
diff -u -r1.11 recvbcast.c
--- recvbcast.c    16 Apr 2004 04:03:46 -0000    1.11
+++ recvbcast.c    16 Apr 2004 17:34:33 -0000
@@ -166,7 +166,7 @@
  * Input: ackno: the acked packet seqno
  * Return: void
  */
-static void bnode_outqueue_release(int ackno)
+void bnode_outqueue_release(int ackno)
 {
         struct sk_buff *buf;
 
Index: name_table.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
retrieving revision 1.8
diff -u -r1.8 name_table.c
--- name_table.c    16 Apr 2004 04:03:46 -0000    1.8
+++ name_table.c    16 Apr 2004 17:34:33 -0000
@@ -723,6 +723,9 @@
 
         if (high_seq < low_seq)
                 goto not_found;
+
+        if (high_seq >= seq->first_free)
+                high_seq = seq->first_free -1;
 
         spin_lock_bh(&seq->lock);
 
@@ -794,6 +797,9 @@
         high_seq = nameseq_find_insert_pos(seq,upper);
         if (high_seq < 0)
                 high_seq = high_seq < 0 ? ((~high_seq) -1): high_seq;
+
+        if (high_seq >= seq->first_free)
+                high_seq = seq->first_free -1;
 
         if (high_seq < low_seq)
                 goto not_found;
Index: bcast.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.c,v
retrieving revision 1.14
diff -u -r1.14 bcast.c
--- bcast.c    16 Apr 2004 04:03:46 -0000    1.14
+++ bcast.c    16 Apr 2004 17:34:33 -0000
@@ -427,8 +427,8 @@
         int i = 0,prev_destnode;
         struct mc_identity* mid;
 
+        prev_destnode = 0;
         list_for_each(pos,mc_head) {
-                prev_destnode = 0;
                 mid = list_entry(pos,struct mc_identity,list);
                 if (mid != NULL && (prev_destnode != mid->node)){
                         prev_destnode = mid->node;

-- 
Mark Haverkamp <ma...@os...>
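The name_table.c change is an index clamp: the scan's upper bound must
not pass the table's last used slot. A sketch of the idea, with
illustrative types standing in for the real seq structure:

struct seq_table {
        int first_free;   /* one past the last used sub-sequence slot */
        /* ... array of sub-sequences ... */
};

/* Without the clamp, a lookup range reaching past the published
 * entries let "for (i = low_seq; i <= high_seq; i++)" walk off the
 * end of the array -- the occasional panic described above. */
static int clamp_high(const struct seq_table *seq, int high_seq)
{
        if (high_seq >= seq->first_free)
                high_seq = seq->first_free - 1;
        return high_seq;
}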
From: Mark H. <ma...@os...> - 2004-04-16 16:41:10
I plan on checking in the re-fixed changes in this patch if there are
no objections.

Thanks,
Mark.

Index: manager.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/manager.c,v
retrieving revision 1.7
diff -u -r1.7 manager.c
--- manager.c    16 Apr 2004 04:03:46 -0000    1.7
+++ manager.c    16 Apr 2004 16:38:09 -0000
@@ -161,12 +161,12 @@
                           const char* data,
                           const uint size)
 {
-        memset(&rmsg, 0, sizeof (rmsg));
+        memset(rmsg, 0, sizeof (*rmsg));
         rmsg->cmd = htonl(cmd);
         rmsg->retval = htonl(res);
         memcpy(rmsg->usr_handle,usr_handle,sizeof(rmsg->usr_handle));
         rmsg->result_len = htonl(sizeof(rmsg->result));
-        sct[0].data = (const unchar *) &rmsg;
+        sct[0].data = (const unchar *) rmsg;
         sct[0].size = sizeof(*rmsg);
         if (!data)
                 return;
Index: media.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/media.c,v
retrieving revision 1.11
diff -u -r1.11 media.c
--- media.c    16 Apr 2004 04:03:46 -0000    1.11
+++ media.c    16 Apr 2004 16:38:09 -0000
@@ -525,7 +525,7 @@
                 media++;
         }
         read_unlock_bh(&net_lock);
-        return TIPC_OK;
+        return buf.crs-raw;;
 }

Index: node.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/node.c,v
retrieving revision 1.13
diff -u -r1.13 node.c
--- node.c    16 Apr 2004 04:03:46 -0000    1.13
+++ node.c    16 Apr 2004 16:38:09 -0000
@@ -624,8 +624,8 @@
         for(n = nodes;n;n = n->next) {
                 if (!in_scope(scope,n->addr))
                         continue;
-                list[cnt].addr = n->addr;
-                list[cnt].up = node_is_up(n);
+                list[cnt].addr = htonl(n->addr);
+                list[cnt].up = htonl(node_is_up(n));
                 cnt++;
         };
         return (cnt * sizeof(struct tipc_node_info));
Index: port.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v
retrieving revision 1.13
diff -u -r1.13 port.c
--- port.c    16 Apr 2004 04:03:46 -0000    1.13
+++ port.c    16 Apr 2004 16:38:09 -0000
@@ -734,7 +734,7 @@
         uport = dport->user_port;
         usr_handle = uport->usr_handle;
         connected = dport->publ.connected;
-        published = dport->publ.connected;
+        published = dport->publ.published;

         if (unlikely(msg_errcode(msg)))
                 goto err;

-- 
Mark Haverkamp <ma...@os...>
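The node.c hunk is a byte-order fix: the node list is copied into a
management reply that other hosts read, so each 32-bit field must be
converted to network (big-endian) order first. A userspace sketch of
the same idea (the kernel uses its own htonl; the struct fields are
illustrative):

#include <arpa/inet.h>
#include <stdint.h>

struct tipc_node_info {
        uint32_t addr;   /* stored big-endian on the wire */
        uint32_t up;
};

static void fill_node_info(struct tipc_node_info *info,
                           uint32_t addr, int is_up)
{
        /* On a little-endian host, writing addr raw would put the
         * bytes in the wrong order for a remote reader; htonl()
         * makes the on-wire representation unambiguous. */
        info->addr = htonl(addr);
        info->up = htonl(is_up ? 1u : 0u);
}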
From: Mark H. <ma...@os...> - 2004-04-16 16:16:33
On Thu, 2004-04-15 at 21:08, Guo, Min wrote:
> Hi, Jon
>
> I checked our updated code into CVS. The basic transmission
> (recv/send) of both blink and replication can work correctly now, but
> we are still debugging retransmission. I marked the before-revision
> code as T04416, and the after-revision code as Ttx_rx_bcast.

I compiled the new code but couldn't load the tipc module:

tipc: Unknown symbol bnode_outqueue_release
Error inserting '/root/tipc.ko': -1 Unknown symbol in module

bnode_outqueue_release is declared static in recvbcast.c but is called
from tipc_bcast_stop in bcast.c. Removing the static allows the module
to be loaded.

Mark.

> Thanks
> Guo Min

-- 
Mark Haverkamp <ma...@os...>
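The linkage issue behind the error, as a sketch: static gives a
function internal linkage, so a call from another file in the same
module is left unresolved in the .ko and only surfaces when insmod
tries to bind it against exported kernel symbols. (The shared-header
placement for the prototype is an assumption, not the tree's actual
layout.)

/* recvbcast.c: dropping "static" gives the function external linkage.
 * With static, the call from bcast.c stayed unresolved in tipc.ko and
 * insmod failed with "Unknown symbol bnode_outqueue_release". */
void bnode_outqueue_release(int ackno)
{
        /* ... release acked broadcast buffers ... */
}

/* And in a shared header (name assumed), so bcast.c gets a checked
 * prototype instead of an ad-hoc extern declaration:
 *
 *      void bnode_outqueue_release(int ackno);
 */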
From: Daniel M. <da...@os...> - 2004-04-16 15:14:31
Hi,

Mark told me you checked in your code changes. Just a few good open
source development process suggestions:

Could you please send email to the tipc list when code check-ins are
done, so all of us know?

Patches should be reviewed on the list before check-in.

The check-in log messages should describe what actually was fixed.

Patch early and patch often. (Mark and I debugged code which you
already had fixes for :( ) Send out patches incrementally as you find
and fix bugs.

We have not tested > 8 nodes yet. We could test that code by changing
the check (we currently have 4 nodes) to a lower number. Do you want us
to do this?

How/why was the number '8' chosen for broadcast?

Also, Mark and I noticed some interesting behavior of the multicast:

If 2 processes on the same node publish the same port name sequence, a
multicast only goes to 1 process on the local node (we have not tried
remote yet). Is this the intended behavior? Should all processes on all
nodes get it? (I do not know if your latest check-in affects this
behavior)

Thanks,

Daniel
From: Mark H. <ma...@os...> - 2004-04-16 15:05:34
On Thu, 2004-04-15 at 21:08, Guo, Min wrote:
> Hi, Jon
>
> I checked our updated code into CVS. The basic transmission
> (recv/send) of both blink and replication can work correctly now, but
> we are still debugging retransmission. I marked the before-revision
> code as T04416, and the after-revision code as Ttx_rx_bcast.
>
> Thanks
> Guo Min

Hi,

Some of your check-in changes reversed some of my recent fixes in areas
unrelated to multicast/bcast:

manager.c: reversed the broken-memset fix. There were no other changes
other than to back out my fix.

media.c: reversed a return code fix. Again, there were no other changes
other than to reverse the fix.

msg.c: Nothing reversed, but no code was changed either -- only a log
message.

node.c: reversed the fix in node_get_nodes.

port.c: reversed the fix in port_dispatcher_sigh setting the published
flag. No other changes.

Please make sure that no one else has checked in a change before
checking in your changes. You may need to merge with the current cvs
files before checking in.

I'll re-check in the removed fixes.

Thanks,
Mark.

-- 
Mark Haverkamp <ma...@os...>
From: Guo, M. <mi...@in...> - 2004-04-16 02:55:34
Hi, Mark:

Did you test MC members > 8? Replication transmission already works in
our local CVS tree, and bcastlink transmission also works there. We are
now debugging the retransmission.

Thanks
Guo Min

_____

From: tip...@li...
[mailto:tip...@li...] On Behalf Of Jon Maloy
Sent: Friday, April 16, 2004 2:29 AM
To: Mark Haverkamp
Cc: tipc
Subject: Re: [Tipc-discussion] RE: Another tipc observation

Hi again.
I just checked it in myself, along with some other corrections. I also
added a corresponding file release, tipc-1.3.09.

/jon

Mark Haverkamp wrote:

> On Sat, 2004-04-10 at 11:06, Jon Maloy (QB/EMC) wrote:
>> Certainly not. I am surprised that it works at all, given that the
>> management code is not tested at all yet. Feel free to correct it.
>>
>> Thanks
>> /Jon
>
> OK, here is a fix that works. I made the change in the node_get_nodes
> function. It is only called by the manager. If you would rather, I
> could fix it in mng_cmd_event, but that would require casting the
> data buffer to a tipc_node_info and doing the htonl there.
>
> Mark.
>
> --- /home/markh/views/tipc/cvs/source/unstable/net/tipc/node.c    2004-03-30 07:11:39.000000000 -0800
> +++ node.c    2004-04-14 09:54:30.000000000 -0700
> @@ -613,8 +613,8 @@
>          for(n = nodes;n;n = n->next) {
>                  if (!in_scope(scope,n->addr))
>                          continue;
> -                list[cnt].addr = n->addr;
> -                list[cnt].up = node_is_up(n);
> +                list[cnt].addr = htonl(n->addr);
> +                list[cnt].up = htonl(node_is_up(n));
>                  cnt++;
>          };
>          return (cnt * sizeof(struct tipc_node_info));
From: Ling, X. <xia...@in...> - 2004-04-16 02:54:45
Guo and I are debugging now; currently our local copy works for
replication transfer and simple broadcast, and we are still working on
broadcast retransmit.

Mark,
Has your patch worked for broadcast? I mean when nodes > 8?

> -----Original Message-----
> From: tip...@li...
> [mailto:tip...@li...] On Behalf Of Jon Maloy
> Sent: April 16, 2004 3:20
> To: Mark Haverkamp
> Cc: tipc
> Subject: [Tipc-discussion] Re: tipc multicast patch
>
> It looks ok to me, but I think we should let Guo and Ling have a say
> first.
>
> The only thing I hesitate about is the test for (!index) in
> "ref_lock_deref". This is one of the most time-critical functions in
> TIPC. Is it _really_ necessary to do this test, except for pure
> debugging purposes? An index of zero should yield the right return
> value, zero, and is hence no different from other invalid indexes,
> which also return zero. => I would prefer that this test is omitted.
>
> /Jon
>
> Mark Haverkamp wrote:
>
> >Daniel McNeil and I were playing with multicast in the unstable view
> >and found that it wasn't working for us. We spent some time debugging
> >and got it working (at least in our simple tests). I have enclosed
> >the patch that we came up with for review. We have run the
> >client/server test and it continues to function, so I don't think
> >that we broke anything with the change. Please take a look and let us
> >know what you think.
> >
> >Thanks,
> >Mark and Daniel.
> >
> >--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/bcast.h    2004-03-30 07:11:38.000000000 -0800
> >+++ bcast.h    2004-04-14 14:56:08.000000000 -0700
> >@@ -123,7 +123,7 @@
> > int tipc_bsend_buf(struct sk_buff *buf, struct list_head *mc_head);
> > int tipc_send_buf2nameseq(tipc_ref_t ref,struct tipc_name_seq *dest,void *buf, uint size);
> > int tipc_forward_buf2nameseq(tipc_ref_t ref,struct tipc_name_seq *name,
> >-                void *buf,struct tipc_portid const *orig,uint size,
> >+                struct sk_buff *buf,struct tipc_portid const *orig,uint size,
> >                 uint importance,struct list_head *mc_head);
> >
> >--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/reg.h    2004-02-16 15:00:02.000000000 -0800
> >+++ reg.h    2004-04-15 10:08:28.000000000 -0700
> >@@ -91,8 +91,19 @@
> > static inline void *
> > ref_lock_deref(uint ref)
> > {
> >-        uint index = ref & ref_table.index_mask;
> >-        struct reference *r = &ref_table.entries[index];
> >+        uint index;
> >+        struct reference *r;
> >+
> >+        index = ref & ref_table.index_mask;
> >+
> >+        /*
> >+         * Zero is not a valid index
> >+         */
> >+        if (!index) {
> >+                printk("tipc ref_lock_deref: ref is zero\n");
> >+                return 0;
> >+        }
> >+        r = &ref_table.entries[index];
> >         spin_lock_bh(&r->lock);
> >         if (likely(r->data.reference == ref))
> >                 return r->object;
> >--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/sendbcast.c    2004-03-24 08:58:32.000000000 -0800
> >+++ sendbcast.c    2004-04-15 10:22:54.000000000 -0700
> >@@ -140,7 +140,7 @@
> >  */
> > int tipc_forward_buf2nameseq(tipc_ref_t ref,
> >                              struct tipc_name_seq *name,
> >-                             void *buf,
> >+                             struct sk_buff *buf,
> >                              struct tipc_portid const *orig,
> >                              uint size,
> >                              uint importance,struct list_head *mc_head)
> >@@ -156,19 +156,19 @@
> >         m = &this->publ.phdr;
> >         if (importance <= 3)
> >                 msg_set_importance(m,importance);
> >-
> >+        prev_destnode = 0;
> >         list_for_each(pos,mc_head) {
> >-                prev_destnode = 0;
> >                 mid = list_entry(pos,struct mc_identity,list);
> >                 if (mid != NULL && (prev_destnode != mid->node)){
> >                         prev_destnode = mid->node;
> >-                        copybuf = buf_acquire(msg_size(m));
> >-                        memcpy(copybuf,buf,msg_size(m));
> >+                        copybuf = buf_clone(buf);
> >                         msg_set_destnode(buf_msg(copybuf), mid ->node);
> >-                        if (likely(mid ->node != tipc_own_addr))
> >+                        if (likely(mid ->node != tipc_own_addr)) {
> >                                 res = tipc_send_buf_fast(copybuf,mid->node);
> >-                        else
> >+                        }
> >+                        else {
> >                                 res = bcast_port_recv(copybuf);
> >+                        }
> >                 }
> >         }
> >         return res;
> >@@ -242,6 +242,7 @@
> >
> >         if (!this)
> >                 return TIPC_FAILURE;
> >+
> >         INIT_LIST_HEAD(&mc_head);
> >         nametbl_mc_translate(&mc_head, seq->type,seq->lower,seq->upper);
> >         tipc_ownidentity(ref,&orig);
> >@@ -255,7 +256,6 @@
> >         res = msg_build(hdr,msg,scnt,TIPC_MAX_MSG_SIZE,&b);
> >         if (!b)
> >                 return TIPC_FAILURE;
> >-
> >         count = count_mc_member(&mc_head);
> >
> >         if (count <= REPLICA_NODES){
> >--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/bcast.c    2004-03-30 07:11:38.000000000 -0800
> >+++ bcast.c    2004-04-15 10:24:09.000000000 -0700
> >@@ -383,8 +383,8 @@
> >         int i = 0,prev_destnode;
> >         struct mc_identity* mid;
> >
> >+        prev_destnode = 0;
> >         list_for_each(pos,mc_head) {
> >-                prev_destnode = 0;
> >                 mid = list_entry(pos,struct mc_identity,list);
> >                 if (mid != NULL && (prev_destnode != mid->node)){
> >                         prev_destnode = mid->node;
> >@@ -433,6 +433,7 @@
> >         if (mc_list == NULL)
> >                 return false;
> >
> >+        INIT_LIST_HEAD(&mc_list->list);
> >         mc_list->port = destport;
> >         mc_list->node = destnode;
> >         list_add_tail(&mc_list->list,list_head);
> >@@ -492,15 +493,14 @@
> > void free_mclist(struct list_head *list_head)
> > {
> >         struct mc_identity* mid;
> >-        struct list_head *pos;
> >+        struct list_head *pos, *n;
> >
> >-        list_for_each(pos,list_head) {
> >+        list_for_each_safe(pos, n, list_head) {
> >                 mid = list_entry(pos, struct mc_identity, list);
> >                 list_del(&mid->list);
> >                 kfree(mid);
> >
> >         }
> >-        list_del(list_head);
> > }
> >
> > static struct list_head point = LIST_HEAD_INIT(point);
> >--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/link.c    2004-03-11 07:32:51.000000000 -0800
> >+++ link.c    2004-04-15 10:24:34.000000000 -0700
> >@@ -1609,7 +1609,11 @@
> >                 if (likely(msg_is_dest(msg,tipc_own_addr))){
> >                         if (likely(msg_isdata(msg))) {
> >                                 spin_unlock_bh(&this->owner->lock);
> >-                                port_recv_msg(buf);
> >+                                if (msg_type(msg) == TIPC_MCAST_MSG) {
> >+                                        bcast_port_recv(buf);
> >+                                } else {
> >+                                        port_recv_msg(buf);
> >+                                }
> >                                 continue;
> >                         }
> >                         link_recv_non_data_msg(this, buf);
> >--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/cfg.c    2004-02-16 15:00:01.000000000 -0800
> >+++ cfg.c    2004-04-15 10:27:00.000000000 -0700
> >@@ -393,7 +393,9 @@
> >
> > void cfg_link_event(tipc_net_addr_t addr,char* name,int up)
> > {
> >+#ifdef INTER_CLUSTER_COMM
> >         struct _zone* z = zone_find(addr);
> >+#endif
> >
> >         if (in_own_cluster(addr))
> >                 return;
> >--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/recvbcast.c    2004-03-30 07:11:39.000000000 -0800
> >+++ recvbcast.c    2004-04-15 10:31:18.000000000 -0700
> >@@ -326,10 +326,10 @@
> >
> >         struct list_head *pos;
> >         struct mc_identity *mid;
> >-        int res;
> >+        int res = TIPC_OK;
> >
> >         list_for_each(pos,mc_head) {
> >-                mid = list_entry(pos, struct mc_identity, list);
> >+                mid = list_entry(pos, struct mc_identity, list);
> >                 if(mid && mid ->node == tipc_own_addr) {
> >                         struct port* port =(struct port*) ref_deref(mid ->port);
> >                         struct sk_buff *copymsg = buf_clone(buf);
> >--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/name_table.c    2004-03-30 07:11:39.000000000 -0800
> >+++ name_table.c    2004-04-15 10:32:13.000000000 -0700
> >@@ -299,7 +299,7 @@
> >         /* Insert a publication: */
> >
> >         publ = publ_create(type, lower, upper, port, node, scope, key);
> >-        dbg("inserting publ %x, node = %x publ->node = %x, subscr->node\n",
> >+        dbg("inserting publ %x, node = %x publ->node = %x, subscr->node %x\n",
> >             publ,node,publ->node,publ->subscr.node);
> >         if (!sseq->zone_list)
> >                 sseq->zone_list = publ->zone_list.next = publ;
> >@@ -309,10 +309,11 @@
> >         }
> >
> >         if (in_own_cluster(node)){
> >-                if (!sseq->cluster_list)
> >+                if (!sseq->cluster_list) {
> >                         sseq->cluster_list = publ->cluster_list.next = publ;
> >-                else{
> >-                        publ->cluster_list.next = sseq->cluster_list->cluster_list.next;
> >+                } else{
> >+                        publ->cluster_list.next =
> >+                                sseq->cluster_list->cluster_list.next;
> >                         sseq->cluster_list->cluster_list.next = publ;
> >                 }
> >         }
> >@@ -465,7 +466,7 @@
> >         struct sub_seq *sseq = this->sseqs;
> >         if (!sseq)
> >                 return 0;
> >-        dbg("nameseq_av: ff = %u, sseq = %x, &&this->sseqs[this->first_free = %x\n",
> >+        dbg("nameseq_av: ff = %u, sseq = %x, &this->sseqs[this]->first_free = %x\n",
> >             this->first_free,sseq,&this->sseqs[this->first_free]);
> >         for (;sseq != &this->sseqs[this->first_free];sseq++) {
> >                 if ((sseq->lower >= lower) && (sseq->lower <= upper))
> >@@ -707,10 +708,14 @@
> >
> >         if (high_seq < low_seq)
> >                 goto not_found;
> >+
> >+        if (high_seq >= seq->first_free)
> >+                high_seq = seq->first_free - 1;
> >
> >         spin_lock_bh(&seq->lock);
> >
> >         i = low_seq;
> >+
> >
> >         for (i = low_seq ; i <= high_seq; i++)
> >         {
> >@@ -732,14 +737,15 @@
> >
> >                 if (destport)
> >                 {
> >-                        if ( false == mc_identity_create(mc_head,destport,destnode))
> >+                        if ( false == mc_identity_create(mc_head,destport,destnode)) {
> >                                 goto found;
> >+                        }
> >                 }
> >         }
> >         if (list_empty(mc_head))
> >         {
> >-                spin_unlock_bh(&seq->lock);
> >-                goto not_found;
> >+                spin_unlock_bh(&seq->lock);
> >+                goto not_found;
> >         }
> > found:
> >         spin_unlock_bh(&seq->lock);
> >@@ -783,16 +789,18 @@
> >         if (high_seq < low_seq)
> >                 goto not_found;
> >
> >+        if (high_seq >= seq->first_free)
> >+                high_seq = seq->first_free - 1;
> >+
> >         spin_lock_bh(&seq->lock);
> >
> >         i = low_seq;
> >-
> >+
> >         for (i = low_seq ; i <= high_seq; i++)
> >         {
> >                 publ = seq->sseqs[i].node_list;
> >                 if(!publ) {
> >-                        spin_unlock_bh(&seq->lock);
> >-                        goto not_found;
> >+                        continue;
> >                 }
> >                 destnode = publ->node;
> >                 destport = publ->ref;
> >@@ -804,9 +812,10 @@
> >                 }
> >         }
> >-        if (list_empty(mc_head))
> >+        if (list_empty(mc_head)) {
> >                 spin_unlock_bh(&seq->lock);
> >                 goto not_found;
> >+        }
> >         spin_unlock_bh(&seq->lock);
> >         read_unlock_bh(&nametbl_lock);
> >         return true;
From: Jon M. <jon...@er...> - 2004-04-15 21:54:58
Good. I hope we will soon have some comments from Guo and Ling about
this and about their progress relating to the broadcast protocol in
general.

Cheers
/jon

Mark Haverkamp wrote:

>On Thu, 2004-04-15 at 12:20, Jon Maloy wrote:
>> It looks ok to me, but I think we should let Guo and Ling have a say
>> first.
>
>I was hoping that they look at it.
>
>> The only thing I hesitate about is the test for (!index) in
>> "ref_lock_deref". This is one of the most time-critical functions in
>> TIPC. Is it _really_ necessary to do this test, except for pure
>> debugging purposes? An index of zero should yield the right return
>> value, zero, and is hence no different from other invalid indexes,
>> which also return zero. => I would prefer that this test is omitted.
>
>OK, the problem was that the lock in table[0] wasn't initialized,
>causing the spinlock code to BUG(). I initialized the lock and took
>out the index check. See attached.
>
>Mark.
>
>Index: bcast.h
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.h,v
>retrieving revision 1.9
>diff -u -r1.9 bcast.h
>--- bcast.h    30 Mar 2004 03:33:16 -0000    1.9
>+++ bcast.h    15 Apr 2004 20:42:45 -0000
>@@ -123,7 +123,7 @@
> int tipc_bsend_buf(struct sk_buff *buf, struct list_head *mc_head);
> int tipc_send_buf2nameseq(tipc_ref_t ref,struct tipc_name_seq *dest,void *buf, uint size);
> int tipc_forward_buf2nameseq(tipc_ref_t ref,struct tipc_name_seq *name,
>-                struct sk_buff *buf,struct tipc_portid const *orig,uint size,
>+                struct sk_buff *buf,struct tipc_portid const *orig,uint size,
>                 uint importance,struct list_head *mc_head);
>Index: reg.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/reg.c,v
>retrieving revision 1.6
>diff -u -r1.6 reg.c
>--- reg.c    16 Feb 2004 23:00:02 -0000    1.6
>+++ reg.c    15 Apr 2004 20:42:45 -0000
>@@ -100,6 +100,7 @@
>         table = (struct reference *) k_malloc(sizeof (struct reference) * sz);
>         table[0].object = 0;
>         table[0].data.reference = ~0u;
>+        table[0].lock = SPIN_LOCK_UNLOCKED;
>         for (i = 1; i < sz - 1; i++) {
>                 table[i].object = 0;
>                 table[i].lock = SPIN_LOCK_UNLOCKED;
>Index: sendbcast.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/sendbcast.c,v
>retrieving revision 1.10
>diff -u -r1.10 sendbcast.c
>--- sendbcast.c    16 Mar 2004 07:16:07 -0000    1.10
>+++ sendbcast.c    15 Apr 2004 20:42:45 -0000
>@@ -140,7 +140,7 @@
>  */
> int tipc_forward_buf2nameseq(tipc_ref_t ref,
>                              struct tipc_name_seq *name,
>-                             void *buf,
>+                             struct sk_buff *buf,
>                              struct tipc_portid const *orig,
>                              uint size,
>                              uint importance,struct list_head *mc_head)
>@@ -156,19 +156,19 @@
>         m = &this->publ.phdr;
>         if (importance <= 3)
>                 msg_set_importance(m,importance);
>-
>+        prev_destnode = 0;
>         list_for_each(pos,mc_head) {
>-                prev_destnode = 0;
>                 mid = list_entry(pos,struct mc_identity,list);
>                 if (mid != NULL && (prev_destnode != mid->node)){
>                         prev_destnode = mid->node;
>-                        copybuf = buf_acquire(msg_size(m));
>-                        memcpy(copybuf,buf,msg_size(m));
>+                        copybuf = buf_clone(buf);
>                         msg_set_destnode(buf_msg(copybuf), mid ->node);
>-                        if (likely(mid ->node != tipc_own_addr))
>+                        if (likely(mid ->node != tipc_own_addr)) {
>                                 res = tipc_send_buf_fast(copybuf,mid->node);
>-                        else
>+                        }
>+                        else {
>                                 res = bcast_port_recv(copybuf);
>+                        }
>                 }
>         }
>         return res;
>@@ -242,6 +242,7 @@
>
>         if (!this)
>                 return TIPC_FAILURE;
>+
>         INIT_LIST_HEAD(&mc_head);
>         nametbl_mc_translate(&mc_head, seq->type,seq->lower,seq->upper);
>         tipc_ownidentity(ref,&orig);
>@@ -255,7 +256,6 @@
>         res = msg_build(hdr,msg,scnt,TIPC_MAX_MSG_SIZE,&b);
>         if (!b)
>                 return TIPC_FAILURE;
>-
>         count = count_mc_member(&mc_head);
>
>         if (count <= REPLICA_NODES){
>Index: bcast.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.c,v
>retrieving revision 1.13
>diff -u -r1.13 bcast.c
>--- bcast.c    30 Mar 2004 03:33:16 -0000    1.13
>+++ bcast.c    15 Apr 2004 20:42:45 -0000
>@@ -383,8 +383,8 @@
>         int i = 0,prev_destnode;
>         struct mc_identity* mid;
>
>+        prev_destnode = 0;
>         list_for_each(pos,mc_head) {
>-                prev_destnode = 0;
>                 mid = list_entry(pos,struct mc_identity,list);
>                 if (mid != NULL && (prev_destnode != mid->node)){
>                         prev_destnode = mid->node;
>@@ -433,6 +433,7 @@
>         if (mc_list == NULL)
>                 return false;
>
>+        INIT_LIST_HEAD(&mc_list->list);
>         mc_list->port = destport;
>         mc_list->node = destnode;
>         list_add_tail(&mc_list->list,list_head);
>@@ -492,15 +493,14 @@
> void free_mclist(struct list_head *list_head)
> {
>         struct mc_identity* mid;
>-        struct list_head *pos;
>+        struct list_head *pos, *n;
>
>-        list_for_each(pos,list_head) {
>+        list_for_each_safe(pos, n, list_head) {
>                 mid = list_entry(pos, struct mc_identity, list);
>                 list_del(&mid->list);
>                 kfree(mid);
>
>         }
>-        list_del(list_head);
> }
>
> static struct list_head point = LIST_HEAD_INIT(point);
>Index: link.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.c,v
>retrieving revision 1.12
>diff -u -r1.12 link.c
>--- link.c    15 Apr 2004 17:44:13 -0000    1.12
>+++ link.c    15 Apr 2004 20:42:46 -0000
>@@ -1612,7 +1612,11 @@
>                 if (likely(msg_is_dest(msg,tipc_own_addr))){
>                         if (likely(msg_isdata(msg))) {
>                                 spin_unlock_bh(&this->owner->lock);
>-                                port_recv_msg(buf);
>+                                if (msg_type(msg) == TIPC_MCAST_MSG) {
>+                                        bcast_port_recv(buf);
>+                                } else {
>+                                        port_recv_msg(buf);
>+                                }
>                                 continue;
>                         }
>                         link_recv_non_data_msg(this, buf);
>Index: cfg.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/cfg.c,v
>retrieving revision 1.6
>diff -u -r1.6 cfg.c
>--- cfg.c    16 Feb 2004 23:00:01 -0000    1.6
>+++ cfg.c    15 Apr 2004 20:42:47 -0000
>@@ -393,7 +393,9 @@
>
> void cfg_link_event(tipc_net_addr_t addr,char* name,int up)
> {
>+#ifdef INTER_CLUSTER_COMM
>         struct _zone* z = zone_find(addr);
>+#endif
>
>         if (in_own_cluster(addr))
>                 return;
>Index: recvbcast.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/recvbcast.c,v
>retrieving revision 1.10
>diff -u -r1.10 recvbcast.c
>--- recvbcast.c    30 Mar 2004 03:33:16 -0000    1.10
>+++ recvbcast.c    15 Apr 2004 20:42:47 -0000
>@@ -326,10 +326,10 @@
>
>         struct list_head *pos;
>         struct mc_identity *mid;
>-        int res;
>+        int res = TIPC_OK;
>
>         list_for_each(pos,mc_head) {
>-                mid = list_entry(pos, struct mc_identity, list);
>+                mid = list_entry(pos, struct mc_identity, list);
>                 if(mid && mid ->node == tipc_own_addr) {
>                         struct port* port =(struct port*) ref_deref(mid ->port);
>                         struct sk_buff *copymsg = buf_clone(buf);
>Index: name_table.c
>===================================================================
>RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
>retrieving revision 1.7
>diff -u -r1.7 name_table.c
>--- name_table.c    30 Mar 2004 03:33:16 -0000    1.7
>+++ name_table.c    15 Apr 2004 20:42:47 -0000
>@@ -299,7 +299,7 @@
>         /* Insert a publication: */
>
>         publ = publ_create(type, lower, upper, port, node, scope, key);
>-        dbg("inserting publ %x, node = %x publ->node = %x, subscr->node\n",
>+        dbg("inserting publ %x, node = %x publ->node = %x, subscr->node %x\n",
>             publ,node,publ->node,publ->subscr.node);
>         if (!sseq->zone_list)
>                 sseq->zone_list = publ->zone_list.next = publ;
>@@ -309,10 +309,11 @@
>         }
>
>         if (in_own_cluster(node)){
>-                if (!sseq->cluster_list)
>+                if (!sseq->cluster_list) {
>                         sseq->cluster_list = publ->cluster_list.next = publ;
>-                else{
>-                        publ->cluster_list.next = sseq->cluster_list->cluster_list.next;
>+                } else{
>+                        publ->cluster_list.next =
>+                                sseq->cluster_list->cluster_list.next;
>                         sseq->cluster_list->cluster_list.next = publ;
>                 }
>         }
>@@ -465,7 +466,7 @@
>         struct sub_seq *sseq = this->sseqs;
>         if (!sseq)
>                 return 0;
>-        dbg("nameseq_av: ff = %u, sseq = %x, &&this->sseqs[this->first_free = %x\n",
>+        dbg("nameseq_av: ff = %u, sseq = %x, &this->sseqs[this]->first_free = %x\n",
>             this->first_free,sseq,&this->sseqs[this->first_free]);
>         for (;sseq != &this->sseqs[this->first_free];sseq++) {
>                 if ((sseq->lower >= lower) && (sseq->lower <= upper))
>@@ -707,10 +708,14 @@
>
>         if (high_seq < low_seq)
>                 goto not_found;
>+
>+        if (high_seq >= seq->first_free)
>+                high_seq = seq->first_free - 1;
>
>         spin_lock_bh(&seq->lock);
>
>         i = low_seq;
>+
>
>         for (i = low_seq ; i <= high_seq; i++)
>         {
>@@ -732,14 +737,15 @@
>
>                 if (destport)
>                 {
>-                        if ( false == mc_identity_create(mc_head,destport,destnode))
>+                        if ( false == mc_identity_create(mc_head,destport,destnode)) {
>                                 goto found;
>+                        }
>                 }
>         }
>         if (list_empty(mc_head))
>         {
>-                spin_unlock_bh(&seq->lock);
>-                goto not_found;
>+                spin_unlock_bh(&seq->lock);
>+                goto not_found;
>         }
> found:
>         spin_unlock_bh(&seq->lock);
>@@ -783,16 +789,18 @@
>         if (high_seq < low_seq)
>                 goto not_found;
>
>+        if (high_seq >= seq->first_free)
>+                high_seq = seq->first_free - 1;
>+
>         spin_lock_bh(&seq->lock);
>
>         i = low_seq;
>-
>+
>         for (i = low_seq ; i <= high_seq; i++)
>         {
>                 publ = seq->sseqs[i].node_list;
>                 if(!publ) {
>-                        spin_unlock_bh(&seq->lock);
>-                        goto not_found;
>+                        continue;
>                 }
>                 destnode = publ->node;
>                 destport = publ->ref;
>@@ -804,9 +812,10 @@
>                 }
>         }
>-        if (list_empty(mc_head))
>+        if (list_empty(mc_head)) {
>                 spin_unlock_bh(&seq->lock);
>                 goto not_found;
>+        }
>         spin_unlock_bh(&seq->lock);
>         read_unlock_bh(&nametbl_lock);
>         return true;
From: Mark H. <ma...@os...> - 2004-04-15 20:44:43
|
On Thu, 2004-04-15 at 12:20, Jon Maloy wrote:
> It looks ok to me, but I think we should let Guo and Ling have a
> say first.

I was hoping that they would look at it.

> The only thing I hesitate about is the test for (!index) in
> "ref_lock_deref". This is one of the most time-critical functions
> in TIPC. Is it _really_ necessary to do this test, except for pure
> debugging purposes? An index of zero should yield the right return
> value, zero, and is hence no different from other invalid indexes,
> which also return zero.
> => I would prefer that this test is omitted.

OK, the problem was that the lock in table[0] wasn't initialized, causing
the spinlock code to BUG(). I initialized the lock and took out the index
check. See attached.

Mark.

Index: bcast.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.h,v
retrieving revision 1.9
diff -u -r1.9 bcast.h
--- bcast.h	30 Mar 2004 03:33:16 -0000	1.9
+++ bcast.h	15 Apr 2004 20:42:45 -0000
@@ -123,7 +123,7 @@
 int tipc_bsend_buf(struct sk_buff *buf, struct list_head *mc_head);
 int tipc_send_buf2nameseq(tipc_ref_t ref,struct tipc_name_seq *dest,void *buf, uint size);
 int tipc_forward_buf2nameseq(tipc_ref_t ref,struct tipc_name_seq *name,
-        void *buf,struct tipc_portid const *orig,uint size,
+        struct sk_buff *buf,struct tipc_portid const *orig,uint size,
         uint importance,struct list_head *mc_head);

Index: reg.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/reg.c,v
retrieving revision 1.6
diff -u -r1.6 reg.c
--- reg.c	16 Feb 2004 23:00:02 -0000	1.6
+++ reg.c	15 Apr 2004 20:42:45 -0000
@@ -100,6 +100,7 @@
     table = (struct reference *) k_malloc(sizeof (struct reference) * sz);
     table[0].object = 0;
     table[0].data.reference = ~0u;
+    table[0].lock = SPIN_LOCK_UNLOCKED;
     for (i = 1; i < sz - 1; i++) {
         table[i].object = 0;
         table[i].lock = SPIN_LOCK_UNLOCKED;

Index: sendbcast.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/sendbcast.c,v
retrieving revision 1.10
diff -u -r1.10 sendbcast.c
--- sendbcast.c	16 Mar 2004 07:16:07 -0000	1.10
+++ sendbcast.c	15 Apr 2004 20:42:45 -0000
@@ -140,7 +140,7 @@
  */
 int tipc_forward_buf2nameseq(tipc_ref_t ref,
                              struct tipc_name_seq *name,
-                             void *buf,
+                             struct sk_buff *buf,
                              struct tipc_portid const *orig,
                              uint size,
                              uint importance,struct list_head *mc_head)
@@ -156,19 +156,19 @@
     m = &this->publ.phdr;
     if (importance <= 3)
         msg_set_importance(m,importance);
-
+    prev_destnode = 0;
     list_for_each(pos,mc_head) {
-        prev_destnode = 0;
         mid = list_entry(pos,struct mc_identity,list);
         if (mid != NULL && (prev_destnode != mid->node)){
             prev_destnode = mid->node;
-            copybuf = buf_acquire(msg_size(m));
-            memcpy(copybuf,buf,msg_size(m));
+            copybuf = buf_clone(buf);
             msg_set_destnode(buf_msg(copybuf), mid ->node);
-            if (likely(mid ->node != tipc_own_addr))
+            if (likely(mid ->node != tipc_own_addr)) {
                 res = tipc_send_buf_fast(copybuf,mid->node);
-            else
+            }
+            else {
                 res = bcast_port_recv(copybuf);
+            }
         }
     }
     return res;
@@ -242,6 +242,7 @@

     if (!this)
         return TIPC_FAILURE;
+
     INIT_LIST_HEAD(&mc_head);
     nametbl_mc_translate(&mc_head, seq->type,seq->lower,seq->upper);
     tipc_ownidentity(ref,&orig);
@@ -255,7 +256,6 @@
     res = msg_build(hdr,msg,scnt,TIPC_MAX_MSG_SIZE,&b);
     if (!b)
         return TIPC_FAILURE;
-
     count = count_mc_member(&mc_head);

     if (count <= REPLICA_NODES){

Index: bcast.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/bcast.c,v
retrieving revision 1.13
diff -u -r1.13 bcast.c
--- bcast.c	30 Mar 2004 03:33:16 -0000	1.13
+++ bcast.c	15 Apr 2004 20:42:45 -0000
@@ -383,8 +383,8 @@
     int i = 0,prev_destnode;
     struct mc_identity* mid;

+    prev_destnode = 0;
     list_for_each(pos,mc_head) {
-        prev_destnode = 0;
         mid = list_entry(pos,struct mc_identity,list);
         if (mid != NULL && (prev_destnode != mid->node)){
             prev_destnode = mid->node;
@@ -433,6 +433,7 @@
     if (mc_list == NULL)
         return false;

+    INIT_LIST_HEAD(&mc_list->list);
     mc_list->port = destport;
     mc_list->node = destnode;
     list_add_tail(&mc_list->list,list_head);
@@ -492,15 +493,14 @@
 void free_mclist(struct list_head *list_head)
 {
     struct mc_identity* mid;
-    struct list_head *pos;
+    struct list_head *pos, *n;

-    list_for_each(pos,list_head) {
+    list_for_each_safe(pos, n, list_head) {
         mid = list_entry(pos, struct mc_identity, list);
         list_del(&mid->list);
         kfree(mid);

     }
-    list_del(list_head);
 }

 static struct list_head point = LIST_HEAD_INIT(point);

Index: link.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.c,v
retrieving revision 1.12
diff -u -r1.12 link.c
--- link.c	15 Apr 2004 17:44:13 -0000	1.12
+++ link.c	15 Apr 2004 20:42:46 -0000
@@ -1612,7 +1612,11 @@
         if (likely(msg_is_dest(msg,tipc_own_addr))){
             if (likely(msg_isdata(msg))) {
                 spin_unlock_bh(&this->owner->lock);
-                port_recv_msg(buf);
+                if (msg_type(msg) == TIPC_MCAST_MSG) {
+                    bcast_port_recv(buf);
+                } else {
+                    port_recv_msg(buf);
+                }
                 continue;
             }
             link_recv_non_data_msg(this, buf);

Index: cfg.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/cfg.c,v
retrieving revision 1.6
diff -u -r1.6 cfg.c
--- cfg.c	16 Feb 2004 23:00:01 -0000	1.6
+++ cfg.c	15 Apr 2004 20:42:47 -0000
@@ -393,7 +393,9 @@

 void cfg_link_event(tipc_net_addr_t addr,char* name,int up)
 {
+#ifdef INTER_CLUSTER_COMM
     struct _zone* z = zone_find(addr);
+#endif

     if (in_own_cluster(addr))
         return;

Index: recvbcast.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/recvbcast.c,v
retrieving revision 1.10
diff -u -r1.10 recvbcast.c
--- recvbcast.c	30 Mar 2004 03:33:16 -0000	1.10
+++ recvbcast.c	15 Apr 2004 20:42:47 -0000
@@ -326,10 +326,10 @@

     struct list_head *pos;
     struct mc_identity *mid;
-    int res;
+    int res = TIPC_OK;

     list_for_each(pos,mc_head) {
-        mid = list_entry(pos, struct mc_identity, list);
+        mid = list_entry(pos, struct mc_identity, list);
         if(mid && mid ->node == tipc_own_addr) {
             struct port* port =(struct port*) ref_deref(mid ->port);
             struct sk_buff *copymsg = buf_clone(buf);

Index: name_table.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/name_table.c,v
retrieving revision 1.7
diff -u -r1.7 name_table.c
--- name_table.c	30 Mar 2004 03:33:16 -0000	1.7
+++ name_table.c	15 Apr 2004 20:42:47 -0000
@@ -299,7 +299,7 @@
     /* Insert a publication: */

     publ = publ_create(type, lower, upper, port, node, scope, key);
-    dbg("inserting publ %x, node = %x publ->node = %x, subscr->node\n",
+    dbg("inserting publ %x, node = %x publ->node = %x, subscr->node %x\n",
         publ,node,publ->node,publ->subscr.node);
     if (!sseq->zone_list)
         sseq->zone_list = publ->zone_list.next = publ;
@@ -309,10 +309,11 @@
     }

     if (in_own_cluster(node)){
-        if (!sseq->cluster_list)
+        if (!sseq->cluster_list) {
             sseq->cluster_list = publ->cluster_list.next = publ;
-        else{
-            publ->cluster_list.next = sseq->cluster_list->cluster_list.next;
+        } else{
+            publ->cluster_list.next =
+                sseq->cluster_list->cluster_list.next;
             sseq->cluster_list->cluster_list.next = publ;
         }
     }
@@ -465,7 +466,7 @@
     struct sub_seq *sseq = this->sseqs;
     if (!sseq)
         return 0;
-    dbg("nameseq_av: ff = %u, sseq = %x, &&this->sseqs[this->first_free = %x\n",
+    dbg("nameseq_av: ff = %u, sseq = %x, &this->sseqs[this]->first_free = %x\n",
         this->first_free,sseq,&this->sseqs[this->first_free]);
     for (;sseq != &this->sseqs[this->first_free];sseq++) {
         if ((sseq->lower >= lower) && (sseq->lower <= upper))
@@ -707,10 +708,14 @@

     if (high_seq < low_seq)
         goto not_found;
+
+    if (high_seq >= seq->first_free)
+        high_seq = seq->first_free - 1;

     spin_lock_bh(&seq->lock);

     i = low_seq;
+

     for (i = low_seq ; i <= high_seq; i++)
     {
@@ -732,14 +737,15 @@

         if (destport)
         {
-            if ( false == mc_identity_create(mc_head,destport,destnode))
+            if ( false == mc_identity_create(mc_head,destport,destnode)) {
                 goto found;
+            }
         }
     }
     if (list_empty(mc_head))
     {
-    spin_unlock_bh(&seq->lock);
-    goto not_found;
+        spin_unlock_bh(&seq->lock);
+        goto not_found;
     }
 found:
     spin_unlock_bh(&seq->lock);
@@ -783,16 +789,18 @@
     if (high_seq < low_seq)
         goto not_found;

+    if (high_seq >= seq->first_free)
+        high_seq = seq->first_free - 1;
+
     spin_lock_bh(&seq->lock);

     i = low_seq;
-
+
     for (i = low_seq ; i <= high_seq; i++)
     {
         publ = seq->sseqs[i].node_list;
         if(!publ) {
-            spin_unlock_bh(&seq->lock);
-            goto not_found;
+            continue;
         }
         destnode = publ->node;
         destport = publ->ref;
@@ -804,9 +812,10 @@
             }
         }
     }
-    if (list_empty(mc_head))
+    if (list_empty(mc_head)) {
         spin_unlock_bh(&seq->lock);
         goto not_found;
+    }
     spin_unlock_bh(&seq->lock);
     read_unlock_bh(&nametbl_lock);
     return true;

-- 
Mark Haverkamp <ma...@os...>
|
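The reg.c hunk above is the core of the fix: entry 0 of the reference table is reserved as "invalid" and never handed out, but a bad reference can still mask to index 0, so its spinlock must be initialized like every other entry's. A minimal sketch of the pattern, with illustrative names (my_reference, my_ref_table_init) rather than TIPC's actual identifiers, and using spin_lock_init() where the 2004 code assigned SPIN_LOCK_UNLOCKED:

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct my_reference {
        void *object;
        u32 reference;
        spinlock_t lock;
};

static struct my_reference *my_ref_table;

static int my_ref_table_init(unsigned int sz)
{
        unsigned int i;

        my_ref_table = kmalloc(sz * sizeof(*my_ref_table), GFP_KERNEL);
        if (!my_ref_table)
                return -ENOMEM;

        /* Entry 0 is never allocated, but a lookup whose masked index
         * is 0 still takes table[0].lock, so it must be initialized. */
        my_ref_table[0].object = NULL;
        my_ref_table[0].reference = ~0u;        /* never matches a live ref */
        spin_lock_init(&my_ref_table[0].lock);

        for (i = 1; i < sz; i++) {
                my_ref_table[i].object = NULL;
                my_ref_table[i].reference = 0;
                spin_lock_init(&my_ref_table[i].lock);
        }
        return 0;
}

With the lock initialized, the hot-path deref needs no special case for index 0, which is exactly why the (!index) test could be dropped.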
From: Jon M. <jon...@er...> - 2004-04-15 19:20:43
|
It looks ok to me, but I think we should let Guo and Ling have a say first.

The only thing I hesitate about is the test for (!index) in
"ref_lock_deref". This is one of the most time-critical functions in
TIPC. Is it _really_ necessary to do this test, except for pure debugging
purposes? An index of zero should yield the right return value, zero, and
is hence no different from other invalid indexes, which also return zero.
=> I would prefer that this test is omitted.

/Jon

Mark Haverkamp wrote:

>Daniel McNeil and I were playing with multicast in the unstable view and
>found that it wasn't working for us. We spent some time debugging and
>got it working (at least in our simple tests). I have enclosed the
>patch that we came up with for review. We have run the client/server
>test and it continues to function so I don't think that we broke
>anything with the change. Please take a look and let us know what you
>think.
>
>Thanks,
>Mark and Daniel.
|
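Jon's argument, in sketch form: once every table entry's lock is initialized, a reference that masks to index 0 simply fails the identity comparison like any other stale reference and returns NULL, so no extra branch is needed on the hot path. The code below is an assumed shape based on the snippet quoted in this thread, not the actual TIPC source; the field names and the unlock-on-failure path are my reconstruction:

#include <linux/spinlock.h>
#include <linux/types.h>

struct my_reference {
        void *object;
        u32 reference;
        spinlock_t lock;
};

extern struct my_reference my_ref_table[];  /* entry 0 reserved as invalid */
extern u32 my_index_mask;

/* Returns the object with its entry lock held, or NULL for any invalid
 * reference, including one that masks to index 0: table[0].reference is
 * set to ~0u and therefore never matches a live reference value. */
static inline void *my_ref_lock_deref(u32 ref)
{
        struct my_reference *r = &my_ref_table[ref & my_index_mask];

        spin_lock_bh(&r->lock);
        if (r->reference == ref)
                return r->object;       /* caller must release r->lock */
        spin_unlock_bh(&r->lock);
        return NULL;
}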
From: Jon M. <jon...@er...> - 2004-04-15 18:28:56
|
Hi again.
I just checked it in myself, along with some other corrections. I also
added a corresponding file release, tipc-1.3.09.

/jon

Mark Haverkamp wrote:

> On Sat, 2004-04-10 at 11:06, Jon Maloy (QB/EMC) wrote:
>> Certainly not. I am surprised that it works at all,
>> given that the management code is not tested at all
>> yet. Feel free to correct it.
>>
>> Thanks /Jon
>
> OK, here is a fix that works. I made the change in the node_get_nodes
> function. It is only called by the manager. If you would rather, I could
> fix it in mng_cmd_event, but that would require casting the data buffer
> to a tipc_node_info and doing the htonl there.
>
> Mark.
|
From: Mark H. <ma...@os...> - 2004-04-15 17:58:50
|
Daniel McNeil and I were playing with multicast in the unstable view and
found that it wasn't working for us. We spent some time debugging and
got it working (at least in our simple tests). I have enclosed the patch
that we came up with for review. We have run the client/server test and
it continues to function so I don't think that we broke anything with
the change. Please take a look and let us know what you think.

Thanks,
Mark and Daniel.

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/bcast.h	2004-03-30 07:11:38.000000000 -0800
+++ bcast.h	2004-04-14 14:56:08.000000000 -0700
@@ -123,7 +123,7 @@
 int tipc_bsend_buf(struct sk_buff *buf, struct list_head *mc_head);
 int tipc_send_buf2nameseq(tipc_ref_t ref,struct tipc_name_seq *dest,void *buf, uint size);
 int tipc_forward_buf2nameseq(tipc_ref_t ref,struct tipc_name_seq *name,
-        void *buf,struct tipc_portid const *orig,uint size,
+        struct sk_buff *buf,struct tipc_portid const *orig,uint size,
         uint importance,struct list_head *mc_head);

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/reg.h	2004-02-16 15:00:02.000000000 -0800
+++ reg.h	2004-04-15 10:08:28.000000000 -0700
@@ -91,8 +91,19 @@
 static inline void *
 ref_lock_deref(uint ref)
 {
-    uint index = ref & ref_table.index_mask;
-    struct reference *r = &ref_table.entries[index];
+    uint index;
+    struct reference *r;
+
+    index = ref & ref_table.index_mask;
+
+    /*
+     * Zero is not a valid index
+     */
+    if (!index) {
+        printk("tipc ref_lock_deref: ref is zero\n");
+        return 0;
+    }
+    r = &ref_table.entries[index];
     spin_lock_bh(&r->lock);
     if (likely(r->data.reference == ref))
         return r->object;

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/sendbcast.c	2004-03-24 08:58:32.000000000 -0800
+++ sendbcast.c	2004-04-15 10:22:54.000000000 -0700
@@ -140,7 +140,7 @@
  */
 int tipc_forward_buf2nameseq(tipc_ref_t ref,
                              struct tipc_name_seq *name,
-                             void *buf,
+                             struct sk_buff *buf,
                              struct tipc_portid const *orig,
                              uint size,
                              uint importance,struct list_head *mc_head)
@@ -156,19 +156,19 @@
     m = &this->publ.phdr;
     if (importance <= 3)
         msg_set_importance(m,importance);
-
+    prev_destnode = 0;
     list_for_each(pos,mc_head) {
-        prev_destnode = 0;
         mid = list_entry(pos,struct mc_identity,list);
         if (mid != NULL && (prev_destnode != mid->node)){
             prev_destnode = mid->node;
-            copybuf = buf_acquire(msg_size(m));
-            memcpy(copybuf,buf,msg_size(m));
+            copybuf = buf_clone(buf);
             msg_set_destnode(buf_msg(copybuf), mid ->node);
-            if (likely(mid ->node != tipc_own_addr))
+            if (likely(mid ->node != tipc_own_addr)) {
                 res = tipc_send_buf_fast(copybuf,mid->node);
-            else
+            }
+            else {
                 res = bcast_port_recv(copybuf);
+            }
         }
     }
     return res;
@@ -242,6 +242,7 @@

     if (!this)
         return TIPC_FAILURE;
+
     INIT_LIST_HEAD(&mc_head);
     nametbl_mc_translate(&mc_head, seq->type,seq->lower,seq->upper);
     tipc_ownidentity(ref,&orig);
@@ -255,7 +256,6 @@
     res = msg_build(hdr,msg,scnt,TIPC_MAX_MSG_SIZE,&b);
     if (!b)
         return TIPC_FAILURE;
-
     count = count_mc_member(&mc_head);

     if (count <= REPLICA_NODES){

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/bcast.c	2004-03-30 07:11:38.000000000 -0800
+++ bcast.c	2004-04-15 10:24:09.000000000 -0700
@@ -383,8 +383,8 @@
     int i = 0,prev_destnode;
     struct mc_identity* mid;

+    prev_destnode = 0;
     list_for_each(pos,mc_head) {
-        prev_destnode = 0;
         mid = list_entry(pos,struct mc_identity,list);
         if (mid != NULL && (prev_destnode != mid->node)){
             prev_destnode = mid->node;
@@ -433,6 +433,7 @@
     if (mc_list == NULL)
         return false;

+    INIT_LIST_HEAD(&mc_list->list);
     mc_list->port = destport;
     mc_list->node = destnode;
     list_add_tail(&mc_list->list,list_head);
@@ -492,15 +493,14 @@
 void free_mclist(struct list_head *list_head)
 {
     struct mc_identity* mid;
-    struct list_head *pos;
+    struct list_head *pos, *n;

-    list_for_each(pos,list_head) {
+    list_for_each_safe(pos, n, list_head) {
         mid = list_entry(pos, struct mc_identity, list);
         list_del(&mid->list);
         kfree(mid);

     }
-    list_del(list_head);
 }

 static struct list_head point = LIST_HEAD_INIT(point);

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/link.c	2004-03-11 07:32:51.000000000 -0800
+++ link.c	2004-04-15 10:24:34.000000000 -0700
@@ -1609,7 +1609,11 @@
         if (likely(msg_is_dest(msg,tipc_own_addr))){
             if (likely(msg_isdata(msg))) {
                 spin_unlock_bh(&this->owner->lock);
-                port_recv_msg(buf);
+                if (msg_type(msg) == TIPC_MCAST_MSG) {
+                    bcast_port_recv(buf);
+                } else {
+                    port_recv_msg(buf);
+                }
                 continue;
             }
             link_recv_non_data_msg(this, buf);

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/cfg.c	2004-02-16 15:00:01.000000000 -0800
+++ cfg.c	2004-04-15 10:27:00.000000000 -0700
@@ -393,7 +393,9 @@

 void cfg_link_event(tipc_net_addr_t addr,char* name,int up)
 {
+#ifdef INTER_CLUSTER_COMM
     struct _zone* z = zone_find(addr);
+#endif

     if (in_own_cluster(addr))
         return;

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/recvbcast.c	2004-03-30 07:11:39.000000000 -0800
+++ recvbcast.c	2004-04-15 10:31:18.000000000 -0700
@@ -326,10 +326,10 @@

     struct list_head *pos;
     struct mc_identity *mid;
-    int res;
+    int res = TIPC_OK;

     list_for_each(pos,mc_head) {
-        mid = list_entry(pos, struct mc_identity, list);
+        mid = list_entry(pos, struct mc_identity, list);
         if(mid && mid ->node == tipc_own_addr) {
             struct port* port =(struct port*) ref_deref(mid ->port);
             struct sk_buff *copymsg = buf_clone(buf);

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/name_table.c	2004-03-30 07:11:39.000000000 -0800
+++ name_table.c	2004-04-15 10:32:13.000000000 -0700
@@ -299,7 +299,7 @@
     /* Insert a publication: */

     publ = publ_create(type, lower, upper, port, node, scope, key);
-    dbg("inserting publ %x, node = %x publ->node = %x, subscr->node\n",
+    dbg("inserting publ %x, node = %x publ->node = %x, subscr->node %x\n",
         publ,node,publ->node,publ->subscr.node);
     if (!sseq->zone_list)
         sseq->zone_list = publ->zone_list.next = publ;
@@ -309,10 +309,11 @@
     }

     if (in_own_cluster(node)){
-        if (!sseq->cluster_list)
+        if (!sseq->cluster_list) {
             sseq->cluster_list = publ->cluster_list.next = publ;
-        else{
-            publ->cluster_list.next = sseq->cluster_list->cluster_list.next;
+        } else{
+            publ->cluster_list.next =
+                sseq->cluster_list->cluster_list.next;
             sseq->cluster_list->cluster_list.next = publ;
         }
     }
@@ -465,7 +466,7 @@
     struct sub_seq *sseq = this->sseqs;
     if (!sseq)
         return 0;
-    dbg("nameseq_av: ff = %u, sseq = %x, &&this->sseqs[this->first_free = %x\n",
+    dbg("nameseq_av: ff = %u, sseq = %x, &this->sseqs[this]->first_free = %x\n",
         this->first_free,sseq,&this->sseqs[this->first_free]);
     for (;sseq != &this->sseqs[this->first_free];sseq++) {
         if ((sseq->lower >= lower) && (sseq->lower <= upper))
@@ -707,10 +708,14 @@

     if (high_seq < low_seq)
         goto not_found;
+
+    if (high_seq >= seq->first_free)
+        high_seq = seq->first_free - 1;

     spin_lock_bh(&seq->lock);

     i = low_seq;
+

     for (i = low_seq ; i <= high_seq; i++)
     {
@@ -732,14 +737,15 @@

         if (destport)
         {
-            if ( false == mc_identity_create(mc_head,destport,destnode))
+            if ( false == mc_identity_create(mc_head,destport,destnode)) {
                 goto found;
+            }
         }
     }
     if (list_empty(mc_head))
     {
-    spin_unlock_bh(&seq->lock);
-    goto not_found;
+        spin_unlock_bh(&seq->lock);
+        goto not_found;
     }
 found:
     spin_unlock_bh(&seq->lock);
@@ -783,16 +789,18 @@
     if (high_seq < low_seq)
         goto not_found;

+    if (high_seq >= seq->first_free)
+        high_seq = seq->first_free - 1;
+
     spin_lock_bh(&seq->lock);

     i = low_seq;
-
+
     for (i = low_seq ; i <= high_seq; i++)
     {
         publ = seq->sseqs[i].node_list;
         if(!publ) {
-            spin_unlock_bh(&seq->lock);
-            goto not_found;
+            continue;
         }
         destnode = publ->node;
         destport = publ->ref;
@@ -804,9 +812,10 @@
             }
         }
     }
-    if (list_empty(mc_head))
+    if (list_empty(mc_head)) {
         spin_unlock_bh(&seq->lock);
         goto not_found;
+    }
     spin_unlock_bh(&seq->lock);
     read_unlock_bh(&nametbl_lock);
     return true;

-- 
Mark Haverkamp <ma...@os...>
|
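The heart of this patch is the send-loop rewrite in sendbcast.c: clone the sk_buff once per distinct destination node instead of re-acquiring and memcpy'ing a buffer, and initialize prev_destnode before the loop so the per-node dedup actually works. Below is a standalone sketch of that loop shape; deliver_remote() and deliver_local() are hypothetical stand-ins for tipc_send_buf_fast()/bcast_port_recv(), and it assumes the member list is grouped by node, since only the previous entry is compared:

#include <linux/list.h>
#include <linux/skbuff.h>
#include <linux/types.h>

struct mc_member {
        struct list_head list;
        u32 node;
};

int deliver_remote(struct sk_buff *skb, u32 node);   /* hypothetical */
int deliver_local(struct sk_buff *skb);              /* hypothetical */

static int mc_replicate(struct sk_buff *buf, struct list_head *mc_head,
                        u32 own_addr)
{
        struct mc_member *m;
        struct sk_buff *copy;
        u32 prev_destnode = 0;          /* initialized once, before the loop */
        int res = 0;

        list_for_each_entry(m, mc_head, list) {
                if (m->node == prev_destnode)
                        continue;       /* one copy per destination node */
                prev_destnode = m->node;

                copy = skb_clone(buf, GFP_ATOMIC);
                if (!copy)
                        return -ENOMEM;
                if (m->node != own_addr)
                        res = deliver_remote(copy, m->node);
                else
                        res = deliver_local(copy);  /* local loopback path */
        }
        return res;
}

Resetting prev_destnode inside the loop, as the old code did, defeats the comparison on every iteration, which is why each node could receive duplicate copies before this fix.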
From: Jon M. <jon...@er...> - 2004-04-15 17:48:58
|
The solution is fine. Just check it in.

/jon

Mark Haverkamp wrote:

> On Sat, 2004-04-10 at 11:06, Jon Maloy (QB/EMC) wrote:
>> Certainly not. I am surprised that it works at all,
>> given that the management code is not tested at all
>> yet. Feel free to correct it.
>>
>> Thanks /Jon
>
> OK, here is a fix that works. I made the change in the node_get_nodes
> function. It is only called by the manager. If you would rather, I could
> fix it in mng_cmd_event, but that would require casting the data buffer
> to a tipc_node_info and doing the htonl there.
>
> Mark.
|
From: Jon M. <jon...@er...> - 2004-04-15 14:04:15
|
Go ahead.

Thanks
/jon

Mark Haverkamp wrote:

>The proto_ops and net_proto_family structures don't set the owner
>field. This prevents proper reference counting for the module and
>allows the module to be unloaded when in use. This diff initializes
>the owner field and also converts the initialization to the style
>used by the kernel code.
>
>Any objections to checking this in?
>
>Mark.
>
>--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/linux-2.6/socket.c	2004-03-30 07:11:39.000000000 -0800
>+++ socket.c	2004-04-13 13:14:06.000000000 -0700
>@@ -1352,69 +1352,73 @@
> }
>
> static struct proto_ops msg_ops = {
>-    family:AF_TIPC,
>-    release:release,
>-    bind:bind,
>-    connect:connect,
>-    socketpair:no_skpair,
>-    accept:accept,
>-    getname:get_name,
>-    poll:poll,
>-    ioctl:ioctl,
>-    listen:listen,
>-    shutdown:shutdown,
>-    setsockopt:setsockopt,
>-    getsockopt:getsockopt,
>-    sendmsg:send_msg,
>-    recvmsg:recv_msg,
>-    mmap:no_mmap,
>-    sendpage:no_sendpage
>+    .owner = THIS_MODULE,
>+    .family = AF_TIPC,
>+    .release = release,
>+    .bind = bind,
>+    .connect = connect,
>+    .socketpair = no_skpair,
>+    .accept = accept,
>+    .getname = get_name,
>+    .poll = poll,
>+    .ioctl = ioctl,
>+    .listen = listen,
>+    .shutdown = shutdown,
>+    .setsockopt = setsockopt,
>+    .getsockopt = getsockopt,
>+    .sendmsg = send_msg,
>+    .recvmsg = recv_msg,
>+    .mmap = no_mmap,
>+    .sendpage = no_sendpage
> };
>
> static struct proto_ops packet_ops = {
>-    family:AF_TIPC,
>-    release:release,
>-    bind:bind,
>-    connect:connect,
>-    socketpair:no_skpair,
>-    accept:accept,
>-    getname:get_name,
>-    poll:poll,
>-    ioctl:ioctl,
>-    listen:listen,
>-    shutdown:shutdown,
>-    setsockopt:setsockopt,
>-    getsockopt:getsockopt,
>-    sendmsg:send_packet,
>-    recvmsg:recv_msg,
>-    mmap:no_mmap,
>-    sendpage:no_sendpage
>+    .owner = THIS_MODULE,
>+    .family = AF_TIPC,
>+    .release = release,
>+    .bind = bind,
>+    .connect = connect,
>+    .socketpair = no_skpair,
>+    .accept = accept,
>+    .getname = get_name,
>+    .poll = poll,
>+    .ioctl = ioctl,
>+    .listen = listen,
>+    .shutdown = shutdown,
>+    .setsockopt = setsockopt,
>+    .getsockopt = getsockopt,
>+    .sendmsg = send_packet,
>+    .recvmsg = recv_msg,
>+    .mmap = no_mmap,
>+    .sendpage = no_sendpage
> };
>
> static struct proto_ops stream_ops = {
>-    family:AF_TIPC,
>-    release:release,
>-    bind:bind,
>-    connect:connect,
>-    socketpair:no_skpair,
>-    accept:accept,
>-    getname:get_name,
>-    poll:poll,
>-    ioctl:ioctl,
>-    listen:listen,
>-    shutdown:shutdown,
>-    setsockopt:setsockopt,
>-    getsockopt:getsockopt,
>-    sendmsg:send_stream,
>-    recvmsg:recv_stream,
>-    mmap:no_mmap,
>-    sendpage:no_sendpage
>+    .owner = THIS_MODULE,
>+    .family = AF_TIPC,
>+    .release = release,
>+    .bind = bind,
>+    .connect = connect,
>+    .socketpair = no_skpair,
>+    .accept = accept,
>+    .getname = get_name,
>+    .poll = poll,
>+    .ioctl = ioctl,
>+    .listen = listen,
>+    .shutdown = shutdown,
>+    .setsockopt = setsockopt,
>+    .getsockopt = getsockopt,
>+    .sendmsg = send_stream,
>+    .recvmsg = recv_stream,
>+    .mmap = no_mmap,
>+    .sendpage = no_sendpage
> };
>
>
> static struct net_proto_family tipc_family_ops = {
>-    family:AF_TIPC,
>-    create:tipc_socket
>+    .owner = THIS_MODULE,
>+    .family = AF_TIPC,
>+    .create = tipc_socket
> };
>
> int
|
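For context on why the .owner field matters: without it, the socket layer has no way to take a module reference for each open socket, so rmmod can pull the code out from under live sockets. A minimal sketch of the registration side follows; AF_FOO and the foo_* names are invented for illustration, this is not the TIPC code, and the two-argument create() signature is the 2.6-era form assumed here:

#include <linux/module.h>
#include <linux/net.h>

#define AF_FOO 27   /* hypothetical, otherwise unused address family */

static int foo_create(struct socket *sock, int protocol)
{
        return -EPROTONOSUPPORT;        /* stub for the sketch */
}

static struct net_proto_family foo_family_ops = {
        .owner  = THIS_MODULE,  /* lets socket refcounting pin the module */
        .family = AF_FOO,
        .create = foo_create,
};

static int __init foo_init(void)
{
        return sock_register(&foo_family_ops);
}

static void __exit foo_exit(void)
{
        sock_unregister(AF_FOO);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");

The C99 designated-initializer style (.field = value) that the patch converts to is also what lets a new member like .owner be added without disturbing the rest of the initializer.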
From: Mark H. <ma...@os...> - 2004-04-14 17:06:08
|
On Sat, 2004-04-10 at 11:06, Jon Maloy (QB/EMC) wrote:
> Certainly not. I am surprised that it works at all,
> given that the management code is not tested at all
> yet. Feel free to correct it.
>
> Thanks /Jon
>

OK, here is a fix that works. I made the change in the node_get_nodes
function. It is only called by the manager. If you would rather, I could
fix it in mng_cmd_event, but that would require casting the data buffer
to a tipc_node_info and doing the htonl there.

Mark.

--- /home/markh/views/tipc/cvs/source/unstable/net/tipc/node.c	2004-03-30 07:11:39.000000000 -0800
+++ node.c	2004-04-14 09:54:30.000000000 -0700
@@ -613,8 +613,8 @@
     for(n = nodes;n;n = n->next) {
         if (!in_scope(scope,n->addr))
             continue;
-        list[cnt].addr = n->addr;
-        list[cnt].up = node_is_up(n);
+        list[cnt].addr = htonl(n->addr);
+        list[cnt].up = htonl(node_is_up(n));
         cnt++;
     };
     return (cnt * sizeof(struct tipc_node_info));

-- 
Mark Haverkamp <ma...@os...>
|
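The rule behind this fix, for anyone hitting the same symptom: any multi-byte field copied into a management reply that leaves the node must be converted to network byte order at the marshalling point, and the peer converts back symmetrically on receipt. A tiny sketch of that convention; the struct and function names here are invented for illustration, not TIPC's:

#include <linux/in.h>       /* htonl/ntohl in kernel code */
#include <linux/types.h>

struct node_info_wire {
        __u32 addr;         /* network byte order on the wire */
        __u32 up;           /* 1 = up, 0 = down, network byte order */
};

/* Sender side: convert host-order values while filling the reply. */
static void pack_node_info(struct node_info_wire *w, __u32 addr, int up)
{
        w->addr = htonl(addr);
        w->up   = htonl(up ? 1 : 0);
}

/* Receiver side: undo the conversion symmetrically. */
static void unpack_node_info(const struct node_info_wire *w,
                             __u32 *addr, int *up)
{
        *addr = ntohl(w->addr);
        *up   = ntohl(w->up) != 0;
}

On little-endian hosts, forgetting the htonl() on the sender, as in the code this patch replaces, shows up as byte-swapped node addresses in the management tool, while big-endian hosts mask the bug because host and network order coincide there.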