From: Mark H. <ma...@os...> - 2004-04-19 14:32:05
On Sat, 2004-04-17 at 10:38, Ling, Xiaofeng wrote:
> Thanks for your good suggestion.
> Some comments below.
>
> > -----Original Message-----
> > From: Daniel McNeil [mailto:da...@os...]
> > Sent: April 16, 2004 23:14
> > To: Ling, Xiaofeng; Guo, Min
> > Cc: Jon Maloy; Mark Haverkamp; tipc
> > Subject: RE: [Tipc-discussion] Re: tipc multicast patch
> >
> > Hi,
> >
> > We have not tested > 8 nodes yet. We could test that code
> > by changing the check (we currently have 4 nodes) to a lower
> > number. Do you want us to do this?
> >
> > How/why was the number '8' chosen for broadcast?
>
> 8 is just a suggested number in the RFC. Maybe the more feasible way is
> to make it a configurable module parameter, or a number adjusted
> dynamically with the total number of nodes in the cluster. That could be
> a TODO item.
>
> > Also, Mark and I noticed some interesting behavior of the multicast:
> >
> > If 2 processes on the same node publish the same port name
> > sequence, a multicast only goes to 1 process on the local node
> > (we have not tried remote yet). Is this the intended
> > behavior? Should all processes on all nodes get it? (I do
> > not know if your latest check-in affects this behavior.)
>
> In TIPC v1, my understanding is that 2 processes on one node cannot open
> the same port name sequence; with two or more nodes, only one node will
> get a message sent to this port name, which can be treated as load
> balancing. As for multicast, maybe this rule can also be applied. Of
> course, this also depends on the application mode.

I tried this out with your updated mcast code and it seems to work OK.
I published the same port name sequence from two processes on a node
and was able to receive a multicast message in each process. This seems
to me like the right thing to do.

Jon, I looked at your RFC and didn't see this kind of behavior specified
one way or the other. What do you think is the right thing to do?
Thanks,
Mark.

> > Thanks,
> > Daniel

--
Mark Haverkamp <ma...@os...>