From: Stephens, A. <all...@wi...> - 2011-02-10 14:54:39
Hi Sridhar:

If you are running a system with only a single Ethernet interface to each TIPC node, then the receiving node's broadcast link endpoint should be receiving all messages in order, without any deferral of out-of-sequence messages, retransmission requests, or duplicated messages. This is clearly not happening in your system.

The fact that the receiver issues 3049 retransmit requests but receives only 162 duplicated messages strongly suggests that messages are actually being lost before they reach the receiver's broadcast link endpoint, and that you're not just dealing with a case in which messages are being delivered to the endpoint in the wrong order.

There have been some broadcast link fixes put into TIPC since 1.7.7-rc1, so you might want to upgrade to -rc4 to see if this helps your situation. However, there's a good chance that you're dealing with some other issue that may or may not be a problem with TIPC.

Regards,
Al

> -----Original Message-----
> From: Sridhar [mailto:chs...@gm...]
> Sent: Wednesday, February 09, 2011 10:01 PM
> To: Stephens, Allan
> Cc: tip...@li...; Yongmoo Cho; chs...@gm...
> Subject: Re: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
>
> Forgot to attach the output of the tipc-config -ls command.
>
> On sender:
>
> $ tipc-config -ls
> Link statistics:
>   Link <broadcast-link>
>     Window:1000 packets
>     RX packets:6 fragments:0/0 bundles:0/0
>     TX packets:1000000 fragments:0/0 bundles:0/0
>     RX naks:3049 defs:0 dups:0
>     TX naks:0 acks:0 dups:544210
>     Congestion bearer:0 link:8698  Send queue max:1000 avg:942
>
>   Link <1.1.13:eth0-1.1.12:eth0>
>     ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:250 packets
>     RX packets:1 fragments:0/0 bundles:0/0
>     TX packets:1 fragments:0/0 bundles:0/0
>     TX profile sample:0 packets  average:0 octets
>     0-64:0% -256:0% -1024:0% -4096:0% -16354:0% -32768:0% -66000:0%
>     RX states:67920 probes:2709 naks:0 defs:0 dups:0
>     TX states:10142 probes:2708 naks:0 acks:3199 dups:0
>     Congestion bearer:0 link:0  Send queue max:1 avg:0
>
> On receiver:
>
> $ tipc-config -ls
> Link statistics:
>   Link <broadcast-link>
>     Window:1000 packets
>     RX packets:1000000 fragments:0/0 bundles:0/0
>     TX packets:6 fragments:0/0 bundles:0/0
>     RX naks:0 defs:923652 dups:162
>     TX naks:3049 acks:62500 dups:0
>     Congestion bearer:0 link:0  Send queue max:1 avg:1
>
>   Link <1.1.12:eth0-1.1.13:eth0>
>     ACTIVE  MTU:1500  Priority:10  Tolerance:1500 ms  Window:250 packets
>     RX packets:1 fragments:0/0 bundles:0/0
>     TX packets:1 fragments:0/0 bundles:0/0
>     TX profile sample:2 packets  average:68 octets
>     0-64:0% -256:100% -1024:0% -4096:0% -16354:0% -32768:0% -66000:0%
>     RX states:8382 probes:2624 naks:0 defs:0 dups:0
>     TX states:67752 probes:2625 naks:0 acks:0 dups:0
>     Congestion bearer:0 link:0  Send queue max:1 avg:0
>
> Thanks
> -Sridhar
>
> 2011/2/9 Sridhar <chs...@gm...>
>
> > Hi Al,
> >
> > While I am debugging the multicast issue, I see a number of deferred
> > receive packets in the output of the tipc-config -ls command, and I
> > think packets are getting discarded in the function
> > tipc_bclink_recv_pkt() because of out-of-order messages, i.e., the
> > expected sequence number doesn't match the sequence number of the
> > message that is being processed.
> >
> > In my two-node (2 cores per node) cluster, one sender is sending the
> > mcast data to 8 receivers running on the other node. The message size
> > is 512 bytes and the number of messages that I am sending is
> > 1 million packets.
> >
> > Is the sequence mismatch, or receiving out-of-order broadcast
> > messages, a known issue? Please let me know.
> >
> > Thanks
> > -Sridhar
> >
> > 2011/2/7 Sridhar <chs...@gm...>
> >
> > > Hi,
> > >
> > > Thanks for the information.
> > >
> > > My environment has only two cores per node and a two-node cluster.
> > > Does having only two cores per node make any difference?
> > >
> > > Thanks
> > > -Sridhar
> > >
> > > 2011/2/6 유승도 <seo...@lg...>
> > >
> > > > Sorry. Intel ATCA MPCBL00050 has only 4 cores.
> > > >
> > > > From: 유승도 [mailto:seo...@lg...]
> > > > Sent: Monday, February 07, 2011 4:44 PM
> > > > To: 'Sridhar'
> > > > Cc: 'tip...@li...'; Yongmoo Cho
> > > > Subject: RE: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
> > > >
> > > > Hi Sridhar.
> > > >
> > > > Sorry for my late answer. I was on the Lunar New Year holiday
> > > > during 1/29~2/6.
> > > >
> > > > Environment:
> > > > - Platform details: ATCA board (Intel MPCBL00050); only 2 nodes,
> > > >   1 cluster
> > > > - Architecture details: ATCA Intel board; TIPC version 1.7.7-rc4
> > > > - Number of cores running on the nodes in cluster: ATCA Intel
> > > >   MPCBL00050, 16 cores
> > > >
> > > > I have tested more than 10 times, but there are no missing
> > > > multicast packets in the multicast test program.
> > > >
> > > > Bye.
> > > >
> > > > From: Sridhar [mailto:chs...@gm...]
> > > > Sent: Tuesday, February 01, 2011 4:15 PM
> > > > To: 유승도
> > > > Cc: tip...@li...; chs...@gm...
> > > > Subject: Re: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
> > > >
> > > > Hi,
> > > >
> > > > Can you please pass on the environment that you have tested?
> > > > - Platform details
> > > > - Architecture details
> > > > - Number of cores running on the nodes in cluster
> > > >
> > > > Also, please let me know how many times you have run the same
> > > > program, because I sometimes see 8 receivers passing with
> > > > 1 million packets at a packet size of 4096, but not always;
> > > > sometimes the test case fails because not all of them receive
> > > > the entire set of packets.
> > > >
> > > > My environment is: PPC architecture, 2 processors in a node,
> > > > two-node cluster.
> > > >
> > > > The fc_count is basically the flow control count: MSG_DONTWAIT
> > > > is being passed to the sendto() function, and my client
> > > > application retries the message when it is notified by TIPC on
> > > > the write fd set that send buffer space is available.
> > > >
> > > > Thanks
> > > > -Sridhar
> > > >
> > > > 2011/1/31 유승도 <seo...@lg...>
> > > >
> > > > > Hi Sridhar,
> > > > >
> > > > > I have tested your multicast program, and I have found many
> > > > > sending fail counts on the client. Your multicast errors also
> > > > > appeared in my system.
> > > > >
> > > > > Thank you.
> > > > >
> > > > > Log
> > > > >
> > > > > Client_tipc
> > > > > root@cp_9:/usr# ./client_tipc 0 300 1000000 4096 18888
> > > > > *** PASS: CLIENT - Total Time to send 1000000 messages is:
> > > > >     100243 ms, fc_count 388743
> > > > >
> > > > > Server_tipc
> > > > > Port {18888,0,10} created...  MSG_SIZE=4096 NUM_MSGS=1000000
> > > > > Port {18888,11,20} created... MSG_SIZE=4096 NUM_MSGS=1000000
> > > > > Port {18888,31,40} created... MSG_SIZE=4096 NUM_MSGS=1000000
> > > > > Port {18888,51,60} created... MSG_SIZE=4096 NUM_MSGS=1000000
> > > > > Port {18888,61,70} created... MSG_SIZE=4096 NUM_MSGS=1000000
> > > > > Port {18888,41,50} created... MSG_SIZE=4096 NUM_MSGS=1000000
> > > > > Port {18888,21,30} created... MSG_SIZE=4096 NUM_MSGS=1000000
> > > > > ++++ PASS : Server : Port {18888,0,10} received: 1000000 multicast messages
> > > > > ++++ PASS : Server : Port {18888,11,20} received: 1000000 multicast messages
> > > > > ++++ PASS : Server : Port {18888,21,30} received: 1000000 multicast messages
> > > > > ++++ PASS : Server : Port {18888,51,60} received: 1000000 multicast messages
> > > > > ++++ PASS : Server : Port {18888,61,70} received: 1000000 multicast messages
> > > > > ++++ PASS : Server : Port {18888,41,50} received: 1000000 multicast messages
> > > > > ++++ PASS : Server : Port {18888,31,40} received: 1000000 multicast messages
> > > > >
> > > > > From: Sridhar [mailto:chs...@gm...]
> > > > > Sent: Saturday, January 29, 2011 4:06 AM
> > > > > To: 유승도
> > > > > Cc: tip...@li...; chs...@gm...
> > > > > Subject: Re: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
> > > > >
> > > > > Hi,
> > > > >
> > > > > Please find attached the multicast client and server programs.
> > > > >
> > > > > My scenario: we have only 2 members in a cluster. One member
> > > > > sends the multicast data to the receivers (7 receivers)
> > > > > running on the other member of the cluster. The behavior I
> > > > > have observed is that only a few receivers get all the data;
> > > > > that is, the data is not delivered to all the ports registered
> > > > > for the same port number on the other member.
> > > > >
> > > > > Thanks
> > > > > -Sridhar
> > > > >
> > > > > 2011/1/27 유승도 <seo...@lg...>
> > > > >
> > > > > > Hi Sridhar.
> > > > > >
> > > > > > I ran a broadcast test with servers located on 4 nodes and
> > > > > > 8 receivers. The multicast test program sent 10,000,000
> > > > > > broadcast messages, and all 8 receivers received them
> > > > > > completely on TIPC 1.7.7-rc1. In my case I modified
> > > > > > tipcutils-2.0.0/demos/multicast_demo. I want to test your
> > > > > > sample test program on our system, which consists of 2
> > > > > > Octeon ATCA boards and TIPC 1.7.7-rc1. If your test program
> > > > > > is different from multicast_demo, please send it so that I
> > > > > > can test it.
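The broadcast-link counters quoted earlier in the thread can be checked mechanically. A minimal sketch that pulls the naks/defs/dups values out of `tipc-config -ls` output; the field names are taken from the output quoted above, but the 10x threshold is only an illustrative heuristic, not anything TIPC defines:

```python
import re

# Broadcast-link section as printed by `tipc-config -ls`, with the
# receiver-side values quoted earlier in the thread.
SAMPLE = """Link <broadcast-link>
Window:1000 packets
RX packets:1000000 fragments:0/0 bundles:0/0
TX packets:6 fragments:0/0 bundles:0/0
RX naks:0 defs:923652 dups:162
TX naks:3049 acks:62500 dups:0
"""

def parse_bcast_counters(text):
    """Collect name:value counters from the RX/TX naks lines of the
    <broadcast-link> section."""
    counters = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(("RX naks", "TX naks")):
            direction = line[:2]  # "RX" or "TX"
            for name, value in re.findall(r"(\w+):(\d+)", line):
                counters[direction + "_" + name] = int(value)
    return counters

c = parse_bcast_counters(SAMPLE)
# Al's reasoning: the receiver sent 3049 retransmit requests (TX naks)
# but saw only 162 duplicates (RX dups), so most requested packets never
# arrived at all -- loss before the endpoint, not reordering.
lost_suspected = c["TX_naks"] > 10 * c["RX_dups"]
print(c["TX_naks"], c["RX_dups"], lost_suspected)
```

Running this against the receiver's stats flags the same discrepancy Al points out by hand.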
> > > > > > I have one problem: I used the benchmark program but
> > > > > > something unexpected occurred. I used options such as
> > > > > > "./client_tipc -t 100 -n 32" and "./server_tipc -i 900",
> > > > > > but among the 32 processes, one or two couldn't complete
> > > > > > the operation. I don't know the reason. If you have had a
> > > > > > similar experience, please let me know.
> > > > > >
> > > > > > Thank you.
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: Sridhar [mailto:chs...@gm...]
> > > > > > Sent: Wednesday, January 26, 2011 7:17 AM
> > > > > > To: Stephens, Allan
> > > > > > Cc: tip...@li...
> > > > > > Subject: Re: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
> > > > > >
> > > > > > Hi Al,
> > > > > >
> > > > > > Thank you so much for the detailed information. I shall
> > > > > > look at the receive side by instrumenting the code and will
> > > > > > post my observations.
> > > > > >
> > > > > > In my example, I am not sending the multicast data to the
> > > > > > other node (node#2) where 6 receivers are running. My
> > > > > > environment has only a two-member cluster.
> > > > > >
> > > > > > Best Regards,
> > > > > > -Sridhar
> > > > > >
> > > > On Tue, Jan 25, 2011 at 8:14 AM, Stephens, Allan <all...@wi...> wrote:
> > > >
> > > > > Sridhar wrote:
> > > > >
> > > > > I have tried the Replicast patch on top of the TIPC-1.7.7-rc4
> > > > > code base. Here is my observation:
> > > > >
> > > > > 1) Ran 6 receivers bound to port number 18888 and different
> > > > >    port name sequences.
> > > > > 2) Ran 1 sender sending the multicast data to port number
> > > > >    18888 and covering all the port name sequences (of all the
> > > > >    6 receivers).
> > > > > 3) The message size is 4K and the number of messages is 5000.
> > > > >
> > > > > I see that 1-2 receivers are not getting the data; only 4-5
> > > > > receivers get the 5000 messages.
> > > > >
> > > > > When I tried with 1000 messages, it seems to work fine, i.e.,
> > > > > all 6 receivers get the data. However, I didn't perform this
> > > > > test multiple times.
> > > > >
> > > > > I am still seeing the issue of not all receivers getting the
> > > > > multicast data with the replicast patch.
> > > > > I have run tipc-config -ls, and I see that the broadcast link
> > > > > stats tx and rx are zero, just to confirm that the broadcast
> > > > > link is not being used after installing the Replicast patch.
> > > >
> > > > The data you've gathered adds weight to the hypothesis that the
> > > > message loss you're seeing is not due to issues with the
> > > > broadcast link (although it's not absolute proof of that claim).
> > > > Isolating the source of the message loss is going to require you
> > > > to add some instrumentation code to your system (including TIPC)
> > > > to try to determine where the message loss is happening.
> > > >
> > > > > Can you please help me in resolving the multicast issue?
> > > > > Ideally, the replicast patch may not work for me (even though
> > > > > it works) because it doesn't take care of the cases where flow
> > > > > control (congestion) is hit by one receiver and where one of
> > > > > the tipc_link_send() calls fails. But, as you mentioned, to
> > > > > isolate the broadcast link issue, I have tested and I think it
> > > > > is not the broadcast link issue. Please correct me if I am
> > > > > wrong.
> > > >
> > > > Here is what happens when a message is sent to a multicast
> > > > address:
> > > >
> > > > 1) Name table translation is done to determine which ports on
> > > >    the sending node need to receive the message, and which other
> > > >    nodes in the cluster need to receive it.
> > > >
> > > > 2) The message is given to the broadcast link, which is
> > > >    responsible for reliably transmitting the message to all of
> > > >    the other nodes in the cluster. If the broadcast link is
> > > >    congested, the application either blocks until the congestion
> > > >    clears or returns -EWOULDBLOCK, depending on the type of send
> > > >    the application requested.
> > > >
> > > > 3) Once the message has been given to the broadcast link, the
> > > >    message is internally replicast to all of the ports on the
> > > >    sending node that are supposed to receive it.
> > > >
> > > > 4) Once each of the other cluster nodes receives the broadcast
> > > >    message, it does the same sort of name table lookup and
> > > >    internal replicasting as the sending node did in phases 1)
> > > >    and 3) above, which results in it delivering the message to
> > > >    all of its ports that have a matching name.
> > > >
> > > > Now, let's examine all of the places that message loss can
> > > > occur:
> > > >
> > > > 1) The fact that some of the receivers are receiving all of the
> > > >    messages you're sending indicates that name translation
> > > >    (phase 1) is working properly.
> > > >
> > > > 2) The fact that some of the receivers are receiving all of the
> > > >    messages you're sending indicates that message delivery from
> > > >    sending node to receiving node (phase 2) is working properly
> > > >    -- at least when the broadcast link is being used. The reason
> > > >    I say this is that if one or more of the receiving nodes
> > > >    wasn't receiving and acknowledging a message, this would
> > > >    cause the broadcast link to congest, and this would
> > > >    eventually prevent the application from sending the rest of
> > > >    its messages. (As you previously observed, if you're
> > > >    replicasting you could encounter a situation where only some
> > > >    of the receiving nodes received a message; if this is
> > > >    happening, then the replicasting code should be recording
> > > >    "replicast failed" messages in the system log of the sending
> > > >    node -- you should take a look to see if this is happening in
> > > >    your most recent tests.)
> > > >
> > > > 3) I'm not sure if you're trying to send multicast data to ports
> > > >    on the sending node ... I get the impression that you're not,
> > > >    in which case phase 3 doesn't matter.
> > > >
> > > > 4) This is the area that I think you should add some
> > > >    instrumentation to; the key routine to start with is
> > > >    tipc_port_recv_mcast(). The most likely causes of message
> > > >    loss here are that TIPC is unable to clone an incoming
> > > >    message (in which case you should be seeing "unable to
> > > >    deliver ..." messages in your system log) or that the message
> > > >    isn't being accepted by the receiving socket (perhaps because
> > > >    its receive queue is too full); for the latter case you
> > > >    should probably add some instrumentation to the end of
> > > >    dispatch() and backlog_rcv() in tipc_socket.c so that you
> > > >    count/flag cases where an incoming multicast message wasn't
> > > >    added to the socket's receive queue properly.
> > > >
> > > > > Another question: does TIPC 1.7.7-rc4 also have the seven
> > > > > broadcast-related patches which you provided me sometime last
> > > > > year? If so, then without the replicast patch, TIPC 1.7.7-rc4
> > > > > is having the multicast issue, i.e., not all the receivers
> > > > > are getting the data sent by one client.
> > > >
> > > > As mentioned previously, TIPC 1.7.7-rc4 has all known
> > > > broadcast-related patches; there are no additional patches for
> > > > you to try.
> > > >
> > > > > Please help me on how I can proceed further.
> > > > >
> > > > > Thanks
> > > > > -Sridhar
> > > >
> > > > I suggest you instrument TIPC to see if messages are actually
> > > > being lost in phase 4, as described above. If this turns out to
> > > > be the problem area, then I'm not sure what TIPC can do to help
> > > > you, since you've created a situation in which data is arriving
> > > > more quickly than your application can consume it.
> > > >
> > > > I should also give you a "heads up" that I'm going to be
> > > > out-of-the-office from Jan 27th to Feb 6th, and won't be
> > > > responding to emails on the TIPC mailing list during that time.
> > > > However, you can still post your findings during that time, as
> > > > there are other people reading the list who may be able to
> > > > provide you with assistance.
> > > >
> > > > Regards,
> > > > Al
> > > >
> > > > On Mon, Jan 24, 2011 at 2:06 PM, Sridhar <chs...@gm...> wrote:
> > > >
> > > > > Hi Al,
> > > > >
> > > > > Sure. I shall try it out and will let you know.
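The four-phase delivery path Al describes above can be reduced to a toy model: each node looks up which of its ports have a bound name sequence covering the message's instance and hands a copy to each. Everything here (function names, node and port labels) is invented for illustration; the transport is assumed lossless, which is exactly the property being debugged:

```python
def deliver_multicast(name_table, instance, nodes):
    """Toy model of TIPC multicast delivery: on every node, find each
    port whose bound (lower, upper) name sequence covers `instance`
    (phases 1 and 4) and deliver a copy to it (phases 2 and 3)."""
    delivered = []
    for node in nodes:
        for port, (lower, upper) in name_table.get(node, {}).items():
            if lower <= instance <= upper:
                delivered.append((node, port))
    return delivered

# Two-node cluster; receivers on node2 bound to {18888,lower,upper}
# sub-ranges, mirroring the test logs earlier in the thread. The port
# labels "rcv_a" etc. are invented.
table = {
    "node2": {"rcv_a": (0, 10), "rcv_b": (11, 20), "rcv_c": (21, 30)},
}
hits = deliver_multicast(table, 15, ["node1", "node2"])
print(hits)
```

With a correct name table, instance 15 reaches exactly the one port whose range covers it; any receiver missing from `hits` would point at phase 1/4 lookup rather than the link layer.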
> > > > > Thanks
> > > > > -Sridhar
> > > >
> > > > On Mon, Jan 24, 2011 at 12:12 PM, Stephens, Allan <all...@wi...> wrote:
> > > >
> > > > > Hi Sridhar:
> > > > >
> > > > > The patch you received is not designed to be a bulletproof
> > > > > replicast solution that will resolve your current problems,
> > > > > as it doesn't handle certain congestion problems particularly
> > > > > well -- such as the ones you've noted below. The reason I've
> > > > > suggested you try the patch out is to see if it will shed
> > > > > some light on the source of your current problems, and give
> > > > > us an idea as to whether or not the broadcast link is the
> > > > > source of your problem.
> > > > >
> > > > > Regards,
> > > > > Al
> > > > >
> > > > > From: Sridhar [mailto:chs...@gm...]
> > > > > Sent: Monday, January 24, 2011 2:59 PM
> > > > > To: Stephens, Allan
> > > > > Cc: chs...@gm...; tip...@li...
> > > > > Subject: Re: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
> > > > >
> > > > > Hi Al,
> > > > >
> > > > > I have gone through the patch, and I have a couple of
> > > > > questions:
> > > > >
> > > > > 1) I see that tipc_link_send() is being called from a for
> > > > >    loop; in this case, if one of the tipc_link_send() calls
> > > > >    fails, it bails out of the for loop. With this loop
> > > > >    approach, only a certain number of receivers would get the
> > > > >    multicast data, and not all of them. Is TIPC going to take
> > > > >    care of this scenario and make sure that the data reaches
> > > > >    all the receivers? If yes, how is it handled? If not, I
> > > > >    see this being an issue for the application, because the
> > > > >    multicast data will have reached only certain receivers.
> > > > >
> > > > > 2) If one of the tipc_link_send() calls causes EAGAIN or
> > > > >    LINKCONG, how is TIPC going to handle the flow control
> > > > >    situation for the other links?
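The partial-delivery concern in question 1) is easy to see in a toy model of a send loop that bails out on the first failure. The function and link names here are invented for illustration; this is not the patch's actual code:

```python
def replicast(links, msg, send):
    """Mimic a for loop over unicast links that aborts on the first
    send failure; returns the links that actually got the message."""
    reached = []
    for link in links:
        if not send(link, msg):
            break  # bail out: the remaining links never see the message
        reached.append(link)
    return reached

# Pretend the second of four links reports congestion.
congested = {"link2"}
reached = replicast(["link1", "link2", "link3", "link4"], "msg",
                    lambda link, m: link not in congested)
print(reached)
```

One congested link in the middle of the list leaves every later link unserved, which is exactly the asymmetry the question raises.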
> > > > > Can you please throw more light on how these two cases are
> > > > > being handled?
> > > > >
> > > > > Thanks
> > > > > -Sridhar
> > > > >
> > > > > On Mon, Jan 24, 2011 at 10:07 AM, Stephens, Allan <all...@wi...> wrote:
> > > > >
> > > > > > Hi Sridhar:
> > > > > >
> > > > > > Yes, if you do a multicast send using the MSG_PEEK flag,
> > > > > > this will effectively send a message to each receiving node
> > > > > > individually, rather than broadcast the message to all
> > > > > > nodes at the same time. This may or may not result in a
> > > > > > significant impact on performance, depending on the number
> > > > > > of nodes you have in your network. (Also, regardless of
> > > > > > whether the message is sent using broadcast or replicast,
> > > > > > the receiving node will still have to do additional
> > > > > > replicasting to pass on a copy of the message to every port
> > > > > > on that node that has a matching name.)
> > > > > >
> > > > > > Once you've activated the replicasting feature (using "make
> > > > > > menuconfig", or similar) and rebuilt your kernel, you've
> > > > > > done all the configuration steps you need to do; there are
> > > > > > no changes needed to the tipc-config commands you've been
> > > > > > using. As mentioned previously, you'll still need to modify
> > > > > > your sends to use MSG_PEEK in those places where you want
> > > > > > to do replicasting rather than broadcasting.
> > > > > >
> > > > > > Regards,
> > > > > > Al
> > > > > >
> > > > > > From: Sridhar [mailto:chs...@gm...]
> > > > > > Sent: Monday, January 24, 2011 12:54 PM
> > > > > > To: Stephens, Allan
> > > > > > Cc: tip...@li...
> > > > > > Subject: Re: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
> > > > > >
> > > > > > Hi Al,
> > > > > >
> > > > > > Thanks for the patch.
> > > > > >
> > > > > > I think, with this patch, it is going to simulate the
> > > > > > multicast functionality by issuing "sendmsg" to each
> > > > > > receiver one after the other. If my understanding is
> > > > > > correct, is this going to have a performance impact,
> > > > > > because it is not true multicast? Please confirm.
> > > > > >
> > > > > > Also, when applying this patch, you mentioned that I need
> > > > > > to reconfigure the kernel to activate the replicasting
> > > > > > capability; are there any configuration steps that I need
> > > > > > to follow when running tipc-config? I will modify the
> > > > > > application to pass the MSG_PEEK flag to the "sendto"
> > > > > > system call.
> > > > > >
> > > > > > Thanks
> > > > > > -Sridhar
> > > > > >
> > > > > > On Mon, Jan 24, 2011 at 5:00 AM, Stephens, Allan <all...@wi...> wrote:
> > > > > >
> > > > > > > Hi Sridhar:
> > > > > > >
> > > > > > > If you're running TIPC 1.7.7-rc4, then you're running the
> > > > > > > most up-to-date TIPC code that exists. However, you might
> > > > > > > also want to try an experimental patch that allows you to
> > > > > > > send multicast traffic using TIPC's unicast links rather
> > > > > > > than the broadcast link.
> > > > > > >
> > > > > > > I've attached a version of this patch that applies on top
> > > > > > > of TIPC 1.7.7-rc4 to this email. Once you've applied the
> > > > > > > patch, you'll need to reconfigure your kernel to activate
> > > > > > > the replicasting capability (it's a new TIPC
> > > > > > > configuration setting) AND you'll need to modify your
> > > > > > > application so that it does its multicast sends using the
> > > > > > > MSG_PEEK flag. (If you omit any of these steps you'll end
> > > > > > > up using the broadcast link just as you are now.)
> > > > > > >
> > > > > > > It'll be interesting to see what impact (if any) you see
> > > > > > > from using this patch. Let us know the results once
> > > > > > > you've got them.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Al
> > > > > > >
> > > > > > > From: Sridhar [mailto:chs...@gm...]
> > > > > > > Sent: Saturday, January 22, 2011 3:03 AM
> > > > > > > To: Stephens, Allan
> > > > > > > Cc: tip...@li...; chs...@gm...
> > > > > > > Subject: Re: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
> > > > > > >
> > > > > > > Hi Al,
> > > > > > >
> > > > > > > I have tried with TIPC 1.7.7-rc4 and I am still seeing
> > > > > > > the same issue.
> > > > > > >
> > > > > > > The issue I am seeing is in the multicast case, where I
> > > > > > > have 7 receivers waiting for 1 million packets and each
> > > > > > > packet size is 4096. One client sends 1 million packets
> > > > > > > (each packet 4096 bytes) to all the receivers (the port
> > > > > > > name sequence is 1 to 50 and the port name is 18888). Not
> > > > > > > all the receivers are getting the entire set of packets.
> > > > > > > If I try with 3 receivers, it works fine.
> > > > > > >
> > > > > > > The seven patches that I got from the link below
> > > > > > >
> > > > > > > http://tipc.cslab.ericsson.net/cgi-bin/gitweb.cgi?p=people/allan/tipc.git;a=shortlog;h=tipc1.7
> > > > > > >
> > > > > > > are not working. Can you please let me know the location
> > > > > > > of those 7 patches related to the broadcast?
> > > > > > >
> > > > > > > I am really stuck at this point. I tried previously with
> > > > > > > those 7 patches and it worked fine, but unfortunately I
> > > > > > > lost my data.
> > > > > > >
> > > > > > > Please help me to move forward.
> > > > > > >
> > > > > > > Best Regards,
> > > > > > > -Sridhar
> > > > > > >
> > > > > > > On Fri, Jan 21, 2011 at 2:43 PM, Sridhar <chs...@gm...> wrote:
> > > > > > >
> > > > > > > > Hi Al,
> > > > > > > >
> > > > > > > > I am trying to access the link given below, but it
> > > > > > > > takes me to the sourceforge.net site.
> > > > > > > >
> > > > > > > > I have previously applied those 7 patches, but my
> > > > > > > > machine had a hard disk problem and I lost the changes.
> > > > > > > > Can you please provide me the exact link where I can
> > > > > > > > find the seven patches related to the broadcast?
> > > > > > > >
> > > > > > > > Also, if I get TIPC 1.7.7-rc4, does it contain those
> > > > > > > > seven patches? If so, then probably I don't have to
> > > > > > > > apply those 7 patches. I copied all the files from TIPC
> > > > > > > > 1.7.7-rc4
> > > > > > > >
> > > > > > > > http://tipc.git.sourceforge.net/git/gitweb.cgi?p=tipc/tipc;a=tree;f=net/tipc;h=34c4638dc3c6a443103c15af48daecdc3f15cf96;hb=cd315307125df7023ccd98e0d06e2536f110eb6d
> > > > > > > >
> > > > > > > > and the related header files.
> > > > > > > >
> > > > > > > > Please advise.
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > > -Sridhar
> > > > > > > >
> > > > > > > > On Wed, Sep 15, 2010 at 7:07 AM, Stephens, Allan <all...@wi...> wrote:
> > > > > > > >
> > > > > > > > > Hi Sridhar:
> > > > > > > > >
> > > > > > > > > I haven't heard of anyone experiencing this sort of
> > > > > > > > > problem before. It's possible that you're
> > > > > > > > > experiencing problems relating to TIPC's broadcast
> > > > > > > > > link, since it is used in the sending of multicast
> > > > > > > > > messages -- there have been issues with the link
> > > > > > > > > becoming "stalled", which could explain why messages
> > > > > > > > > aren't getting through.
> > > > > > > > >
> > > > > > > > > I suggest that you pick up all seven TIPC patches
> > > > > > > > > that have been incorporated into TIPC 1.7 since TIPC
> > > > > > > > > 1.7.7-rc1 was released; you can find them at
> > > > > > > > > http://tipc.cslab.ericsson.net/cgi-bin/gitweb.cgi?p=people/allan/tipc.git;a=shortlog;h=tipc1.7.
> > > > > > > > > If you still see the problem after applying these
> > > > > > > > > patches, send me your modified application so I can
> > > > > > > > > try to replicate your failure scenario.
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > Al
> > > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Sridhar [mailto:chs...@gm...]
> > > > > > > > > Sent: Monday, September 13, 2010 7:18 PM
> > > > > > > > > To: tip...@li...
> > > > > > > > > Cc: chs...@gm...
> > > > > > > > > Subject: [tipc-discussion] TIPC Multicast issue in 1.7.7-rc1
> > > > > > > > >
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > I am having an issue when I run the multicast client
> > > > > > > > > code in order to send messages to multiple receivers
> > > > > > > > > for a range of sequence numbers.
> > > > > > > > > I have modified the client_tipc.c and server_tipc.c
> > > > > > > > > files to work with the port name sequences that I
> > > > > > > > > give as command line arguments. I am making sure that
> > > > > > > > > I always start the server with a port name sequence
> > > > > > > > > using server port name 18888.
> > > > > > > > >
> > > > > > > > > I am using TIPC 1.7.7-rc1, the Linux kernel is
> > > > > > > > > 2.6.24, I have TWO nodes connected over eth0, and the
> > > > > > > > > architecture is ppc.
> > > > > > > > >
> > > > > > > > > I started the servers as follows:
> > > > > > > > >
> > > > > > > > > # server_tipc_multicast_demo 10 13   <<<< 10 and 13 are the
> > > > > > > > >   range of the port name sequence. Started on node#1
> > > > > > > > > # server_tipc_multicast_demo 14 17   <<<< Started on node#2
> > > > > > > > > # server_tipc_multicast_demo 18 20   <<<< Started on node#1
> > > > > > > > > # server_tipc_multicast_demo 21 23   <<<< Started on node#1
> > > > > > > > >
> > > > > > > > > I started the client from node#1 as:
> > > > > > > > >
> > > > > > > > > # client_tipc_multicast_demo 10 25   <<<< Sends 100K messages,
> > > > > > > > >   each of size 4096 bytes, to all the servers ranging from
> > > > > > > > >   10 to 25. I have 4 receivers. I even tried with 100 bytes
> > > > > > > > >   per message and still see the same issue.
> > > > > > > > >
> > > > > > > > > What I have observed is that not all the receivers
> > > > > > > > > get all 100K messages; sometimes two receivers get
> > > > > > > > > 100K messages, but not all of them. This is the issue
> > > > > > > > > that I am facing now.
> > > > > > > > >
> > > > > > > > > Can someone please let me know whether this is a
> > > > > > > > > known problem? How do I make this program work? I am
> > > > > > > > > really stuck at this point and appreciate any help.
> > > > > > > > >
> > > > > > > > > The one change I made in client_tipc.c (under the
> > > > > > > > > multicast demo) is changing the scope of the server
> > > > > > > > > from NODE specific to CLUSTER specific.
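The NODE-to-CLUSTER change mentioned here is a one-field difference in the bind address. As a sketch of where that field lives, here is the sockaddr_tipc name-sequence layout packed by hand; all the constant values are assumptions taken from a reading of <linux/tipc.h> on a 2.6-era kernel, not from the demo code, so verify them against your own headers:

```python
import struct

# Assumed values from <linux/tipc.h>; check your kernel headers.
AF_TIPC = 30
TIPC_ADDR_NAMESEQ = 1   # address carries a {type,lower,upper} sequence
TIPC_CLUSTER_SCOPE = 2
TIPC_NODE_SCOPE = 3     # the demo's original, node-only visibility

def pack_sockaddr_tipc_nameseq(scope, name_type, lower, upper):
    """Pack family, addrtype, scope and a name sequence into the
    16-byte sockaddr_tipc layout ('=' disables struct padding, which
    matches the natural alignment of the kernel struct)."""
    return struct.pack("=HBb3I", AF_TIPC, TIPC_ADDR_NAMESEQ, scope,
                       name_type, lower, upper)

# Server bind address for port name 18888, instances 10..13, visible
# cluster-wide rather than node-only:
addr = pack_sockaddr_tipc_nameseq(TIPC_CLUSTER_SCOPE, 18888, 10, 13)
print(len(addr), struct.unpack("=HBb3I", addr))
```

Swapping `TIPC_NODE_SCOPE` for `TIPC_CLUSTER_SCOPE` in this one byte is the entire scope change; everything else in the bind call stays the same.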
> > > > > > > > > Thanks much for the help in advance.
> > > > > > > > >
> > > > > > > > > Best Regards,
> > > > > > > > > -Sridhar

_______________________________________________
tipc-discussion mailing list
tip...@li...
https://lists.sourceforge.net/lists/listinfo/tipc-discussion