From: Jon M. <jon...@er...> - 2004-03-12 21:11:49
Correct. TIPC will use Ethernet broadcasts for neighbour detection, and even for TIPC multicast messages, as it is implemented now. If the two switches are connected in a loop as you describe, I cannot see how TIPC could stop a broadcast flood, so you must avoid this by other means; STP is the obvious solution, as far as I can see.

Regards /Jon

Kevin Kaichuan He wrote:
> Thank you for the clarification. In fact, the reason I ask about the topology is that I am investigating TIPC as the management IPC in a stacked-switches environment. One concern in a switch/L2 environment is broadcast storms in a topology containing loops. Usually we run STP to form a loop-free topology among the switches, and I am wondering whether TIPC can eliminate the need for STP. It seems to me so far that TIPC works one layer above L2, so STP is still necessary to provide a loop-free L2 broadcast domain to TIPC.
>
> Will TIPC send broadcast frames (DMAC=ff:ff:ff:ff:ff:ff) for control/management-plane traffic in the negotiation stage, and unicast frames for data-plane traffic?
>
> Thanks
> Kevin
>
> --- "Guo, Min" <mi...@in...> wrote:
> > > Will TIPC work well with the following topology?
> > >
> > >   PC1        PC2        PC3
> > >   |eth0      |eth0      |eth0
> > >   |________hub_________|
> >
> > Yes
> >
> > > How about the following topology? Will a client on PC1 reach a server on PC4 automatically via PC3?
> > >
> > >                         eth1         eth0
> > >   PC1        PC2        PC3--------- hub2 -----PC4
> > >   |eth0      |eth0      |eth0
> > >   |________hub1________|
> >
> > Yes
> >
> > > How about the following topology, where a self-loop exists?
> > >
> > >        eth1
> > >   PC1---------
> > >   |eth0      |
> > >   |________hub
> >
> > Yes
> >
> > > How about the following topology, where a loop between two PCs exists? Essentially this topology exists when STP is turned off. If TIPC can tolerate this topology, we may not need an underlying STP to resolve L2 loops.
> > >
> > >        eth1           eth0
> > >   PC1---------  /----PC2---|
> > >   |eth0      | /           |
> > >   |________switch_________|
> >
> > I have not tried this, but I think logically TIPC can work in such a topology.
> >
> > > When STP is enabled on the switch, the above topology becomes:
> > >
> > >   PC1        PC2
> > >   |eth0      |
> > >   |________switch_________|
> > >
> > > Thanks a lot!
> > >
> > > Kevin

This communication is confidential and intended solely for the addressee(s). Any unauthorized review, use, disclosure or distribution is prohibited. If you believe this message has been sent to you in error, please notify the sender by replying to this transmission and delete the message without disclosing it. Thank you. E-mail including attachments is susceptible to data corruption, interruption, unauthorized amendment, tampering and viruses, and we only send and receive e-mails on the basis that we are not liable for any such corruption, interception, amendment, tampering or viruses or any consequences thereof.
From: Jon M. <jon...@er...> - 2004-03-12 20:26:34
Hi,

First, about 64-bit addresses: the current node address format is 32 bits: 8 bits zone, 12 bits cluster and 12 bits node. So even if driver.c receives an int, it will not accept node numbers above 4095. Hence, extending this to include an (Ethernet) MAC address would require a node address size of 8 + 12 + 48 = 68 bits, not 64. Neither does a TIPC node address map 1-to-1 onto a MAC address: in a system with redundant Ethernets (or other bearers), a node address will represent at least 2 MAC addresses, and theoretically up to 8. And TIPC is not bound to Ethernet only; there will soon be "MAC" addresses of 144 bits (TCP port number + IPv6 address) when we start to carry TIPC over IPv6. Conclusion: not a very good idea, even if we disregard the compatibility issue, which cannot be ignored.

For further responses, see below.

Kevin Kaichuan He wrote:
> Will TIPC work well with the following topology?
>
>   PC1        PC2        PC3
>   |eth0      |eth0      |eth0
>   |________hub_________|

Yes, this is the basic topology envisaged.

> How about the following topology? Will a client on PC1 reach a server on PC4 automatically via PC3?
>
>                         eth1         eth0
>   PC1        PC2        PC3--------- hub2 -----PC4
>   |eth0      |eth0      |eth0
>   |________hub1________|

Only if you define PC4 to be in a different cluster than the rest, or as a slave node. A cluster is per definition a domain where there is full connectivity. (Don't try this with the current code; that part of it is broken now, but it will be fixed.)

> How about the following topology, where a self-loop exists?
>
>        eth1
>   PC1---------
>   |eth0      |
>   |________hub

You may never connect two TIPC-activated interfaces of one node to the same switch. In this particular case it will do no harm, since a node never tries to establish connections to itself.

> How about the following topology, where a loop between two PCs exists? Essentially this topology exists when STP is turned off. If TIPC can tolerate this topology, we may not need an underlying STP to resolve L2 loops.
>
>        eth1           eth0
>   PC1---------  /----PC2---|
>   |eth0      | /           |
>   |________switch_________|

TIPC only uses the interfaces you tell it to use, so if you configure the two nodes to use only one of their interfaces for TIPC (any of them; you may of course still use both for IP traffic), it will work fine. If, however, you try to activate both interfaces for TIPC, the result will be utter havoc. The TIPC address of PC1 would then represent two interfaces on the same LAN, and the two links starting on PC2 will be very confused when they try to find their counterpart on PC1. Some messages will come from link PC1/eth0, others from link PC1/eth1, and you will see the two links wobbling up and down as a result.

===> If you have two interfaces and you want TIPC to use both, you must also have two switches. (Anything else would not make sense anyway; the whole idea with this is to avoid single points of failure.)

Regards /jon
From: Kevin K. He <he...@ya...> - 2004-03-12 20:26:30
Thank you for the clarification. In fact, the reason I ask about the topology is that I am investigating TIPC as the management IPC in a stacked-switches environment. One concern in a switch/L2 environment is broadcast storms in a topology containing loops. Usually we run STP to form a loop-free topology among the switches, and I am wondering whether TIPC can eliminate the need for STP. It seems to me so far that TIPC works one layer above L2, so STP is still necessary to provide a loop-free L2 broadcast domain to TIPC.

Will TIPC send broadcast frames (DMAC=ff:ff:ff:ff:ff:ff) for control/management-plane traffic in the negotiation stage, and unicast frames for data-plane traffic?

Thanks
Kevin

> --- "Guo, Min" <mi...@in...> wrote:
> > > Will TIPC work well with the following topology?
> > >
> > >   PC1        PC2        PC3
> > >   |eth0      |eth0      |eth0
> > >   |________hub_________|
> >
> > Yes
> >
> > > How about the following topology? Will a client on PC1 reach a server on PC4 automatically via PC3?
> > >
> > >                         eth1         eth0
> > >   PC1        PC2        PC3--------- hub2 -----PC4
> > >   |eth0      |eth0      |eth0
> > >   |________hub1________|
> >
> > Yes
> >
> > > How about the following topology, where a self-loop exists?
> > >
> > >        eth1
> > >   PC1---------
> > >   |eth0      |
> > >   |________hub
> >
> > Yes
> >
> > > How about the following topology, where a loop between two PCs exists? Essentially this topology exists when STP is turned off. If TIPC can tolerate this topology, we may not need an underlying STP to resolve L2 loops.
> > >
> > >        eth1           eth0
> > >   PC1---------  /----PC2---|
> > >   |eth0      | /           |
> > >   |________switch_________|
> >
> > I have not tried this, but I think logically TIPC can work in such a topology.
> >
> > > When STP is enabled on the switch, the above topology becomes:
> > >
> > >   PC1        PC2
> > >   |eth0      |
> > >   |________switch_________|
> > >
> > > Thanks a lot!
> > >
> > > Kevin
From: Kevin K. He <he...@ya...> - 2004-03-12 20:24:19
I didn't mean two PCs interconnected with two NICs each. What I wish to convey is an analogy with two switches interconnected in the following way:

  ----------  1/1      2/1  ----------
  | switch 1 |--------------| switch 2 |
  |          | 1/2      2/2 |          |
  |__________|--------------|__________|

Assuming all of ports 1/1, 1/2, 2/1 and 2/2 are in the same VLAN (broadcast domain), the above topology has a "loop": a broadcast or flooded frame from switch 1 port 1/1 is forwarded to 2/1, then flooded by switch 2 to 2/2, then back to switch 1 through 1/2, and then switch 1 again floods the frame to 1/1, etc. Thus the frame is flooded forever in a loop.

I'm trying to understand the situation if I use TIPC as the management/control-plane IPC between the two switches (imagine that the two switches form a single stack). I guess my analogy using two PCs is inaccurate, since a PC usually doesn't perform the "bridging" function between its ports.

Thanks
Kevin

> --- "Ling, Xiaofeng" <xia...@in...> wrote:
> > > How about the following topology, where a loop between two PCs exists? Essentially this topology exists when STP is turned off. If TIPC can tolerate this topology, we may not need an underlying STP to resolve L2 loops.
> > >
> > >        eth1           eth0
> > >   PC1---------  /----PC2---|
> > >   |eth0      | /           |
> > >   |________switch_________|
> >
> > I have not tried this, but I think logically TIPC can work in such a topology.
>
> If this means two PCs with two Ethernet cards connected to each other, then there will be two links between the two nodes, and TIPC will automatically perform load balancing and failover.
From: Kevin K. He <he...@ya...> - 2004-03-12 20:22:00
It seems that we can't squeeze a MAC address into the 12-bit node id. Then what happens if two processors use the same node id? Will TIPC automatically detect the collision, and perhaps even resolve it automatically by assigning each node a unique id? Would this be a bad idea to implement, if it's not there yet?

Kevin

> --- "Ling, Xiaofeng" <xia...@in...> wrote:
> > Currently the node id is only 12 bits, and a whole TIPC address is 32 bits, including the zone, cluster and node parts. It seems it is not so simple to change it to 64 bits.
> >
> > > -----Original Message-----
> > > From: tip...@li... [mailto:tip...@li...] On Behalf Of Kevin Kaichuan He
> > > Sent: 12 March 2004 12:57
> > > To: tip...@li...
> > > Subject: [Tipc-discussion] 64-bit processor node id
> > >
> > > Currently I see that driver.c uses an "int node" to store the processor node id, so it will be a 32-bit node id on 32-bit processors.
> > >
> > > I am wondering whether we can make it 64-bit. A 64-bit integer is enough to store an Ethernet MAC address, so in order to generate a cluster-wide unique node id, the management planes on different nodes would not need to exchange any network packets: they could simply use their MAC addresses as node ids.
> > >
> > > One motivation for using TIPC in our project is that we can avoid the complexity of IP address management in a stack of L2 switches. With 64-bit node ids, I guess every node in our stack could start TIPC totally independently of the others.
> > >
> > > Would there be a negative impact of 64-bit node ids on TIPC?
> > >
> > > Thank you!
> > >
> > > Kevin
From: Kevin K. He <he...@ya...> - 2004-03-12 20:20:28
Can we update the processor node id dynamically? For example, at installation time I do "insmod tipc.o node=1"; later, can I update the node to 2 or 3, etc.? Will TIPC tolerate such a change?

Thanks
Kevin

--- "Ling, Xiaofeng" <xia...@in...> wrote:
> Currently the node id is only 12 bits, and a whole TIPC address is 32 bits, including the zone, cluster and node parts. It seems it is not so simple to change it to 64 bits.
>
> > -----Original Message-----
> > From: tip...@li... [mailto:tip...@li...] On Behalf Of Kevin Kaichuan He
> > Sent: 12 March 2004 12:57
> > To: tip...@li...
> > Subject: [Tipc-discussion] 64-bit processor node id
> >
> > Currently I see that driver.c uses an "int node" to store the processor node id, so it will be a 32-bit node id on 32-bit processors.
> >
> > I am wondering whether we can make it 64-bit. A 64-bit integer is enough to store an Ethernet MAC address, so in order to generate a cluster-wide unique node id, the management planes on different nodes would not need to exchange any network packets: they could simply use their MAC addresses as node ids.
> >
> > One motivation for using TIPC in our project is that we can avoid the complexity of IP address management in a stack of L2 switches. With 64-bit node ids, I guess every node in our stack could start TIPC totally independently of the others.
> >
> > Would there be a negative impact of 64-bit node ids on TIPC?
> >
> > Thank you!
> >
> > Kevin
From: Mark H. <ma...@os...> - 2004-03-12 15:58:37
On Thu, 2004-03-11 at 21:50, Kevin Kaichuan He wrote:
> Will TIPC work well with the following topology? [ ... ]
>
> How about the following topology? Will a client on PC1 reach a server on PC4 automatically via PC3?
>
>                         eth1         eth0
>   PC1        PC2        PC3--------- hub2 -----PC4
>   |eth0      |eth0      |eth0
>   |________hub1________|

This doesn't work. I believe that all nodes are required to have full connectivity unless you have a device-processor setup, but then not all nodes are equal.

Mark.
--
Mark Haverkamp <ma...@os...>
From: Ling, X. <xia...@in...> - 2004-03-12 06:55:31
> > How about the following topology, where a loop between two PCs exists? Essentially this topology exists when STP is turned off. If TIPC can tolerate this topology, we may not need an underlying STP to resolve L2 loops.
> >
> >        eth1           eth0
> >   PC1---------  /----PC2---|
> >   |eth0      | /           |
> >   |________switch_________|
> >
> I have not tried this, but I think logically TIPC can work in such a topology.

If this means two PCs with two Ethernet cards connected to each other, then there will be two links between the two nodes, and TIPC will automatically perform load balancing and failover.
From: Guo, M. <mi...@in...> - 2004-03-12 06:27:53
> -----Original Message-----
> From: tip...@li... [mailto:tip...@li...] On Behalf Of Kevin Kaichuan He
> Sent: Friday, March 12, 2004 1:50 PM
> To: tip...@li...
> Subject: [Tipc-discussion] How good is TIPC on various topologies ?
>
> Will TIPC work well with the following topology?
>
>   PC1        PC2        PC3
>   |eth0      |eth0      |eth0
>   |________hub_________|

Yes

> How about the following topology? Will a client on PC1 reach a server on PC4 automatically via PC3?
>
>                         eth1         eth0
>   PC1        PC2        PC3--------- hub2 -----PC4
>   |eth0      |eth0      |eth0
>   |________hub1________|

Yes

> How about the following topology, where a self-loop exists?
>
>        eth1
>   PC1---------
>   |eth0      |
>   |________hub

Yes

> How about the following topology, where a loop between two PCs exists? Essentially this topology exists when STP is turned off. If TIPC can tolerate this topology, we may not need an underlying STP to resolve L2 loops.
>
>        eth1           eth0
>   PC1---------  /----PC2---|
>   |eth0      | /           |
>   |________switch_________|

I have not tried this, but I think logically TIPC can work in such a topology.

> When STP is enabled on the switch, the above topology becomes:
>
>   PC1        PC2
>   |eth0      |
>   |________switch_________|
>
> Thanks a lot!
>
> Kevin
From: Ling, X. <xia...@in...> - 2004-03-12 06:27:20
Currently the node id is only 12 bits, and a whole TIPC address is 32 bits, including the zone, cluster and node parts. It seems it is not so simple to change it to 64 bits.

> -----Original Message-----
> From: tip...@li... [mailto:tip...@li...] On Behalf Of Kevin Kaichuan He
> Sent: 12 March 2004 12:57
> To: tip...@li...
> Subject: [Tipc-discussion] 64-bit processor node id
>
> Currently I see that driver.c uses an "int node" to store the processor node id, so it will be a 32-bit node id on 32-bit processors.
>
> I am wondering whether we can make it 64-bit. A 64-bit integer is enough to store an Ethernet MAC address, so in order to generate a cluster-wide unique node id, the management planes on different nodes would not need to exchange any network packets: they could simply use their MAC addresses as node ids.
>
> One motivation for using TIPC in our project is that we can avoid the complexity of IP address management in a stack of L2 switches. With 64-bit node ids, I guess every node in our stack could start TIPC totally independently of the others.
>
> Would there be a negative impact of 64-bit node ids on TIPC?
>
> Thank you!
>
> Kevin
From: Kevin K. He <he...@ya...> - 2004-03-12 06:01:17
Will TIPC work well with the following topology?

  PC1        PC2        PC3
  |eth0      |eth0      |eth0
  |________hub_________|

How about the following topology? Will a client on PC1 reach a server on PC4 automatically via PC3?

                        eth1         eth0
  PC1        PC2        PC3--------- hub2 -----PC4
  |eth0      |eth0      |eth0
  |________hub1________|

How about the following topology, where a self-loop exists?

       eth1
  PC1---------
  |eth0      |
  |________hub

How about the following topology, where a loop between two PCs exists? Essentially this topology exists when STP is turned off. If TIPC can tolerate this topology, we may not need an underlying STP to resolve L2 loops.

       eth1           eth0
  PC1---------  /----PC2---|
  |eth0      | /           |
  |________switch_________|

When STP is enabled on the switch, the above topology becomes:

  PC1        PC2
  |eth0      |
  |________switch_________|

Thanks a lot!

Kevin
From: Kevin K. He <he...@ya...> - 2004-03-12 05:07:29
Currently I see that driver.c uses an "int node" to store the processor node id, so it will be a 32-bit node id on 32-bit processors.

I am wondering whether we can make it 64-bit. A 64-bit integer is enough to store an Ethernet MAC address, so in order to generate a cluster-wide unique node id, the management planes on different nodes would not need to exchange any network packets: they could simply use their MAC addresses as node ids.

One motivation for using TIPC in our project is that we can avoid the complexity of IP address management in a stack of L2 switches. With 64-bit node ids, I guess every node in our stack could start TIPC totally independently of the others.

Would there be a negative impact of 64-bit node ids on TIPC?

Thank you!

Kevin
From: Kevin K. He <he...@ya...> - 2004-03-11 06:02:14
It seems that tipc_signal_wait_queue is an external variable that is not defined anywhere?

Thanks
Kevin
From: Jon M. <jon...@er...> - 2004-03-11 02:00:46
It is a lot more stable than the previous version, and I have corrected numerous compiler issues raised by Mark and Steve. I added end-to-end flow control on connections, and an optional 'unreliable mode' for SOCK_SEQPACKET and SOCK_RDM. The latter is equivalent to SOCK_DGRAM, which I also added. (Is there any Posix name for an "unreliable SOCK_SEQPACKET"?) With this I think I am done with hacking the API and socket.c, except for pure bug fixes. SOCK_STREAM is still not tested.

Next: move over to Linux 2.6, and SMP.

/Jon
From: Jon M. <jon...@er...> - 2004-02-27 02:02:05
Hi all,

I have checked in an "unstable" TIPC, merged with Guo's and Ling's multicast code. (I had to comment out that code, though; there seems to be something missing that makes it impossible to compile.)

The user API has had another overhaul, and I think the socket interface is now a lot closer to Posix compliance. From now on a user must strictly define which kind of socket he wants: SOCK_RDM (for connectionless communication), SOCK_SEQPACKET or SOCK_STREAM. The sockets will behave the way people expect them to: to set up/shut down a connection, use connect() and shutdown(), and then send/receive data the way you are used to. But it is still possible to do setup by just sending a name-addressed message to the accepting port. Also, it is possible to do local connect()/disconnect() via ioctls, so the current semantics are fully preserved for those who want that option.

I implemented use of the MSG_PEEK flag for SOCK_SEQPACKET and SOCK_RDM, but I find this rather unsatisfactory, so I invented a new flag for recv(): MSG_PEEP. This behaves as the TIPC API always did: if the given buffer is too short, it is returned with the first four bytes containing the length of the pending message. If this flag is not used, recv() will behave according to the standard, i.e. discard the remainder of the message.

The code is less stable than the previous version, and some of it (SOCK_STREAM) is not even tested, but I still decided to check it in so that you can see what the interfaces and socket adaptation look like now, and possibly do some testing.

I am going to Seoul (IETF-59) tomorrow and may be difficult to reach next week, but you are welcome to try. I can be reached at +1 514 576 2150, or as a secondary option at +47 928 09 576. And I read email, of course.

Regards /Jon
From: Guo, M. <mi...@in...> - 2004-02-25 07:01:09
|
Add mc translation function to the name_table.c file
---------------------------
Index: name_table.c
===================================================================
RCS file: /home/cvs/develop/unstable_tipc/net/tipc/name_table.c,v
retrieving revision 1.3
retrieving revision 1.5
diff -u -r1.3 -r1.5
--- name_table.c	24 Feb 2004 02:00:08 -0000	1.3
+++ name_table.c	25 Feb 2004 06:49:24 -0000	1.5
@@ -38,13 +38,22 @@
  *
  * ------------------------------------------------------------------------
  *
- * $Id: name_table.c,v 1.3 2004/02/24 02:00:08 xling Exp $
+ * $Id: name_table.c,v 1.5 2004/02/25 06:49:24 mguo Exp $
  *
  * Revision history:
  * ----------------
  * $Log: name_table.c,v $
- * Revision 1.3  2004/02/24 02:00:08  xling
- * sf0224
+ * Revision 1.5  2004/02/25 06:49:24  mguo
+ * compiler pass
+ *
+ * Revision 1.4  2004/02/25 06:00:47  xling
+ * fix mc
+ *
+ * Revision 1.2  2004/02/18 07:22:02  xling
+ * *** empty log message ***
+ *
+ * Revision 1.1.1.1  2004/02/17 08:48:08  xling
+ * import
  *
  * Revision 1.4  2004/02/09 03:49:44  jonmaloy
  * *** empty log message ***
@@ -69,6 +78,9 @@
 #include "node_subscr.h"
 #include "name_subscr.h"
 #include "port.h"
+#include "cluster.h"
+#include "bcast.h"
+#include <linux/list.h>
 
 static void nametbl_dump(void);
 static void nametbl_print(struct print_buf *buf, const char *str);
@@ -674,6 +686,146 @@
 not_found:
 	read_unlock_bh(&nametbl_lock);
 	return 0;
+}
+
+/*
+ * nametbl_mc_translate
+ */
+int nametbl_mc_translate(struct list_head *mc_head, uint type,
+			 uint lower, uint upper)
+{
+	struct name_seq *seq;
+	int i = 0;
+	struct publication *publ;
+	struct sub_seq *sseq;
+	int low_seq, high_seq;
+	uint destport;
+	tipc_net_addr_t destnode;
+
+	read_lock_bh(&nametbl_lock);
+	seq = nametbl_find_seq(type);
+	if (!seq)
+		goto not_found;
+
+	sseq = nameseq_available(seq, lower, upper);
+	if (!sseq)
+		goto not_found;
+
+	low_seq = nameseq_find_insert_pos(seq, lower);
+	if (low_seq < 0)
+		low_seq = ~low_seq;
+
+	high_seq = nameseq_find_insert_pos(seq, upper);
+	if (high_seq < 0)
+		high_seq = (~high_seq) - 1;
+
+	if (high_seq < low_seq)
+		goto not_found;
+
+	spin_lock_bh(&seq->lock);
+
+	for (i = low_seq; i <= high_seq; i++) {
+		publ = seq->sseqs[i].cluster_list;
+		if (!publ) {
+			spin_unlock_bh(&seq->lock);
+			goto not_found;
+		}
+		do {
+			destnode = publ->node;
+			destport = publ->ref;
+			if (false == mc_indenity_create(mc_head, destport, destnode))
+				goto found;
+			publ = publ->cluster_list.next;
+		} while (publ != seq->sseqs[i].cluster_list);
+	}
+	if (list_empty(mc_head)) {
+		spin_unlock_bh(&seq->lock);
+		goto not_found;
+	}
+found:
+	spin_unlock_bh(&seq->lock);
+	read_unlock_bh(&nametbl_lock);
+	return true;
+not_found:
+	read_unlock_bh(&nametbl_lock);
+	return false;
+}
+
+/*
+ * nametbl_self_translate
+ */
+int nametbl_self_translate(struct list_head *mc_head, uint type,
+			   uint lower, uint upper)
+{
+	struct name_seq *seq;
+	int i = 0;
+	struct publication *publ;
+	struct sub_seq *sseq;
+	int low_seq, high_seq;
+	uint destport;
+	tipc_net_addr_t destnode;
+
+	read_lock_bh(&nametbl_lock);
+	seq = nametbl_find_seq(type);
+	if (!seq)
+		goto not_found;
+
+	sseq = nameseq_available(seq, lower, upper);
+	if (!sseq)
+		goto not_found;
+
+	low_seq = nameseq_find_insert_pos(seq, lower);
+	if (low_seq < 0)
+		low_seq = ~low_seq;
+
+	high_seq = nameseq_find_insert_pos(seq, upper);
+	if (high_seq < 0)
+		high_seq = (~high_seq) - 1;
+
+	if (high_seq < low_seq)
+		goto not_found;
+
+	spin_lock_bh(&seq->lock);
+
+	for (i = low_seq; i <= high_seq; i++) {
+		publ = seq->sseqs[i].node_list;
+		if (!publ) {
+			spin_unlock_bh(&seq->lock);
+			goto not_found;
+		}
+		destnode = publ->node;
+		destport = publ->ref;
+		if (destport && (destnode == tipc_own_addr)) {
+			if (false == mc_indenity_create(mc_head, destport, destnode)) {
+				spin_unlock_bh(&seq->lock);
+				goto not_found;
+			}
+		}
+	}
+	if (list_empty(mc_head)) {
+		spin_unlock_bh(&seq->lock);
+		goto not_found;
+	}
+	spin_unlock_bh(&seq->lock);
+	read_unlock_bh(&nametbl_lock);
+	return true;
+not_found:
+	read_unlock_bh(&nametbl_lock);
+	return false;
+}
 
 struct publication *
|
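The two translate functions in this patch decode a negative return from nameseq_find_insert_pos() with ones-complement arithmetic: ~r gives the lower bound of the scanned range and (~r) - 1 the upper bound. A standalone sketch of that convention, using illustrative names rather than the real TIPC helpers:

```c
/* Hypothetical stand-in for nameseq_find_insert_pos(): binary-search a
 * sorted array; return the index on a hit, or the ones-complement of the
 * insertion point on a miss (always negative, decodable with ~). */
int find_insert_pos(const int *a, int n, int key)
{
    int lo = 0, hi = n;

    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;

        if (a[mid] == key)
            return mid;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return ~lo;          /* not found: encode the insertion point */
}

/* Decode a miss the way the patch does for the lower bound of a range:
 * ~r is the index of the first entry >= key. */
int decode_lower(int r)
{
    return r < 0 ? ~r : r;
}

/* ...and for the upper bound: (~r) - 1 is the last entry <= key. */
int decode_upper(int r)
{
    return r < 0 ? (~r) - 1 : r;
}
```

With a sorted sequence {10, 20, 30}, looking up 15 as a lower bound decodes to index 1, and looking up 25 as an upper bound also decodes to index 1, so the loop in the patch visits exactly the overlapping sub-sequences.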
From: Ling, X. <xia...@in...> - 2004-02-24 08:27:23
|
Hi, Jon. We have figured out the patch for the multicast part. This patch does not include the 4 files I checked in last time. If it has no problems, you can merge it with your code and check it in. When debugging, to eliminate the effect of the multicast code, just comment out the two lines containing "btlink_create" and "btlink_delete".
---------------------------------------------------------------------
Index: driver.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/driver.c,v
retrieving revision 1.10
diff -u -r1.10 driver.c
--- driver.c	16 Feb 2004 23:00:01 -0000	1.10
+++ driver.c	24 Feb 2004 07:58:50 -0000
@@ -138,6 +143,7 @@
 #include <tipc_bearer.h>
 #include <tipc_dbg.h>
 #include <tipc_port.h>
+#include <bcast.h>
 
 #ifdef TIPC_LINUX_2_4
 EXPORT_NO_SYMBOLS;
@@ -201,6 +207,7 @@
 		return -EINVAL;
 	eth_media_start(eth);
 	udp_media_start(udp0);
+	tipc_bcast_start();
 	sock_start();
 	return 0;
 }
Index: link.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/link.c,v
retrieving revision 1.7
diff -u -r1.7 link.c
--- link.c	16 Feb 2004 23:00:01 -0000	1.7
+++ link.c	24 Feb 2004 07:58:51 -0000
@@ -550,6 +551,9 @@
 	strcpy((char *) msg_data(msg), bearer_strip_name(b->publ.name));
 	this->owner = node_attach_link(this);
 	k_signal((Handler) link_start, (void *)this);
+	if (in_own_cluster(this->owner->addr)) {
+		btlink_create(this, b);
+	}
 	return this;
 }
 
@@ -565,6 +569,7 @@
 	if (!this)
 		return;
 	dbg("link_delete()\n");
+	btlink_delete(this);
 	link_reset(this);
 	node_detach_link(this->owner, this);
 	link_stop(this);
@@ -1738,6 +1743,9 @@
 		spin_unlock_bh(&this->owner->lock);
 		port_recv_proto_msg(buf);
 		return;
+	case BCAST_PROTOCOL:
+		spin_unlock_bh(&this->owner->lock);
+		link_recv_bcast_proto_msg(this, buf);
 
 	default:
 		spin_unlock_bh(&this->owner->lock);
 		net_route_msg(buf);
Index: node.h
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/node.h,v
retrieving revision 1.6
diff -u -r1.6 node.h
--- node.h	16 Feb 2004 23:00:02 -0000	1.6
+++ node.h	24 Feb 2004 07:58:51 -0000
@@ -97,6 +95,10 @@
 	uint last_in_bcast;
 	uint acked_bcast;
 	spinlock_t lock;
+	int gap; /* gap between exp_seq and an out-of-sequence data packet;
+		    request peer to retransmit */
+	int deferred_inqueue_sz;
+	struct sk_buff *deferred_in;
+
 };
 
 
@@ -104,6 +106,7 @@
 void node_delete(struct node*);
 struct node* node_attach_link(struct link* link);
 void node_detach_link(struct node*, struct link* link);
+struct link* node_get_other_link(struct node*, struct link* link);
 void node_link_down(struct node*, struct link* link);
 void node_link_up(struct node*, struct link* link);
 int node_has_active_links(struct node* this);
Index: port.c
===================================================================
RCS file: /cvsroot/tipc/source/unstable/net/tipc/port.c,v
retrieving revision 1.6
diff -u -r1.6 port.c
--- port.c	16 Feb 2004 23:00:02 -0000	1.6
+++ port.c	24 Feb 2004 07:58:52 -0000
@@ -383,11 +384,17 @@
 		buf_discard(buf);
 		return TIPC_OK;
 	}
-
-	rbuf = buf_acquire(data_sz + LONG_H_SIZE);
+	if (msg_mcast(msg))
+		rbuf = buf_acquire(data_sz + MCAST_H_SIZE);
+	else
+		rbuf = buf_acquire(data_sz + LONG_H_SIZE);
 	rmsg = buf_msg(rbuf);
-	msg_init(rmsg, imp, msg_type(msg), err,
-		 LONG_H_SIZE, msg_orignode(msg));
+	if (msg_mcast(msg))
+		msg_init(rmsg, imp, msg_type(msg), err,
+			 MCAST_H_SIZE, msg_orignode(msg));
+	else
+		msg_init(rmsg, imp, msg_type(msg), err,
+			 LONG_H_SIZE, msg_orignode(msg));
 	msg_set_destport(rmsg, msg_origport(msg));
 	msg_set_prevnode(rmsg, tipc_own_addr);
 	msg_set_origport(rmsg, msg_destport(msg));
@@ -395,15 +402,23 @@
 		msg_set_orignode(rmsg, tipc_own_addr);
 	else
 		msg_set_orignode(rmsg, msg_destnode(msg));
-	msg_set_size(rmsg, data_sz + LONG_H_SIZE);
-	msg_set_nametype(rmsg, msg_nametype(msg));
-	msg_set_nameinst(rmsg, msg_nameinst(msg));
-	buf_copy_append(rbuf, LONG_H_SIZE, msg_data(msg), data_sz);
+	msg_set_nametype(rmsg, msg_nametype(msg));
+	msg_set_nameinst(rmsg, msg_nameinst(msg));
+
+	if (msg_mcast(msg)) {
+		msg_set_size(rmsg, data_sz + MCAST_H_SIZE);
+		msg_set_nameupper(rmsg, msg_nameupper(msg));
+		buf_copy_append(rbuf, MCAST_H_SIZE, msg_data(msg), data_sz);
+	} else {
+		msg_set_size(rmsg, data_sz + LONG_H_SIZE);
+		buf_copy_append(rbuf, LONG_H_SIZE, msg_data(msg), data_sz);
+	}
+
 	buf_discard(buf);
 	net_route_msg(rbuf);
 	return TIPC_OK;
 }
-
 static int
 port_reject_sections(struct tipc_msg *hdr,
 		     struct tipc_msg_section const *sseq,
@@ -1375,17 +1390,6 @@
 }
 
 
-int
-tipc_multicast_buf(tipc_ref_t portref,
-		   struct tipc_name_seq const *seq,
-		   tipc_net_addr_t domain, /* 0: own zone */
-		   void *buf,
-		   uint size)
-{
-	warn("Multicast not implemented\n");
-	return TIPC_FAILURE;
-}
-
 
 void
 port_stop(void)
|
From: Ling, X. <xia...@in...> - 2004-02-23 08:07:48
|
> -----Original Message-----
> From: Jon Maloy [mailto:jon...@er...]
> Sent: 21 February 2004 8:51
> To: Ling, Xiaofeng
> Cc: Guo, Min; tipc
> Subject: Re: check in some files for tipc reliable multicast
>
>
> Hi,
> I did not take a look at the code yet, but it sounds like you
> are using a good approach. We have to do something about pt.
> 5 at some point, maybe extend UB(skb) (there is space),
> because core code should not know anything about sk_buff's.
> But otherwise it sounds ok.
>
OK, we revised the code and rechecked it in. We now add a pointer, next2, to struct usr_block, and two macros, buf_next2 and buf_set_next2, in tipc_buf.h. |
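The arrangement described above, a second chaining pointer inside the per-buffer user area, can be sketched as follows. The field and macro names follow the mail, but the surrounding types are simplified stand-ins, not the real kernel or TIPC definitions:

```c
/* Simplified stand-ins for the real kernel/TIPC types. */
struct usr_block {
    void *next2;            /* the added second chaining pointer */
};

struct sk_buff {
    struct sk_buff *next;   /* links the buffer on bcast_outqueue */
    struct sk_buff *prev;
    struct usr_block ub;    /* per-buffer TIPC user area, i.e. UB(skb) */
};

/* Sketches of the two macros described as added to tipc_buf.h. */
#define buf_next2(buf)        ((struct sk_buff *)(buf)->ub.next2)
#define buf_set_next2(buf, n) ((buf)->ub.next2 = (void *)(n))
```

A buffer can thus sit on the broadcast outqueue (via next/prev) and on a bcastlink's bearer-congestion queue (via next2) at the same time, which is the point of the design.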
From: Jon M. <jon...@er...> - 2004-02-21 00:58:43
|
Hi,
I did not take a look at the code yet, but it sounds like you are using a good approach. We have to do something about pt. 5 at some point, maybe extend UB(skb) (there is space), because core code should not know anything about sk_buff's. But otherwise it sounds ok.

Have a nice weekend
/jon

Ling, Xiaofeng wrote:

>Hi, Jon
>Guo Min and I have finished some initial code for TIPC reliable
>multicast. The files are:
>bcast.h
>bcast.c
>sendbcast.c
>recvbcast.c
>These files contain the routines used by reliable multicast. The patch
>to some other parts has not been checked in yet, so currently this code
>will not affect others.
>
>Some explanation of the implementation:
>
>1. We define a struct bcastlink, derived from struct link. Each bearer
>has one corresponding bcastlink, created during link configuration.
>2. We define another struct, bcastlinkset, a set of bcastlinks; one
>bcastlinkset covers all the nodes in the cluster.
>3. A global array of bcastlinksets is used by blink_select; each
>broadcast packet is delivered on one of the bcastlinksets.
>4. The policy for building the bcastlinkset array is: a bcastlink
>belongs to exactly one bcastlinkset, and each bcastlinkset tries to
>include as few bcastlinks as possible.
>5. A bcast outqueue is defined as "struct sk_buff_head bcast_outqueue";
>it uses the "next" and "prev" pointers inside struct sk_buff to link
>the queue instead of UB(skb), because the UB(skb) pointer is used in
>the bcastlink's queue for bearer congestion.
>
|
From: Ling, X. <xia...@in...> - 2004-02-20 05:54:26
|
Hi, Jon
Guo Min and I have finished some initial code for TIPC reliable multicast. The files are:
bcast.h
bcast.c
sendbcast.c
recvbcast.c
These files contain the routines used by reliable multicast. The patch to some other parts has not been checked in yet, so currently this code will not affect others.

Some explanation of the implementation:

1. We define a struct bcastlink, derived from struct link. Each bearer has one corresponding bcastlink, created during link configuration.
2. We define another struct, bcastlinkset, which is a set of bcastlinks; one bcastlinkset covers all the nodes in the cluster.
3. A global array of bcastlinksets is used by blink_select; each broadcast packet is delivered on one of the bcastlinksets.
4. The policy for building the bcastlinkset array is: a bcastlink belongs to exactly one bcastlinkset, and each bcastlinkset tries to include as few bcastlinks as possible.
5. A bcast outqueue is defined as "struct sk_buff_head bcast_outqueue"; it uses the "next" and "prev" pointers inside struct sk_buff to link the queue instead of UB(skb), because the UB(skb) pointer is used in the bcastlink's queue for bearer congestion.
|
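Points 1 to 4 above can be sketched roughly like this. It is only an illustration of the described data model; all names and fields here are assumptions, and the real definitions live in bcast.h:

```c
struct bearer;                          /* opaque stand-in */

struct link {                           /* stand-in for TIPC's struct link */
    char name[32];
};

/* Point 1: one bcastlink per bearer, "derived from" struct link by
 * embedding it as the first member. */
struct bcastlink {
    struct link l;
    struct bearer *bearer;
};

/* Point 2: a set of bcastlinks that together cover the whole cluster. */
#define MAX_BLINKS 8

struct bcastlinkset {
    struct bcastlink *links[MAX_BLINKS];
    int n_links;
};

/* Point 3: each broadcast packet goes out on one set from a global
 * array; the selection policy is not spelled out in the mail, so a
 * simple round-robin on the packet number is assumed here. */
struct bcastlinkset *blink_select(struct bcastlinkset *sets, int n_sets,
                                  unsigned int pkt_no)
{
    return &sets[pkt_no % n_sets];
}
```

Point 4 then constrains how the array is populated: each bcastlink appears in exactly one set, and sets are kept as small as possible while still reaching every node.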
From: <all...@wi...> - 2004-02-19 16:22:00
|
Hi Jon:

I thought I'd pass along my feedback about the draft TIPC document, since you indicated that you're accepting input. Most of these are simple grammar or spelling issues, but there are a few others that relate to content.

page 2, bullet 5
You might want to add the word "ordered" (or some equivalent term) to the list of properties relating to data transfer. (It may seem obvious, but I noted that you explicitly mentioned this later in the document.)

page 14
The 2nd sentence for "Link" is a bit misleading/outdated, based on what you told me on Monday. I suggest the following replacement text: "A node pair may be interconnected by up to 2 active links; additional standby links of lower priority may also be present."

page 22
The last sentence of section 2.2.6 is garbled. Maybe remove "and considered advantageous"?

page 23
Text for section 2.2.9 is missing. Will this be filled in later?

page 24, paragraph 2
Replace "losses" with "loses".

page 35, section 2.5.4, paragraph 2, line 2
Replace "exists" with "exist".

page 37, section 2.5.5, bullet 2
Replace "N message" with "N messages".

page 46, state Reset-Reset
Replace "will will" with "will" at start of 3rd sentence.

page 50, state Active-Standby
Replace "numeral" with "numeric".

page 52, section 2.8
Replace "support" with "supports".

page 53, paragraph 1
Replace "nameing" with "naming" on line 3.

page 53, paragraph 2
Wouldn't each node have 199 x 2 + 4 x 2 = 406 links to maintain? Also, I was under the impression that each node creates a virtual link to itself to handle traffic between two of its own ports; am I mistaken here or would that mean there are really 407 links per node in the example?

page 60, paragraph 1
Replace "sner" with "sender" on line 2, I think.

page 62
The two paragraphs following Figure 21 should really appear at the end of section 3 (i.e. just before the start of section 3.1), rather than in section 3.1.1, since this info applies to both section 3.1 and section 3.2. Having it in section 3.1.1 confused me a fair bit the first time I read the document.

page 64, 2nd paragraph of text
Replace "TIPC_COMM_ERROR" with "TIPC_CONN_ERROR" on line 4, I think.

That's all for now.

Regards,
Al Stephens
Member of Technical Staff
Wind River Systems
|
From: Mark H. <ma...@os...> - 2004-02-17 19:43:04
|
On Mon, 2004-02-16 at 15:29, Jon Maloy wrote: > Hi all > A new version of "unstable" has been checked in. It works better > than the > previous one, but is far from stable yet. The attached files show how > the API is used now. It is a slightly rewritten version of the > benchmark > program, and actually runs to the end now. > > The "insmod" command has also changed: > > insmod node=xx eth0=1 udp0=2 netid=4711 (e.g.) > > The values you give to eth0 and udp0 define the "broadcast domain" > for that interface, i.e from which nodes the setup broadcasts should > be > responded to. (see description of "destination domain" in the draft, > page > 80/81. 0 means "anyone", 1 means "nodes in this zone", 2 "nodes > in this cluster", and 3 means "enable inteface, but don't send > broadcasts". > 4 (default value) means that the interface is disabled. > I put together a Makefile that I could use to make tipc for 2.6 but I had to move tipc.h from include/net to include/net/tipc since the #include in the code seems to want it that way. Also I had a number of compile errors. I have attached the Makefile that I used (it only works for 2.6) and some patch files to fix some of my compile errors. I stopped at port.c since there were lots of errors and wasn't sure what to fix. In particular, there are references to a lock in the port struct which is not there. Mark. -- Mark Haverkamp <ma...@os...> |
From: Ling, X. <xia...@in...> - 2004-02-17 01:53:24
|
p65, paragraph 2/L1: "translated to a port name" (port identity?)
p67, ph 2/L1: two "the"
p81, ph 2/line 2: two "with"
|
From: Ling, X. <xia...@in...> - 2004-02-10 03:16:50
|
Hi, Jon
On page 18, why is it "The maximum number of clusters within a zone is 4097"? There are 12 bits in the network address, so the total is 4096; should it be 4095 instead? The same question applies to "2048-4097" in the next paragraph.

> -----Original Message-----
> From: tip...@li...
> [mailto:tip...@li...] On
> Behalf Of Jon Maloy
> Sent: 13 January 2004 5:10
> To: tipc
> Subject: [Tipc-discussion] TIPC draft for IETF
>
>
> Hi all,
> I just finished the first "official" draft describing TIPC
> for IETF. Use this as a reference for how TIPC version 2 is
> supposed to work. Of course, any viewpoints are welcome,
> -there will no doubt be a version 01 within a few months.
>
> /Jon
>
|
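For reference, the arithmetic behind this question is plain field-width counting: a 12-bit field encodes 2^12 = 4096 distinct values (0 through 4095), which is why the draft's figure of 4097 looks worth a second check. A trivial illustration (not TIPC code):

```c
/* Number of distinct values an n-bit address field can hold. */
unsigned int field_values(unsigned int bits)
{
    return 1u << bits;
}
```

With 12 cluster bits this gives 4096 raw values; if one value (e.g. identity 0) is reserved, 4095 usable cluster numbers remain.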
From: Jon M. <jon...@er...> - 2004-01-30 23:15:21
|
Added Guo's multicast functionality + some minor bug fixes. /Jon |