From: Harald W. <la...@gn...> - 2001-06-05 07:21:07
Hi! As announced, here is the multicast transport for UML. Attached are the UML kernel patch, the source code for a small debugging / dumping tool (useful while setting up your UML multicast network), and a short readme.

--
Live long and prosper
- Harald Welte / la...@gn...               http://www.gnumonks.org/
============================================================================
GCS/E/IT d- s-: a-- C+++ UL++++$ P+++ L++++$ E--- W- N++ o? K- w--- O- M-
V-- PS+ PE-- Y+ PGP++ t++ 5-- !X !R tv-- b+++ DI? !D G+ e* h+ r% y+(*)
From: Jeff D. <jd...@ka...> - 2001-06-05 20:12:15
la...@gn... said:
> you MUST specify at least mcast and a hwaddr, as we cannot derive the
> hardware address from the IP address (== multicast group), because it
> would be the same on all machines.

The hwaddr is derived from the IP address of the UML side of the device, not the host side. So, hwaddr should be optional.

> 3. Security
> None. If you run this on an untrusted network, anybody on the network
> will be able to sniff all the packets (you're multicasting them!) and
> insert fake packets in your network.

Do unix sockets support multicast? The unix man page doesn't mention anything that looks like the equivalent of the IP multicast sockopts. If not, we can get some security from file permissions, at least on a single machine, by moving the packet-distribution smarts into the drivers. Each driver would simulate multicast by sending packets to the other machines as needed, with a central administrative daemon keeping everyone up to date on who is on the network and which of them have IFF_PROMISC turned on.

Also, an administrative daemon plus these driver changes could be used to allow a machine to join the network any way it wants (IP multicast, unix socket, private IP socket), and the other machines would know how to communicate with it (although non-local machines would have a problem with unix sockets...).

> So what we really want to have, is exposing the promiscuous mode flag
> to the daemon. This means, that the 'filtering' part, usually done by
> the ethernet board, is transferred into the daemon.

Yup, this sounds reasonable to me...

Jeff
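The hwaddr derivation Jeff describes can be sketched as follows. This is an illustrative sketch only: the `fe:fd` prefix and the exact octet layout are assumptions, not necessarily what the UML driver actually does.

```python
# Sketch: building an Ethernet hardware address from the IPv4 address of
# the UML side of the interface, so each UML on the shared multicast
# "wire" gets a distinct MAC. The fe:fd prefix is an assumed, locally
# administered prefix chosen for illustration.
import ipaddress

def hwaddr_from_ip(ip: str) -> str:
    """Prefix the four IP octets with fe:fd to form a 6-byte MAC."""
    octets = ipaddress.IPv4Address(ip).packed
    mac = bytes([0xFE, 0xFD]) + octets
    return ":".join(f"{b:02x}" for b in mac)

print(hwaddr_from_ip("192.168.0.1"))  # fe:fd:c0:a8:00:01
```

Because the UML-side IP must already be unique within the virtual network, a MAC derived this way is unique too, which is why the hwaddr parameter can be optional.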
From: Harald W. <la...@gn...> - 2001-06-05 20:50:53
On Tue, Jun 05, 2001 at 04:26:17PM -0500, Jeff Dike wrote:
> The hwaddr is derived from the IP address of the UML side of the device, not
> the host side. So, hwaddr should be optional.

Ouch, I somehow missed that :) I see, ok... didn't try it, though.

> > 3. Security
> > None. If you run this on an untrusted network, anybody on the network
> > will be able to sniff all the packets (you're multicasting them!) and
> > insert fake packets in your network.
>
> Do unix sockets support multicast? The unix man page doesn't mention
> anything that looks like the equivalent of the ip multicast socketopts.

No, it doesn't.

> If not, we can get some security from file permissions on a single machine,
> at least, by moving the packet distribution smarts into the drivers. It
> would simulate multicast by sending packets to the other machines as needed,
> with a central administrative daemon keeping everyone up-to-date on who is on
> the network and which of them have IFF_PROMISC turned on.

Hm, ok. That's out of scope for the multicast transport, however. I think of the multicast transport as an easy means of getting multiple UMLs running on one or more machines without setting up any external daemons, etc. As soon as you want to solve the transport problem 'efficiently' (i.e. send each packet only once to each host rather than once per UML there, and maintain state about 'promiscuity' (*g*)), you will have to run one daemon on each machine and implement a state protocol, permission handling, ...

> Also, an administrative daemon plus these driver changes could be used to
> allow a machine to join the network any way it wants, (IP multicast, unix
> socket, private IP socket), and the other machines will know how to
> communicate with it (although non-local machines would have a problem with
> unix sockets...).

Hm. Nah, that sounds too complicated to me. So the daemon would have to translate unix domain datagrams to UDP and other nasty stuff like that (nasty fragmentation issues, etc.)? I don't know. It all sounds like a bit of overkill to me, at least for now.

> > So what we really want to have, is exposing the promiscuous mode flag
> > to the daemon. This means, that the 'filtering' part, usually done by
> > the ethernet board, is transferred into the daemon.

Ok. As stated, I don't think I will get around to doing any work on that, as I'll be busy with other stuff until September...

> Jeff
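Moving the 'filtering part' of an ethernet board into the daemon, as discussed above, might look roughly like this. A hypothetical sketch: the function name and structure are made up, but the filtering rule (accept if promiscuous, or if the destination MAC is the client's own, broadcast, or a group address) is what a real non-promiscuous NIC does in hardware.

```python
# Sketch: per-client delivery decision in a hypothetical daemon that has
# taken over the filtering a real Ethernet board would do.
BROADCAST = bytes.fromhex("ffffffffffff")

def should_deliver(frame: bytes, client_mac: bytes, promisc: bool) -> bool:
    """Decide whether the daemon should pass this Ethernet frame to a client."""
    dst = frame[:6]                 # destination MAC: first 6 bytes of the frame
    if promisc or dst == client_mac or dst == BROADCAST:
        return True
    return bool(dst[0] & 0x01)      # group bit set -> multicast frame

mac = bytes.fromhex("fefdc0a80001")
unicast_to_us = mac + bytes(6) + b"\x08\x00"
assert should_deliver(unicast_to_us, mac, promisc=False)
```

With this in the daemon, a client only receives frames it would have seen on real hardware, and IFF_PROMISC on the UML side simply flips the `promisc` flag for that client.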
From: Jeff D. <jd...@ka...> - 2001-06-05 22:29:30
OK, I put it in, with some changes (some cosmetic, some because I changed an interface or two), and it seems to be working fine.

la...@gn... said:
> ouch, i somehow missed that :) I see, ok... didn't try it, though.

I changed this, too.

One thing that's a bit strange is that I get noticeably lower throughput with mcast than with the daemon (215K - 220K per sec vs 230K - 260K per sec). This is with scp copying a 35M file from one UML to another. I would have expected better throughput from mcast because packets aren't going through the daemon...

Jeff
From: Harald W. <la...@gn...> - 2001-06-05 23:41:00
On Tue, Jun 05, 2001 at 06:43:23PM -0500, Jeff Dike wrote:
> OK, I put it in, with some changes (some cosmetic, some because I changed an
> interface or two), and it seems to be working fine.

Thanks.

> One thing that's a bit strange is that I get noticeably lower throughput with
> mcast vs daemon (215K - 220K per sec vs 230K - 260K per sec). This is with
> scp copying a 35M file from one UML to another.

Very interesting; I didn't do any benchmarking yet. No idea where that comes from...

> I would have expected better throughput from mcast because packets aren't
> going through the daemon...

I'm not sure about that. Maybe some fragmentation issue? Although it's unlikely, as we never leave any interface of the local machine.

There are some MTU issues when you run it over ethernet. Obviously, a full 1514-byte ethernet frame cannot be encapsulated in UDP+IP and still fit into an ethernet frame... so the stack has to generate IP fragments, which is evil. I haven't done any testing on that yet, but as soon as I have, we should put some hint in the docs...

> Jeff
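The MTU problem above can be made concrete with a little header arithmetic, assuming the standard header sizes (14-byte Ethernet, 20-byte IP, 8-byte UDP, no options):

```python
# Sketch: why a full-sized virtual Ethernet frame fragments when tunneled
# over UDP, and what inner MTU would avoid it.
ETH_HDR, IP_HDR, UDP_HDR = 14, 20, 8
ETH_MTU = 1500                      # payload limit of a physical Ethernet frame

def outer_payload(inner_mtu: int) -> int:
    """UDP/IP payload size when a whole inner frame is encapsulated."""
    inner_frame = ETH_HDR + inner_mtu   # the complete virtual Ethernet frame
    return IP_HDR + UDP_HDR + inner_frame

print(outer_payload(1500))  # 1542 > 1500, so the host IP stack must fragment

# Largest inner MTU that still fits in one physical frame:
safe_mtu = ETH_MTU - (IP_HDR + UDP_HDR + ETH_HDR)
print(safe_mtu)  # 1458
```

So one plausible hint for the docs would be to lower the MTU on the UML-side interfaces (to something like 1458 under these assumptions) when the multicast transport spans physical ethernet.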
From: Jeff D. <jd...@ka...> - 2001-06-05 23:47:32
jd...@ka... said:
> I would have expected better throughput from mcast because packets
> aren't going through the daemon...

Actually, it occurred to me just after posting this that the reason might be that multicast is acting as a hub, whereas the daemon is a switch. So, each UML needs to read back its own packets and throw them out.

Jeff
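The hub behaviour means every UML also receives its own transmissions. A receive path that 'reads back its own packets and throws them out' could be sketched as follows (hypothetical; the field offsets follow the standard Ethernet header layout):

```python
# Sketch: discarding looped-back frames by source MAC on the receive path.
def receive(frame: bytes, own_mac: bytes):
    """Return the frame, or None if it is one of our own transmissions
    that the multicast 'hub' has looped back to us."""
    src = frame[6:12]               # source MAC: bytes 6..11 of the frame
    if src == own_mac:
        return None                 # our own packet; throw it out
    return frame

own = bytes.fromhex("fefdc0a80001")
echo = bytes(6) + own + b"\x08\x00"  # a frame we sent, looped back to us
assert receive(echo, own) is None
```

Every frame each UML sends is read back and discarded this way, which is per-packet work the daemon setup never incurs, and a plausible cause of the throughput gap.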
From: Harald W. <la...@gn...> - 2001-06-06 00:10:59
On Tue, Jun 05, 2001 at 08:01:35PM -0500, Jeff Dike wrote:
> jd...@ka... said:
> > I would have expected better throughput from mcast because packets
> > aren't going through the daemon...
>
> Actually, it occurred to me just after posting this that the reason might be
> that multicast is acting as a hub, whereas the daemon is a switch. So, each
> UML needs to read back its own packets and throw them out.

Yeah, right. The combination of a hub with the missing filtering that a non-promiscuous network board would do is the reason.

The IP_MULTICAST_LOOP flag determines whether the packets get looped back to local sockets. Unfortunately 'local sockets' means socket(s) of the local process _and_ of other local processes. So we have to set the flag in order to make multiple UMLs on one host work. If you only have one UML per host, and a network of multiple hosts, you can switch this flag off ;)

Theoretically one could solve this problem by:
- using n multicast groups (where 'n' is the number of nodes participating in the virtual network)
- having each UML subscribe to all groups _but_ its own

but that would require more complicated configuration, and thus lose a lot of the ease of the mcast approach.

> Jeff
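Toggling the flag described above is standard setsockopt usage on the transport's UDP socket. A minimal sketch (not the actual driver code):

```python
import socket

# Sketch: IP_MULTICAST_LOOP on the sending socket controls whether
# outgoing multicast datagrams are looped back to local sockets --
# both our own and those of other local processes.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Multiple UMLs on one host: loopback must be on so they see each other.
s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP))  # 1

# One UML per host on a multi-host network: loopback can be off,
# avoiding the read-back-and-discard overhead entirely.
s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)
s.close()
```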