Re: [Etherboot-developers] [RFC] Multicast reception....
From: <ebi...@ln...> - 2002-06-09 19:11:08
"Anselm Martin Hoffmeister" <an...@ho...> writes:
> > To address that, I am implementing a reliable multicast
> > download protocol in etherboot.
>
> How far have you got? I'm just working on something similar (not to steal
> the prize from you, it was your idea) and would like to save work.
I have a functioning client. I still need to test multiple clients at
once and implement support in more NICs but the core work is done.
> > Transmitting multicast packets is trivial and I have completed the
> > implementation in 10 lines of code. Receiving multicast packets is
> > more interesting.
>
> On hardware level, especially. This would have to be implemented for each
> card type on its own, if possible using the hardware filter.
Right. Since most cards implement a filter for multicast packets, opening
the filter up that wide, instead of all the way to what promiscuous mode
allows, looks fine.
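(For reference: many NICs implement that multicast filter as a 64-bin hash table indexed by bits of the CRC32 of the destination MAC. Which bits of the CRC, and in what order, varies per chip, so the sketch below is generic rather than any particular chip's layout, and the function names are invented:)

```c
#include <assert.h>
#include <stdint.h>

/* Ethernet CRC32 (reflected form of polynomial 0x04C11DB7), computed
 * bit by bit the way many NIC datasheets describe multicast hashing. */
static uint32_t ether_crc32(const uint8_t *data, int len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (int i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
    }
    return crc;
}

/* Set the hash-table bit for one multicast MAC.  Many chips index a
 * 64-bit table with 6 bits of the CRC; here we use the top 6 bits,
 * but real chips differ on which bits and whether they are reversed. */
static void hash_set(uint8_t table[8], const uint8_t mac[6])
{
    unsigned index = ether_crc32(mac, 6) >> 26;  /* 0..63 */
    table[index >> 3] |= 1 << (index & 7);
}
```

The driver would then write the 8 table bytes into the chip's multicast filter registers.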
> > Implementing IGMP, and filtering for the multicast addresses we are
> > waiting for, looks like about the same amount of work as ARP.
>
> I was not concerned with ARP. Let's assume - just for now - that we need no
> IGMP, as every client sits on the same subnet as the multicasting server,
> which sends its packets out to the local hardware network anyway (doesn't
> it?)
Nope, switches need IGMP. Besides, I already have it implemented. :)
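(An IGMPv2 membership report is only 8 bytes, which is part of why the implementation is cheap. A sketch of building one per RFC 2236 - the helper names here are invented, not from any tree:)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Standard Internet checksum over an even number of bytes. */
static uint16_t in_cksum(const uint8_t *data, int len)
{
    uint32_t sum = 0;
    for (int i = 0; i < len; i += 2)
        sum += (data[i] << 8) | data[i + 1];
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Build an 8-byte IGMPv2 membership report for 'group' (4 bytes in
 * network order).  Type 0x16 = v2 membership report. */
static void build_igmpv2_report(uint8_t pkt[8], const uint8_t group[4])
{
    pkt[0] = 0x16;        /* type: v2 membership report */
    pkt[1] = 0;           /* max response time (unused in reports) */
    pkt[2] = pkt[3] = 0;  /* checksum placeholder */
    memcpy(&pkt[4], group, 4);
    uint16_t ck = in_cksum(pkt, 8);
    pkt[2] = ck >> 8;
    pkt[3] = ck & 0xFF;
}
```

A correctly checksummed packet sums to zero when the checksum is run over it again, which makes this easy to self-check.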
> > My understanding of the guts of NICs is limited, so please check me. I
> > believe NICs have a hardware filter that allows them to receive just
> > broadcast packets and packets to their own MAC address. So to
> > receive multicast packets I need to open up or disable the filter
> > altogether. The simplest solution appears to be disabling the hardware
> > filter, going into promiscuous mode, and then replacing the hardware
> > filter with a software filter in await_reply.
>
> That's quite right. But afaik it's not that much more work to implement the
> hardware filter with
> some cards (e.g. via-rhine seems to have one that is easily programmable), so
> one {you, me, we} should intend any code changes to implement the hardware
> filter as well - if the NIC supports it.
>
> As I learned from the packet driver specifications (somewhere at
> crynwr.com), there are different states a card can be in, listed (missing
> one? not sure)
> 1/ accept no input packets (not interesting for us)
> 2/ accept packets for our MAC-address
> 3/ like 2/ plus accept broadcast packets (standard etherboot mode?)
> 4/ like 3/ plus accept multicast packets to a list of mc. addresses
> 5/ like 3/ plus accept all multicast packets
> 6/ accept any packets travelling by
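(If these states ever become part of a driver interface, they could be captured as an enum along these lines - the names are just a suggestion, not from any existing Etherboot header:)

```c
#include <assert.h>

/* Receiver states, mirroring the list above. */
enum rx_mode {
    RX_NONE,       /* 1/ accept no input packets */
    RX_UNICAST,    /* 2/ packets for our MAC address only */
    RX_BROADCAST,  /* 3/ plus broadcasts (current etherboot default) */
    RX_MULTICAST,  /* 4/ plus a list of multicast addresses */
    RX_ALLMULTI,   /* 5/ plus all multicast packets */
    RX_PROMISC,    /* 6/ everything travelling by */
};
```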
>
> As you might have read already, multicast packets are addressed to special
> Ethernet addresses (01:00:5e:xx:yy:zz), formed by dropping the top bits of
> the multicast IP address. So in any case we need a filter
> to differentiate between multicast packets to 235.12.45.67 and 228.12.45.67.
> BTW, the developers seem to be not too sure whether the topmost bit of [xx]
> always has to be zero (so that 228.12.45.67 maps to the same MAC as
> 228.140.45.67).
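(RFC 1112 settles that question: the MAC is 01:00:5e followed by only the low 23 bits of the IP address, so the topmost bit of [xx] is always zero, and 228.12.45.67 and 228.140.45.67 really do map to the same MAC. A sketch of the mapping:)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Map an IPv4 multicast address (host-order uint32) to its Ethernet
 * MAC per RFC 1112: 01:00:5e plus the low 23 bits of the address. */
static void mcast_ip_to_mac(uint32_t ip, uint8_t mac[6])
{
    mac[0] = 0x01;
    mac[1] = 0x00;
    mac[2] = 0x5E;
    mac[3] = (ip >> 16) & 0x7F;  /* top bit forced to zero */
    mac[4] = (ip >> 8) & 0xFF;
    mac[5] = ip & 0xFF;
}
```

The 9 discarded bits are why 32 different group addresses share each MAC, and why a software filter on the full IP address is needed no matter what the hardware does.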
>
> Most (all?) cards seem to have support for mode 1, 2, 3, 6 so (in case)
> having no hardware filter is the general solution. Some cards (the more
> expensive ones like via-rhine :-) have at least support for mode 4 too,
> which is the most desirable.
And 5/ is even more common than 4/ from skimming the kernel. Only very
old cards don't seem to have it implemented. That is what I think I
would like to make the new default etherboot mode. For most
practical purposes it is what we have today, but it allows us to
receive multicast packets as well.
> > Does anyone know if there is any communication between NICs and
> > switches about what a NIC is listening for that would make promiscuous
> > mode a bad state to put NICs into?
>
> Switches are dumb, aren't they? They forward (afaik) multicast and broadcast
> packets to any device connected. If they don't work with multicast, you're
> lost of course. My switch works fine (test yours with issuing a "route
> add -net 224.0.0.0 netmask 240.0.0.0 dev eth0" on two PCs on the switch and
> pinging to 224.0.0.1 - every packet should be dup'ed, no matter which host
> you ping from). As I see it, no packet may have a multicast address as
> originator - that would break the concepts of retransmission on that OSI
> level etc pp. So the switch will treat packets to the multicast MAC
> addresses like these where it doesn't know the port of the destination -
> just dump it to all ports.
> That is not always what you want, as too many multicast packets will fill
> the bandwidth even in parts of your structured network where they are not
> needed. That's what routers are for!
But there are lots of different levels of intelligence in switches. So a
smarter switch will peek up at layer 3 and do IGMP snooping to see where
multicast packets should go.
> What I wanted to find out:
> Did you in the meantime implement the card driver multicast code
> portion?
For the eepro100 yes. More to come.
> Let's make a standard for it, e.g. a function in the driver code for
> "listen_for_multicast (1=on, 0=off)", or even better the chance to listen
> for some addresses only... whatever the specific card driver makes of it
> doesn't matter; a software filter is necessary anyway and should be
> implemented *outside* the drivers (as it would be the same code in all of
> those drivers).
Unless someone can show me that listening for all multicast packets as
they come along is decidedly bad, I'm just going to turn it on by
default, and have a compile time option during the transition period.
Etherboot doesn't need super high performance drivers, just drivers
that are good enough. And if a card really cares I guess it can spy
on the IGMP table. But I would be really surprised if that mattered.
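(The software filter in question amounts to a few memcmp's against the destination MAC. A sketch under the assumption of a single joined group - the function and parameter names here are illustrative, not from the Etherboot tree:)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Software receive filter: with the NIC in all-multicast (or even
 * promiscuous) mode, drop anything that is not for us.  'our_mac' is
 * the NIC's address; 'mcast_mac' is the one group we joined, or NULL
 * if we joined none.  Returns nonzero if the frame should be kept. */
static int rx_filter_ok(const uint8_t dst[6], const uint8_t our_mac[6],
                        const uint8_t *mcast_mac)
{
    static const uint8_t bcast[6] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
    if (memcmp(dst, our_mac, 6) == 0)
        return 1;
    if (memcmp(dst, bcast, 6) == 0)
        return 1;
    if (mcast_mac && memcmp(dst, mcast_mac, 6) == 0)
        return 1;
    return 0;
}
```

Living outside the drivers, one copy of this serves every card, exactly as proposed above.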
> Let me know if you have - specifically - drivers for ne2k-isa (ns8390.[ch])
> and via-rhine (via-rhine.c) as these are the cards I'm working with.
Grep through the Linux drivers for ALLMULTI; it looks like a single
additional outb in most cases.
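(Schematically, accept-all-multicast on such chips is one bit in a receive-configuration register. The register name and bit value below are invented for illustration; a real driver would outb the shadow value to the chip's I/O port:)

```c
#include <assert.h>
#include <stdint.h>

/* RCR_ALLMULTI and the 'rcr' shadow register are made up for this
 * sketch; consult the chip's datasheet for the real bit. */
#define RCR_ALLMULTI 0x04

static uint8_t rcr;  /* shadow copy of the rx-config register */

static void set_allmulti(int on)
{
    if (on)
        rcr |= RCR_ALLMULTI;
    else
        rcr &= ~RCR_ALLMULTI;
    /* outb(rcr, ioaddr + RX_CONFIG);  <- the single extra outb */
}
```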
> AMD-home-pna is next on this list, as that is the card vmware
> emulates and it can be tested more easily - rebooting is more convenient, and
> you don't need the second screen and keyboard. If you have any of those, less
> work for me, more honor for you - else I will take off my gloves and grab
> right into the dustiest code.
I will see if I can get my code checked in within the next couple of hours so
you can see where I have gone. No promises until tomorrow though.
Eric