Michael Renzmann wrote:
> Hi all.
> In the past three or four weeks we have received an increasing number of
> donation offers, most of them related to hardware donations. On the one
> hand this is great, since it's one of the best signs that we did a good
> job during the last months.
> Hardware donations are much easier for us to handle until we are
> incorporated as a non-profit organization (which is still a work in
> progress).
> At this time we have already received four MiniPCI cards; two of them
> have been handed over to MadWifi developers, and two are currently
> unused.
> Open offers are:
> * two MiniPCI cards, two MiniPCI/express cards or two USB-Sticks
> * one MiniPCI card
> * an unspecified amount of cards
> * 2 x "hardware" (including WRAPs or similar systems)
> Now the question arises: what should we do with all the hardware? And
> this is where the actual RFC in this mail starts :)
> A look back
> Recently there has been a discussion about regression testing, and as part
> of the discussion I already suggested establishing an "official",
> project-driven testbed installation to perform mostly automated regression
> and stability testing. See , .
> There have been some arguments against having such an installation if it
> has a permanent connection to the internet (this can be regarded as a
> "detail" we can discuss later). However, I think there was no real
> objection against the main idea of having a project-driven testbed.
> Others mentioned that the hardware is not too expensive, which raises the
> possibility that users might contribute the results of their local
> regression tests. Still, having a project-driven testbed makes us a bit
> more independent from sources we cannot control - the worst case would be
> that we need to perform some tests and no users with the right
> hardware / setup are available at that time.
> The idea
> I think the following setup might work:
> We have one "control box", which can probably be accessed from the
> internet and which controls the other testbed hosts. All testbed hosts
> are connected to each other by ethernet, and (if possible) should also
> have a serial connection to the control box.
> The control box should have a bit of CPU power, enough for compiling
> stuff in a reasonable time. "CPU power" does not necessarily mean an
> Athlon64X2; a P3-800 could also do.
> The remaining testbed hosts could be nearly anything, such as WRAPs; CPU
> power does not matter for them. For now, x86 systems would be fine; at a
> later time, different architectures could be helpful.
> I think it would be useful to have a "netboot setup", so that only the
> control box needs a reasonable HDD. All other testbed hosts could boot
> from the network, making it a lot easier to switch distributions on each
> of the hosts and to keep the installations in sync.
> Regarding the question of "who resets the environment in case it's
> hosed": there are devices that allow power circuits to be switched on and
> off via a web interface. In the office we came across  some weeks ago,
> and gave these devices a test - although they are not as convenient as
> APC's MasterSwitch devices, they work fine. We could try to get one of
> these devices, so that we could pull the plug remotely if need be.
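Web-controlled power switches typically expose some simple HTTP interface, so the control box could pull the plug on a hung testbed host automatically. A hedged sketch - the device address, URL path and query parameters below are purely hypothetical and would have to be adapted to whatever device we actually get:

```python
# Sketch only: the device address, path and query parameters below are
# hypothetical; real web-controlled power switches each have their own API.
from urllib.parse import urlencode
from urllib.request import urlopen

POWER_SWITCH = "http://10.0.0.2"  # hypothetical address of the power switch


def outlet_url(outlet, state):
    """Build the (hypothetical) control URL for one outlet."""
    if state not in ("on", "off"):
        raise ValueError("state must be 'on' or 'off'")
    return "%s/outlet?%s" % (POWER_SWITCH,
                             urlencode({"port": outlet, "state": state}))


def power_cycle(outlet, dry_run=True):
    """Switch an outlet off and on again; with dry_run, only return the URLs."""
    urls = [outlet_url(outlet, "off"), outlet_url(outlet, "on")]
    if not dry_run:
        for url in urls:
            urlopen(url, timeout=5).read()  # fire the request at the device
    return urls
```

The control box could call something like `power_cycle()` whenever a testbed host stops answering on its serial console or to pings for too long.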
> Yes, these are very rough plans. Yes, this requires a fair amount of
> work. Yes, I'm aware that such an installation won't go online anytime
> soon, and I don't expect that. However, I think that the current donation
> offers should be used to lay the groundwork for the testbed, since it
> will definitely help to make the driver more stable and reliable (if we
> use this tool wisely).
> We could start by having the control box perform automated regression
> tests and report the results in a yet-to-be-defined way. That would
> probably mean compile tests in the beginning. Later, as the other testbed
> boxes are added, real functionality tests (involving real WLAN setups)
> could be performed as well.
I think these ideas are quite good, although I assume it would take a
fair amount of time and effort to implement ;-)
> What hardware do we need? Suggestions needed!
> We have several potential donors asking me what kind of hardware we need.
> Once we agree on the general idea of having a testbed installation, we
> should define our initial needs. The installation can be extended at a
> later time if necessary.
> The control box can and should be a "normal" PC with several PCI slots
> (which might be necessary for the serial connections and for inserting
> one or more PCI WLAN cards). A monitor and keyboard (for initial local
> maintenance work) are available, and I'll have a 60GB HDD at hand as soon
> as madwifi.org has been moved to the new server (which has to happen
> before mid-July).
> For the other testbed hosts I'd prefer small devices - the location I
> have in mind for the testbed has some space limitations. WRAPs, for
> example, would be nice, and they are quite cheap. These hosts also need
> at least one WLAN card (MiniPCI in the case of the WRAPs) and at least a
> pigtail cable. For now the testbed would cover an in-house installation,
> where we don't need special antennas - 6/13cm paper-clip antennas would
> most probably be sufficient. Ah, yes, and some flash devices (Compact
> Flash cards in the case of the WRAPs) would be necessary to allow initial
> booting.
It seems as if the MadWifi project has lots of radio equipment, but no
boards to deploy it in. Maybe if a testbed were drafted up, we could put
together a "wishlist" of the hardware required to create an extra test
"node", and publish it on madwifi.org so that potential donors know what
is of interest?
> Comments highly welcome. It would be great if we could agree soon on the
> general idea of the testbed and a rough idea of a possible setup.
Sorry for such late and small comments, but better late than never!