From: Craig H. <cr...@gu...> - 2005-11-21 23:08:51
I think a cluster of gumstix as a native compile farm, or maybe even for other uses, would be pretty cool. But I'm way too busy to set one up myself right now (or, likely, for the next few months). So I've thought up a possibly cool alternative: give away a bunch of gumstix+netstix in exchange for people agreeing to provide basic sysadmin services + connectivity.

So here's the offer:

Gumstix will give away 3x (6x gumstix-400g + 6x netstix + 6x power supplies + 6x ethernet cables) to 3 happy homes out there, if the recipients will help do the following:

1. Provide an always-on internet connection to their "cluster".
2. Preferably, provide a local NFS server for the cluster, allowing for a larger amount of storage space. Users might want to NFS-mount their own space over the internet instead, but then you get bandwidth/latency issues which would be ameliorated with more-local storage.
3. Help figure out a good way of hosting a native development environment (using distcc) on the cluster, so that gumstix users can log into the cluster remotely and use distcc to build things like perl and PHP natively on the gumstix, since those packages don't really like being cross-compiled.
4. We'll provide DNS service under the gumstix.com (and/or gumstix.org) domain for the cluster machines themselves, and any meta-site that makes sense to put up (though I'd guess the existing wiki could probably handle most of what might be needed).
5. Other terms and conditions may apply.
6. If 3 clusters turn out not to be enough because all 3 are constantly jammed with people doing useful stuff, then we might well do more in the future. If 6 nodes isn't enough, we might top that up too.
7. Anyone who wants to host one of these clusters should spend at least a minute or two thinking through the security implications of having a generally accessible set of linux boxes on their network. You can probably isolate the machines onto their own LAN segment fairly easily (maybe each setup would also include a netDUO and another gumstix to provide a nice firewall) to mitigate things.

So anyway, if you're interested in being one of the 3 hosts, let me know ASAP. Also, if anyone has thoughts on how to do what I'm suggesting above better, let me know that too.

C
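For concreteness, a minimal sketch of how the proposed distcc farm might be driven from a login node. The node names, LAN range, and package version are illustrative assumptions, not details from this thread:

    # On each build node: run the distcc daemon, accepting jobs only
    # from the cluster LAN (address range is an assumption)
    distccd --daemon --allow 192.168.10.0/24

    # On the login node: point distcc at the six workers, and let ccache
    # hand its cache misses to distcc
    export DISTCC_HOSTS="node1 node2 node3 node4 node5 node6"
    export CCACHE_PREFIX=distcc
    cd perl-5.8.7                # version just illustrative
    ./Configure -des             # configure probes run natively on ARM
    make -j12 CC="ccache gcc"    # ~2 jobs per node keeps all six busy

The point of running natively is that configure-time probes and build-time helpers (miniperl and friends) execute on the actual target architecture, which is exactly what cross-compilation breaks.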
From: N.E.Whiteford <N.E...@so...> - 2005-11-21 23:24:22
:) Sounds like a fun idea to me. I could host a cluster.

nav

On Mon, 21 Nov 2005, Craig Hughes wrote:
> I think a cluster of gumstix as a native compile farm, or maybe even
> for other uses, would be pretty cool.
> [cut]
From: Lorenzo H. Garcia-H. <lor...@gm...> - 2005-11-21 23:46:47
On Mon, 21-11-2005 at 15:08 -0800, Craig Hughes wrote:
> 2. Preferably, provide a local NFS server for the cluster, allowing
> for a larger amount of storage space.
> [cut]

NFS is a weak protocol and I think it will add further difficulties to the maintenance process. What about interfacing an IDE/ATA disk or a high-capacity compact flash card? NFS is good once you get it set up right and isolated.

> 3. Help figure out a good way of hosting a native development
> environment (using distcc) on the cluster ...
> [cut]

OK, that would be easy. Share a common directory per user. Users should apply for registration and "clearance"; it's not really feasible to open the cluster wide. One thing is open distributed computing, another is a shell server for any funky kid willing to disturb others. We could host a registration site on gumstix.com with XML-RPC or just manually handled account maintenance; then sysadmins open accounts, etc.

> 4. We'll provide DNS service under the gumstix.com (and/or
> gumstix.org) domain for the cluster machines themselves ...
> [cut]

I could help with tuxedo-es.org stuff, but I think we have mostly everything with gumstix.com.

> 6. If 3 clusters turn out not to be enough ... If 6 nodes isn't
> enough, we might top that up too.

One master node serving storage, maybe two. The rest are slave nodes with limited "scope" regarding file system access, etc. openMosix would help too, but that makes more sense for things other than compiling. For a compile farm we would be done with distcc and ccache.

> 7. Anyone who wants to host one of these clusters should spend at
> least a minute or two thinking through the security implications ...
> [cut]

Put them on their own network segment and block access to any other devices on other networks. Or use a dedicated DSL line (I have one to spare but need to make it a bit less restrictive). I'm thinking of giving web access and not a lot more.

We could work out an interface for building software registered in a database and available on the master node, then publish rootfs images and tarballs to a publicly accessible server. Think of a Tinderbox-like application fitted to our needs.

Other applications of the cluster: think of research on assurance models for open distributed computing. I'm working on one model but haven't found a way to implement it reliably. In any case, volunteers are needed to work this out and make something good of it.

> So anyway, if you're interested in being one of the 3 hosts, let me
> know ASAP.
> [cut]

I need to think more on it, but SEGumstix could have a chance of helping with the security issues.

Cheers, take care.
--
Lorenzo Hernández García-Hierro <lo...@gn...>
[1024D/6F2B2DEC] & [2048g/9AE91A22] [http://tuxedo-es.org]
From: Alexandre P. N. <al...@om...> - 2005-11-22 00:15:20
Craig Hughes wrote:
> I think a cluster of gumstix as a native compile farm, or maybe even
> for other uses, would be pretty cool.
> [cut]

I could host one set without problems.

Alexandre
From: Craig H. <cr...@gu...> - 2005-11-22 00:19:18
On Nov 21, 2005, at 4:15 PM, Alexandre Pereira Nunes wrote:
> Craig Hughes wrote:
>> [cut]
>
> I could host one set without problems.

A couple of folks have replied now, both on-list and off -- if you are interested, could you please give me some details on your internet setup, etc., to make it easier to pick the best home for these things? Ideally, we'd like them to go to places with solid, stable, allowed-to-run-servers internet connections -- universities or companies whose network admin staff aren't going to power-cycle the thing randomly because they don't know what it is, and not somewhere a cable provider will start blocking ports because it looks like you're overusing your "unlimited use" connection.

Thanks

C
From: Alexandre P. N. <al...@om...> - 2005-11-22 00:33:32
Craig Hughes wrote:
> [cut] could you please give me some details on your internet setup,
> etc., to make it easier to pick the best home for these things?
> [cut]

I'm the IT manager of our company here; we have two facilities with independent, always-on internet connections, linked to different backbones. I'll have to choose one of the facilities, but the setup is almost the same at both (E1). The only restriction I'll have to enforce is putting them behind a firewall I manage myself. I can leave some incoming ports mapped to ssh or whatever, but I'm considering enforcing a very strict *outgoing* policy, as is done e.g. in the sourceforge.net compile farm. Any objections? :)

Cheers,

Alexandre
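A minimal sketch of the kind of strict-outgoing policy Alexandre describes, as iptables rules on a Linux gateway in front of the cluster. Interface names and addresses are assumptions for illustration, not details from this thread:

    # eth0 = internet side, eth1 = cluster LAN (assumed names)
    iptables -P FORWARD DROP
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Map inbound ssh to the cluster's front-end node
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 \
        -j DNAT --to-destination 192.168.10.2:22
    iptables -A FORWARD -i eth0 -p tcp -d 192.168.10.2 --dport 22 -j ACCEPT
    # Outgoing from the cluster: DNS and HTTP only (for fetching sources)
    iptables -A FORWARD -i eth1 -p udp --dport 53 -j ACCEPT
    iptables -A FORWARD -i eth1 -p tcp --dport 80 -j ACCEPT

Default-deny on outgoing traffic is what keeps a compromised shell account from being used as a spam or attack relay, which is presumably why the sourceforge compile farm does the same.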
From: Craig H. <cr...@gu...> - 2005-11-22 00:47:37
On Nov 21, 2005, at 4:33 PM, Alexandre Pereira Nunes wrote:
> I'm the IT manager of our company here; we have two facilities with
> independent, always-on internet connections, linked to different
> backbones.
> [cut] I'm considering enforcing a very strict *outgoing* policy, as
> is done e.g. in the sourceforge.net compile farm. Any objections? :)

Nope, that is a great answer :) I actually like the geographic-distribution aspect of locating one of these things in Brazil. Probably one in Europe and the 3rd somewhere in North America would make a nice distribution.

C
From: Alexandre P. N. <al...@om...> - 2005-11-22 01:18:04
> [cut]
>
> Nope, that is a great answer :) I actually like the geographic-
> distribution aspect of locating one of these things in Brazil.
> Probably one in Europe and the 3rd somewhere in North America would
> make a nice distribution.
>
> C

I also like the concept; it kind of reminds me of the root DNS scheme :-)

Well, I'm ready to take whatever actions are required, just let me know...

Alexandre
From: Alexandre P. N. <al...@om...> - 2005-11-22 16:11:39
Craig Hughes wrote:
> [cut]
> Gumstix will give away 3x (6x gumstix-400g + 6x netstix + 6x power
> supplies + 6x ethernet cables) to 3 happy homes out there, if the
> recipients will help do the following:

If I'm selected as one of the hosts and you want to take the opportunity, you could consider sending me a netDUO too, on this condition: if I can't help you with the driver, I send it back. What about that? :-)

Just a thought :-)

Cheers,

Alexandre
From: Craig H. <cr...@gu...> - 2005-11-22 20:36:22
Now that's a deal!

C

On Nov 22, 2005, at 8:11 AM, Alexandre Pereira Nunes wrote:
> If I'm selected as one of the hosts and you want to take the
> opportunity, you could consider sending me a netDUO too, on this
> condition: if I can't help you with the driver, I send it back.
> [cut]
From: Daniel M. <dan...@gm...> - 2005-11-22 17:39:05
I can host one here in sunny Arizona!

On 11/21/05, Craig Hughes <cr...@gu...> wrote:
> I think a cluster of gumstix as a native compile farm, or maybe even
> for other uses, would be pretty cool.
> [cut]

--
Daniel A. Morrigan

He who says it cannot be done is interrupting the one doing it.
From: Craig H. <cr...@gu...> - 2005-11-22 21:20:07
Ok, I think I've picked 3 sites from the replies I've received. I decided to spread them around a bit geographically. For the overseas sites, we're going to have to figure out how to ship the stuff to you, hopefully without incurring any import duties, or else figure out how to pay those duties so that you recipients don't have to...

The 3 sites will be:

Nava Whiteford at the University of Southampton in the UK
Alexandre Pereira Nunes at Omnisystem in Brazil
Jason Spence at the IEEE student lab at Berkeley (boo hiss -- axe thieves)

I have shipping info already for the first two, but Jason, if you could email me a shipping address, we'll get that out to you. Nava/Alexandre -- if you want me to ship somewhere other than what your oscommerce record shows, let me know that too, please. I'll talk to UPS about how best to ship the things to you with regard to the import-duties issue, so it might take a bit longer to get those out to you than the one that's just headed across the bay.

As far as actually configuring the boxes, allowing remote access, etc., the work is just getting started. Anyone with thoughts on what's best to do (and whether to do the same thing at each location or not), please go ahead and discuss on this thread :)

C
From: Adam E. <ada...@at...> - 2005-11-22 21:27:41
For those of us coming in late on this discussion...

What's this project's purpose? How big will these "gumstix clusters" be, and what will people do with them that's not doable with regular computer clusters?

Thanks,

Adam Ernst

On Nov 22, 2005, at 3:19 PM, Craig Hughes wrote:
> As far as actually configuring the boxes, allowing remote access,
> etc., the work is just getting started. Anyone with thoughts on
> what's best to do (and whether to do the same thing at each location
> or not), please go ahead and discuss on this thread :)
From: Craig H. <cr...@gu...> - 2005-11-22 21:58:27
On Nov 22, 2005, at 1:26 PM, Adam Ernst wrote:
> For those of us coming in late on this discussion...
>
> What's this project's purpose? How big will these "gumstix
> clusters" be, and what will people do with them that's not doable
> with regular computer clusters?

Well, my main initial idea was to provide some publicly accessible compile farms for building stuff to run on gumstix -- that is, to allow native compilation at reasonable speeds (using distcc/ccache) for things which can't be cross-compiled in the buildroot due to "issues" with the upstream package. For example, the perl build really wants to run natively on the machine it's being compiled for. You can sort of get miniperl to compile with a cross-compiler, but it's not easy, and miniperl ain't perl. I gave up entirely on trying to get PHP to cross-compile, though I compiled it natively on a gumstix using an NFS-mounted root_fs, and that worked nicely (but took a long time).

But that's by no means the only thing one could do with a gumstix cluster (gumwad?). The intention is basically to put the hardware out there and help people potentially come up with great ideas. The main advantage of a gumstix cluster over any other kind, if there is one, is power consumption: a 6-node gumstix+netstix cluster has about 2.4GHz of (integer) processing power but draws only about 12 watts, whereas your typical single-CPU desktop probably draws somewhere in the 100-200 watt range. 6 gumstix+netstix also fit in about 1/50th of the space of a 1U rackmount enclosure, if you arrange them nicely. If you think through the power supply issues carefully, you could probably fit 300 or 400 gumstix into a 1U enclosure, for about 120-160GHz of CPU, though to do that you'd probably want to skip the etherstix and do something smarter by interconnecting the busses of the PXAs more directly. That SMC ethernet controller chip sucks a ton of power, and seems to turn most of what it draws into heat...

As for what I'll be sending to each of these 3 sites, I haven't completely made up my mind yet (gotta keep some surprises back), but it'll probably be something like:

6 connex-400xm
6 netMMC
1 connex-200
1 netDUO
cables, power bricks, etc. as makes sense

The idea is that the connex-200 + netDUO could be used as an access-control system for the cluster. The site will need to provide a >=7-port hub or switch as a backbone, but I suspect each of the 3 sites has some spare network ports lying around which could be used. We'll figure out how to do shared storage among the boxes, but right now I'm thinking it ought to be relatively easy to export the MMC drives from each of the 6 machines as linux network block devices, then run software RAID over those block devices or something. Anyway, I'm sure we'll come up with a bright idea there.

C
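As a sketch of the "export the MMC drives as network block devices" idea, each grunt node could publish its card with the classic nbd-server invocation; the port number and device name are assumptions, not details from this thread:

    # On each netMMC node: export a partition of the MMC card over TCP
    nbd-server 2000 /dev/mmcblk0p1

How the front end assembles those six exports into one RAID device is sketched later in the thread, after Craig's topology message.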
From: Alexandre P. N. <al...@om...> - 2005-11-22 22:47:04
Craig Hughes wrote:
> Well, my main initial idea was to provide some publicly accessible
> compile farms for building stuff to run on gumstix.
> [cut]

So I'm not that far off track :-)

> [cut] If you think through the power supply issues carefully, you
> could probably fit 300 or 400 gumstix into a 1U enclosure, for about
> 120-160GHz of CPU ...

Wow. I can't even think about a 160GHz cluster. I'll have dreams about that tonight!

> The idea is that the connex-200 + netDUO could be used as an
> access-control system for the cluster.
> [cut]

This is great; one more gumstix with a netDUO will make a great firewall :-)

About shared storage, I guess something as weird as "sshfs" would do the trick. And I may be wrong :-)

Alexandre
From: Alexandre P. N. <al...@om...> - 2005-11-22 22:54:17
Alexandre Pereira Nunes wrote:
> [cut]
> About shared storage, I guess something as weird as "sshfs" would do
> the trick. And I may be wrong :-)

Ah, my brain warned me too late; I'd already pressed "send" by then. You meant shared storage between nodes, not between cluster sites. sshfs is bloat in this case. I guess we can evolve this later. Anybody?

Alexandre
From: Jason S. <js...@li...> - 2005-11-22 23:40:27
On Tue, Nov 22, 2005 at 01:57:50PM -0800, Craig Hughes wrote:
> Well, my main initial idea was to provide some publicly accessible
> compile farms for building stuff to run on gumstix.
> [cut]

I vote for iSCSI for the OS storage, and either iSCSI or AFS for the build storage. I can set up a common AFS root that we can all share for builds. Of course, that assumes the other sites can talk to Cal quickly enough for that to be practical; otherwise, we should probably use a source-control system hosted by sourceforge or another highly available provider logically close to all three sites.

We're also going to need an admin mailing list. Any suggestions on who should host it?

> But that's by no means the only thing one could do with a gumstix
> cluster (gumwad?).

http://en.wikipedia.org/wiki/Gumwad

> [cut] If you think through the power supply issues carefully, you
> could probably fit 300 or 400 gumstix into a 1U enclosure, for about
> 120-160GHz of CPU ...

We're next door to the mechanical engineering building, and I hear they build weird enclosures like that if you ask them nicely. If you're serious about doing this, we can build a custom multiport switching power supply around one of the analog IC vendors' chips for each (say) 50 gumstix.

There's also the small matter of getting several hundred gumstix for a gumwad, but that's a small implementation detail.

> The site will need to provide a >=7-port hub or switch as a backbone,
> but I suspect each of the 3 sites has some spare network ports lying
> around which could be used.

You have no idea. May I also suggest some serial hardware for implementing STONITH and for differentiating between network and node failures?

Oh, and several hundred people walk by our lab each day. If you provide a nice-looking mount for the cluster, I can put it in the secure display case outside our lab.

--
 - Jason

Best of all is never to have been born. Second best is to die soon.
From: Craig H. <cr...@gu...> - 2005-11-23 10:29:45
On Nov 22, 2005, at 3:39 PM, Jason Spence wrote:
> I vote for iSCSI for the OS storage, and either iSCSI or AFS for the
> build storage. I can set up a common AFS root that we can all share
> for builds. Of course, that assumes the other sites can talk to Cal
> quickly enough for that to be practical; otherwise, we should
> probably use a source-control system hosted by sourceforge or another
> highly available provider logically close to all three sites.

Right now, I'm not envisaging storage being shared between the 3 sites, only locally between nodes at each site. You can always transfer bits and pieces between the local storage clusters using rsync or something; but for online access, I think local is probably a better idea (at least initially).

> We're also going to need an admin mailing list. Any suggestions on
> who should host it?

It'll probably be easiest to just set such a mailing list up alongside this list at sf.net.

>> But that's by no means the only thing one could do with a gumstix
>> cluster (gumwad?).
>
> http://en.wikipedia.org/wiki/Gumwad

Nice. I just updated the article to be a bit more accurate ;)

> We're next door to the mechanical engineering building, and I hear
> they build weird enclosures like that if you ask them nicely.
> [cut]

We'll see. Right now, I'm in "throw it out there and see what sticks" mode.

> There's also the small matter of getting several hundred gumstix for
> a gumwad, but that's a small implementation detail.

Money, shmoney, right?

>> cables, power bricks, etc. as makes sense

The more I think about this, the more the "etc" part sticks out as important. Right now, I'm thinking "etc" consists of:

6x 512MB RS-MMC cards
a few tweeners

i.e. "enough" serial port access, and some actual storage. I picture a possible setup as:

                 ---- netMMC
                 ---- netMMC
 ===== netDUO =< ---- netMMC
                 ---- netMMC
                 ---- netMMC
                 ---- netMMC

where the netDUO is actually the machine folks log into on the cluster. The 6 netMMC machines are the grunts and do all the work, but they're all identically configured, each exporting its 512MB into a pooled 3GB software RAID over network block devices -- or maybe 1.5GB mirrored, so that failure of an MMC card isn't too much of a problem. 1.5GB ought to be plenty for what we're contemplating, at least in the short term. The six 512MB "disks" get mounted as network block devices on the netDUO machine, which runs software RAID over them to create a single 1.5GB device, which is then exported back to the 6 work nodes via NFS or whatever -- something with low protocol overhead would probably be best. This is certainly not the most optimal or performant solution in the world, but it has the advantage of being trivial to set up, and frankly I don't think we care much about squeezing every ounce of performance out of this thing. So you basically end up with 1.5GB of disk accessible across the 6 grunt nodes. Access to the cluster is controlled through the single host at the front (the only machine one logs into directly). That front-end node has SSH keys enabling it to kick off distcc processes on the other hosts as needed, and the front-end host's only jobs are essentially NFS server and access control.

C
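Following the topology above, a sketch of how the netDUO front end might assemble the pooled, mirrored 1.5GB store and serve it back to the grunts. Node addresses, ports, and device names are assumptions; the nodes are presumed to export their cards as in the nbd-server sketch earlier in the thread:

    # Attach each node's exported 512MB MMC card (addresses assumed)
    for i in 1 2 3 4 5 6; do
        nbd-client 192.168.10.1$i 2000 /dev/nbd$i
    done
    # Mirror pairs of cards, then stripe the three mirrors: ~1.5GB
    # usable, and any single MMC card can die without losing data
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nbd1 /dev/nbd2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nbd3 /dev/nbd4
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/nbd5 /dev/nbd6
    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3
    # Make a filesystem and export it back to the grunt nodes over NFS
    mke2fs -j /dev/md0
    mount /dev/md0 /export
    echo '/export 192.168.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra

The mirror-then-stripe layout is hand-rolled RAID10, which matches the "1.5GB mirrored" figure: six 512MB cards give 3GB raw, halved by mirroring.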
From: N.E.Whiteford <N.E...@so...> - 2005-11-23 14:54:59
On Wed, 23 Nov 2005, Craig Hughes wrote:
> Right now, I'm not envisaging storage being shared between the 3
> sites, only locally between nodes at each site.
> [cut]

I think the simplest solution is probably the best one in the first instance. We also need to decide how similar the sites are going to be; I'd suggest that we all have the same "core" functionality (i.e. distcc) and then experiment with and evolve other applications as we go.

>> We're also going to need an admin mailing list. Any suggestions on
>> who should host it?
>
> It'll probably be easiest to just set such a mailing list up
> alongside this list at sf.net.

Agreed.

>> There's also the small matter of getting several hundred gumstix for
>> a gumwad, but that's a small implementation detail.
>
> Money, shmoney, right?

Hmm.. perhaps we should put in a research grant proposal for "green clustering"; maybe we could make these things solar powered. :)

> [cut] That front-end node has SSH keys enabling it to kick off distcc
> processes on the other hosts as needed, and the front-end host's only
> jobs are essentially NFS server and access control.

Sounds like a good starting point to me. If we are thinking of these as prototypes for larger clusters as well, it might be interesting to change the interconnect from Ethernet to something else at a later date. One option might be to use USB (with something like an NSLU2 running nslu2-linux as the host), but bandwidth on USB 1 might be an issue.

nav
From: floris v. <flo...@ya...> - 2005-11-23 15:25:12
Would running globus on them do any good? In the future, then, once it's finished.

"N.E.Whiteford" <N.E...@so...> wrote:
> I think the simplest solution is probably the best one in the first
> instance. We also need to decide how similar the sites are going to
> be; I'd suggest that we all have the same "core" functionality
> (i.e. distcc) and then experiment with and evolve other applications
> as we go.
> [cut]

"A rock pile ceases to be a rock pile the moment a single man contemplates it, bearing within him the image of a cathedral." -- Antoine de Saint-Exupery
From: Alexandre P. N. <al...@om...> - 2005-11-23 16:30:54
N.E.Whiteford wrote:
> I think the simplest solution is probably the best one in the first
> instance. We also need to decide how similar the sites are going to
> be; I'd suggest that we all have the same "core" functionality
> (i.e. distcc) and then experiment with and evolve other applications
> as we go.

I guess that's a great idea, at least for a while: it lets users play with basic stuff and, at the same time, lets us play with new toys and share experiments. Sometime in the future we can decide whether or not to adopt some defined set of software common to all the cluster sites. Or we could just have a page on the wiki listing the sites and marking whether or not each is in a known state, e.g.: stable (all published information describing the site is valid at the moment); unstable (the site is testing some new feature and may or may not behave as declared); maintenance (the site is expected to be down); etc. Thoughts?

>> We're also going to need an admin mailing list. Any suggestions on
>> who should host it?
>
> It'll probably be easiest to just set such a mailing list up
> alongside this list at sf.net.

Agreed; it's the easiest way I can think of.

> Hmm.. perhaps we should put in a research grant proposal for "green
> clustering"; maybe we could make these things solar powered. :)

That would be awesome. In fact, if buying solar panels wasn't so difficult (and expensive) in my country, I would try something myself :-)

> [cut] One option might be to use USB (with something like an NSLU2
> running nslu2-linux as the host), but bandwidth on USB 1 might be an
> issue.

A great idea (for a larger cluster) would be to do something like NUMA and interconnect the nodes through a simple bus, perhaps forming groups through some sort of latches. I guess that would be close to the maximum possible speed, but the design would probably have to take into account the number of nodes in the system. That's a mind exercise I won't do now :-)

Alexandre
From: Craig H. <cr...@gu...> - 2005-11-23 17:31:24
On Nov 23, 2005, at 6:54 AM, N.E.Whiteford wrote:
>>> There's also the small matter of getting several hundred gumstix
>>> for a gumwad, but that's a small implementation detail.
>>
>> Money, shmoney, right?
>
> Hmm.. perhaps we should put in a research grant proposal for "green
> clustering"; maybe we could make these things solar powered. :)

Hey look, if you can get a grant to buy a few dozen gumstix, I'm not gonna stop you. Heck, if a couple-thousand-dollar giveaway turns into a couple-tens-of-thousands order, I think that'd be some pretty strong pavlovian encouragement to expand the giveaway program ;)

C
From: Craig H. <cr...@gu...> - 2005-11-23 17:35:54
On Nov 23, 2005, at 6:54 AM, N.E.Whiteford wrote:
> Sounds like a good starting point to me. If we are thinking of these
> as prototypes for larger clusters as well, it might be interesting to
> change the interconnect from Ethernet to something else at a later
> date. One option might be to use USB (with something like an NSLU2
> running nslu2-linux as the host), but bandwidth on USB 1 might be an
> issue.

Yeah, for expanded bandwidth between nodes, the only step up from ethernet is connecting the CPU busses together directly. I have a vague recollection that the PXA actually has some kind of master/slave addressing mode on its CPU bus where you can build little clusters of 1 master and 15 slave CPUs or something, but I don't remember where I saw that -- I'll read the Intel docs again and see if I can figure out what that vague memory is based on. But I think that'd be the way to go, if it's right. Then you could inter-tie the mini-clusters of 16 CPUs via ethernet. I can picture a carrier board with 16 92-pin connectors on it, a "male" RJ45 connector on one side, and a "female" RJ45 connector on the other. You then plug the carrier boards into each other puzzle-piece style, with an ethernet cable on the ends. Slap the 'stix down onto the 92-pin connectors, and away you go.

C
From: Mike <ke...@sy...> - 2005-11-22 23:54:46
On November 22, 2005 4:57 pm, Craig Hughes wrote:
> [cut] The main advantage of a gumstix cluster over any other kind, if
> there is one, is power consumption: a 6-node gumstix+netstix cluster
> has about 2.4GHz of (integer) processing power but draws only about
> 12 watts ...

Has any thought been given to creating a backplane for building clusters? It could have a few rows of connex connectors with common leads for power and Ethernet. Linux Journal has an article on small clusters this month; a gumwad could fit in a small lunch pail.

I've been thinking I could put a gumstix in the empty battery compartment of my old laptop to give it a big power boost, if I could only figure out how to connect to the serial port from the inside.

--
Collector of vintage computers http://www.ncf.ca/~ba600
Open Source Weekend http://www.osw.ca
From: Alexandre P. N. <al...@om...> - 2005-11-22 21:59:38
Craig Hughes wrote:
> Ok, I think I've picked 3 sites from the replies I've received.
> [cut]
> As far as actually configuring the boxes, allowing remote access,
> etc., the work is just getting started. Anyone with thoughts on
> what's best to do (and whether to do the same thing at each location
> or not), please go ahead and discuss on this thread :)

Replied about this privately; let me know if you got my mail :-)

Well, I guess we have a branching decision: either the clusters will be clones of each other in what concerns functionality, or each cluster can have a "personality" of its own. Both options have pros and cons, and I'm personally not inclined in either direction yet. For the users, I see an initial advantage in having all clusters quite similar, since they can connect to any of them if their preferred one is somehow offline. On the other hand, we then risk having to implement some sort of load balancing among them, to avoid one cluster being overloaded while others sit idle, which in turn may lead to users connecting to an idle but lagged set (i.e. one geographically far away, thus imposing some latency). Then again, having somewhat different cluster sets would allow users to test software under different setups. I'm assuming one of the functions of the clusters will be to serve as a compile farm, but could I be missing the goal?

An experiment I think would be quite interesting is making the cluster a giant, distributed NUMA set; but although I'm pretty impressed by that technology and have played a bit with it, I honestly don't have a clue where to start building one from scratch :-)

Well, these are my first thoughts. Comments are more than welcome; the cluster is going to serve us all.

(P.S.: well, it depends. If the thing decides to call itself Skynet, it may end up serving only itself.)

Alexandre