From: Will P. <pa...@dc...> - 2005-11-07 20:59:54
Hi, Jim (and others) -- Arusha land is pretty quiet, but still here... On your various remarks:

> 1.) A sidai suggestion: sidai/package/rsync.xml has a phrase in the
> <patch> section that names a bunch of rsync versions. I think the
> sense of the test should be reversed, so it tests *for*
> [ "$PKG_VERSION" = "2.5.5" ] and defaults to : ok # no worries, mate.
> (The "doesn't need a patch" list is already out of date, 2.6.6 is out
> and the build fails because of this...)

*Yes*. There are many instances of equally egregious code. The reason: the shortest path to success is to add '-o <new-version>' and move on. But, this being "collaborative sysadmin", I guess I'll need to do better :-) (There's a sketch of what I mean in the P.S. below.)

> 2.) Mostly a statement: the 'how do I migrate from the "try-ark" mode
> to a real site' instructions are ... lacking. I got a bit frustrated
> trying to discover in which dark corner the magic config file that was
> causing certain behavior lived... turned out I had a mixture of old
> and new happening, and didn't know that, because of the level of magic
> involved. At that early phase of maturity, this needs to be smoother.
> (I know -- send patches... :)

Urgh. I _do_ tend to test that stuff before releases (which are now about a year apart -- CVS updates happen, of course...), but of course I never see it with newbie eyes. All advice welcome...

> 3.) Newbie Q: what happens if a machine is down when a deploy/reveal
> command is issued? Is there anything akin to "self-healing state
> management" in here?

Er, not really. There may be some command-line gunk (--down-hosts=?) to advise the tool to steer clear; I'd have to look it up. You can also change a host <status> to "pending", which means it will get put into tables and things (i.e. it's a "player") but won't get ssh'd to. I often do this if I know a machine's going to be down for a few days.

The _right_ solution is something like a client cron job that does 'ark package reveal ALL' every 15 mins. This would need some slight extra mechanism (which I've never needed) for someone to say "this one is good to go", after which the cron job would pick it up. The machinery exists...

    ark package good-to-go synopsys-vcs--2005.06

... which does little more than 'touch' some sentinel; then change the deploy method a bit to look for the sentinel.

> 4.) Anyone done any Windows integration? Thoughts in that direction?

Um... sorry.

> 5.) Are there any utilities/commands to report on the ark-state info?

No; there should be. Got an idea for one? -- maybe we could write it.

> 6.) A sidai question: I built a version of gcc to go into /our; used
> the sidai template with just a couple of tiny tweaks. It got
> installed/deployed in a somewhat unexpected manner -- in the deploy
> tree I have a whole tree of subdirectories, and (most of) the leaves
> are symlinks into the install tree. A couple of things are directly
> stored in the deploy tree. All the other apps I've done so far have
> the top of the deploy tree just symlinked to the top of the respective
> install tree. Is this expected, and if so -- what is the logic behind
> doing it this way; and what directive triggered the different install?

The idea was: for a few bits of a GCC build (libgcc.so, for example), you really don't want to be making NFS trips to get them. So each (per-client) deployment has a copy of those. On the other hand, for swathes of a GCC distribution, one shared copy is fine -- so those bits are symlinks.
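To picture the difference: under that scheme a deploy tree comes out looking roughly like this (an illustrative layout only -- the paths and version are made up, not taken from a real build):

    gcc--3.4.4/
        bin/gcc            -> symlink into the install tree
        man/man1/gcc.1     -> symlink into the install tree
        lib/libgcc_s.so.1     (a real, per-client copy)

whereas with the simple scheme the top of gcc--3.4.4 would itself be one symlink to the top of the install tree.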
The guilty party is ($ARK/sidai/package/gcc.xml):

    <deployment-spec><table><entry name="*">
    @TYPE@=manifest
    ^lib/lib.*\.so copy
    * * 755 . link
    </entry></table></deployment-spec>

If you just wanted to copy blindly, you'd change (override) that with

    <deployment-spec><table><entry name="*">
    @TYPE@=copy
    </entry></table></deployment-spec>

for example.

Will
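P.S. Re (1), a minimal sketch of the reversed test for rsync.xml's <patch> section -- assuming the body is plain Bourne shell, that $PKG_VERSION is supplied by the framework, and with apply_the_patch as a hypothetical stand-in for whatever the section currently does:

    # Test *for* the versions known to need the patch;
    # anything else (2.6.6, ...) builds clean and falls through.
    if [ "$PKG_VERSION" = "2.5.5" ]; then
        apply_the_patch    # hypothetical stand-in
    else
        : ok               # no worries, mate
    fi

That way new releases stop breaking the build by default.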