From: Sang Y. <Sa...@Ov...> - 2003-11-14 01:16:51
i'm evaluating ark to manage a large number of systems, and followed the test drive setup recipe to see what ark can do. when i finally ran the try-ark script, i ran into an exception:

>./try-ark/try-ark
03-11-13 5:14PM ./try-ark/try-ark: warning: this script may be overly paranoid and, um, wrong...
+ ./arkcmd package reveal --verbose --use-deps=max arkbase
Traceback (most recent call last):
  File "./arkcmd", line 86, in ?
    main()
  File "./arkcmd", line 73, in main
    things_mgr.doToolSubcommand()
  File "/home/shy2/ark-src/ARK/arkbase/ark/thing.py", line 240, in doToolSubcommand
    victims = self.unpackSpecs(ark_ctrl.cmdLineNonOptions())
  File "/home/shy2/ark-src/ARK/arkbase/ark/thing.py", line 139, in unpackSpecs
    thing = self.lookup(thing_name,team_id)
  File "/home/shy2/ark-src/ARK/arkbase/ark/thing.py", line 75, in lookup
    team_id = ark.team.ACTING_TEAM().id
  File "/home/shy2/ark-src/ARK/arkbase/ark/team.py", line 65, in ACTING_TEAM
    _ActingTeam.instance = ArkTeam(acting_team_id)
  File "/home/shy2/ark-src/ARK/arkbase/ark/team.py", line 149, in __init__
    (self._is_prototype, xml_version, self._xmlf) = xml_file.read(explicit_filename="%s/team.xml" % (ark.control.ArkControl().repository(team_id)))
  File "/home/shy2/ark-src/ARK/arkbase/ark/xmlfile.py", line 199, in read
    (fname, fval) = _read_cooked_resource( child, filename )
  File "/home/shy2/ark-src/ARK/arkbase/ark/xmlfile.py", line 314, in _read_cooked_resource
    zero_attributes(res)
  File "/home/shy2/ark-src/ARK/arkbase/ark/xmlfile.py", line 708, in zero_attributes
    if len(node.attributes) != 0:
TypeError: len() of unsized object

this was on a solaris 2.8 box with python 2.3.2

anyone else run into this before? thought i'd ask before i start digging through the *.py.

thanks
-sang

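A guess at what is going on, with an illustrative sketch (not the actual zero_attributes in xmlfile.py): on some Python/parser combinations, node.attributes comes back as None or as an object that does not support len(), so a defensive test along the following lines would sidestep the TypeError.

# Illustrative only: a defensive stand-in for the failing test in
# zero_attributes.  Assumption: node.attributes may be None (or unsized)
# for some DOM nodes under the XML parser in use.
from xml.dom import minidom

def has_attributes(node):
    """True only if the node carries at least one XML attribute."""
    attrs = getattr(node, "attributes", None)
    if attrs is None:
        return False
    return attrs.length != 0    # NamedNodeMap defines .length

doc = minidom.parseString('<team name="arkbase"><member/></team>')
for node in doc.documentElement.childNodes:
    if has_attributes(node):
        pass    # a zero_attributes-style routine would clear them here
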
From: Will P. <pa...@dc...> - 2003-10-16 18:28:52
Joel S raises a very real concern/issue w/ existing ARK stuff. Some Good Ideas would be welcome in this domain. Let me take things out of order...

> As a workaround, I ran "ark-invasive --hosts=newhost
> package-config" for each config package. ...

Hmmm... sounds familiar :-( An "ark-invasive --hosts=newhost sidai:ark-diffing-deploy" is mildly better.

> Arusha manages a large and growing number of system configuration files:
> password, group, automounter, syslog, and many more.

[The usual disclaimer, for casual onlookers: ARK does not force you to manage these files in this way.]

> The ark packages for many of these files use a two phase
> approach:
>
> -- First, a master configuration file or template is constructed from data
> on the gold server by <configure> and <compile> methods.

One goal I'd like to hang onto for that stage is that you can create the config files for a host *before* it is up.

> When a new host is bootstrapped into ark's world, the <configure> and
> <compile> methods are not rerun automatically. ...

Could we get rid of the 'recordable="yes"' settings on the methods between <configure> and <deploy>? If you did 'ark package deploy ...', it would then have to go right back to configure.

A related problem that you didn't mention is needing to re-deploy some packages on hosts *other* than the new one. For example, the NFS servers need to be told to export to the new host.

A "solution" that might work in *some* contexts is to have a pre-populated set of "pending" hosts -- e.g. if you have a lab full of 40 Linux boxes, you could have another 20 pending ones (on the grounds of "if we get some more boxes, we know how they will be configured"). All the configuration stuff is supposed to include pending hosts ("they'll appear any moment now") in its output.

As I say, all ideas welcome. It's hard to know how to make configuration do the right thing for hosts that you don't know you're going to have :-)

Will

From: Will P. <pa...@dc...> - 2003-10-16 13:16:01
Joel S raises some valid things-that-could-be-better; on this one...

> ... The problem is that the <configure> method runs only
> on the gold host. It is not rerun for each new host, so
> the constraints are ignored. I added the same three
> constraints to the <deploy> method as a workaround. ...
>
> Although repeating the constraints in <configure> and
> <deploy> appears to work, it seems awkward. Shouldn't
> there be a way to specify these constraints only once?

Does the idiom below (untested) give you any joy?

Will

<configure>
  <constraint><dependency type="normal" name="." on-method="common-constraints" /></constraint>
  ...
</configure>

<deploy>
  <constraint><dependency type="normal" name="." on-method="common-constraints" /></constraint>
  ...
</deploy>

<common-constraints>
  <constraint><dependency type="normal" name="library1" on-method="deploy" /></constraint>
  <constraint><dependency type="normal" name="library2" on-method="deploy" /></constraint>
  <constraint><dependency type="normal" name="library3" on-method="deploy" /></constraint>
  <code lang="none" recordable="no" />
</common-constraints>

From: Shprentz, J. [C] <Shp...@ni...> - 2003-10-15 19:51:33
Arusha manages a large and growing number of system configuration files: password, group, automounter, syslog, and many more. The ark packages for many of these files use a two phase approach:

-- First, a master configuration file or template is constructed from data on the gold server by <configure> and <compile> methods.

-- Second, the host specific files are disseminated in <deploy> methods.

When a new host is bootstrapped into ark's world, the <configure> and <compile> methods are not rerun automatically. Instead, the <deploy> method tries to disseminate a host specific file based on a master configuration file or template that predates the new host. The result is usually an incorrect configuration on the new host.

As a workaround, I ran "ark-invasive --hosts=newhost package-config" for each config package. There must be an easier way.

--
Joel Shprentz
National Imagery and Mapping Agency
Mailstop N-17
Washington Navy Yard, Building 213
1200 First Street, SE
Washington, DC 20303-0001
202-685-3534

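Below is a rough sketch of scripting the workaround just described; it is not a supported ark interface. It assumes the final argument to ark-invasive names the config package to re-run (as in the command above), and the package list is purely illustrative.

import subprocess

# Re-run each config package against a freshly bootstrapped host.
# The package names below are illustrative placeholders only.
new_host = "newhost"
config_packages = ["passwd-config", "group-config", "automount-config"]

for pkg in config_packages:
    # Mirrors the manual workaround: ark-invasive --hosts=newhost <config package>
    subprocess.call(["ark-invasive", "--hosts=%s" % new_host, pkg])
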
From: Shprentz, J. [C] <Shp...@ni...> - 2003-10-15 19:43:41
When bootstrapping Arusha onto a new host, ark will start using the newly installed packages immediately. This results in failure when a dependent library has not been installed.

For example, when openssh-config ran ssh-keygen on a new host, it couldn't find libcrypto. Why not? This worked fine on the gold server! The appropriate constraints are specified in Verilab's openssh--3.7.1p2: the <configure> method depends on deployment of zlib, openssl, and tcpwrappers. The problem is that the <configure> method runs only on the gold host. It is not rerun for each new host, so the constraints are ignored. I added the same three constraints to the <deploy> method as a workaround.

A similar problem occurs after ark installs Python on a new host, but hasn't yet installed gcc libraries. Again, there are constraints on the <configure> method, but not on the <deploy> method.

Although repeating the constraints in <configure> and <deploy> appears to work, it seems awkward. Shouldn't there be a way to specify these constraints only once?

--
Joel Shprentz
National Imagery and Mapping Agency
Mailstop N-17
Washington Navy Yard, Building 213
1200 First Street, SE
Washington, DC 20303-0001
202-685-3534

From: Jonathan H. <jon...@on...> - 2003-10-15 16:57:48
On 15/10/2003 17:26, Will Partain wrote:

> Also, because I don't
> deploy/reveal with the pkgsrc stuff, the user isn't going to
> see those extra 40 packages anyway. They're still simply
> going to see plain old /our/bin/emacs -- which just happens
> to have been carefully maintained by the NetBSD guys :-)

OK, I see what you mean now. This makes sense.

>> I'd help, but the thing is that I'm actually quite fond of the Arusha
>> package manager ;-)
>
> I can sympathise w/ this point of view.

Sandboxing is the main benefit for me. I tend not to reveal any library packages. Thanks to the rather cool way Sidai does runtime linking, I can safely install conflicting versions of libraries and build new systems without breaking running systems.

On the web servers I look after, we don't reveal PHP, but load it in the Apache config file directly from the deploy directory. When upgrading to a newer version of PHP, I can build a new one with whatever new libraries and extensions I want and then just modify the Apache config and gracefully restart Apache - allowing a hot upgrade without any downtime. Then, when it turns out that the new version of PHP has just completely buggered half a dozen critical sites, I can modify the config again and downgrade back to the previous one just as fast ;-)

This sort of thing is Just Too Scary when your build system is bashing libraries that are in runtime use.

Jonathan

--
Jonathan Hogg
Director
One Good Idea Ltd. <http://www.onegoodidea.com/>

From: Will P. <pa...@dc...> - 2003-10-15 16:29:04
Jonathan H writes:

> > The pkgsrc world can then be used to do *everything*
> > required (including dependencies) to achieve the equivalent
> > of 'ark package install ...'.
>
> How do you then note in Arusha that any dependencies have been installed? Is
> it possible to get this information back from pkgsrc into Arusha? Is it
> desirable even? I would have guessed so for completeness, but perhaps not.

My idea (obviously sketchy...) was... Let's say you have an ARK package 'emacs-pkgsrc': it effectively says "go install Emacs in pkgsrc land". The pkgsrc stuff starts running, and has to install another 40 (pkgsrc) packages in order to install Emacs. *I don't care* -- that's under-the-hood/invisible. Also, because I don't deploy/reveal with the pkgsrc stuff, the user isn't going to see those extra 40 packages anyway. They're still simply going to see plain old /our/bin/emacs -- which just happens to have been carefully maintained by the NetBSD guys :-)

> I'd help, but the thing is that I'm actually quite fond of the Arusha
> package manager ;-)

I can sympathise w/ this point of view.

Will

From: Jonathan H. <jon...@on...> - 2003-10-15 15:27:47
On 15/10/2003 12:54, Will Partain wrote:

> The pkgsrc world can then be used to do *everything*
> required (including dependencies) to achieve the equivalent
> of 'ark package install ...'.

How do you then note in Arusha that any dependencies have been installed? Is it possible to get this information back from pkgsrc into Arusha? Is it desirable even? I would have guessed so for completeness, but perhaps not.

I'd help, but the thing is that I'm actually quite fond of the Arusha package manager ;-) I've grown quite used to being able to have multiple versions of the same package installed and the ability to deploy packages without revealing them.

Jonathan

--
Jonathan Hogg
Director
One Good Idea Ltd. <http://www.onegoodidea.com/>

From: Will P. <pa...@dc...> - 2003-10-15 11:56:23
Folks, something not reflected in the release notes for the recent ARK "release" was some meddling with the "pkgsrc" stuff out of the NetBSD project, which I did a few months ago. It is basically about building NetBSD packages on non-NetBSD systems.

Background: the Arusha Project (notably: Sidai team) has a rolled-our-own multi-platform package manager, but this isn't ideal (and wasn't my initial idea). It would be better to use one or more Real Package Managers [TM] to do the heavy lifting, and let ARK be the "meta package manager". This is a great idea until you actually try to find a multi-platform package manager. (I looked at RPM and NetBSD pkgsrc pretty hard back in 1999 or so.)

Times change. The NetBSD pkgsrc thing is much advanced, and other choices have emerged: OpenPKG, for example. It really should be possible to build an Arusha system with one of these as the major package-handling substrate. I do not have a compelling reason to do so myself, but perhaps one of you is interested (and I'll help :-).

Here's how far I got with NetBSD pkgsrc (remnants of this effort are in the verilab2 team files). Set up bits of disk in the right places, then grab and build the bootstrap package; I did...

  ark package install pkgsrc-bootstrap--20030225

My idea was then to keep all of this stuff *in its own little world* and definitely *out of my systems areas* (/etc, /usr, ...). That's the bit I didn't really figure out. You have to do weird things like install the 'xpkgwedge' package, which makes all the X stuff get put somewhere other than '/usr/X11'.

Anyway, then for each chunk of NetBSD-packageness that you want, you make a little ARK-package "wrapper", something like...

<package name="dia-pkgsrc" xml-version="1">
  <status>revealed</status>
  <prototypes>
    <prototype team="." name="pkgsrc"/>
    <prototype team="sidai" name="pkgsrc"/>
    <prototype team="." name="ALL"/>
  </prototypes>
  <pkgsrc-tool-location> graphics/dia </pkgsrc-tool-location>
</package>

The pkgsrc world can then be used to do *everything* required (including dependencies) to achieve the equivalent of 'ark package install ...'. I was going to use the Sidai machinery for the final steps of making the package available on each host ('deploying' and 'revealing' in ARK-speak). You could do something different, of course.

Someone slightly familiar with the NetBSD pkgsrc system could probably get this going without much difficulty. I lack the combination of knowledge and motivation myself, but will be happy to help :-)

Will

From: Will P. <pa...@dc...> - 2003-10-13 11:16:38
I have just done a full release of all the Arusha Project stuff I could get my hands on (teams: ARK, arksf1, glasli1, sample1, sidai, simple1, verilab2). The release notes are below. Wander to http://sourceforge.net/projects/ark for the bits.

In case you've forgotten: the Arusha Project (ARK) provides a framework for collaborative system administration of multi-platform Unix sites with many dozens of machines.

Thanks to Joel Shprentz for useful input in this iteration.

If any of you need help exploring or thinking about Arusha Project stuff, or have "it would be great if..." ideas, please speak up or get in touch.

Will

== release notes =======================================

In the ARK engine, host context is tracked better. We attempt to satisfy new constraints with chosen-earlier constraints. Added 'may-fail' constraints.

In the Sidai code, networking info has been generalized and sample packet-filter code changed to match. General support for RH Linux 9 added. A 'clean' method added to most packages.

In individual teams' code, there are now examples of using rdiff-backup and of how to kickstart an RH Linux box into ARK-readiness. Over 150 package specifications are new or updated to current software versions.

From: Will P. <pa...@dc...> - 2003-10-09 10:39:39
(sorry to be slow)

Joel S writes:

> Summary: When building all packages from scratch, ark fails when it can't
> satisfy a gcc-wrapper install-bits constraint to first deploy a certain
> version of gcc.

Well, I'm reasonably confident I made the -wrapper change/(*cough* you mean..)"hack" for some actual reason :-(

> I uncommented some print statements to debug the problem. This tidbit was
> printed by the pkgsWithName method of package.py:
>
>   name sought: gcc ['gcc-wrapper']

I'm less worried that 'gcc-wrapper' is on the candidate list and more worried that 'gcc--3.2.2' *isn't*. If it were, then it would satisfy the version spec, and all would be well.

Ah, I think I see what's going on: pkgsWithName is being called *with* a candidates' list -- this is to try to satisfy a constraint from already-seen constraints. (Rationale: don't want to satisfy a configure constraint with Berkeley DB version 3.x and then a comparable compile constraint with a version 4.x -- yurk. So, we feed in "constraints so far" to the compile constraining, so that it will pick the same thing again.)

The problem is that gcc-wrapper is on the candidates' list to start with. pkgsWithName simply returns that, and then the version test fails.

My proposed heart-not-entirely-in-it fix is: try the candidates' list thing first; if it fails, then retry the "normal" way. *Untested* diffs are below. Let me know what happens. (I am hoping to do a new release in the near future.)

Will

diff -u -1 -r1.75 thing.py
--- thing.py    28 Jul 2003 09:38:06 -0000      1.75
+++ thing.py    9 Oct 2003 10:35:45 -0000
@@ -1427,2 +1427,16 @@
         pkgs_so_named = pkgs_mgr.pkgsWithName(name,cands=cands)
+        if version_spec == '*' or version_spec == None:
+            return pkgs_so_named
+
+        pkgs_that_match = pkgMatchingVersionSpec(version_spec,pkgs_so_named)
+        # We need to do this pkgMatching... thing inside the try block
+        # because it might fail, because the candidates list might
+        # be bunk. For example, if doing a dependency of gcc-wrapper
+        # on gcc, the name 'gcc-wrapper' will be in the candidates list
+        # but then the version check (on 'gcc') will fail. We need
+        # to catch that and then go back and do it all over again
+        # *without* a candidates list.
+        if len(pkgs_that_match) == 0: # trigger a retry
+            raise ark.error.NoPackageWithName,'irrelevant'
+
     except ark.error.NoPackageWithName,msg:
@@ -1430,13 +1444,11 @@
         pkgs_so_named = pkgs_mgr.pkgsWithName(name,cands=None)
-
-    if version_spec == '*' or version_spec == None:
-        return pkgs_so_named
-    else:
+    if version_spec == '*' or version_spec == None:
+        return pkgs_so_named

     pkgs_that_match = pkgMatchingVersionSpec(version_spec,pkgs_so_named)
-        if len(pkgs_that_match) == 0:
-            raise ark.error.NoThingFitsConstraint,'type=package name=%s, version_spec=%s' % (name,version_spec)
+    if len(pkgs_that_match) == 0:
+        raise ark.error.NoThingFitsConstraint,'type=package name=%s, version_spec=%s' % (name,version_spec)
-        else:
-            return pkgs_that_match
+    else:
+        return pkgs_that_match

From: Shprentz, J. [C] <Shp...@ni...> - 2003-10-07 21:54:00
Summary: When building all packages from scratch, ark fails when it can't satisfy a gcc-wrapper install-bits constraint to first deploy a certain version of gcc.

Details: The arrival of some new hardware inspired me to rebuild all packages on a clean system. All went well until ark tried to build gcc-wrapper. The install-bits method has a constraint requiring the default gcc to be installed. For example, this constraint appears in the Verilabs gcc-wrapper package:

  <constraint><dependency type="normal" name="gcc" version-spec="eq 3.3" on-method="deploy"/></constraint>

This worked when I last ran it, about six months ago. Today, instead, ark reported this message:

  No thing matches constraint criteria: type=package name=gcc, version_spec=eq 3.3

I uncommented some print statements to debug the problem. This tidbit was printed by the pkgsWithName method of package.py:

  name sought: gcc ['gcc-wrapper']

Apparently gcc-wrapper is on the candidate list when pkgsWithName is called from _find_dependees in thing.py. To my surprise, gcc-wrapper matched the name sought, gcc. Later, when the version_spec was checked, the match failed. Instead of finding gcc version 3.2.2, it found gcc-wrapper version None.

What happened? Among the changes last March to pkgsWithName, a test matches name + "-wrapper". We wanted gcc, but pkgsWithName also accepts names starting gcc-- or gcc-wrapper. This would be reasonable when building packages other than gcc-wrapper.

Question: What is the best way to force gcc--3.2.2 to deploy before gcc-wrapper's install-bits method?

--
Joel Shprentz
National Imagery and Mapping Agency
Mailstop N-17
Washington Navy Yard, Building 213
1200 First Street, SE
Washington, DC 20303-0001
202-685-3534

From: Will P. <pa...@dc...> - 2003-09-25 19:09:55
Rudolph Pereira wrote:

> > And since doing the prototypes expansion is a lot like doing
> > C pre-processing, you might as well accumulate the
> > dependency info ("foo.xml depends on prototype
> > "bar/baz.xml") and set up a makefile using it. That could
> > be used to keep all of your expanded "objects" up-to-date.
>
> I'm not really following you, but that's mostly because I don't
> understand how I can apply the above to the history problem I am
> wrangling with (i.e, I would like to say "ah, so the <code> bit,
> configure options, etc for rsync-x.xx changed three weeks ago ...")

(sorry to be slow) Well, here's a way (slightly beyond existing ARK code). Imagine that ~/ark is where you keep your personal checked-out copy of ARK teams' code -- exactly as now. Similarly, /sys/ark/ might be a check-out of the Blessed Version (what production hosts are playing to). A human being presumably decides that tagged version REL4_1 is now production...

Now something new: /sys/myteam-ark-expanded/ would be the /sys/ark/myteam objects expanded out with their prototypes. This could be automated, once a day perhaps. You could also check them into CVS once a day as well. You've now got a daily history of every object's fields. It would not be absolute rocket science to tag each field with the date when it was last tampered with.

> As I hinted previously: say I changed the dependencies of
> the host object for host blah right now. Host blah is
> still, in fact, using its old definition, at least until
> such time as it went and updated itself (remember, I'm not
> using push here). I would like to record that fact
> somewhere (a status field would be ideal) and also record
> a timestamp (outside the client, i.e somewhere someone
> else would be able to get to it without logging on to the
> client) when host blah updated itself (e.g ran some
> methods, etc). The latter information sounds like it
> should be in some history-type area somehow associated
> with the blah host object.

What I have as the /d/ark-state area (NFS-available to everyone), previously mentioned, is a lot like that. A file

  /d/ark-state/<object-type>/<object-name>/<method-name>--<who-for>.xml
  [e.g. /d/ark-state/package/openssh--3.7.1p2/reveal--slicker.xml]

means that who-for (or a proxy for it) ran method-name of object-name at <the timestamp of the file>. Other details are in the .xml file.

It would certainly not be hard to find out things like "the code for that method has changed since you ran it". I'm far from convinced that you then ought to re-run it. I lean to the view that a little entropy in the system is no bad thing :-) -- but others (e.g. Joel S) thankfully take a different view.

Are we getting anywhere useful to you?

Will

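A sketch of the sort of report one could pull out of that area -- not existing ark code; only the /d/ark-state path and the <object-type>/<object-name>/<method-name>--<who-for>.xml layout are taken from the description above, everything else is assumption.

import os
import time

STATE_DIR = "/d/ark-state"    # layout as described above

def report(state_dir=STATE_DIR):
    # Walk <object-type>/<object-name>/<method>--<who-for>.xml and print
    # who ran which method of which object, and when (the file's mtime).
    for dirpath, dirnames, filenames in os.walk(state_dir):
        for fn in filenames:
            if not fn.endswith(".xml") or "--" not in fn:
                continue
            method, who_for = fn[:-4].split("--", 1)
            obj = os.path.relpath(dirpath, state_dir)   # e.g. package/openssh--3.7.1p2
            stamp = os.path.getmtime(os.path.join(dirpath, fn))
            when = time.strftime("%Y-%m-%d %H:%M", time.localtime(stamp))
            print("%s  %-35s %-12s for %s" % (when, obj, method, who_for))

if __name__ == "__main__":
    report()
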
From: Rudolph P. <r.p...@is...> - 2003-09-23 03:45:30
On Sun, Sep 21, 2003 at 08:03:58PM +0100, Will Partain wrote:

> (I am still wondering about the "nice to have" vs the "no
> possible way can I do something cool without it" question
> :-)

I certainly think I could do some cool things with ark, but I want more :)

> One way to think about the "little .xml files with
> <prototypes>" in them: ARK semantics say you should be able
> to "pre-process" away the prototypes. That is, you could
> expand the <prototypes> and then hang onto the expanded
> stuff in another .xml file and then just use that.
>
> And since doing the prototypes expansion is a lot like doing
> C pre-processing, you might as well accumulate the
> dependency info ("foo.xml depends on prototype
> "bar/baz.xml") and set up a makefile using it. That could
> be used to keep all of your expanded "objects" up-to-date.

I'm not really following you, but that's mostly because I don't understand how I can apply the above to the history problem I am wrangling with (i.e, I would like to say "ah, so the <code> bit, configure options, etc for rsync-x.xx changed three weeks ago ...").

> Even then, though, the situation is far from perfect. In
> the code for some method (now expanded out), you might make
> mention of '/our/bin/gcc'. Well, six months ago, that meant
> GCC 3.1, but now it means GCC 3.3.1. You haven't captured
> that change.

That seems like a hidden dependency more than anything.

> Even if you get around that one, there are still similar
> problems for the truly paranoid. Every time you apply a
> patch bundle from Sun (say), it may change how
> /usr/bin/touch or /bin/ls works (well, *maybe*...) or
> [perhaps more significantly] some library API will wobble at
> the edges. And your expanded-ARK-objects stuff will miss it
> entirely, unless you take extra measures.

Agreed, but that's another dependency thing.

> The business of recording what-a-method-did along with its
> specification (i.e. code) is different from usual object
> thinking; but nothing inherently wrong with it. I
> personally don't have a problem that the source
> (specification) is in one directory and the
> what-happened-when-I-ran-it info is in another. Each one
> can be managed with as much or little care as you like.
>
> But, hey, this is a *dev*elopment list... give us a use case
> or two, and we might even help :-)

As I hinted previously: say I changed the dependencies of the host object for host blah right now. Host blah is still, in fact, using its old definition, at least until such time as it went and updated itself (remember, I'm not using push here). I would like to record that fact somewhere (a status field would be ideal) and also record a timestamp (outside the client, i.e somewhere someone else would be able to get to it without logging on to the client) when host blah updated itself (e.g ran some methods, etc). The latter information sounds like it should be in some history-type area somehow associated with the blah host object.

Hopefully that makes it more clear.

thanks

From: Will P. <pa...@dc...> - 2003-09-21 19:06:50
Rudolph Pereira continues:

> To put it as generally as possible, it would be nice to
> have the state appearing as part of the object, and not
> just stored on the machine itself. ...

(I am still wondering about the "nice to have" vs the "no possible way can I do something cool without it" question :-)

One way to think about the "little .xml files with <prototypes>" in them: ARK semantics say you should be able to "pre-process" away the prototypes. That is, you could expand the <prototypes> and then hang onto the expanded stuff in another .xml file and then just use that.

And since doing the prototypes expansion is a lot like doing C pre-processing, you might as well accumulate the dependency info ("foo.xml depends on prototype "bar/baz.xml") and set up a makefile using it. That could be used to keep all of your expanded "objects" up-to-date.

Even then, though, the situation is far from perfect. In the code for some method (now expanded out), you might make mention of '/our/bin/gcc'. Well, six months ago, that meant GCC 3.1, but now it means GCC 3.3.1. You haven't captured that change.

Even if you get around that one, there are still similar problems for the truly paranoid. Every time you apply a patch bundle from Sun (say), it may change how /usr/bin/touch or /bin/ls works (well, *maybe*...) or [perhaps more significantly] some library API will wobble at the edges. And your expanded-ARK-objects stuff will miss it entirely, unless you take extra measures.

The business of recording what-a-method-did along with its specification (i.e. code) is different from usual object thinking; but nothing inherently wrong with it. I personally don't have a problem that the source (specification) is in one directory and the what-happened-when-I-ran-it info is in another. Each one can be managed with as much or little care as you like.

But, hey, this is a *dev*elopment list... give us a use case or two, and we might even help :-)

Will

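To make the pre-processing analogy concrete, here is an untested sketch (not part of ARK) of how that dependency info might be harvested: it scans a directory of object .xml files for <prototype team="..." name="..."/> references -- the element and attribute names follow the pkgsrc wrapper example earlier in this archive -- and prints make-style dependency lines. The directory layout and the expanded/ target naming are assumptions.

import glob
import os
from xml.dom import minidom

def prototype_deps(team_dir):
    # For each object file, list the prototypes it pulls in, as
    # "expanded/foo.xml: foo.xml <prototype files>..." make-style lines.
    for path in glob.glob(os.path.join(team_dir, "*.xml")):
        doc = minidom.parse(path)
        deps = []
        for proto in doc.getElementsByTagName("prototype"):
            team = proto.getAttribute("team") or "."
            name = proto.getAttribute("name")
            deps.append(os.path.join(team, name + ".xml"))
        if deps:
            print("expanded/%s: %s %s" %
                  (os.path.basename(path), path, " ".join(deps)))

prototype_deps("myteam")    # hypothetical team directory
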
From: Rudolph P. <r.p...@is...> - 2003-09-18 22:12:13
On Tue, Sep 16, 2003 at 10:44:33PM +0100, Jonathan Hogg wrote:

> I think it's fairly common with ARK deployments to put the XML configuration
> files under revision control. Given sensible commit comments, you can record
> a history of configuration changes and their reason. Then you can diff each
> file against a previous revision to later examine the changes.
>
> Assuming your config files completely and accurately describe your site
> (minus data) at each particular moment, then having them under revision
> control should allow you to recreate your site as it was at any past moment,
> and allow you to examine the configuration changes that exist between then
> and now.
>
> Of course it's never that simple as inevitably some cruft leaks in ;-)
>
> I'm not sure if you mean something more specific than this.

I was actually looking for history at an object level. Unless one is fairly rigorous about keeping a single object per xml/other source file, it's hard to see what changes have been made to just _this_ object in the past.

It ties in with my history comments in my previous mail to Will; one may want to know/store the history of a machine (which is really just another object). CVS/other can be used, but to me it seems like that's below the level one thinks of the (host/package) object at - you may want to not version this stuff (for one reason or another), or your filestore (where you keep ark source) may not be versioned, etc.

From: Rudolph P. <r.p...@is...> - 2003-09-18 22:07:10
On Tue, Sep 16, 2003 at 09:20:54PM +0100, Will Partain wrote:

> Rudolph Pereira asks:
>
> > I'm wondering whether anyone has, or knows how to implement or have
> > history in ark/arusha.
>
> As far as I'm aware, not much has been done. The main
> reason I guess is that, while there are Twenty Cool Things
> one can think of, no compelling need has arisen (at least
> for me) that would put me in front of a keyboard for a week.
>
> What we *do* do: Methods can be marked 'recordable' (and
> lots are). When they run, they record a few things about
> themselves (who, when, what code, where parameter settings
> came from...) in the "state directory". This info is mainly
> used to avoid re-doing such methods, but I have also been
> known to go rooting around in there to see what happened N
> months ago...
>
> A few further developments of the "state directory" would (I
> suspect) be not much work:
>
> * Write some scripts to walk over the state info (it's all
>   supposed to be well-formed XML), and tidy up any mess that
>   is inadvertently there.
>
>   An example script might be to find all the methods that
>   used old versions of tools (e.g. an old GCC) and set up
>   something for re-running them (with the new versions).
>
> * Add to the state dir/stuff the stdout/stderr outputs from
>   running the methods' code. I do sometimes wish I could
>   answer "Now, what the heck did 'make' do when I built
>   libtiff on solaris8 back in March?"
>
> * As you suggest, record the full (static) state of the
>   object at time of a method invocation. (In theory, this
>   could be reconstructed just from CVS info.)
>
> I would be interested in what you see as a compelling use
> for "history" as you envision it. ???

To put it as generally as possible, it would be nice to have the state appearing as part of the object, and not just stored on the machine itself. All of the recordable stuff you mention seems like a good candidate for my idea of state.

Stepping further back, I am considering a pull rather than push architecture in which ark may/will be used. In that situation, one may make changes to an object/method (well, its source). A machine may then pull down and act on those changes some time in the future. Recording the actual (versus the intended) state of an object on a machine, and what it did to (try to) get from intended -> actual (at a particular time), possibly including the stdout/err output you mention, would be a good thing.

From: Jonathan H. <jon...@on...> - 2003-09-16 21:44:18
[Oops! Sent this offlist by mistake.]

On 16/9/03 0:58, Rudolph Pereira wrote:

> By history, I mean (at least) something like a changelog for an object,
> that's actually kept with the object (or linked to it), rather than
> implicitly in (say) the CVS changelog in the repository for the xml
> files. Even better would be some kind of versioned view of an object
> - e.g having (in the ark/xml description) something like "this was
> version x.xx of foo, this is y.yy" and possibly some support for
> generating a "diff" of sorts.
>
> Hopefully I've not been too vague and someone understands what I'm
> talking about, and even better, knows how to do it. Note that I'm
> looking at rather than using ark/arusha at the moment, so feel free to
> point me in the right direction if I've missed something.

I think it's fairly common with ARK deployments to put the XML configuration files under revision control. Given sensible commit comments, you can record a history of configuration changes and their reason. Then you can diff each file against a previous revision to later examine the changes.

Assuming your config files completely and accurately describe your site (minus data) at each particular moment, then having them under revision control should allow you to recreate your site as it was at any past moment, and allow you to examine the configuration changes that exist between then and now.

Of course it's never that simple as inevitably some cruft leaks in ;-)

I'm not sure if you mean something more specific than this.

Jonathan

From: Will P. <pa...@dc...> - 2003-09-16 20:23:40
Rudolph Pereira asks:

> I'm wondering whether anyone has, or knows how to implement or have
> history in ark/arusha.

As far as I'm aware, not much has been done. The main reason I guess is that, while there are Twenty Cool Things one can think of, no compelling need has arisen (at least for me) that would put me in front of a keyboard for a week.

What we *do* do: Methods can be marked 'recordable' (and lots are). When they run, they record a few things about themselves (who, when, what code, where parameter settings came from...) in the "state directory". This info is mainly used to avoid re-doing such methods, but I have also been known to go rooting around in there to see what happened N months ago...

A few further developments of the "state directory" would (I suspect) be not much work:

* Write some scripts to walk over the state info (it's all supposed to be well-formed XML), and tidy up any mess that is inadvertently there.

  An example script might be to find all the methods that used old versions of tools (e.g. an old GCC) and set up something for re-running them (with the new versions).

* Add to the state dir/stuff the stdout/stderr outputs from running the methods' code. I do sometimes wish I could answer "Now, what the heck did 'make' do when I built libtiff on solaris8 back in March?"

* As you suggest, record the full (static) state of the object at time of a method invocation. (In theory, this could be reconstructed just from CVS info.)

I would be interested in what you see as a compelling use for "history" as you envision it. ???

Will

From: Rudolph P. <r.p...@is...> - 2003-09-15 23:59:12
Hello,

I'm wondering whether anyone has, or knows how to implement or have, history in ark/arusha. http://ark.sourceforge.net/ark-objects.html#avoid-chaos (para 2, point 1) suggests that this is being thought about/wanted/developed, but I can't find any other references to it anywhere.

By history, I mean (at least) something like a changelog for an object, that's actually kept with the object (or linked to it), rather than implicitly in (say) the CVS changelog in the repository for the xml files. Even better would be some kind of versioned view of an object - e.g having (in the ark/xml description) something like "this was version x.xx of foo, this is y.yy" and possibly some support for generating a "diff" of sorts.

Hopefully I've not been too vague and someone understands what I'm talking about, and even better, knows how to do it. Note that I'm looking at rather than using ark/arusha at the moment, so feel free to point me in the right direction if I've missed something.

Thanks.

From: Will P. <pa...@dc...> - 2003-07-24 09:23:27
Jonathan H wrote:

> I keep all my email, so I have what will probably be a pretty complete
> archive that I can send you if you like. I don't know if the SF archives can
> be exported in a useful format - Will?

I wouldn't lose sleep on getting a useful format out of SF; I, too, have (nearly?) all of the mail and will happily supply a copy. (Further discussion off-list, I guess.)

If you read news, you can point your newsreader at the group gmane.comp.sysutils.ark.devel on news.gmane.org (in emacs-speak: nntp+news.gmane.org:gmane.comp.sysutils.ark.devel). It's got pretty much the whole story.

Will

PS: I believe anonymous CVS access to sourceforge is basically useless at the moment. If this is causing you problems, please just get in touch with me.

From: Jonathan H. <jon...@on...> - 2003-07-24 08:24:47
On 24/7/03 1:12, Rudolph Pereira wrote:

> Hello, I'm just wondering if there's any way to get the ark-dev mailing
> list archives in mbox format (or anything easier to read than the web
> pages at sourceforge)? I'm attempting to familiarise myself with ark
> (thinking), and find it useful to read through previous discussions,
> etc.

I keep all my email, so I have what will probably be a pretty complete archive that I can send you if you like. I don't know if the SF archives can be exported in a useful format - Will?

Jonathan

From: Rudolph P. <r.p...@is...> - 2003-07-24 00:12:41
Hello, I'm just wondering if there's any way to get the ark-dev mailing list archives in mbox format (or anything easier to read than the web pages at sourceforge)? I'm attempting to familiarise myself with ark (thinking), and find it useful to read through previous discussions, etc.

thanks

From: Shprentz, J. [C] <Shp...@ni...> - 2003-06-16 17:52:25
Will P wrote:

> Joel S writes:
>
> > It appears that the expandParams method of ArkRunnable in
> > ARK/arkbase/ark/thing.py calls the metaQuote function in
> > ARK/arkbase/ark/utils.py, which tries to handle quotes and
> > other special characters. When a string contains single
> > quotes, metaQuote wraps the string in double quotes, but
> > does not escape any embedded double quotes. I think this
> > could be fixed by adding another string.replace around line
> > 980 in utils.py (and renumbering the subsequent intermediate
> > strings):
> >
> >     # use a double-quote; quote all the weird chars:
> >     str1 = string.replace(str ,'\\','\\\\')
> >     str2 = string.replace(str1,'$', '\\$')
> > +   str3 = string.replace(str2,'"', '\\"')
>
> Could you let us know how you get along with a change like that?
> I'll snoop around myself, meanwhile...

I tested the change today with shell and Perl code methods. With the change in place, I succeeded in configuring and compiling Postfix. This diff shows my revisions:

Index: ARK/arkbase/ark/utils.py
===================================================================
RCS file: /d/cvs/arusha/arusha/ARK/arkbase/ark/utils.py,v
retrieving revision 1.1.1.3
diff -u -r1.1.1.3 utils.py
--- ARK/arkbase/ark/utils.py    31 Mar 2003 18:04:42 -0000      1.1.1.3
+++ ARK/arkbase/ark/utils.py    16 Jun 2003 15:29:58 -0000
@@ -980,14 +980,15 @@
     # use a double-quote; quote all the weird chars:
     str1 = string.replace(str ,'\\','\\\\')
     str2 = string.replace(str1,'$', '\\$')
+    str3 = string.replace(str2,'"', '\\"')
     if lang == 'perl':
-        str3 = string.replace(str2,'@', '\\@')
-        str4 = string.replace(str2,'%', '\\%')
+        str4 = string.replace(str3,'@', '\\@')
+        str5 = string.replace(str4,'%', '\\%')
     else:
-        str3 = string.replace(str2,'`', '\\`')
-        str4 = str3
+        str4 = string.replace(str3,'`', '\\`')
+        str5 = str4

-    return '"%s"' % str4
+    return '"%s"' % str5

 # --------------------------------------------------------
 def hashToTupleList(table):

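For anyone following along, a standalone illustration of why the extra replace matters -- this mirrors the quoting behaviour discussed above but is not the real ark.utils.metaQuote, and it only covers the shell (non-Perl) branch.

def quote_for_shell(value):
    # Illustration only, not ark.utils.metaQuote: escape the characters
    # that are special inside a double-quoted shell string.
    s = value.replace('\\', '\\\\')    # backslashes first
    s = s.replace('$', '\\$')
    s = s.replace('"', '\\"')          # the fix under discussion
    s = s.replace('`', '\\`')
    return '"%s"' % s

# A value with embedded double quotes now survives intact:
print(quote_for_shell('say "hello" for $USER'))
# prints: "say \"hello\" for \$USER"
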