From: Patrick M M. <mc...@um...> - 2005-07-27 15:58:35

> I noticed a problem with the patch I sent some time back to avoid the .hotfiles.btree issue. I didn't realize that since the active transcript points to the next one we *expect* to see an object from, the workaround code can get called on objects other than it should. In particular, creatable transcripts do not contain a checksum for any previously unknown objects that show up immediately before an object that exists in a negative transcript.

Stanford also noticed this in their testing and worked with us on a fix, which is checked into CVS. Here's the comment explaining what we did:

    /*
     * after this point, name is in the fs, so if it's an 'f' or an 'a', and
     * checksums are on, get the checksum if:
     * - it's create-able, not negative, and on both fs and in tran.
     *   we have to get the cksum later if it is negative and gid/uid changed
     * - it's apply-able and in both tran and fs.  If it's only in fs, we
     *   are just going to remove it, so no need for a checksum.  If it's
     *   negative, no need for a checksum either, since we don't care
     *   about the contents.
     *
     * Type   CMP   Tran   cksum   comment
     *  A      0    P/S      Y
     *  A      0     N       N     ignore contents
     *  A     <0     -       N     No need - just going to remove
     *  C      0    P/S      Y
     *  C      0     N       N     must do later if uid/gid change
     *  C     <0     -       Y
     */
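The decision table in that comment can be encoded mechanically. A minimal sketch (not the radmind source; the function name and boolean interface are assumptions for illustration only):

```c
#include <stdbool.h>

/* Illustrative encoding of the checksum decision table above (not the
 * radmind source).  type is 'a' (apply-able) or 'c' (create-able); cmp
 * is 0 when the object is in both the transcript and the fs, <0 when
 * it is only in the fs; negative is true for negative transcripts. */
static bool
need_checksum( char type, int cmp, bool negative )
{
    if ( cmp < 0 ) {
        /* only in fs: apply-able objects are just removed (no checksum
         * needed), create-able objects still get one */
        return ( type == 'c' );
    }

    /* in both tran and fs: skip negative transcripts, since we don't
     * care about the contents (create-able must redo later if uid/gid
     * change) */
    return ( !negative );
}
```

Each row of the Type/CMP/Tran table maps onto exactly one branch above.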
From: Wesley C. <we...@um...> - 2005-07-26 18:02:26

On 22 Jul 2005, at 18:30, la...@sc... wrote:

> Also, consider that I'm not talking about creating a facility within radmind for doing file editing, and don't want to turn it into cfengine. I just want to be able to call out to some kind of black box which returns a complete file. So instead of pulling the file off the filesystem (technically that's just a fairly transparent and non-dynamic black box) you pull the file out of a script (allowing it to be dynamic). Radmind would still only be dealing with whole files. This seems like a simple enough idea. I may be oversimplifying the problem though. Other attributes of the server that you may want to interrogate locally might be:
>
> - attached disk shelves
> - number of CPUs
> - whether the machine thinks it's a database running Oracle
>
> That last use case may take some explaining. At the scale that we've hit, we've found that some characteristics of what type of machine a given server is are better expressed simply through local config -- usually through small scripts that interrogate the server and figure out what class of machine the server thinks it belongs to. Once the class of server is known, then the state associated with that class can be enforced on the machine through centralized config. We've found this to be more scalable than trying to maintain lists of machines associated with server types in a central database and then enforcing that config.
>
> The general problem here is just interrogating what kind of applications the server thinks it should be running and what resources it thinks it needs, and then enforcing that state.
>
> It is a little different way of doing configuration management, but it strongly decouples this kind of low-level enforcement of state by server type from the processes that configure a server as a particular type of server. In other words, I'm trying to build "all servers of type A look like this" and I want a black-box oracle to tell me which type of server I'm dealing with, because I don't care.

This idea sort of contradicts the security concerns above. What's to stop a client from claiming that it should have the secure data from another host?

:wes
From: Wesley C. <we...@um...> - 2005-07-26 17:57:44

On 25 Jul 2005, at 09:39, Patrick M McNeal wrote:

> Ideally, we should be using SASL. That way, an admin can choose any SASL mechanism, not just those that radmind supports. Of course, you still always have the problem of getting a key onto a blank disk for initial installs.

Conveniently, you have just more or less completed the addition of SASL to the snet library that radmind, cosign, and simta all use for communication. So adding SASL to radmind should be more or less trivial. As we know, though, setting up SASL is a pretty difficult task, so I wouldn't expect many people to be able to take advantage of it.

:wes
From: Patrick M M. <mc...@um...> - 2005-07-25 13:39:58

Let's move this thread to the radmind-devel list.

On Jul 22, 2005, at 6:30 PM, la...@sc... wrote:

> You are correct that issuing specialized certificates to clients would accomplish exactly what I want. That means building a PKI infrastructure and a way to distribute those certs to the clients securely. It's doable, but it isn't as trivial as it sounds at this scale, and I'll have a kerb5 infrastructure that I could use for host authentication before I get the PKI infrastructure.

At a minimum, we could add krb5. ( Most places see certificates as an easy entry point compared to Kerberos, but if you already have Kerberos in place... ) Ideally, we should be using SASL. That way, an admin can choose any SASL mechanism, not just those that radmind supports. Of course, you still always have the problem of getting a key onto a blank disk for initial installs.

> Yeah, I want to slow down fsdiff.
>
> I'm thinking of using it for tripwire and reporting of changes (SOX detect control) and to automatically fix through lapply (SOX prevent control, security, and normalizing configs). Initially I'd like to push out software tools (an enterprise-wide /usr/local more or less) to all hosts and ensure that it's consistent, then to grow it into validating the rest of the OS if I can make that work.
>
> I'll look into writing a patch.

As Andrew mentioned, check out our coding standards. Also, before writing a big chunk of code, let's hash things out on the devel mailing list.

> Replication of the basic use case I'm talking about is to just rm -rf a random directory under /var/radmind/file/<transcript>/ on the server. I went and tested this, and radmind is nice because ktcheck will not propagate the corruption to clients that already have the files, and for new clients the mismatch between the files and the transcript will halt lapply.
>
> I'd actually view this data integrity issue as a major selling point of radmind over rsync.

Very interesting case. Do you have a white paper of your current deployment?

> You get no real argument from me. Piecing together files like inetd.conf is a hack; xinetd.d works much better...
>
> But I do need to handle the case of /etc/passwd and /etc/shadow where I can't use this approach. Negative files are the obvious workaround, but then I need to write another app with authentication code, etc...
>
> Also, consider that I'm not talking about creating a facility within radmind for doing file editing and don't want to turn it into cfengine. I just want to be able to call out to some kind of black box which returns a complete file. So instead of pulling the file off the filesystem (technically that's just a fairly transparent and non-dynamic black box) you pull the file out of a script (allowing it to be dynamic). Radmind would still only be dealing with whole files.

Would this be a server- or client-side feature? With non-secure data, I'd say just push all the possible files to the clients and create the final file as needed.

> I need to RTFM and see if this solves the problem.

Doh. ra.sh is the only man page that isn't done. I've got about 45% of it written. I'll move it up on my todo.

> I may be oversimplifying the problem though. Other attributes of the server that you may want to interrogate locally might be:
>
> - attached disk shelves
> - number of CPUs
> - whether the machine thinks it's a database running Oracle
>
> That last use case may take some explaining. At the scale that we've hit, we've found that some characteristics of what type of machine a given server is are better expressed simply through local config -- usually through small scripts that interrogate the server and figure out what class of machine the server thinks it belongs to. Once the class of server is known, then the state associated with that class can be enforced on the machine through centralized config. We've found this to be more scalable than trying to maintain lists of machines associated with server types in a central database and then enforcing that config.
>
> The general problem here is just interrogating what kind of applications the server thinks it should be running and what resources it thinks it needs, and then enforcing that state.
>
> It is a little different way of doing configuration management, but it strongly decouples this kind of low-level enforcement of state by server type from the processes that configure a server as a particular type of server. In other words, I'm trying to build "all servers of type A look like this" and I want a black-box oracle to tell me which type of server I'm dealing with, because I don't care.

So, the "server" here is a radmind client - it's not the radmind server doing the interrogation, right? I just want to make sure I understand where you want the black box to be running.

> Yeah, obviously you can push a cfengine2 binary, a cfengine script, and then run it through a postapply.
>
> I don't like that for /etc/passwd though, because I either need to:
>
> - write an app which talks to centralized databases, applies policy, constructs /etc/passwd, and then port that app to different architectures.
> - write an app which pulls /etc/passwd from a centralized server which applies policy.
>
> I don't like the former because of the local application of policy, and because it's thicker code to port. I don't like the latter because it seems like radmind already does 99% of what I want and someone else can port the client to different architectures for me -- all that is missing is making radmind call out to a script that I write (and don't have to port) which constructs /etc/passwd.

For secure files like this, I agree - you can't make the client do it. It's as though the radmind server needs to call out to a black box which creates the correct file and then hands it to the client. It's like a special file on crack. Working name, magic file?
From: Patrick M M. <mc...@um...> - 2005-07-18 19:40:13

> I'm looking for examples of large-scale deployments of Radmind. Preferably in commercial companies. I have a selling job to do.

This is the radmind developer list. Your question should be sent to the radmind-discussion list found at:

https://mailman.rice.edu/mailman/listinfo/radmind
From: ik b. op t f. <fi...@gm...> - 2005-07-18 19:23:52

Hi,

I'm looking for examples of large-scale deployments of Radmind, preferably in commercial companies. I have a selling job to do.

thanks
Fietske
From: Patrick M M. <mc...@um...> - 2005-07-15 13:24:00

In reading up on NTFS, it looks like there are some other issues that we've not run into before. Not being an NTFS expert, I thought I'd see what the list thought.

* Compressed Files

Files and directories can be compressed on an NTFS volume. The kernel does all of the encoding/decoding on the fly, so radmind would only see decompressed data. Is the compression of a file/directory something we would want to manage with radmind? Can we even get access to the raw, compressed data?

http://www.ntfs.com/ntfs-compressed.htm

* EFS - Encrypting File System

Files and directories can be encrypted on an NTFS volume. Again, the kernel does all of the encoding/decoding on the fly. Maybe we can avoid this issue by treating it as a user-space concern - something radmind should not manage. It would be nice to know if a clean install of the OS has any EFS files.
From: Patrick M M. <mc...@um...> - 2005-07-15 13:14:03

> Files:
>     f <path> O:<owner sid> G:<group sid> D:<dacl> <attributes> <high mtime> <low mtime> <high file size> <low file size> <checksum>
>
> Directories:
>     d <path> O:<owner sid> G:<group sid> D:<dacl> <attributes>

It's interesting to point out that if an NTFS transcript were to be applied to a non-NTFS system, the tools would complain on seeing an f line with extra attributes. The only problem is that Apple-built tools would try to update a directory with NTFS attributes, which would ( should? ) fail.

> Does this look like a reasonable change to everyone? Any other questions, in general, about the ntfsdiff transcripts we may have overlooked? It seems to make sense, but we wanted to get input from others before moving forward with this format.

As we move forward with NTFS development, there are also some other questions we'll have to figure out, including whether we need to support NTFS file streams.

http://www.ntfs.com/ntfs-multiple.htm
From: Jarod <ja...@um...> - 2005-07-14 21:31:21

We're currently attempting to figure out the best way to format the transcripts for ntfsdiff and would like some feedback. As it stands, this is the way the information appears in the transcript:

Files:
    f <path> <attributes> <high mtime> <low mtime> <high file size> <low file size> O:<owner sid> G:<group sid> D:<dacl> <checksum>

Directories:
    d <path> <attributes> O:<owner sid> G:<group sid> D:<dacl>

This causes some issues with the server-side lmerge tool in that it expects only a single line of information per file system object. Also, note that the 'f' and 'd' transcript types are already being used and have a different number of columns when compared to *nix file system objects:

    f path mode uid gid mtime size checksum
    d path mode uid gid [ finder-information ]

You'll note the optional [ finder-information ] for Apple directories. In other words, we'd be comfortable with having 'f' and 'd' lines with different numbers of columns if other people are (as opposed to using a different lettering scheme, such as 'N' for an NTFS file and 'y' for an NTFS directory). In order to remain as similar as possible to the other fsdiff tools, this is the format we've come up with:

Files:
    f <path> O:<owner sid> G:<group sid> D:<dacl> <attributes> <high mtime> <low mtime> <high file size> <low file size> <checksum>

Directories:
    d <path> O:<owner sid> G:<group sid> D:<dacl> <attributes>

Does this look like a reasonable change to everyone? Any other questions, in general, about the ntfsdiff transcripts we may have overlooked? It seems to make sense, but we wanted to get input from others before moving forward with this format.

+---------------------+
| Jarod Malestein |
+=====================+
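To make the proposed layout concrete, here is a hypothetical parse of the new 'f' line in C. The struct, field widths, and sscanf format (e.g. reading the attributes as hex) are illustration-only assumptions, not how ntfsdiff actually parses:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of parsing the proposed ntfsdiff 'f' line:
 *   f <path> O:<owner sid> G:<group sid> D:<dacl> <attributes>
 *     <high mtime> <low mtime> <high size> <low size> <checksum>
 * All field names, widths, and numeric bases here are assumptions. */
struct ntfs_fline {
    char path[ 256 ], owner[ 128 ], group[ 128 ], dacl[ 256 ], cksum[ 64 ];
    unsigned long attrs, mtime_hi, mtime_lo, size_hi, size_lo;
};

static int
parse_ntfs_fline( const char *line, struct ntfs_fline *f )
{
    /* each %s stops at whitespace, so this assumes SIDs and the DACL
     * string contain no spaces */
    int n = sscanf( line,
        "f %255s O:%127s G:%127s D:%255s %lx %lu %lu %lu %lu %63s",
        f->path, f->owner, f->group, f->dacl, &f->attrs,
        &f->mtime_hi, &f->mtime_lo, &f->size_hi, &f->size_lo, f->cksum );
    return ( n == 10 ) ? 0 : -1;
}
```

Because the line still starts with 'f' and a path, a *nix-only tool would parse the first two columns and then, as noted in the follow-up mail, complain about the extra attributes.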
From: Neagle, G. <Gre...@di...> - 2005-07-01 16:04:08

On Jun 30, 2005, at 5:23 PM, Wout Mertens wrote:

> Although I would like to extend on this. Sharing a cache between multiple systems would be absolutely fantastic for systems in remote sites (remote meaning thin network pipe). We could put a smallish shared filer on the network there and keep support costs low, instead of having to maintain a full-fledged radmind server containing _all_ overloads.
>
> For that to work, lapply should accept a location for the cache, and the retrieve-from-cache should be able to copy if rename doesn't work. There might be issues with applefiles too, although we could require the shared file system to have resource fork support. Thoughts?

IMHO, that's a bit crazy. Unless you have a pre-existing Windows server infrastructure, I can't see how it would be better/cheaper/easier to put in a "smallish shared filer" rather than dropping in a cheap Mac ( a Mini? ) or Linux box as a radmind server replica.

That said, as long as you could tell lapply where the cache lives, and it didn't clear the cache as it used it, it could work. But it seems a departure from the core idea of pre-cached lapply.

-Greg
From: Wout M. <wme...@ci...> - 2005-07-01 00:23:49

On 26 Jun 2005, at 15:29, Wesley Craig wrote:

> On 26 Jun 2005, at 06:29, Wout Mertens wrote:
>> One more thing: lapply should have a "download only" switch, so that you can do things between a successful download and the actual applying.
>>
>> (E.g., in my case, tell the user to please quit these running apps that will be overwritten, and tell him that he'll have to reboot afterwards (if needed) )
>
> So we need two switches, actually. One to say "only download" and another that says "use the downloaded material".

How about a switch that enables pre-caching, and another to disable anything but pre-caching when the first is enabled? That way, most people just need one switch.

>> On 24 Jun 2005, at 17:32, Wout Mertens wrote:
>>> Here's what I see as the minimal set of changes that lapply would need:
>>> - It should save out the transcript if it's given on stdin, so it can seek through it. Or, we could specify that pre-cached apply only works when
>
> So probably we'll open/create a file in /tmp (or a specified directory), and then remove it. That way if lapply is interrupted, the temp file isn't left around.

Good idea.

>>> - It should accept a location for the cached files. This triggers the phased approach described below.
>
> I'd rather see an approach where the cached files are always stored on the target volume: that way, move_to_live is always a move. Otherwise, if you're installing across volumes, the "rename" call is not sufficient. Imagine that you have "/" and "/Volumes/Users" and want to manage /Volumes/Users/Shared. If you download to "/", then the rename system call will fail, and the file will instead need to be copied between the two volumes.
>
> I have code that will find the mount point for the volume containing any given file. In my above case, you might have /.radmind.cache and /Volumes/Users/.radmind.cache or something like that.

Good idea too! (and too hard to script ;-) )

Although I would like to extend on this. Sharing a cache between multiple systems would be absolutely fantastic for systems in remote sites (remote meaning thin network pipe). We could put a smallish shared filer on the network there and keep support costs low, instead of having to maintain a full-fledged radmind server containing _all_ overloads.

For that to work, lapply should accept a location for the cache, and the retrieve-from-cache should be able to copy if rename doesn't work. There might be issues with applefiles too, although we could require the shared file system to have resource fork support. Thoughts?

>>> Then for each of the phases:
>>> - CHECK
>>> - DOWNLOAD
>>> - FIX
>>> - MOVE_TO_LIVE
>>> - REMOVE

So, in line with the discussion below, the phases become:

- CHECK
- DOWNLOAD => Possible bail-out point
- APPLY

>>> The CHECK phase should test the total size of the transcript and complain if there's not enough place in the download location. The admin should then decide if the update should be done "live", i.e. as before.
>
> The CHECK phase also needs the mount point code I mentioned above. Also, getting disk free space is mildly unportable; we'll need code for each supported platform.

Ah well, that particular problem has been solved many times. I just wonder what a good algorithm would be to find all the mount points of all the files without stressing the system.

>>> The DOWNLOAD phase should fetch all the '+' lines in the transcript. It should automagically make missing directories. If a file is already downloaded, it should checksum it to be sure it's there. That way, you can run lapply multiple times until the download succeeds.
>
> There's a utility routine that the radmind server uses called "mkdirs" that does this reasonably efficiently. See around line 807 in command.c. The idea is that the caller attempts to open the target file. If the open fails, try "mkdirs", and then try again. (Probably this code could be improved by checking that errno is set to something like ENOENT before bothering to call "mkdirs".)

Right. Alternatively, we could dump all files in one large directory and name them according to their hash. I don't know if that's a good idea though, just wondering about this.

>>> The FIX phase should apply all permission changes and directory creations.
>
> I think I might modify these phases somewhat. I think the majority of the existing lapply code can just be used for the "use the downloaded material" phase. The existing code calls "retr" to download the file to the target directory, and then calls "rename" to move it into place. The new code will have already downloaded the file, so the same "rename" just works. The other tasks (fix, move_to_live, and remove) can all be done just as lapply does them now, as it reads them.

I agree. So we consolidate these phases into the APPLY phase. It works as lapply worked before, with the change that it will check the cache(s) to see if the wanted file exists in there (and has the valid checksum?) before attempting to contact the radmind server.

>>> The MOVE_TO_LIVE phase should move the files to the live filesystem, being aware about resource forks and finder info. If a directory is missing, it's created 711 root:root, and a big warning is generated.
>
> In the current code, the temp file is made in retr.c, around line 130. To make missing directories legal, simply crib the code from command.c 807 -- call mkdirs. Either the mkdirs call should be modified to specify a permission, or lapply should use umask. You'll also see in retr.c 122 how the temppath is currently generated. This code should be replaced with the /.radmind.cache code.

Right. Do we make missing directories legal for all operating modes? I think so. It seems to me that there are only 2 good ways to handle this sort of transcript error:

- Bite the bullet and continue
- Exhaustively check the filesystem before doing the apply.

Bailing out of the apply is never a good solution, IMO.

>>> The REMOVE phase should remove all the files that are marked as such. If it can't remove a file, it moves on but generates a warning.

Same reasoning goes here.

> There was some idea of doing more than warning, no?

Uh... What do you mean?

Wout.
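The copy-if-rename-fails behavior discussed in this thread (needed whenever the cache and the live target sit on different volumes) can be sketched as below. This is an illustration, not lapply code: the function name is made up, there is no resource-fork/applefile handling, and error checking is minimal:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: try rename(2) first; if the source and destination are on
 * different volumes, rename fails with EXDEV and we fall back to a
 * plain byte copy followed by removing the source. */
static int
move_or_copy( const char *src, const char *dst )
{
    char buf[ 8192 ];
    ssize_t rr = 0;
    int in, out;

    if ( rename( src, dst ) == 0 ) {
        return 0;                       /* same volume: cheap move */
    }
    if ( errno != EXDEV ) {
        return -1;                      /* a real error, not cross-volume */
    }
    if (( in = open( src, O_RDONLY )) < 0 ) {
        return -1;
    }
    if (( out = open( dst, O_WRONLY | O_CREAT | O_TRUNC, 0600 )) < 0 ) {
        close( in );
        return -1;
    }
    while (( rr = read( in, buf, sizeof( buf ))) > 0 ) {
        if ( write( out, buf, rr ) != rr ) {
            rr = -1;
            break;
        }
    }
    close( in );
    close( out );
    if ( rr < 0 ) {
        return -1;
    }
    return unlink( src );               /* copy complete: drop the source */
}
```

Keeping the cache on the target volume, as Wesley suggests, makes the EXDEV branch unnecessary; a shared filer cache would hit it every time.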
From: Neagle, G. <Gre...@di...> - 2005-06-30 19:24:47

This repeats some opinions I posted on the radmind list, but I thought I should get them in here. I think a goal is to make lapply more robust and less likely to leave your filesystem in a nebulous state. There are three big failure modes for lapply, each of which can cause lapply to abort half-way through its run:

1) Failure to download a file or files. The proposal is to (optionally) direct lapply to download the files first, and not make any changes to the "live" filesystem until all needed files are retrieved. This of course has implications for disk space usage.

2) An applicable transcript that calls for files to be created in directories that do not currently exist and are not listed in the applicable transcript. This is usually caused by dependency errors in overload transcripts. Two remedies have been suggested: a) lapply should (optionally) create the needed directories on the fly with default mode, owner, and group, and issue warnings; b) a pre-flight check to identify the problem and allow lapply to exit before any changes have been made to the filesystem.

3) An applicable transcript that specifies the removal of a non-empty directory and has no entries for one or more of the directory's children. Again, this is usually caused by dependency errors in overload transcripts, and usually is remedied by adding entries for the directories in question to additional overload transcripts. This means that the directory should not have been marked for deletion. So one suggestion (a) is to (optionally) have lapply issue a warning on failure to delete a directory, but continue processing the applicable transcript. Another suggestion (b) is another pre-flight check that identifies the problem and allows lapply to exit before any changes have been made to the filesystem.

Since the "pre-flight" checks proposed in 2 and 3 could take a long time for large applicable transcripts, perhaps they should be moved to another tool which could provide progress feedback (perhaps this is the "lcheck" tool on the roadmap for 2.0?)

-Greg
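The pre-flight check suggested in remedy 2b boils down to a simple predicate: for each file the transcript creates, the parent directory must either already exist on disk or be created by an earlier 'd' line. A minimal sketch of that predicate (the interface is an assumption; a real checker would walk the whole transcript, collecting 'd' lines as it goes):

```c
#include <string.h>
#include <unistd.h>

/* Sketch of the pre-flight test: will this file's parent directory
 * exist by the time the apply reaches it?  created[] holds paths of
 * directories made by earlier 'd' lines.  Illustration only. */
static int
parent_ok( const char *path, const char *created[], int ncreated )
{
    char parent[ 1024 ];
    char *slash;
    int i;

    strncpy( parent, path, sizeof( parent ) - 1 );
    parent[ sizeof( parent ) - 1 ] = '\0';
    if (( slash = strrchr( parent, '/' )) == NULL || slash == parent ) {
        return 1;               /* parent is "/" or cwd; assume present */
    }
    *slash = '\0';
    for ( i = 0; i < ncreated; i++ ) {
        if ( strcmp( created[ i ], parent ) == 0 ) {
            return 1;           /* an earlier 'd' line creates it */
        }
    }
    return ( access( parent, F_OK ) == 0 );
}
```

Since this only stats directories (never file contents), it is much cheaper than a full fsdiff-style walk, though on a large transcript it still justifies the progress feedback Greg asks for.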
From: Wesley C. <we...@um...> - 2005-06-26 13:29:59

On 26 Jun 2005, at 06:29, Wout Mertens wrote:

> One more thing: lapply should have a "download only" switch, so that you can do things between a successful download and the actual applying.
>
> (E.g., in my case, tell the user to please quit these running apps that will be overwritten, and tell him that he'll have to reboot afterwards (if needed) )

So we need two switches, actually. One to say "only download" and another that says "use the downloaded material".

> On 24 Jun 2005, at 17:32, Wout Mertens wrote:
>> Here's what I see as the minimal set of changes that lapply would need:
>> - It should save out the transcript if it's given on stdin, so it can seek through it. Or, we could specify that pre-cached apply only works when

So probably we'll open/create a file in /tmp (or a specified directory), and then remove it. That way if lapply is interrupted, the temp file isn't left around.

>> - It should accept a location for the cached files. This triggers the phased approach described below.

I'd rather see an approach where the cached files are always stored on the target volume: that way, move_to_live is always a move. Otherwise, if you're installing across volumes, the "rename" call is not sufficient. Imagine that you have "/" and "/Volumes/Users" and want to manage /Volumes/Users/Shared. If you download to "/", then the rename system call will fail, and the file will instead need to be copied between the two volumes.

I have code that will find the mount point for the volume containing any given file. In my above case, you might have /.radmind.cache and /Volumes/Users/.radmind.cache or something like that.

>> Then for each of the phases:
>> - CHECK
>> - DOWNLOAD
>> - FIX
>> - MOVE_TO_LIVE
>> - REMOVE
>>
>> The CHECK phase should test the total size of the transcript and complain if there's not enough place in the download location. The admin should then decide if the update should be done "live", i.e. as before.

The CHECK phase also needs the mount point code I mentioned above. Also, getting disk free space is mildly unportable; we'll need code for each supported platform.

>> The DOWNLOAD phase should fetch all the '+' lines in the transcript. It should automagically make missing directories. If a file is already downloaded, it should checksum it to be sure it's there. That way, you can run lapply multiple times until the download succeeds.

There's a utility routine that the radmind server uses called "mkdirs" that does this reasonably efficiently. See around line 807 in command.c. The idea is that the caller attempts to open the target file. If the open fails, try "mkdirs", and then try again. (Probably this code could be improved by checking that errno is set to something like ENOENT before bothering to call "mkdirs".)

>> The FIX phase should apply all permission changes and directory creations.

I think I might modify these phases somewhat. I think the majority of the existing lapply code can just be used for the "use the downloaded material" phase. The existing code calls "retr" to download the file to the target directory, and then calls "rename" to move it into place. The new code will have already downloaded the file, so the same "rename" just works. The other tasks (fix, move_to_live, and remove) can all be done just as lapply does them now, as it reads them.

>> The MOVE_TO_LIVE phase should move the files to the live filesystem, being aware about resource forks and finder info. If a directory is missing, it's created 711 root:root, and a big warning is generated.

In the current code, the temp file is made in retr.c, around line 130. To make missing directories legal, simply crib the code from command.c 807 -- call mkdirs. Either the mkdirs call should be modified to specify a permission, or lapply should use umask. You'll also see in retr.c 122 how the temppath is currently generated. This code should be replaced with the /.radmind.cache code.

>> The REMOVE phase should remove all the files that are marked as such. If it can't remove a file, it moves on but generates a warning.

There was some idea of doing more than warning, no?

:wes
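The open-then-mkdirs-then-retry pattern described in this mail, including the suggested ENOENT refinement, looks roughly like this. The mkdirs below is a minimal stand-in written for illustration, not the routine from command.c:

```c
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>

/* Minimal stand-in for the server's "mkdirs": create every missing
 * directory component leading up to (but not including) the last
 * path element.  Illustration only. */
static int
mkdirs( const char *path )
{
    char buf[ 1024 ];
    char *p;

    strncpy( buf, path, sizeof( buf ) - 1 );
    buf[ sizeof( buf ) - 1 ] = '\0';
    for ( p = buf + 1; *p != '\0'; p++ ) {
        if ( *p == '/' ) {
            *p = '\0';
            if ( mkdir( buf, 0755 ) < 0 && errno != EEXIST ) {
                return -1;
            }
            *p = '/';
        }
    }
    return 0;
}

/* The pattern described above: try the open first, and only bother
 * making directories when the failure was a missing path component. */
static int
open_with_mkdirs( const char *path )
{
    int fd;

    if (( fd = open( path, O_WRONLY | O_CREAT, 0644 )) >= 0 ) {
        return fd;              /* common case: parent already exists */
    }
    if ( errno != ENOENT ) {
        return -1;              /* permission denied etc. -- don't retry */
    }
    if ( mkdirs( path ) != 0 ) {
        return -1;
    }
    return open( path, O_WRONLY | O_CREAT, 0644 );
}
```

Optimistically opening first means the mkdir calls are only paid on the rare miss, which is why the ENOENT check is worthwhile.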
From: Wout M. <wme...@ci...> - 2005-06-26 10:30:54

One more thing: lapply should have a "download only" switch, so that you can do things between a successful download and the actual applying.

(E.g., in my case, tell the user to please quit these running apps that will be overwritten, and tell him that he'll have to reboot afterwards (if needed) )

Are we missing anything else here?

Wout.

On 24 Jun 2005, at 17:32, Wout Mertens wrote:

> Hey guys!
>
> Let's make lapply do pre-caching of downloads (if possible). The script I posted to the radmind mailing list already works, but it would be more efficient if lapply did it by itself. I'll post it here as well for safekeeping.
>
> Here's what I see as the minimal set of changes that lapply would need:
> - It should save out the transcript if it's given on stdin, so it can seek through it. Or, we could specify that pre-cached apply only works when
>
> - It should accept a location for the cached files. This triggers the phased approach described below.
>
> Then for each of the phases:
> - CHECK
> - DOWNLOAD
> - FIX
> - MOVE_TO_LIVE
> - REMOVE
>
> The CHECK phase should test the total size of the transcript and complain if there's not enough place in the download location. The admin should then decide if the update should be done "live", i.e. as before.
>
> The DOWNLOAD phase should fetch all the '+' lines in the transcript. It should automagically make missing directories. If a file is already downloaded, it should checksum it to be sure it's there. That way, you can run lapply multiple times until the download succeeds.
>
> The FIX phase should apply all permission changes and directory creations.
>
> The MOVE_TO_LIVE phase should move the files to the live filesystem, being aware about resource forks and finder info. If a directory is missing, it's created 711 root:root, and a big warning is generated.
>
> The REMOVE phase should remove all the files that are marked as such. If it can't remove a file, it moves on but generates a warning.
>
> What do you think?
>
> Wout.
>
> <pcapply.sh>
From: Wout M. <wme...@ci...> - 2005-06-24 15:34:09

Hey guys!

Let's make lapply do pre-caching of downloads (if possible). The script I posted to the radmind mailing list already works, but it would be more efficient if lapply did it by itself. I'll post it here as well for safekeeping.

Here's what I see as the minimal set of changes that lapply would need:
- It should save out the transcript if it's given on stdin, so it can seek through it. Or, we could specify that pre-cached apply only works when

- It should accept a location for the cached files. This triggers the phased approach described below.

Then for each of the phases:
- CHECK
- DOWNLOAD
- FIX
- MOVE_TO_LIVE
- REMOVE

The CHECK phase should test the total size of the transcript and complain if there's not enough place in the download location. The admin should then decide if the update should be done "live", i.e. as before.

The DOWNLOAD phase should fetch all the '+' lines in the transcript. It should automagically make missing directories. If a file is already downloaded, it should checksum it to be sure it's there. That way, you can run lapply multiple times until the download succeeds.

The FIX phase should apply all permission changes and directory creations.

The MOVE_TO_LIVE phase should move the files to the live filesystem, being aware about resource forks and finder info. If a directory is missing, it's created 711 root:root, and a big warning is generated.

The REMOVE phase should remove all the files that are marked as such. If it can't remove a file, it moves on but generates a warning.

What do you think?

Wout.
From: Patrick M M. <mc...@um...> - 2005-06-17 14:03:24

This is the first release candidate for version 1.5.1 of the radmind tools. It is beta software and should not be used in production.

Source tar ball:
http://rsug.itd.umich.edu/software/radmind/files/radmind-1.5.1rc1.tgz

OS X package:
http://rsug.itd.umich.edu/software/radmind/files/RadmindTools-1.5.1rc1.pkg.tgz

Please test this code and submit any new issues to the bug tracker:
https://sourceforge.net/tracker/?group_id=141444&atid=749492

--Patrick
From: Patrick M M. <mc...@um...> - 2005-06-16 15:03:39

Hey there, fellow Radmind developers. This mailing list is going to be a place to discuss purely development-related issues for the Radmind project. This list is not intended to be a user support forum. Any questions related to the installation or use of Radmind should be sent to the existing Radmind list. I will be very strict about enforcing this distinction.

In addition to the mailing list, I've set up a SourceForge project for Radmind. To start, the project includes a patch, bug, and feature tracker, and an archive of this mailing list. If we find SourceForge to be a useful development tool, we may expand into other features. If you do not already have a SourceForge user account, you can create one at:

https://sourceforge.net/account/newuser_emailverify.php

With an account you'll be able to submit tracker items and, if needed, become a member of the project.

Our first SourceForge "experiment" will be working together on the prebinding patch. Wout and Maarten have done a great job on the code, and it will be a good launching point for Radmind's developer community.

If you have any comments on how the project is being run, suggestions on leveraging SourceForge tools, or any development-related feedback, feel free to send them here.

--Patrick