From: Alan D. <We...@Om...> - 2005-10-02 23:57:08
On 10/1/2005 8:57 PM, Jamie Cameron wrote:
> On Fri, 2005-09-30 at 22:56, Alan Dobkin wrote:
>> On 9/29/2005 9:07 PM, Jamie Cameron wrote:
>>> On Fri, 2005-09-30 at 03:52, Alan Dobkin wrote:
>>>>
>>>> As a possible resolution, perhaps each backup could be assigned a job
>>>> number to identify it, and/or the selection list could show other
>>>> attributes, such as the dump level and the parent job's schedule. Or,
>>>> better yet, it could use a move up/down interface similar to the rule
>>>> configuration tables in the Linux Firewall module.
>>>
>>> How about if the list of dumps includes the destination and level as
>>> well? That way you could more easily know which one is being selected.
>>
>> That would be helpful, but there could still be multiple jobs with the
>> same destination and level scheduled at different times. For it to
>> really be unique, I think it would have to display all three fields
>> and/or a job number. The move up/down interface seems like a more
>> elegant direction though, because it would be easier to display and
>> modify the connections based on a parent/child relationship, and they
>> would all display in the correct order in the summary table. But I'm
>> sure the former is much easier to code than the latter, and it would
>> still be functional.
>
> I will have to see if I can fit the schedule in the list as well.
> A table wouldn't really make sense, as you are not defining an
> ordering, but more of a list or tree of backups.

That's true, but I was thinking more of separate tables for each
chain/branch, similar to the Linux Firewall module. Either way, though,
it will be a welcome improvement to differentiate the jobs. What I did
in the meantime was temporarily rename the duplicate jobs on the same
directories by prefixing an extra slash, just so I could finish setting
them up, and then I went back and removed the prefixes afterwards.
>>>> The other improvement that I would like to suggest is a way to
>>>> retain multiple backups and auto-expire them after a given amount
>>>> of time. For example, it is bad practice to overwrite a good backup
>>>> with a second (possibly bad) backup. Instead, the second backup
>>>> should be written to a second destination (file, tape, remote host,
>>>> etc.), and then the first backup can be deleted after a set amount
>>>> of time. This could be extended to retain any number of old backups
>>>> before expiring/deleting them, similar to how the logrotate program
>>>> keeps a certain number of old logs. In order to accomplish this
>>>> now, I have to create multiple backup jobs for each repetition,
>>>> which is very cumbersome. But again, I would welcome any
>>>> suggestions if someone knows of a better way to do this given the
>>>> current interface.
>>>
>>> I can't really make any suggestions here, apart from perhaps doing
>>> backups with date-based filenames, and setting up a cron job to
>>> delete backups older than a certain number of days.
>>>
>>> - Jamie
>>
>> I already set the filenames based on the directory that is being
>> backed up, but how would that allow retention of multiple backups of
>> the same directory? I guess I could create separate jobs for each day
>> of the week, for example, and then they would automatically overwrite
>> the file from the previous week. But setting that up for multiple
>> directories would be cumbersome. For example, all of these jobs would
>> have to be created separately:
>>
>> home-mon
>> home-tues
>> home-wed
>> ...
>>
>> etc-mon
>> etc-tues
>> etc-wed
>> ...
>>
>> same for var, usr, etc....
>>
>> Is there an easier way to accomplish something like this, or am I
>> asking too much of this module?
>
> Have you looked into using the strftime format code feature, which can
> be enabled on the Module Config page?
> You can then use codes like %d for the day of the month, or %a for
> the day of the week, in the backup destination.
>
> - Jamie

No, I hadn't thought of that yet, but it's a great idea. In the
meantime, I am using the options to run a command before and after
each backup to save a copy of the previous backup before overwriting
it with the new one.

Thanks,
Alan
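
[Editor's note: for readers finding this thread later, Jamie's strftime
suggestion can be sketched roughly as below. The destination template,
the `/backup` path, and the 30-day retention period are illustrative
assumptions, not actual module settings.]

```shell
#!/bin/sh
# Sketch of the idea only: with strftime substitution enabled in the
# module configuration, a destination template such as
# /backup/home-%a.dump is expanded with the current date, so a single
# job cycles through seven files (home-Mon.dump ... home-Sun.dump)
# instead of requiring seven separate jobs. The expansion is
# equivalent to:
LC_ALL=C; export LC_ALL           # fixed locale for predictable %a names
dest=$(date +'/backup/home-%a.dump')
echo "$dest"

# Old copies could then be expired by a daily cron job, e.g. delete
# dumps older than 30 days:
#   find /backup -name '*.dump' -mtime +30 -delete
```

Using `%d` (day of month) instead of `%a` would give a monthly rather
than weekly cycle.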
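
[Editor's note: the pre-backup command workaround Alan describes can be
generalised into a logrotate-style rotation. This is only a sketch under
assumed paths; DUMP and KEEP are hypothetical values, not options the
module provides.]

```shell
#!/bin/sh
# Hypothetical "command to run before backup": keep KEEP old
# generations of a dump, shifting home.dump.1 -> home.dump.2 and so on
# before the module writes a fresh home.dump, so a good backup is never
# overwritten by a possibly bad one.
DUMP=${DUMP:-/backup/home.dump}   # assumed dump path
KEEP=${KEEP:-3}                   # assumed number of generations

i=$KEEP
while [ "$i" -gt 1 ]; do
    prev=$((i - 1))
    # Shift each existing generation up by one.
    if [ -f "$DUMP.$prev" ]; then
        mv -f "$DUMP.$prev" "$DUMP.$i"
    fi
    i=$prev
done
# Move the current dump aside as generation 1.
if [ -f "$DUMP" ]; then
    mv -f "$DUMP" "$DUMP.1"
fi
exit 0
```

The oldest generation (`$DUMP.$KEEP`) is simply overwritten on the next
rotation, which mirrors the logrotate behaviour mentioned in the thread.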