From: Alan D. <We...@Om...> - 2005-09-29 17:51:09

I recently began using the Filesystem Backup module, and I have some feedback and a few requests for improvement.

First, there is a useful feature in the scheduling interface where multiple backup jobs can be chained together to run in series. However, the selection list only shows the directory path of the backup to chain to. The problem is that there can be multiple jobs scheduled for the same directory, for example with different dump levels to save full and incremental/differential backups. The selection list makes it impossible to know which of these backups a job is being chained to. Has anyone else run into this problem, or is there a better way to do it?

As a possible resolution, perhaps each backup could be assigned a job number to identify it, and/or the selection list could show other attributes, such as the dump level and the parent job's schedule. Or, better yet, it could use a move up/down interface similar to the rule configuration tables in the Linux Firewall module.

The other improvement that I would like to suggest is a way to retain multiple backups and auto-expire them after a given amount of time. For example, it is bad practice to overwrite a good backup with a second (possibly bad) backup. Instead, the second backup should be written to a second destination (file, tape, remote host, etc.), and then the first backup can be deleted after a set amount of time. This could be extended to retain any number of old backups before expiring/deleting them, similar to how the logrotate program keeps a certain number of old logs (a rough sketch of such a rotation follows this message). In order to accomplish this now, I have to create a separate backup job for each repetition, which is very cumbersome. But again, I would welcome any suggestions if someone knows of a better way to do this given the current interface.

Thanks,
Alan

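A minimal sketch of that logrotate-style rotation, written as a standalone shell script to run after each backup completes. The /backup/home.dump path and the retention count of three are hypothetical examples, not anything the module provides, and would need to match the real job:

    #!/bin/sh
    # Keep the last 3 dumps of /backup/home.dump, logrotate-style.
    # (Hypothetical path and count -- adjust for the real backup job.)
    BACKUP=/backup/home.dump
    KEEP=3

    # Shift older copies up one slot: .2 -> .3, .1 -> .2. The oldest
    # copy (.3) is overwritten by the move, and thereby expired.
    i=$KEEP
    while [ "$i" -gt 1 ]; do
        prev=`expr $i - 1`
        [ -f "$BACKUP.$prev" ] && mv "$BACKUP.$prev" "$BACKUP.$i"
        i=$prev
    done

    # Move the newest dump into slot .1 so the next run cannot clobber it.
    [ -f "$BACKUP" ] && mv "$BACKUP" "$BACKUP.1"
    exit 0

Running one such script per backed-up directory avoids the "second destination" problem Alan raises: a new dump never overwrites the only good copy.
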
From: Jamie C. <jca...@we...> - 2005-09-30 01:09:17

On Fri, 2005-09-30 at 03:52, Alan Dobkin wrote:

> First, there is a useful feature in the scheduling interface where
> multiple backup jobs can be chained together to run in series.
> However, the selection list only shows the directory path of the
> backup to chain to. The problem is that there can be multiple jobs
> scheduled for the same directory [...] The selection list makes it
> impossible to know which of these backups a job is being chained to.

How about if the list of dumps includes the destination and level as
well? That way you could more easily know which one is being selected.

> The other improvement that I would like to suggest is a way to retain
> multiple backups and auto-expire them after a given amount of time.
> [...] In order to accomplish this now, I have to create a separate
> backup job for each repetition, which is very cumbersome.

I can't really make any suggestions here, apart from perhaps doing
backups with date-based filenames, and setting up a cron job to delete
backups older than a certain number of days (sketched below).

 - Jamie

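A minimal sketch of that cron-based expiry, assuming the dumps are written with date-stamped names under /backup; both the path and the 30-day cutoff are hypothetical examples:

    # /etc/crontab entry: at 04:00 daily, remove dump files not modified
    # in the last 30 days. Assumes names like /backup/home-2005-09-30.dump.
    0 4 * * *  root  find /backup -name '*.dump' -mtime +30 -exec rm -f {} \;

Because each day's backup gets its own filename, nothing is ever overwritten; old files are simply expired by age, which is exactly the retention behavior Alan asked for.
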
From: Alan D. <We...@Om...> - 2005-09-30 12:55:25

On 9/29/2005 9:07 PM, Jamie Cameron wrote:

> How about if the list of dumps includes the destination and level as
> well? That way you could more easily know which one is being selected.

That would be helpful, but there could still be multiple jobs with the
same destination and level scheduled at different times. For it to
really be unique, I think it would have to display all three fields
and/or a job number. The move up/down interface seems like a more
elegant direction, though, because it would be easier to display and
modify the connections based on a parent/child relationship, and the
jobs would all display in the correct order in the summary table. But
I'm sure the former is much easier to code than the latter, and it
would still be functional.

> I can't really make any suggestions here, apart from perhaps doing
> backups with date-based filenames, and setting up a cron job to delete
> backups older than a certain number of days.

I already set the filenames based on the directory that is being backed
up, but how would that allow retention of multiple backups of the same
directory? I guess I could create separate jobs for each day of the
week, for example, and then they would automatically overwrite the file
from the previous week. But setting that up for multiple directories
would be cumbersome. For example, all of these jobs would have to be
created separately:

home-mon
home-tues
home-wed
...

etc-mon
etc-tues
etc-wed
...

(and the same for var, usr, etc.)

Is there an easier way to accomplish something like this, or am I
asking too much of this module?

Thanks,
Alan

From: Jamie C. <jca...@we...> - 2005-10-02 23:25:27

On Fri, 2005-09-30 at 22:56, Alan Dobkin wrote:

> That would be helpful, but there could still be multiple jobs with the
> same destination and level scheduled at different times. For it to
> really be unique, I think it would have to display all three fields
> and/or a job number. The move up/down interface seems like a more
> elegant direction, though [...]

I will have to see if I can fit the schedule into the list as well. A
table wouldn't really make sense, as you are not defining an ordering,
but more of a list or tree of backups.

> I guess I could create separate jobs for each day of the week, for
> example, and then they would automatically overwrite the file from
> the previous week. But setting that up for multiple directories would
> be cumbersome. [...] Is there an easier way to accomplish something
> like this, or am I asking too much of this module?

Have you looked into using the strftime format code feature, which can
be enabled on the Module Config page? You can then use codes like %d
for the day of the month or %a for the day of the week in the backup
destination (an example follows below).

 - Jamie

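One way to preview what a strftime-style destination would expand to on a given day is the date command, since it uses the same format codes; the paths here are hypothetical examples:

    # One file per weekday, overwritten a week later -- a rolling
    # seven-day retention window with no extra jobs or cleanup:
    $ date +/backup/home-%a.dump
    /backup/home-Sun.dump

    # One file per calendar day, never reused (would need a separate
    # expiry job, such as the cron/find sketch earlier in the thread):
    $ date +/backup/home-%Y-%m-%d.dump
    /backup/home-2005-10-02.dump

With %a in the destination, a single backup job per directory replaces the whole home-mon/home-tues/home-wed family of jobs Alan listed.
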
From: Alan D. <We...@Om...> - 2005-10-02 23:57:08

On 10/1/2005 8:57 PM, Jamie Cameron wrote:

> I will have to see if I can fit the schedule into the list as well. A
> table wouldn't really make sense, as you are not defining an ordering,
> but more of a list or tree of backups.

That's true, but I was thinking more of separate tables for each
chain/branch, similar to the Linux Firewall module. Either way, though,
it will be a welcome improvement to be able to differentiate the jobs.
What I did in the meantime was temporarily rename the duplicate jobs on
the same directories by prefixing an extra slash, just so I could
finish setting them up, and then I went back and removed the extra
slashes afterwards.

> Have you looked into using the strftime format code feature, which can
> be enabled on the Module Config page? You can then use codes like %d
> for the day of the month or %a for the day of the week in the backup
> destination.

No, I hadn't thought of that yet, but it's a great idea. In the
meantime, I am using the options to run a command before and after each
backup to save a copy of the previous backup before overwriting it with
the new one (an example follows below).

Thanks,
Alan

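A minimal example of such a pre-backup command, which sets aside the previous dump before the new run overwrites it; the path and the single .prev slot are hypothetical:

    # "Command to run before backup": keep one previous generation.
    [ -f /backup/home.dump ] && cp -p /backup/home.dump /backup/home.dump.prev

To retain more than one old copy, the same pre-backup hook could instead invoke a multi-slot rotation script like the one sketched at the start of this thread.
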