From: Scott H. <sc...@do...> - 2024-03-28 18:14:24
|
If you stop all backups and do a single giant rsync -aHS, you should get the hard links. This is challenging, though. Might be reasonable with the rsnapshot root presented (mine has ~2T across 80 snapshots).

What I've done is that I have a script which starts with the oldest snapshot and works towards the newest, and uses commands similar to those shown in rsnapshot.log. So it will do a cp -al of the previous mirrored snapshot to a working dir, then an rsync from the original to the copy, then rename the copy into place. I keep meaning to clean it up, because right now it has a lot of manual steps, which is why I'm not just posting the script here. The first version was actually just a bash script with a loop over an array of snapshots; it was pretty manual.

-scott

On Thu, Mar 28, 2024 at 6:46 AM t.schneider--- via rsnapshot-discuss <rsn...@li...> wrote:
> Hello,
>
> I have created these backups with rsnapshot:
> # du -hs /backup/vlcdblva.devsys.net/*
> 12G /backup/vlcdblva.devsys.net/daily.0
> 97M /backup/vlcdblva.devsys.net/daily.1
> 206M /backup/vlcdblva.devsys.net/daily.2
> 102M /backup/vlcdblva.devsys.net/daily.3
> 93M /backup/vlcdblva.devsys.net/daily.4
> 102M /backup/vlcdblva.devsys.net/daily.5
> 207M /backup/vlcdblva.devsys.net/daily.6
> 11G /backup/vlcdblva.devsys.net/hana
> 211M /backup/vlcdblva.devsys.net/hourly.0
> 101M /backup/vlcdblva.devsys.net/hourly.1
> 590M /backup/vlcdblva.devsys.net/monthly.0
> 158M /backup/vlcdblva.devsys.net/monthly.1
> 93M /backup/vlcdblva.devsys.net/weekly.0
> 225M /backup/vlcdblva.devsys.net/weekly.1
> 219M /backup/vlcdblva.devsys.net/weekly.2
> 239M /backup/vlcdblva.devsys.net/weekly.3
>
> Question:
> How can I copy these backups to another server w/o blowing up the target
> directory, means any hard link remains a hard link?
>
> THX
> _______________________________________________
> rsnapshot-discuss mailing list
> rsn...@li...
> https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss
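A minimal sketch of the oldest-to-newest loop described above, assuming both the old and the new snapshot roots are reachable as local paths on the machine running it (run it on the destination with the source mounted, or adapt the rsync line to pull over ssh); the paths and the snapshot list are examples only, not taken from the original script:

    #!/bin/bash
    # Copy an rsnapshot tree snapshot-by-snapshot, oldest first, so hard links
    # between snapshots are re-created on the destination. Paths are examples.
    set -e
    SRC=/backup/vlcdblva.devsys.net        # existing rsnapshot root
    DST=/mnt/newdisk/vlcdblva.devsys.net   # destination root (e.g. a mounted disk or NFS export)
    prev=""
    for snap in monthly.1 monthly.0 weekly.3 weekly.2 weekly.1 weekly.0 \
                daily.6 daily.5 daily.4 daily.3 daily.2 daily.1 daily.0 \
                hourly.1 hourly.0; do
        [ -d "$SRC/$snap" ] || continue
        if [ -n "$prev" ]; then
            cp -al "$DST/$prev" "$DST/.tmp"   # hard-linked copy of the previously copied snapshot
        else
            mkdir -p "$DST/.tmp"
        fi
        # Bring the working copy in line with the original snapshot; unchanged
        # files keep their hard links, changed files are rewritten.
        rsync -a --delete "$SRC/$snap/" "$DST/.tmp/"
        mv "$DST/.tmp" "$DST/$snap"
        prev="$snap"
    done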
From: <t.s...@di...> - 2024-03-28 13:45:58
|
Hello,

I have created these backups with rsnapshot:

# du -hs /backup/vlcdblva.devsys.net/*
12G     /backup/vlcdblva.devsys.net/daily.0
97M     /backup/vlcdblva.devsys.net/daily.1
206M    /backup/vlcdblva.devsys.net/daily.2
102M    /backup/vlcdblva.devsys.net/daily.3
93M     /backup/vlcdblva.devsys.net/daily.4
102M    /backup/vlcdblva.devsys.net/daily.5
207M    /backup/vlcdblva.devsys.net/daily.6
11G     /backup/vlcdblva.devsys.net/hana
211M    /backup/vlcdblva.devsys.net/hourly.0
101M    /backup/vlcdblva.devsys.net/hourly.1
590M    /backup/vlcdblva.devsys.net/monthly.0
158M    /backup/vlcdblva.devsys.net/monthly.1
93M     /backup/vlcdblva.devsys.net/weekly.0
225M    /backup/vlcdblva.devsys.net/weekly.1
219M    /backup/vlcdblva.devsys.net/weekly.2
239M    /backup/vlcdblva.devsys.net/weekly.3

Question: How can I copy these backups to another server without blowing up the target directory, i.e. so that every hard link remains a hard link?

THX
From: Woll N. <wo...@2-...> - 2024-03-04 11:43:58
|
I have seen talk in the rsnapshot mailing list about the use of rsnapshot's 'sync_first' mode (especially for backing up laptops that may not be connected/on all the time), but couldn't find any clear, detailed documentation/script for the implementation of the suggested strategies, so I wrote my own. I've released it on github, here: https://github.com/woll/rsnapshot-herder

The strategy I used was inspired by a post by user 'Tapani Tarvainen', in this thread on the rsnapshot mailing list: https://sourceforge.net/p/rsnapshot/mailman/message/34179129/

For completeness/discussion, below is a very simplified version of the script (without the error-checking/mailing/logging or automation of some of the configuration that is in the version on github). The strategy is basically:

1) Use the 'sync_first' mode.
2) Call the script every hour from cron (so that laptops that are connected/on intermittently will be backed up when they are connected).
3) Once a machine has been backed up it won't be backed up again until required.
4) The script handles the sequencing and timing of the backups by 'rsnapshot sync' and, when required, the rotation of the higher backup levels through 'rsnapshot daily' etc. This will back up and rotate at the required frequencies for each backup level.

The script published on github is much more complete, with documentation, error checking and automatic extraction of some of the configuration directly from the rsnapshot conf file, plus detailed control over the emailing/logging of the status of backups.

—— rsnapshot-herder-simplified ——

#!/bin/bash

############################## Configuration #########################################

rsnapshot_conf=~/my_rsnapshot.conf
rsnapshot_root=~/my_backup_folder
RSNAPSHOT=/opt/local/bin/rsnapshot

# Number of rotations for each backup level (not including the topmost), before rotating the next-higher level.
# The first number should be zero, so that the .sync level created in 'sync_first' mode is rotated every time.
# e.g. "0 7 4 12" rotates the lowest backup level 7 times before rotating the second level and so on, up to
# the highest level which will be rotated after the previous level has rotated 12 times.
DELTA_ROTATIONS=( 0 7 4 12 )

# How often (in minutes) the fastest-changing backup level (typically named hourly, daily or alpha) should rotate.
# Every hour is '60', every day is '60 * 24' etc
FASTEST_ROTATION=$((60 * 24))

delta_names=( "sync" "daily" "weekly" "monthly" "yearly" )

#################################################################################

if [ ! -d $rsnapshot_root ]; then
    printf "ERROR: $rsnapshot_root does not exist\n"
    exit
fi

state_dir=$rsnapshot_root/.rsnapshot-herder_state
exit_value_file="$state_dir/exit-value"
time_file="$state_dir/time"

# Create state directory if it does not exist
if [ ! -d "$state_dir" ]; then
    mkdir "$state_dir"
    # Set number of past delta rotations to zero
    for ((i=${#delta_names[@]}-1; i>=0; i-- ))
    do
        echo 0 > "$state_dir/${delta_names[$i]}"
    done
fi

# If exit_value file does not exist, then create it
if [ ! -f "$exit_value_file" ]; then echo -1 > "$exit_value_file"; fi
previous_exit_value=`cat "$exit_value_file"`

# If time file does not exist, then create it
if [ ! -f "$time_file" ]; then echo -1 > "$time_file"; fi
previous_time=`cat "$time_file"`

time=$((`date "+%s"` / 60 ))
time_delta=$(( $time - $previous_time ))

# If the last 'rsnapshot sync' returned an error, or the fastest rotating backup level has expired
if [ $previous_exit_value != 0 -a $previous_exit_value != 2 -o $time_delta -ge $FASTEST_ROTATION ]; then
    if [ $time_delta -ge $FASTEST_ROTATION ]; then
        printf "Previous '${delta_names[1]}' backup is older than sync frequency, so sync...\n"
    else
        printf "Previous 'rsnapshot sync' incomplete, so re-try...\n"
    fi

    # Save start time of rsnapshot
    echo $((`date "+%s"` / 60 )) > "$time_file"

    $RSNAPSHOT -v -c "$rsnapshot_conf" sync
    exit_value=$?
    echo $exit_value > "$exit_value_file"

    # If the rsync finished successfully, then do the required rsnapshot rotations
    if [ $exit_value = 0 -o $exit_value = 2 ]; then
        # For each backup level
        for ((i=${#delta_names[@]}-1; i>=1; i--))
        do
            delta=${delta_names[$i]}
            rotations=`cat "$state_dir/$delta"`
            printf "'$delta' has been rotated $rotations\n"
            next_lower_delta=${delta_names[$i-1]}
            next_lower_rotations=`cat "$state_dir/$next_lower_delta"`
            # If the next lower backup level has been rotated enough times, then rotate this level
            if [ $next_lower_rotations -ge ${DELTA_ROTATIONS[$i-1]} ]; then
                $RSNAPSHOT -v -c "$rsnapshot_conf" $delta
                echo $((rotations + 1)) > "$state_dir/$delta"
                echo 0 > "$state_dir/$next_lower_delta"
            fi
        done
    fi
elif [ $time_delta -le $FASTEST_ROTATION ]; then
    printf "Not time for next '${delta_names[1]}' backup yet.\n"
fi

# Show rotations
msg="Current rotation levels: "
for ((i=${#delta_names[@]}-1; i>=0; i--))
do
    value=`cat "$state_dir/${delta_names[$i]}"`
    msg="${msg}${delta_names[$i]}: $value "
done
msg="${msg}\n"
printf "$msg"
From: Scott H. <sc...@do...> - 2024-02-12 17:29:11
|
I would pull the rsync command-line out and run it manually, that is easier to debug by adding -v, -vv, etc. Code 23 from rsync is "Partial transfer due to error", so you should look at what DOES get transferred. You could certainly try to debug what is different about the SomeFolder directory, especially extended attributes (ls -lO, I think?). Possibly chflags could help adjust the settings, if there are problems with only a few specific files. My guess is that in this case, you'll be better off running without sudo, since osx has a number of things which can supercede sudo. Assuming your source and target are both owned by a single user, of course. Another thing to keep in mind is that Apple's rsync is fairly old, due to Apple being unwilling to support certain license changes (same thing with Bash). On my system it's 2.6.9, while I use an rsync I built from MacPorts, 3.1.2, which is itself pretty old. There are some patches in the MacPorts build to support some osx differences, and I think there were a couple patches that were disabled for lack of testing. I have no idea if this is related to what you're seeing, just pointing out that you can get some odd interactions between where OSX adds things and where rsync doesn't follow. [I'm sorry about that paragraph of FUD - I haven't revisited my homebuilt backup scripts in a few years, so all I have is memories of memories. My script originally built a bootable external drive, but that is no longer possible, so it is possible my build of rsync was necessary for reasons that no longer apply.] -scott On Mon, Feb 12, 2024 at 6:44 AM <chr...@ma...> wrote: > If I run the command referencing some folder in the config which does not > exist then I get an error indicating that immediately. In my case though, > the running backup fails much later. So rsnapshot initially acknowledges > that the folders exist but later it is problems backing up. > I have tried running the command "rsnapshot -c > /usr/local/etc/rsnapshot.conf alpha” without sudo and with sudo from the > command line. It is always being run manually by me from the command line, > no cron involved. The Terminal application which I use for the command line > has Full Disk Access in the system settings. > I am at a loss here. Any other things I can look at to investigate? > > On 10. Feb 2024, at 19:12, Scott Hess <sc...@do...> wrote: > > That doesn't look like a problem with the localhost/ target directory, > because it can't even open the source directory. > > Are you running the rsnapshot backup as USERNAME? Are you seeing the > error when running at the command-line, or is it running from cron? > > MacOS has additional layers of permissions beyond what a regular Unix > has. You may have to go to the Security area of Preferences and mark > something as having "Full Disk Access". I use a script similar to > rsnapshot on my system, and had to add "cron" to the apps under "Full Disk > Access". I also have "Terminal.app" in that list, though I'd guess that > wouldn't matter for this case, if you're running rsnapshot as the user > owning the data. > > It looks like Cryptomator also uses MacFUSE to implement the filesystem, > which could provide an additional layer of issues to look at. Ten years > ago I'd have suggested looking at Console.app for log lines that relate, > but in the past five years or so there is so much noise in Console.app that > it has become almost impossible to use successfully to debug anything. 
> > -scott > > > On Sat, Feb 10, 2024 at 9:55 AM christopher.social--- via > rsnapshot-discuss <rsn...@li...> wrote: > >> Hello everyone, >> >> can someone help with this problem: >> https://unix.stackexchange.com/questions/767801/rsnapshot-backup-fails-with-cryptomator-mounted-volume >> ? >> >> Thank you and best regards, >> Chris >> >> _______________________________________________ >> rsnapshot-discuss mailing list >> rsn...@li... >> https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss >> > > |
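A sketch of the manual-debugging steps suggested above; every path here is illustrative (take the real command from your own log or from a dry run), and 'alpha' is the interval name used elsewhere in this thread:

    # 1. Show the exact rsync invocation rsnapshot would run, without running it:
    rsnapshot -c /usr/local/etc/rsnapshot.conf -t alpha

    # 2. Re-run that rsync by hand with extra verbosity to see which files trip exit code 23:
    /usr/bin/rsync -vv -a --delete --numeric-ids --relative --delete-excluded \
        "/Volumes/CryptomatorVault/SomeFolder/" \
        "/Volumes/Backup/rsnapshot/alpha.0/localhost/SomeFolder/"

    # 3. Inspect BSD file flags and extended attributes on a suspect file (macOS):
    ls -lO "/Volumes/CryptomatorVault/SomeFolder/somefile"
    xattr -l "/Volumes/CryptomatorVault/SomeFolder/somefile"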
From: <chr...@ma...> - 2024-02-12 14:44:26
|
If I run the command referencing some folder in the config which does not exist then I get an error indicating that immediately. In my case though, the running backup fails much later. So rsnapshot initially acknowledges that the folders exist but later it is problems backing up. I have tried running the command "rsnapshot -c /usr/local/etc/rsnapshot.conf alpha” without sudo and with sudo from the command line. It is always being run manually by me from the command line, no cron involved. The Terminal application which I use for the command line has Full Disk Access in the system settings. I am at a loss here. Any other things I can look at to investigate? > On 10. Feb 2024, at 19:12, Scott Hess <sc...@do...> wrote: > > That doesn't look like a problem with the localhost/ target directory, because it can't even open the source directory. > > Are you running the rsnapshot backup as USERNAME? Are you seeing the error when running at the command-line, or is it running from cron? > > MacOS has additional layers of permissions beyond what a regular Unix has. You may have to go to the Security area of Preferences and mark something as having "Full Disk Access". I use a script similar to rsnapshot on my system, and had to add "cron" to the apps under "Full Disk Access". I also have "Terminal.app" in that list, though I'd guess that wouldn't matter for this case, if you're running rsnapshot as the user owning the data. > > It looks like Cryptomator also uses MacFUSE to implement the filesystem, which could provide an additional layer of issues to look at. Ten years ago I'd have suggested looking at Console.app for log lines that relate, but in the past five years or so there is so much noise in Console.app that it has become almost impossible to use successfully to debug anything. > > -scott > > > On Sat, Feb 10, 2024 at 9:55 AM christopher.social--- via rsnapshot-discuss <rsn...@li... <mailto:rsn...@li...>> wrote: >> Hello everyone, >> >> can someone help with this problem: https://unix.stackexchange.com/questions/767801/rsnapshot-backup-fails-with-cryptomator-mounted-volume ? >> >> Thank you and best regards, >> Chris >> >> _______________________________________________ >> rsnapshot-discuss mailing list >> rsn...@li... <mailto:rsn...@li...> >> https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss |
From: Scott H. <sc...@do...> - 2024-02-10 18:12:50
|
That doesn't look like a problem with the localhost/ target directory, because it can't even open the source directory. Are you running the rsnapshot backup as USERNAME? Are you seeing the error when running at the command-line, or is it running from cron? MacOS has additional layers of permissions beyond what a regular Unix has. You may have to go to the Security area of Preferences and mark something as having "Full Disk Access". I use a script similar to rsnapshot on my system, and had to add "cron" to the apps under "Full Disk Access". I also have "Terminal.app" in that list, though I'd guess that wouldn't matter for this case, if you're running rsnapshot as the user owning the data. It looks like Cryptomator also uses MacFUSE to implement the filesystem, which could provide an additional layer of issues to look at. Ten years ago I'd have suggested looking at Console.app for log lines that relate, but in the past five years or so there is so much noise in Console.app that it has become almost impossible to use successfully to debug anything. -scott On Sat, Feb 10, 2024 at 9:55 AM christopher.social--- via rsnapshot-discuss <rsn...@li...> wrote: > Hello everyone, > > can someone help with this problem: > https://unix.stackexchange.com/questions/767801/rsnapshot-backup-fails-with-cryptomator-mounted-volume > ? > > Thank you and best regards, > Chris > > _______________________________________________ > rsnapshot-discuss mailing list > rsn...@li... > https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss > |
From: <chr...@ma...> - 2024-02-10 17:54:16
|
Hello everyone, can someone help with this problem: https://unix.stackexchange.com/questions/767801/rsnapshot-backup-fails-with-cryptomator-mounted-volume ? Thank you and best regards, Chris |
From: Thierry L. <th...@8p...> - 2024-01-07 20:40:27
|
Hi all. I love rsnapshot and will continue to use it. Thank you so much for the development - it has saved my life many times. I would like to get your impressions on Bacula. Does it work the same way as rsnapshot, i.e. retrieving only the diff of files and making hard links to keep the size of each iteration down? Does anyone have good/bad/any experience with it that they would like to share? Thank you so much!
From: Peter B. <pb...@ba...> - 2024-01-05 00:45:14
|
Whenever I make a change to options or exclude lists etc I often use the --dry-run option to verify what will happen by looking at log file. I once lost a lot of files I wanted to keep! Peter On 4 January 2024 6:29:24 pm AEDT, "May Doušak" <ph...@ap...> wrote: >Thank you David, Peter, > >I'll add those flags to the settings. >Probably, I have been a little scared of the delete options when setting it up and removed them (can't remember it though, it's been a year or so). > >Thank you both, > >May > > >On 3. 01. 24 21:17, David Cantrell wrote: >> On 03/01/2024 11:34, Peter Barker wrote: >> >>> They should be deleted if you pass the --delete --delete-excluded options to rsync (rsync_long_args in rsnapshot config file) >> >> And note that this is the default: >> >> https://github.com/rsnapshot/rsnapshot/blob/master/rsnapshot-program.pl#L155 >> > > > >_______________________________________________ >rsnapshot-discuss mailing list >rsn...@li... >https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss |
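rsnapshot itself also has a dry-run mode, which pairs well with the habit described above; a sketch, with an illustrative config path and interval name:

    rsnapshot -c /etc/rsnapshot.conf configtest   # syntax-check the edited config
    rsnapshot -c /etc/rsnapshot.conf -t daily     # print the commands that WOULD be run
    rsnapshot -c /etc/rsnapshot.conf -v daily     # only once that looks right, run it for real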
From: May D. <ph...@ap...> - 2024-01-04 07:29:54
|
Thank you David, Peter, I'll add those flags to the settings. Probably, I have been a little scared of the delete options when setting it up and removed them (can't remember it though, it's been a year or so). Thank you both, May On 3. 01. 24 21:17, David Cantrell wrote: > On 03/01/2024 11:34, Peter Barker wrote: > >> They should be deleted if you pass the --delete --delete-excluded >> options to rsync (rsync_long_args in rsnapshot config file) > > And note that this is the default: > > https://github.com/rsnapshot/rsnapshot/blob/master/rsnapshot-program.pl#L155 > > |
From: David C. <da...@ca...> - 2024-01-03 20:36:17
|
On 03/01/2024 11:34, Peter Barker wrote: > They should be deleted if you pass the --delete --delete-excluded > options to rsync (rsync_long_args in rsnapshot config file) And note that this is the default: https://github.com/rsnapshot/rsnapshot/blob/master/rsnapshot-program.pl#L155 -- David Cantrell |
From: Peter B. <pb...@ba...> - 2024-01-03 13:56:38
|
Hi May, They should be deleted if you pass the --delete --delete-excluded options to rsync (rsync_long_args in rsnapshot config file) On 3/1/24 19:59, May Doušak wrote: > Hi all, > > I saw this question was asked in 2010 on this mailing list, but there > were no replies, so I'm asking it again since I have the same issue. > > The backup works fine and has been working great for many months now. It > helped me a couple times already :) > > There is, however, a small issue with removing deleted files from > subsequent backups (files are not removed). > > Looking into the snapshot directories (daily.X, weekly.X,...) I can see > that the files that have been removed from the subsequent snapshots > (i.e. after the file deletion). > > for example, if I made "something.txt." 30 days ago and removed it 29 > days ago from the server, that file is still present in the daily.0. > > I'd expect it to be present in the daily.30 (or monthly or weekly,...) > and not daily.0 (or any subsequent backups). > > Am I missing a flag or setting? > > Thank you! > > > _______________________________________________ > rsnapshot-discuss mailing list > rsn...@li... > https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss |
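For reference, a sketch of the relevant rsnapshot.conf line; the value shown is rsnapshot's shipped default (see David's reply above), so deletions normally only stop propagating when rsync_long_args has been overridden without these flags. Fields must be separated by tabs:

    rsync_long_args	--delete --numeric-ids --relative --delete-excluded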
From: May D. <ph...@ap...> - 2024-01-03 09:18:40
|
Hi all,

I saw this question was asked in 2010 on this mailing list, but there were no replies, so I'm asking it again since I have the same issue.

The backup works fine and has been working great for many months now. It has helped me a couple of times already :)

There is, however, a small issue: files deleted from the source are not removed from subsequent backups. Looking into the snapshot directories (daily.X, weekly.X, ...) I can see that files which have been deleted from the server are still present in the snapshots taken after the deletion.

For example, if I created "something.txt" 30 days ago and removed it from the server 29 days ago, that file is still present in daily.0. I'd expect it to be present in daily.30 (or monthly or weekly, ...) and not in daily.0 (or any later backups).

Am I missing a flag or setting?

Thank you!
From: Matthew S. <mjs...@gm...> - 2023-12-19 15:49:36
|
As someone who uses rsnapshot, I went to set up a new server and came across #279 <https://github.com/rsnapshot/rsnapshot/issues/279>. While I don't have time to commit to helping with issues and such, I've found this tool useful enough that I'd like to donate towards covering domain costs and development. I can't find a documented way to support the maintainers/package on the repo (or by searching through this email list, although there was a reference to a PayPal pool for the domain at one point). Is there an official way? Would bebehei or sgpinkus be open to something like a buy-me-a-coffee button or the GitHub donate mechanism?
From: Mark R. <ma...@mo...> - 2023-11-30 10:29:03
|
Good hints, thank you. My sync also generates warnings for files it can't access (a few files on my own profile on another machine which require root privileges to access, but it's never bothered me that they're not backed up so I've ignored the warnings until now). Sure enough a file a put into one of the the backed-up folders yesterday was present in .sync but hadn't been rotated into daily.0. I have dropped your script in as a fix for that, thank you. I have also adopted your crontab ordering as I understand the logic behind it. The vast majority of my backed-up stuff is incremental stuff anyway, eg archives of project files which only ever get added to - occasionally a file will get changed but deletes are pretty rare. So I'll save looking at lazy deletes for another day! Mark On Thu, 30 Nov 2023 at 05:53, Scott Hess <sc...@do...> wrote: > Yes, you'll need some manual intervention to go from not-sync_first to > sync_first. You could either simply move your last backup to .sync, or use > cp -al (on Linux) to replicate the last backup into the .sync slot. > > One way to get comfortable with things is to make a copy of your > rsnapshot.conf, adjust it to backup a subset of your target (maybe /var/log > or similar so there are changes always going on), adjust the log file, and > then run some command-line tests with that config to see what it does. > That is also a good way to do a dry run of converting your main config. > The log file should show the commands needed to replicate the .sync dir to > daily.0, so to populate your .sync dir you can just use that command in the > other direction. > > You are correct with your cron entry being the 'rsnapshot sync && > rsnapshot daily', assuming your first increment is daily. You can run > 'rsnapshot sync' as many times as you want, in fact I run one an hour > before my normal snapshot to prime the pump, as it were. > > My approach to getting the weekly/monthly/quarterly calls to not step on > each other is to run quarterly at 3am, monthly at 3:20am, weekly at 3:40am > and daily at 4am. Two reasons. First, this prevents running a ton of > delete passes happening all at once, if weekly steals daily.6, then daily > doesn't need to delete daily.6 on that day. Second, since most of the work > for these increments is in the delete, I can more easily predict > the runtime. > > I also set use_lazy_deletes, so deletes happen outside the lock. This is > a mixed advantage, though, because the I/O of the delete will still > interfere with the I/O of a later rsnapshot run, which can make that run > slower. PROBABLY fine if you stack quarterly/monthly/weekly/daily, since > your daily can probably run for some extra time safely. > > -scott > > PS: I actually run my rsnapshot sync using the attached script wrapping > rsnapshot. What I noticed is that I'd periodically get a warning from > rsnapshot which would cause it to skip a backup. Often the warning would > be about a missing file or something (an rsync warning when the filesystem > has changed while rsync was running), and there wasn't even anything to > fix. The problem was that this warning error prevented the && from > firing, so it wouldn't take my hourly backup. I think with the old > copy-then-rsync approach, the warning would not prevent the new directory > from being created and populated, so I think this more closely matches the > old behaviour. 
> > > > On Mon, Nov 27, 2023 at 11:24 AM Mark Rogers <ma...@mo...> > wrote: > >> I've just had a look at sync_first but I'm not really sure what it does >> if enabled now that I have backups in place. As I don't already have a >> .sync folder, do I need to build one myself from whatever the latest data >> set is before I run rsnapshot sync? >> >> What I definitely do not want to do is initiate a new backup from scratch >> - I do not have the disk space or the time for that! >> >> Based on my reading today, I think that I should start by either >> rsync'ing or moving daily.0 to .sync. Am I right that when I then run >> rsnapshot sync && rsnapshot daily >> .. it first updates .sync (the sync call) then rotates all the existing >> daily.X backups and copied .sync to create a new daily.0? >> >> As far as the weekly/monthly calls are concerned, how do I ensure they >> don't run until the daily backup completes - is it just a matter of setting >> the time late enough in crontab that I feel safe, and what happens if I get >> it wrong? >> >> On Mon, 27 Nov 2023 at 16:27, Scott Hess <sc...@do...> wrote: >> >>> You should use sync_first, which does the sync to a .sync directory, and >>> then when you make the daily it copies that from the .sync directory. That >>> means that .sync always maintains the hardlink chain with the older >>> snapshots, so you can safely delete any broken daily or weekly backups. >>> >>> Of course, due to disk full the .sync snapshot may not be COMPLETE, but >>> what it does have will maintain the hardlinks with the older backups. You >>> won't get the broken chain. >>> >>> --- >>> >>> As for how to deal with your current situation, I find that attempting >>> to figure out how broken a backup is is often a game with no winning >>> outcomes, there are just too many things that you might miss, and you don't >>> want to find out later that you have an incomplete backup. I would add a >>> "corrupt" directory, move all of the questionable backup dirs to there >>> (that will keep the hardlinks you have), and put in a README file >>> describing what you did and a target date to delete them (for when you >>> forget and come back in 9 months). Then I would carefully move the >>> most-recent complete backup to daily.0, run a manual rsnapshot daily, then >>> carefully move the daily.1 back to where the most-recent complete backup >>> was. If the above removes your weekly.0, you can carefully rename the dirs >>> down a notch. >>> >>> With all of that, consider doing a test run in some manufactured >>> directory to make sure that the timestamps and the like are all >>> maintained. Or do a --max-depth 1 (or 2) rsync replica of the snapshot >>> dir, for reference timestamps. Moving a snapshot dir INTO another snapshot >>> dir always feels terrible, because it isn't always obvious when it happens, >>> and it updates timestamps. I hate that. >>> >>> Another option is to use cp -al to fill in the gaps. That works the >>> same as --link-dest (on Unix platforms), and is generally less twitchy to >>> get right. >>> >>> --- >>> >>> And a completely other option is to create a "corrupt" subdir in your >>> snapshot root, move ALL of the existing backups in there, and cp -al (or >>> move) your best one to daily.0 in your snapshot root. Then let things run >>> along. In three months you might not care about anything in the >>> "corrupt" subdir, but if you had an emergency in the meanwhile, it's all >>> there as insurance. 
>>> >>> -scott >>> >>> >>> On Mon, Nov 27, 2023 at 1:41 AM Mark Rogers <ma...@mo...> >>> wrote: >>> >>>> OK, to partially answer my own question: >>>> >>>> New server up and running. I have partial daily.x snapshots; the most >>>> recent "full" backup is weekly.1. >>>> >>>> rsnapshot is running rsync with hardlinks from daily.0, and therefore >>>> the vast majority of the files it is seeing at the source are "new" files >>>> and are being copied from scratch and not linked. >>>> >>>> From "ps aufx" I see the rsync command being run (partially censored): >>>> /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded >>>> --rsh=/usr/bin/ssh --link-dest=/mnt/md0/rsnapshot/daily.1/.../ root@...:/mnt/md0/.../ >>>> /mnt/md0/rsnapshot/daily.0/.../ >>>> >>>> So I have killed that process and I'm running a modified version of it >>>> myself: >>>> nohup /usr/bin/rsync -av --delete --numeric-ids --relative >>>> --delete-excluded --rsh=/usr/bin/ssh >>>> --link-dest=/mnt/md0/rsnapshot/weekly.1/.../ root@...:/mnt/md0/.../ >>>> /mnt/md0/rsnapshot/daily.0/.../ & >>>> >>>> I'm hoping that once complete, tomorrow's backup will run fine, >>>> although I have about a dozen other backups I guess I'll also have to run >>>> manually first. >>>> >>>> If there's a "correct" way to manage this I'm definitely interested. >>>> >>>> Mark >>>> >>>> >>>> On Wed, 22 Nov 2023 at 12:29, Mark Rogers <ma...@mo...> >>>> wrote: >>>> >>>>> A few days ago my rsnapshot server's disks filled. I didn't spot it >>>>> immediately and so several snapshots failed to complete. >>>>> >>>>> I stopped the server running and replaced the disks with larger ones, >>>>> rsync'ind (with -H) the old backups to the new disks. That process has >>>>> taken days but has nearly completed. >>>>> >>>>> Where that will leave me is with some weekly.x backups which I think >>>>> are good, and some partial daily.x backups. >>>>> >>>>> What I don't want to do is end up with duplicate files which aren't >>>>> hard-linked. What steps do I need to take before or when I re-enable >>>>> rsnapshot? >>>>> >>>>> -- >>>>> Mark Rogers >>>>> >>>> >>>> >>>> -- >>>> Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 >>>> Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER >>>> _______________________________________________ >>>> rsnapshot-discuss mailing list >>>> rsn...@li... >>>> https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss >>>> >>> >> >> -- >> Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 >> Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER >> > -- Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER |
From: Scott H. <sc...@do...> - 2023-11-30 05:53:43
|
Yes, you'll need some manual intervention to go from not-sync_first to sync_first. You could either simply move your last backup to .sync, or use cp -al (on Linux) to replicate the last backup into the .sync slot. One way to get comfortable with things is to make a copy of your rsnapshot.conf, adjust it to backup a subset of your target (maybe /var/log or similar so there are changes always going on), adjust the log file, and then run some command-line tests with that config to see what it does. That is also a good way to do a dry run of converting your main config. The log file should show the commands needed to replicate the .sync dir to daily.0, so to populate your .sync dir you can just use that command in the other direction. You are correct with your cron entry being the 'rsnapshot sync && rsnapshot daily', assuming your first increment is daily. You can run 'rsnapshot sync' as many times as you want, in fact I run one an hour before my normal snapshot to prime the pump, as it were. My approach to getting the weekly/monthly/quarterly calls to not step on each other is to run quarterly at 3am, monthly at 3:20am, weekly at 3:40am and daily at 4am. Two reasons. First, this prevents running a ton of delete passes happening all at once, if weekly steals daily.6, then daily doesn't need to delete daily.6 on that day. Second, since most of the work for these increments is in the delete, I can more easily predict the runtime. I also set use_lazy_deletes, so deletes happen outside the lock. This is a mixed advantage, though, because the I/O of the delete will still interfere with the I/O of a later rsnapshot run, which can make that run slower. PROBABLY fine if you stack quarterly/monthly/weekly/daily, since your daily can probably run for some extra time safely. -scott PS: I actually run my rsnapshot sync using the attached script wrapping rsnapshot. What I noticed is that I'd periodically get a warning from rsnapshot which would cause it to skip a backup. Often the warning would be about a missing file or something (an rsync warning when the filesystem has changed while rsync was running), and there wasn't even anything to fix. The problem was that this warning error prevented the && from firing, so it wouldn't take my hourly backup. I think with the old copy-then-rsync approach, the warning would not prevent the new directory from being created and populated, so I think this more closely matches the old behaviour. On Mon, Nov 27, 2023 at 11:24 AM Mark Rogers <ma...@mo...> wrote: > I've just had a look at sync_first but I'm not really sure what it does if > enabled now that I have backups in place. As I don't already have a .sync > folder, do I need to build one myself from whatever the latest data set is > before I run rsnapshot sync? > > What I definitely do not want to do is initiate a new backup from scratch > - I do not have the disk space or the time for that! > > Based on my reading today, I think that I should start by either rsync'ing > or moving daily.0 to .sync. Am I right that when I then run > rsnapshot sync && rsnapshot daily > .. it first updates .sync (the sync call) then rotates all the existing > daily.X backups and copied .sync to create a new daily.0? > > As far as the weekly/monthly calls are concerned, how do I ensure they > don't run until the daily backup completes - is it just a matter of setting > the time late enough in crontab that I feel safe, and what happens if I get > it wrong? 
> > On Mon, 27 Nov 2023 at 16:27, Scott Hess <sc...@do...> wrote: > >> You should use sync_first, which does the sync to a .sync directory, and >> then when you make the daily it copies that from the .sync directory. That >> means that .sync always maintains the hardlink chain with the older >> snapshots, so you can safely delete any broken daily or weekly backups. >> >> Of course, due to disk full the .sync snapshot may not be COMPLETE, but >> what it does have will maintain the hardlinks with the older backups. You >> won't get the broken chain. >> >> --- >> >> As for how to deal with your current situation, I find that attempting to >> figure out how broken a backup is is often a game with no winning outcomes, >> there are just too many things that you might miss, and you don't want to >> find out later that you have an incomplete backup. I would add a "corrupt" >> directory, move all of the questionable backup dirs to there (that will >> keep the hardlinks you have), and put in a README file describing what you >> did and a target date to delete them (for when you forget and come back in >> 9 months). Then I would carefully move the most-recent complete backup to >> daily.0, run a manual rsnapshot daily, then carefully move the daily.1 back >> to where the most-recent complete backup was. If the above removes your >> weekly.0, you can carefully rename the dirs down a notch. >> >> With all of that, consider doing a test run in some manufactured >> directory to make sure that the timestamps and the like are all >> maintained. Or do a --max-depth 1 (or 2) rsync replica of the snapshot >> dir, for reference timestamps. Moving a snapshot dir INTO another snapshot >> dir always feels terrible, because it isn't always obvious when it happens, >> and it updates timestamps. I hate that. >> >> Another option is to use cp -al to fill in the gaps. That works the same >> as --link-dest (on Unix platforms), and is generally less twitchy to get >> right. >> >> --- >> >> And a completely other option is to create a "corrupt" subdir in your >> snapshot root, move ALL of the existing backups in there, and cp -al (or >> move) your best one to daily.0 in your snapshot root. Then let things run >> along. In three months you might not care about anything in the >> "corrupt" subdir, but if you had an emergency in the meanwhile, it's all >> there as insurance. >> >> -scott >> >> >> On Mon, Nov 27, 2023 at 1:41 AM Mark Rogers <ma...@mo...> >> wrote: >> >>> OK, to partially answer my own question: >>> >>> New server up and running. I have partial daily.x snapshots; the most >>> recent "full" backup is weekly.1. >>> >>> rsnapshot is running rsync with hardlinks from daily.0, and therefore >>> the vast majority of the files it is seeing at the source are "new" files >>> and are being copied from scratch and not linked. 
>>> >>> From "ps aufx" I see the rsync command being run (partially censored): >>> /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded >>> --rsh=/usr/bin/ssh --link-dest=/mnt/md0/rsnapshot/daily.1/.../ root@...:/mnt/md0/.../ >>> /mnt/md0/rsnapshot/daily.0/.../ >>> >>> So I have killed that process and I'm running a modified version of it >>> myself: >>> nohup /usr/bin/rsync -av --delete --numeric-ids --relative >>> --delete-excluded --rsh=/usr/bin/ssh >>> --link-dest=/mnt/md0/rsnapshot/weekly.1/.../ root@...:/mnt/md0/.../ >>> /mnt/md0/rsnapshot/daily.0/.../ & >>> >>> I'm hoping that once complete, tomorrow's backup will run fine, although >>> I have about a dozen other backups I guess I'll also have to run manually >>> first. >>> >>> If there's a "correct" way to manage this I'm definitely interested. >>> >>> Mark >>> >>> >>> On Wed, 22 Nov 2023 at 12:29, Mark Rogers <ma...@mo...> >>> wrote: >>> >>>> A few days ago my rsnapshot server's disks filled. I didn't spot it >>>> immediately and so several snapshots failed to complete. >>>> >>>> I stopped the server running and replaced the disks with larger ones, >>>> rsync'ind (with -H) the old backups to the new disks. That process has >>>> taken days but has nearly completed. >>>> >>>> Where that will leave me is with some weekly.x backups which I think >>>> are good, and some partial daily.x backups. >>>> >>>> What I don't want to do is end up with duplicate files which aren't >>>> hard-linked. What steps do I need to take before or when I re-enable >>>> rsnapshot? >>>> >>>> -- >>>> Mark Rogers >>>> >>> >>> >>> -- >>> Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 >>> Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER >>> _______________________________________________ >>> rsnapshot-discuss mailing list >>> rsn...@li... >>> https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss >>> >> > > -- > Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 > Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER > |
From: Mark R. <ma...@mo...> - 2023-11-27 19:32:10
|
I've just had a look at sync_first but I'm not really sure what it does if enabled now that I have backups in place. As I don't already have a .sync folder, do I need to build one myself from whatever the latest data set is before I run rsnapshot sync? What I definitely do not want to do is initiate a new backup from scratch - I do not have the disk space or the time for that! Based on my reading today, I think that I should start by either rsync'ing or moving daily.0 to .sync. Am I right that when I then run rsnapshot sync && rsnapshot daily .. it first updates .sync (the sync call) then rotates all the existing daily.X backups and copied .sync to create a new daily.0? As far as the weekly/monthly calls are concerned, how do I ensure they don't run until the daily backup completes - is it just a matter of setting the time late enough in crontab that I feel safe, and what happens if I get it wrong? On Mon, 27 Nov 2023 at 16:27, Scott Hess <sc...@do...> wrote: > You should use sync_first, which does the sync to a .sync directory, and > then when you make the daily it copies that from the .sync directory. That > means that .sync always maintains the hardlink chain with the older > snapshots, so you can safely delete any broken daily or weekly backups. > > Of course, due to disk full the .sync snapshot may not be COMPLETE, but > what it does have will maintain the hardlinks with the older backups. You > won't get the broken chain. > > --- > > As for how to deal with your current situation, I find that attempting to > figure out how broken a backup is is often a game with no winning outcomes, > there are just too many things that you might miss, and you don't want to > find out later that you have an incomplete backup. I would add a "corrupt" > directory, move all of the questionable backup dirs to there (that will > keep the hardlinks you have), and put in a README file describing what you > did and a target date to delete them (for when you forget and come back in > 9 months). Then I would carefully move the most-recent complete backup to > daily.0, run a manual rsnapshot daily, then carefully move the daily.1 back > to where the most-recent complete backup was. If the above removes your > weekly.0, you can carefully rename the dirs down a notch. > > With all of that, consider doing a test run in some manufactured directory > to make sure that the timestamps and the like are all maintained. Or do a > --max-depth 1 (or 2) rsync replica of the snapshot dir, for reference > timestamps. Moving a snapshot dir INTO another snapshot dir always feels > terrible, because it isn't always obvious when it happens, and it updates > timestamps. I hate that. > > Another option is to use cp -al to fill in the gaps. That works the same > as --link-dest (on Unix platforms), and is generally less twitchy to get > right. > > --- > > And a completely other option is to create a "corrupt" subdir in your > snapshot root, move ALL of the existing backups in there, and cp -al (or > move) your best one to daily.0 in your snapshot root. Then let things run > along. In three months you might not care about anything in the > "corrupt" subdir, but if you had an emergency in the meanwhile, it's all > there as insurance. > > -scott > > > On Mon, Nov 27, 2023 at 1:41 AM Mark Rogers <ma...@mo...> > wrote: > >> OK, to partially answer my own question: >> >> New server up and running. I have partial daily.x snapshots; the most >> recent "full" backup is weekly.1. 
>> >> rsnapshot is running rsync with hardlinks from daily.0, and therefore the >> vast majority of the files it is seeing at the source are "new" files and >> are being copied from scratch and not linked. >> >> From "ps aufx" I see the rsync command being run (partially censored): >> /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded >> --rsh=/usr/bin/ssh --link-dest=/mnt/md0/rsnapshot/daily.1/.../ root@...:/mnt/md0/.../ >> /mnt/md0/rsnapshot/daily.0/.../ >> >> So I have killed that process and I'm running a modified version of it >> myself: >> nohup /usr/bin/rsync -av --delete --numeric-ids --relative >> --delete-excluded --rsh=/usr/bin/ssh >> --link-dest=/mnt/md0/rsnapshot/weekly.1/.../ root@...:/mnt/md0/.../ >> /mnt/md0/rsnapshot/daily.0/.../ & >> >> I'm hoping that once complete, tomorrow's backup will run fine, although >> I have about a dozen other backups I guess I'll also have to run manually >> first. >> >> If there's a "correct" way to manage this I'm definitely interested. >> >> Mark >> >> >> On Wed, 22 Nov 2023 at 12:29, Mark Rogers <ma...@mo...> >> wrote: >> >>> A few days ago my rsnapshot server's disks filled. I didn't spot it >>> immediately and so several snapshots failed to complete. >>> >>> I stopped the server running and replaced the disks with larger ones, >>> rsync'ind (with -H) the old backups to the new disks. That process has >>> taken days but has nearly completed. >>> >>> Where that will leave me is with some weekly.x backups which I think are >>> good, and some partial daily.x backups. >>> >>> What I don't want to do is end up with duplicate files which aren't >>> hard-linked. What steps do I need to take before or when I re-enable >>> rsnapshot? >>> >>> -- >>> Mark Rogers >>> >> >> >> -- >> Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 >> Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER >> _______________________________________________ >> rsnapshot-discuss mailing list >> rsn...@li... >> https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss >> > -- Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER |
From: Scott H. <sc...@do...> - 2023-11-27 16:27:40
|
You should use sync_first, which does the sync to a .sync directory, and then when you make the daily it copies that from the .sync directory. That means that .sync always maintains the hardlink chain with the older snapshots, so you can safely delete any broken daily or weekly backups. Of course, due to disk full the .sync snapshot may not be COMPLETE, but what it does have will maintain the hardlinks with the older backups. You won't get the broken chain. --- As for how to deal with your current situation, I find that attempting to figure out how broken a backup is is often a game with no winning outcomes, there are just too many things that you might miss, and you don't want to find out later that you have an incomplete backup. I would add a "corrupt" directory, move all of the questionable backup dirs to there (that will keep the hardlinks you have), and put in a README file describing what you did and a target date to delete them (for when you forget and come back in 9 months). Then I would carefully move the most-recent complete backup to daily.0, run a manual rsnapshot daily, then carefully move the daily.1 back to where the most-recent complete backup was. If the above removes your weekly.0, you can carefully rename the dirs down a notch. With all of that, consider doing a test run in some manufactured directory to make sure that the timestamps and the like are all maintained. Or do a --max-depth 1 (or 2) rsync replica of the snapshot dir, for reference timestamps. Moving a snapshot dir INTO another snapshot dir always feels terrible, because it isn't always obvious when it happens, and it updates timestamps. I hate that. Another option is to use cp -al to fill in the gaps. That works the same as --link-dest (on Unix platforms), and is generally less twitchy to get right. --- And a completely other option is to create a "corrupt" subdir in your snapshot root, move ALL of the existing backups in there, and cp -al (or move) your best one to daily.0 in your snapshot root. Then let things run along. In three months you might not care about anything in the "corrupt" subdir, but if you had an emergency in the meanwhile, it's all there as insurance. -scott On Mon, Nov 27, 2023 at 1:41 AM Mark Rogers <ma...@mo...> wrote: > OK, to partially answer my own question: > > New server up and running. I have partial daily.x snapshots; the most > recent "full" backup is weekly.1. > > rsnapshot is running rsync with hardlinks from daily.0, and therefore the > vast majority of the files it is seeing at the source are "new" files and > are being copied from scratch and not linked. > > From "ps aufx" I see the rsync command being run (partially censored): > /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded > --rsh=/usr/bin/ssh --link-dest=/mnt/md0/rsnapshot/daily.1/.../ root@...:/mnt/md0/.../ > /mnt/md0/rsnapshot/daily.0/.../ > > So I have killed that process and I'm running a modified version of it > myself: > nohup /usr/bin/rsync -av --delete --numeric-ids --relative > --delete-excluded --rsh=/usr/bin/ssh > --link-dest=/mnt/md0/rsnapshot/weekly.1/.../ root@...:/mnt/md0/.../ > /mnt/md0/rsnapshot/daily.0/.../ & > > I'm hoping that once complete, tomorrow's backup will run fine, although I > have about a dozen other backups I guess I'll also have to run manually > first. > > If there's a "correct" way to manage this I'm definitely interested. > > Mark > > > On Wed, 22 Nov 2023 at 12:29, Mark Rogers <ma...@mo...> > wrote: > >> A few days ago my rsnapshot server's disks filled. 
I didn't spot it >> immediately and so several snapshots failed to complete. >> >> I stopped the server running and replaced the disks with larger ones, >> rsync'ind (with -H) the old backups to the new disks. That process has >> taken days but has nearly completed. >> >> Where that will leave me is with some weekly.x backups which I think are >> good, and some partial daily.x backups. >> >> What I don't want to do is end up with duplicate files which aren't >> hard-linked. What steps do I need to take before or when I re-enable >> rsnapshot? >> >> -- >> Mark Rogers >> > > > -- > Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 > Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER > _______________________________________________ > rsnapshot-discuss mailing list > rsn...@li... > https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss > |
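A rough sketch of the last option above ("park everything in a corrupt subdir and reseed"), using the snapshot root and the weekly.1 "last known good" snapshot mentioned earlier in this thread; verify both before running anything:

    cd /mnt/md0/rsnapshot
    mkdir corrupt
    mv daily.* weekly.* corrupt/        # keep the questionable snapshots, but out of rotation
    echo "Parked after the disk-full incident; these snapshots may be incomplete." > corrupt/README
    echo "Review and delete after a few months if nothing has been needed from them." >> corrupt/README
    cp -al corrupt/weekly.1 daily.0     # reseed the rotation from the best complete snapshot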
From: Mark R. <ma...@mo...> - 2023-11-27 09:41:13
|
OK, to partially answer my own question: New server up and running. I have partial daily.x snapshots; the most recent "full" backup is weekly.1. rsnapshot is running rsync with hardlinks from daily.0, and therefore the vast majority of the files it is seeing at the source are "new" files and are being copied from scratch and not linked. >From "ps aufx" I see the rsync command being run (partially censored): /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded --rsh=/usr/bin/ssh --link-dest=/mnt/md0/rsnapshot/daily.1/.../ root@...:/mnt/md0/.../ /mnt/md0/rsnapshot/daily.0/.../ So I have killed that process and I'm running a modified version of it myself: nohup /usr/bin/rsync -av --delete --numeric-ids --relative --delete-excluded --rsh=/usr/bin/ssh --link-dest=/mnt/md0/rsnapshot/weekly.1/.../ root@...:/mnt/md0/.../ /mnt/md0/rsnapshot/daily.0/.../ & I'm hoping that once complete, tomorrow's backup will run fine, although I have about a dozen other backups I guess I'll also have to run manually first. If there's a "correct" way to manage this I'm definitely interested. Mark On Wed, 22 Nov 2023 at 12:29, Mark Rogers <ma...@mo...> wrote: > A few days ago my rsnapshot server's disks filled. I didn't spot it > immediately and so several snapshots failed to complete. > > I stopped the server running and replaced the disks with larger ones, > rsync'ind (with -H) the old backups to the new disks. That process has > taken days but has nearly completed. > > Where that will leave me is with some weekly.x backups which I think are > good, and some partial daily.x backups. > > What I don't want to do is end up with duplicate files which aren't > hard-linked. What steps do I need to take before or when I re-enable > rsnapshot? > > -- > Mark Rogers > -- Mark Rogers // More Solutions Ltd (Peterborough Office) // 0344 251 1450 Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER |
From: Mark R. <ma...@mo...> - 2023-11-22 12:57:06
|
A few days ago my rsnapshot server's disks filled. I didn't spot it immediately, so several snapshots failed to complete.

I stopped the server and replaced the disks with larger ones, rsync'ing (with -H) the old backups to the new disks. That process has taken days but is nearly complete.

That will leave me with some weekly.x backups which I think are good, and some partial daily.x backups.

What I don't want is to end up with duplicate files which aren't hard-linked. What steps do I need to take before or when I re-enable rsnapshot?

-- Mark Rogers
From: Olivier M. <ol...@om...> - 2023-06-20 09:54:51
|
Hi,

Thank you very much for your answer, you've been very clear. It should be added to the official documentation ;).

Normally, everything will be fine on July 1st (unless I have wrongly anticipated the future daily.30..).

Have a nice day!

Best regards,
Olivier MARS |
From: Scott H. <sc...@do...> - 2023-06-17 22:37:55
|
Sorry for the late reply.

First - you almost certainly have not LOST backup data, you have just lost the hardlinks between the new location of the data and the old location of the data. The old data will continue to represent the state as of when that snapshot was taken, the new data will continue to represent the state as of when that snapshot was taken. If this is fine, then the easiest and safest option is to simply leave things alone, and you can just stop here and not read what follows.

---

My own best practice is to "move" the directories by running "cp -al <src> <dst>", wait until the next backup has happened, then remove the old directory. Doesn't always happen that way :-).

My broad approach to fixing the in-place stuff would be something like (do not run any of this until you finish reading my email):

# Scratch area on the same volume as the backups.
TMP=/.snapshots/tmp

# Make a copy of the old data with hardlinks.
sudo cp -al /.snapshots/monthly.0/<target>/var/application ${TMP}/application

# Layer over any data changed in the current snapshot. If you don't use sync_first,
# use hourly.0 here.
sudo rsync -aHS /.snapshots/.sync/<target>/var/www/application/ ${TMP}/application/

# Delete stuff INSIDE the directory, to avoid messing up timestamps on parent directory.
sudo rm -rf /.snapshots/.sync/<target>/var/www/application/*

# Hardlink everything back to where it belongs.
sudo cp -al ${TMP}/application/* /.snapshots/.sync/<target>/var/www/application/

# Clean up top-level timestamp.
sudo rsync -aHS ${TMP}/application/ /.snapshots/.sync/<target>/var/www/application/

You can also do some interesting things using the --link-dest parameter to rsync, such as stitching together a unified directory which hardlinks old data from older snapshots and new data from newer snapshots. IMHO, that can be hard to get right for a one-off like this, and is only worthwhile if you have a hard space constraint to work around.

You'll need to figure out which daily snapshot will be promoted to the next monthly (the daily.30 that will be present on July 1, probably). If that daily was taken before your change, then you can just let it be. If it is from after your change, you might want to fix it in a process similar to the above, then run through the process again using that daily as your reference point.

In my experience, fixing hourly and daily snapshots is error-prone enough to be risky to undertake. I usually only bother to fix things that will eventually get promoted to long-term storage, which generally reduces the problem to one or two points that need fixed.

-scott

On Mon, Jun 5, 2023 at 7:23 AM Olivier MARS via rsnapshot-discuss <rsn...@li...> wrote:
> Hi,
>
> I renamed a directory in my source folder, many days ago, but I forgot to
> move its twins in the rsnapshots directories.
>
> /var/application/ to /var/www/application
>
> Today, I would like to try to clean this situation, but I fear to lost
> already backed up datas.
>
> Rsnapshot's retain :
>
> - retain hourly 24
> - retain daily 31
> - retain monthly 12
>
> The original directory, and the new directory are present (with all their
> datas) on :
>
> - monthly.0
> - daily.30 -> daily.3
>
> Since daily.2, the original directory is present but empty.
>
> Best regards,
> Olivier
> _______________________________________________
> rsnapshot-discuss mailing list
> rsn...@li...
> https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss
> |
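For the --link-dest variant mentioned above, a rough sketch of the stitching idea might look like the following. This is only an illustration built from the thread's example paths (<target>, /var/application, /var/www/application are placeholders), not a tested recipe; dry-running rsync with -n first would be prudent:

# Pre-rename location (old snapshot) and post-rename location (current sync area).
OLD=/.snapshots/monthly.0/<target>/var/application
NEW=/.snapshots/.sync/<target>/var/www/application
OUT=/.snapshots/tmp/application          # scratch on the same volume

sudo mkdir -p "$OUT"

# Copy the current contents, but for any file rsync considers unchanged
# (same size and mtime) relative to the old snapshot, create a hard link to
# the old snapshot's inode instead of storing a second copy.
sudo rsync -aHS --link-dest="$OLD" "$NEW/" "$OUT/"

# Once the result looks right, move it into place as in the step-by-step
# procedure above (rm the contents of $NEW, then cp -al the contents of $OUT back).

The end state is the same as with the cp -al / rsync sequence: files unchanged since the rename become extra links to the monthly.0 inodes rather than fresh copies.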
From: Olivier M. <ol...@om...> - 2023-06-05 14:21:51
|
Hi,

I renamed a directory in my source folder many days ago, but I forgot to move its twins in the rsnapshot directories:

/var/application/ to /var/www/application

Today I would like to clean up this situation, but I am afraid of losing already backed-up data.

Rsnapshot's retain settings:

* retain hourly 24
* retain daily 31
* retain monthly 12

The original directory and the new directory are both present (with all their data) in:

* monthly.0
* daily.30 -> daily.3

Since daily.2, the original directory is present but empty.

Best regards,
Olivier |
From: Scott H. <sc...@do...> - 2023-03-08 05:04:14
|
On Tue, Mar 7, 2023 at 8:45 PM Bilinmek Istemiyor <ben...@gm...> wrote:
> What is general rule to prune. I know that snapshots are just hardlinks
> unless they are changed. So which is the master. From which one I should
> start deleting.

They are all the master, whichever copy you delete last is when the data will actually be deleted.

I've attached a small Perl script I use to do targeted clean-ups of this sort. It works by using rsync to make a zero-depth copy of the top level in /tmp. Then it uses rsync again to replicate that empty directory back over the original. I did it this way because I wanted to retain the overall structure (lastmod time, etc), while deleting content within that.

-scott |
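The Perl script itself is an attachment and is not reproduced in the archive; a rough shell rendering of the zero-depth-copy trick described above, with purely hypothetical paths, might look like this:

# Placeholder path of a bloated directory inside one snapshot; the real script
# apparently loops over all snapshots.
dir=/.snapshots/daily.5/localhost/backups/borg-repo
scratch=$(mktemp -d)

# 1. Zero-depth copy: recreate just the directory itself (owner, mode, mtime)
#    in a scratch location, excluding all of its contents.
sudo rsync -a --exclude='*' "$dir/" "$scratch/empty/"

# 2. Replicate the empty copy back over the original with --delete: the
#    contents are removed, while the directory's own metadata is restored
#    from the scratch copy.
sudo rsync -a --delete "$scratch/empty/" "$dir/"

sudo rm -rf "$scratch"

The point of going through the scratch copy is that the second rsync restores the directory's own attributes while --delete clears everything inside it; files that are still hard-linked from other snapshots keep their data until the last link is gone.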
From: Bilinmek I. <ben...@gm...> - 2023-03-08 04:43:55
|
Hello,

I have been using rsnapshot for about 4 years now as a redundant, second-level backup system alongside the deduplicating backup tool Borg, just in case its repositories become corrupt, which they often do. I take daily rsnapshots of critical directories of the file system, including the backups of other backup systems.

Yes, I know, I still do not use zfs or btrfs, just the plain old ext4 filesystem. I am too lazy to reinstall a new system, since there is a lot of undocumented or very well-tuned configuration in it. Even if I did, I would probably keep using rsnapshot, since it gives me the relief that, on a flat file system, I can access good old-fashioned files without compression, deduplication, or checksums, and without mounting, sending, or receiving of any sort. I know that I have a full copy of the file system.

I have one problem currently. I forgot to compact the Borg repositories; unfortunately they have gotten huge and they fatten my rsnapshots. I have since compacted the backups and done some cleaning on the server, but rsnapshot keeps all of the old data. I would like to prune the rsnapshot backups.

What is the general rule for pruning? I know that snapshots are just hardlinks unless they are changed. So which one is the master? From which one should I start deleting?

Thanks in advance. |
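A small du experiment makes the "which is the master" question concrete: disk blocks are only freed when the last hard link to an inode disappears, and GNU du charges a hardlinked file to the first directory it sees it in within a single run. A minimal sketch, assuming the default snapshot root of /.snapshots:

# Shared data is counted once, against the first snapshot in the list;
# the per-snapshot numbers therefore show roughly the incremental cost of each.
du -shc /.snapshots/daily.*

# The same snapshot on its own appears much larger, because within a single
# run there is nothing earlier for its files to be shared with.
du -sh /.snapshots/daily.3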