From: Scott H. <sc...@do...> - 2024-10-23 17:46:12

It's not really an option, at least not directly. IMHO, the best way to understand rsnapshot is not to think of it as a backup system that someone designed and then implemented, but rather as a system where someone observed the effects of a particular approach and then structured a backup system around it. The tip of the first interval is handled specially by doing a cp -al mirror followed by rsync to update, but from there every interval is processed by stealing from the previous interval.

It sounds like you might prefer to have independent streams, each deriving from the daily backup taken on the first day of the interval. You could do this with an independent script, which is probably the direction I would go just to be safe (you don't want your backup scripts to be super hacky!). But I just thought of a hacky way to do it using multiple configs.

In your main config, make sure sync_first is turned on (I believe it's the default). Then make a new config for weekly and monthly which have those as the only interval, and which also have sync_first enabled. Then make sure your crontab is something like:

  15 2 * * * /usr/bin/rsnapshot sync && /usr/bin/rsnapshot daily
  0 6 * * 0 /usr/bin/rsnapshot -c /etc/rsnapshot.weekly.conf weekly
  0 9 1 * * /usr/bin/rsnapshot -c /etc/rsnapshot.monthly.conf monthly

I dunno, you might need to tweak the times; you want to make sure the sync pass is complete before the weekly and monthly passes run. You probably want to run it in the off hours, but you have plenty of time. You could even run them BEFORE the sync pass by setting them to run early on the following day. The gist of this is:

- At 2:15am, run rsnapshot sync to update .sync, then rsnapshot daily to rotate daily and cp -al .sync to daily.0
- At 6am on Sunday, run rsnapshot weekly to rotate weekly and cp -al .sync to weekly.0
- At 9am on the 1st, run rsnapshot monthly to rotate monthly and cp -al .sync to monthly.0

Note that the config files for weekly and monthly don't have to have host definitions or anything, because they will just copy the .sync dir. You might need to experiment to get it right - note that you can use -c to experiment with a backup of a small directory structure, so you can run things over and over again at the command line to see how it works. You can do yearly as either another independent stream, or have it steal the oldest monthly before the monthly runs.

Another option would be to write a wrapper script to run from cron, which would use date tests to run the follow-up passes in order after the daily. That would remove the timing considerations. Yet another option, which wouldn't need sync_first, would be to write a wrapper script which does a manual cp -al from daily.0 to a fake interval, then have the weekly/monthly configs steal that interval.

-scott

On Wed, Oct 23, 2024 at 10:14 AM Chris Miller <cj...@tr...> wrote:
> Hi Folks,
>
> Suppose I have four backup intervals:
>
> - Daily 7
> - Weekly 16
> - Monthly 12
> - Yearly 20
>
> The problem I see is that when it comes time to promote a "Weekly" to
> "Monthly", I am going to have a three month old backup. Is there a way to
> promote the newest generation of a given backup interval, or is that
> already the procedure?
>
> Thanks for the help,
> --
> Chris.
>
> V:916.799.9461
> F:916.974.0428
>
> A: Because we read from top to bottom, left to right.
> Q: Why should I start my reply below the quoted text?
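[Editorial note: a minimal sketch of what the secondary weekly config in Scott's multiple-config approach might look like. The path, retention count, and snapshot_root are hypothetical, and rsnapshot requires tabs (not spaces) between a parameter and its values.]

  # /etc/rsnapshot.weekly.conf -- hypothetical secondary config
  config_version	1.2

  # Same snapshot_root as the main config, so this pass sees the same .sync dir
  snapshot_root	/backup/snapshots/

  # With sync_first on and weekly as the only (hence lowest) interval here,
  # "rsnapshot weekly" rotates weekly.N and then cp -al's .sync into weekly.0
  sync_first	1

  # The only interval defined in this file
  retain	weekly	16

  # Per Scott's description above, no backup points should be needed here,
  # since this pass only copies the existing .sync; if your rsnapshot version
  # complains, reuse the backup lines from the main config.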
From: Chris M. <cj...@tr...> - 2024-10-23 17:13:31

Hi Folks,

Suppose I have four backup intervals:

* Daily 7
* Weekly 16
* Monthly 12
* Yearly 20

The problem I see is that when it comes time to promote a "Weekly" to "Monthly", I am going to have a three month old backup. Is there a way to promote the newest generation of a given backup interval, or is that already the procedure?

Thanks for the help,
--
Chris.

V:916.799.9461
F:916.974.0428

A: Because we read from top to bottom, left to right.
Q: Why should I start my reply below the quoted text?
From: David K. <dj...@ke...> - 2024-07-28 01:07:40

On Fri, Jul 26, 2024 at 01:13:12PM -0400, Thierry Lavallee via rsnapshot-discuss wrote:
> Thanks for your quick return!
>
> So, yes, I sync_first. I have a .sync
>
> I cannot check much as navigating this 100% full disk is impossible
>
> What I want to do is delete the old monthly.11 directory without
> breaking everything, then run the backups from there.

You can go ahead and just delete monthly.11 (eg rm -rf monthly.11) without breaking anything (except your ability to restore 11 month old files that didn't exist 10 months ago, and your ability to see what existed 11 months ago - if you care about that, save ls -lR or similar output before you delete).

I encourage you to think of monitoring disk space as a general systems administration issue rather than one specific to rsnapshot. You might want to get warnings when file systems get to 90% full, so you can take action before a file system gets too close to 100%.

--
David Keegel <dj...@ke...>
From: Thierry L. <th...@8p...> - 2024-07-26 17:13:20

Thanks for your quick return!

So, yes, I sync_first. I have a .sync

I cannot check much as navigating this 100% full disk is impossible.

What I want to do is delete the old monthly.11 directory without breaking everything, then run the backups from there.
From: Scott H. <sc...@do...> - 2024-07-26 14:24:55

If you had a failure because of a full disk, check that your recent backups are complete. Also, if you don't run sync_first, it can mess up your hard link chain.

For monitoring purposes, I'd look for something generic. Something like this:

https://www.cyberciti.biz/tips/shell-script-to-watch-the-disk-space.html

in a cron job, or an rsnapshot post exec, or something like that.

On Thu, Jul 25, 2024 at 3:19 PM Thierry Lavallee via rsnapshot-discuss <rsn...@li...> wrote:
> Hi, I just saw that my destination disk is full.
>
> 1- What is the procedure to *remove the oldest monthly.11 directory* without breaking everything?
>
> 2- How can I ask Rsnapshot to never go over 98% - or alert on reaching some space usage?
>
> Thanks!
From: Pierre M. <pie...@te...> - 2024-07-26 06:09:46

Hi,

> On 25 Jul 2024, at 21:15, Thierry Lavallee via rsnapshot-discuss <rsn...@li...> wrote:
>
> Hi, I just saw that my destination disk is full.
>
> 1- What is the procedure to remove the oldest monthly.11 directory without breaking everything?

Yes. It is the very principle of the inode link which, by its mechanism, guarantees that as long as a link exists, the file exists. The only things that will be deleted are the files that have disappeared since.

> 2- How can I ask Rsnapshot to never go over 98% - or alert on reaching some space usage?

Not directly; yes, with a script. I personally use a script that tests this and sends a mail when the limit is reached. This script can be called thanks to the "cmd_preexec" parameter.

> Thanks!

--
Pierre Malard

« on ne risque rien à livrer le secret professionnel car on ne livre pas la façon de s'en servir »
Jean Cocteau - « Le secret professionnel » - 1922
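[Editorial note: a rough sketch of the kind of cmd_preexec check Pierre describes. The paths, threshold, and mail address are hypothetical, and you should verify how your rsnapshot version treats a failing cmd_preexec before relying on it to stop a run.]

  #!/bin/sh
  # Hypothetical cmd_preexec helper: mail a warning (and exit non-zero)
  # when the snapshot filesystem passes a usage threshold.
  SNAPROOT=/backup/snapshots     # assumed snapshot_root
  THRESHOLD=90                   # percent full that triggers the alert
  ADMIN=root@localhost           # hypothetical alert address

  # df -P prints one POSIX-format line per filesystem; column 5 is use%.
  USED=$(df -P "$SNAPROOT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

  if [ "$USED" -ge "$THRESHOLD" ]; then
      echo "Snapshot filesystem at ${USED}% (threshold ${THRESHOLD}%)" \
          | mail -s "rsnapshot disk space warning" "$ADMIN"
      exit 1   # non-zero so the problem is visible to rsnapshot/cron
  fi
  exit 0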
From: Thierry L. <th...@8p...> - 2024-07-25 20:19:10

Hi, I just saw that my destination disk is full.

1- What is the procedure to *remove the oldest monthly.11 directory* without breaking everything?

2- How can I ask Rsnapshot to never go over 98% - or alert on reaching some space usage?

Thanks!
From: David C. <da...@ca...> - 2024-07-22 21:11:17

On 21/07/2024 16:41, Tapani Tarvainen wrote:
> I have a couple of clients with bad enough network connection
> that I need to set rsync_numtries = 3.
>
> While that works, there's no easy way to see from logs how many
> tries have been necessary.
>
> So, I would like to propose adding code for logging retries.
>
> Here's a simple patch that does the trick: ...

That seems sensible, thanks! I've applied your patch in a branch, and there's a pull request for maintainers to look at here:

https://github.com/rsnapshot/rsnapshot/pull/348

--
David Cantrell
From: Scott H. <sc...@do...> - 2024-07-21 18:16:12

I'm not in the loop for checkins, but I'd probably just override the cmd_rsync setting with a shell script and handle the retries and logging directly.

-scott

On Sun, Jul 21, 2024 at 8:57 AM Tapani Tarvainen <rsn...@ta...> wrote:
> I have a couple of clients with bad enough network connection
> that I need to set rsync_numtries = 3.
>
> While that works, there's no easy way to see from logs how many
> tries have been necessary.
>
> So, I would like to propose adding code for logging retries.
>
> Here's a simple patch that does the trick:
>
> *** /usr/bin/rsnapshot 2023-08-22 19:49:43.000000000 +0300
> --- rsnapshot 2024-07-21 17:01:42.159732440 +0300
> ***************
> *** 3897,3902 ****
> --- 3897,3906 ----
>       if (0 == $test) {
>           while ($tryCount < $rsync_numtries && $result != 0) {
>
> +             if ($tryCount > 0) {
> +                 print_msg("retrying, tryCount=".$tryCount, 3);
> +             }
> +
>               # open rsync and capture STDOUT and STDERR
>               # the 3rd argument is undefined, that STDERR gets mashed into STDOUT and we
>               # don't have to care about getting both STREAMS together without mixing up time
>
> That was created with rsnapshot 1.3.1 in Ubuntu Noble, but the relevant
> code looks identical in the latest version in Github. My apologies for
> not being well-versed with git so that I could create a pull request.
>
> --
> Tapani Tarvainen
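[Editorial note: a sketch of the cmd_rsync wrapper Scott suggests. The path, retry count, and log file are hypothetical; point cmd_rsync in rsnapshot.conf at a script along these lines and it retries rsync itself, logging each failed attempt.]

  #!/bin/sh
  # Hypothetical /usr/local/bin/rsync-retry, used via cmd_rsync.
  # rsnapshot passes its usual rsync arguments; we forward them unchanged.
  TRIES=3                                   # assumed retry budget
  LOG=/var/log/rsnapshot-rsync-retry.log    # hypothetical log file

  n=0
  while [ $n -lt $TRIES ]; do
      n=$((n + 1))
      /usr/bin/rsync "$@"
      ec=$?
      # 0 = success; any other exit code is logged and retried.
      [ $ec -eq 0 ] && exit 0
      echo "$(date '+%F %T') rsync attempt $n/$TRIES exited $ec: $*" >> "$LOG"
  done
  exit $ec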
From: Tapani T. <rsn...@ta...> - 2024-07-21 15:56:59

I have a couple of clients with bad enough network connection that I need to set rsync_numtries = 3.

While that works, there's no easy way to see from logs how many tries have been necessary.

So, I would like to propose adding code for logging retries. Here's a simple patch that does the trick:

*** /usr/bin/rsnapshot 2023-08-22 19:49:43.000000000 +0300
--- rsnapshot 2024-07-21 17:01:42.159732440 +0300
***************
*** 3897,3902 ****
--- 3897,3906 ----
      if (0 == $test) {
          while ($tryCount < $rsync_numtries && $result != 0) {

+             if ($tryCount > 0) {
+                 print_msg("retrying, tryCount=".$tryCount, 3);
+             }
+
              # open rsync and capture STDOUT and STDERR
              # the 3rd argument is undefined, that STDERR gets mashed into STDOUT and we
              # don't have to care about getting both STREAMS together without mixing up time

That was created with rsnapshot 1.3.1 in Ubuntu Noble, but the relevant code looks identical in the latest version in Github. My apologies for not being well-versed with git so that I could create a pull request.

--
Tapani Tarvainen
From: Scott H. <sc...@do...> - 2024-05-02 17:05:38

The last WARNING line in your rsnapshot log means that rsnapshot will exit with 2, and then the part after && in crontab will not run. My script converts an exit of 2 to 0, so that the part after && will run. rsnapshot SHOULD return 1 in case of a hard error like being unable to contact a host, so it SHOULD be safe, but use at your own risk! I'm not doing anything clever to figure out whether it's a some-files-vanished warning versus something else.

Note that files vanishing is kind of impossible to fix, though. The system might rotate logs during rsync, for instance. It might make more sense to wrap rsync to suppress the 24 exit code, as I think that is a fundamentally expected result sometimes, but that was harder than wrapping rsnapshot so I didn't do it :-).

-scott

On Thu, May 2, 2024 at 9:29 AM Thierry Lavallee <th...@8p...> wrote:
> Thanks Scott, you're always a great resource! :)
>
> 1- It seems my .sync is up to date. It seems it's the rotation that is not made. What am I missing?
>
> 2- So, YES, I found TONS of "file has vanished:" in my log. It seems the remote source was not stable when the sync happened. I will try changing my sync time to allow the source to be stable.
>
> [2024-05-02T04:13:04] file has vanished: "/home/backup/latest/accounts/XXX/vf/SITENAME" (TONS LIKE THIS)
> [2024-05-02T04:13:05] rsync warning: some files vanished before they could be transferred (code 24) at main.c(1682) [generator=3.1.3]
> [2024-05-02T04:13:05] WARNING: Some files and/or directories in root@XXX.XXX.com:/home/backup/latest/ vanished during rsync operation
> [2024-05-02T04:13:05] touch /media/server/8tb-usb/webserver-backup/XXX/.sync/
> [2024-05-02T04:13:05] rm -f /var/run/rsnapshot_XXX_cpbackup.pid
> [2024-05-02T04:13:05] WARNING: /usr/bin/rsnapshot -c /root/scripts/rsnapshot.XXX_cpbackup.conf sync: completed, but with some warnings
>
> 3- As for your script... Could this simply ignore "file has vanished" rather than all warnings?
>
> Thanks!
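[Editorial note: along the lines of Scott's "wrap rsync instead" idea, a sketch of a cmd_rsync wrapper that treats only exit code 24 (vanished source files) as success. The path is hypothetical, and the usual caveat about masking other warnings applies.]

  #!/bin/sh
  # Hypothetical /usr/local/bin/rsync-ignore-vanished, used via cmd_rsync.
  # Pass all arguments straight through to the real rsync.
  /usr/bin/rsync "$@"
  ec=$?

  # rsync exits 24 when source files vanished mid-transfer (rotated logs,
  # temp files, etc.). Treat only that case as success; keep everything else.
  if [ $ec -eq 24 ]; then
      echo "rsync: ignoring exit 24 (some source files vanished)" >&2
      exit 0
  fi
  exit $ec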
From: Thierry L. <th...@8p...> - 2024-05-02 16:29:29

Thanks Scott, you're always a great resource! :)

1- It seems my .sync is up to date. It seems it's the rotation that is not made. What am I missing?

2- So, YES, I found TONS of "file has vanished:" in my log. It seems the remote source was not stable when the sync happened. I will try changing my sync time to allow the source to be stable.

[2024-05-02T04:13:04] file has vanished: "/home/backup/latest/accounts/XXX/vf/SITENAME" (TONS LIKE THIS)
[2024-05-02T04:13:05] rsync warning: some files vanished before they could be transferred (code 24) at main.c(1682) [generator=3.1.3]
[2024-05-02T04:13:05] WARNING: Some files and/or directories in root@XXX.XXX.com:/home/backup/latest/ vanished during rsync operation
[2024-05-02T04:13:05] touch /media/server/8tb-usb/webserver-backup/XXX/.sync/
[2024-05-02T04:13:05] rm -f /var/run/rsnapshot_XXX_cpbackup.pid
[2024-05-02T04:13:05] WARNING: /usr/bin/rsnapshot -c /root/scripts/rsnapshot.XXX_cpbackup.conf sync: completed, but with some warnings

3- As for your script... Could this simply ignore "file has vanished" rather than all warnings?

Thanks!

On 2024-05-02 11:53, Scott Hess wrote:
> On Ubuntu, the log file is at /var/log/rsnapshot.log. Usually missing
> days like this will be down to something like an error on sync. You can
> run rsnapshot sync with -v or -V at the command-line to get even more
> info. The && between sync and daily in cron will prevent daily if the
> sync throws an error.
>
> Something I noticed with my setup was that I was getting periodic
> failures due to an rsnapshot WARNING, and often it was something like
> an rsync error 24 "Partial transfer due to vanished source files"
> which is expected to happen periodically. I put this script in
> /usr/local/bin/rsnapshot_ignore_warning:
>
> ==before
> #!/bin/sh
>
> rsnapshot "$@"
> ec=$?
>
> # rsnapshot exits with 2 in case of warning.
> if [ $ec -eq 2 ]; then
>     echo "Rsnapshot warning"
>     exit 0;
> fi
>
> exit $ec
> ==after
>
> and modified my cron entry to call that for the sync pass. I don't know
> if this is the problem you are having; if your problem is a hard error
> such as a remote machine being unreachable or refusing ssh, you'll need
> to fix THAT instead.
>
> -scott
From: Scott H. <sc...@do...> - 2024-05-02 15:54:16

On Ubuntu, the log file is at /var/log/rsnapshot.log. Usually missing days like this will be down to something like an error on sync. You can run rsnapshot sync with -v or -V at the command-line to get even more info. The && between sync and daily in cron will prevent daily if the sync throws an error.

Something I noticed with my setup was that I was getting periodic failures due to an rsnapshot WARNING, and often it was something like an rsync error 24 "Partial transfer due to vanished source files" which is expected to happen periodically. I put this script in /usr/local/bin/rsnapshot_ignore_warning:

==before
#!/bin/sh

rsnapshot "$@"
ec=$?

# rsnapshot exits with 2 in case of warning.
if [ $ec -eq 2 ]; then
    echo "Rsnapshot warning"
    exit 0;
fi

exit $ec
==after

and modified my cron entry to call that for the sync pass. I don't know if this is the problem you are having; if your problem is a hard error such as a remote machine being unreachable or refusing ssh, you'll need to fix THAT instead.

-scott

On Thu, May 2, 2024 at 8:45 AM Thierry Lavallee via rsnapshot-discuss <rsn...@li...> wrote:
> Hi all,
>
> I am missing daily rotation backup between 2024/05/02 (.sync) and 2024/04/24 (daily.0).
>
> How can I inspect the logs to see what happened?
>
> .sync       2024/05/02 - 04:13:05
> daily.0     2024/04/24 - 05:20:48
> daily.1     2024/04/22 - 05:15:33
> daily.2     2024/04/21 - 04:54:58
> daily.3     2024/04/20 - 11:56:18
> daily.4     2024/04/18 - 05:14:17
> daily.5     2024/04/17 - 11:54:06
> monthly.0   2024/04/01 - 05:15:45
> monthly.1   2024/03/04 - 05:10:26
> monthly.10  2023/02/26 - 04:51:11
> monthly.11  2023/01/30 - 04:42:41
> monthly.2   2024/01/23 - 05:12:13
> monthly.3   2023/10/09 - 04:53:34
> monthly.4   2023/09/04 - 04:49:33
> monthly.5   2023/07/30 - 04:48:09
> monthly.6   2023/07/03 - 04:45:06
> monthly.7   2023/05/29 - 04:40:51
> monthly.8   2023/05/01 - 04:41:53
> monthly.9   2023/04/03 - 04:39:44
> weekly.0    2024/04/15 - 05:15:01
> weekly.1    2024/04/13 - 05:14:13
> weekly.2    2024/04/07 - 05:00:50
>
> I have the following crons setup:
>
> *Every day:*
> /usr/bin/rsnapshot -c /root/scripts/rsnapshot.server04_cpbackup.conf sync && /usr/bin/rsnapshot -c /root/scripts/rsnapshot.server04_cpbackup.conf daily
>
> *Every Sunday:*
> /usr/bin/rsnapshot -c /root/scripts/rsnapshot.server04_cpbackup.conf weekly
>
> *Every 1st of month:*
> /usr/bin/rsnapshot -c /root/scripts/rsnapshot.server04_cpbackup.conf monthly
>
> Thanks
From: Thierry L. <th...@8p...> - 2024-05-02 15:44:30

Hi all,

I am missing daily rotation backup between 2024/05/02 (.sync) and 2024/04/24 (daily.0).

How can I inspect the logs to see what happened?

.sync       2024/05/02 - 04:13:05
daily.0     2024/04/24 - 05:20:48
daily.1     2024/04/22 - 05:15:33
daily.2     2024/04/21 - 04:54:58
daily.3     2024/04/20 - 11:56:18
daily.4     2024/04/18 - 05:14:17
daily.5     2024/04/17 - 11:54:06
monthly.0   2024/04/01 - 05:15:45
monthly.1   2024/03/04 - 05:10:26
monthly.10  2023/02/26 - 04:51:11
monthly.11  2023/01/30 - 04:42:41
monthly.2   2024/01/23 - 05:12:13
monthly.3   2023/10/09 - 04:53:34
monthly.4   2023/09/04 - 04:49:33
monthly.5   2023/07/30 - 04:48:09
monthly.6   2023/07/03 - 04:45:06
monthly.7   2023/05/29 - 04:40:51
monthly.8   2023/05/01 - 04:41:53
monthly.9   2023/04/03 - 04:39:44
weekly.0    2024/04/15 - 05:15:01
weekly.1    2024/04/13 - 05:14:13
weekly.2    2024/04/07 - 05:00:50

I have the following crons setup:

*Every day:*
/usr/bin/rsnapshot -c /root/scripts/rsnapshot.server04_cpbackup.conf sync && /usr/bin/rsnapshot -c /root/scripts/rsnapshot.server04_cpbackup.conf daily

*Every Sunday:*
/usr/bin/rsnapshot -c /root/scripts/rsnapshot.server04_cpbackup.conf weekly

*Every 1st of month:*
/usr/bin/rsnapshot -c /root/scripts/rsnapshot.server04_cpbackup.conf monthly

Thanks
From: <t.s...@di...> - 2024-04-04 12:36:46

Doing it all in one go worked as expected. This issue is solved. Thanks to everybody who contributed to this solution.

On 2024-04-03 12:10, David Cantrell wrote:
> On Wed, Apr 03, 2024 at 09:19:01AM +0200, t.schneider--- via rsnapshot-discuss wrote:
>
>> Hi,
>>
>> let me first clarify what is meant when I say "I cannot use rsync".
>>
>> This means that copying a directory to the target server using any
>> rsync option creates a directory with full size.
>>
>> For instance, I migrate daily.0 to the target server, this has size 12GB,
>> and after this I migrate daily.1 to the target server, and this has 12GB,
>> too.
>
> If you do them separately then `rsync -H` won't help, as it won't know
> that when you are syncing daily.1 it should also look in the daily.0
> that you didn't tell it about.
>
> Do them all in one go and `rsync -H` will do what you want.
From: David C. <da...@ca...> - 2024-04-03 11:22:38

On Wed, Apr 03, 2024 at 09:19:01AM +0200, t.schneider--- via rsnapshot-discuss wrote:
> Hi,
>
> let me first clarify what is meant when I say "I cannot use rsync".
>
> This means that copying a directory to target server using any
> rsync-option creates a directory with full size.
>
> For instance, I migrate daily.0 to target server, this has size 12GB,
> and after this I migrate daily.1 to target server, and this has 12GB,
> too.

If you do them separately then `rsync -H` won't help, as it won't know that when you are syncing daily.1 it should also look in the daily.0 that you didn't tell it about.

Do them all in one go and `rsync -H` will do what you want.

--
David Cantrell
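[Editorial note: a concrete illustration of "all in one go" - the target host and paths are hypothetical. The point is that a single rsync invocation covers the whole snapshot tree, so -H can see every pair of hard-linked files it needs to preserve.]

  # One rsync run over the whole snapshot root, not one run per daily.N:
  #   -a  archive mode (permissions, times, symlinks, etc.)
  #   -H  preserve hard links between files within the transferred set
  #   -S  handle sparse files efficiently
  rsync -aHS /mnt/backup/<hostname>/ newserver:/backup/<hostname>/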
From: Martin S. <ma...@on...> - 2024-04-03 08:09:42

On Wed, 3 Apr 2024 at 10:05, t.schneider--- via rsnapshot-discuss <rsn...@li...> wrote:
> Target directory /backup is an NFSv4 share of a central storage server provided by NetApp.

NFS might be the reason; try to use ssh.

Best
Martin
From: <t.s...@di...> - 2024-04-03 08:04:00

I'm connected to the target server. Source directory /backup is mounted on the target server at /mnt/backup using NFS. Target directory /backup is an NFSv4 share of a central storage server provided by NetApp.

To transfer the data, I have executed these commands:

rsync -aHS /mnt/backup/<hostname>/daily.1 /backup/<hostname>/
cp -al /mnt/backup/<hostname>/daily.1/ /backup/<hostname>/

On 2024-04-03 09:30, Martin Schröder wrote:
> On Wed, 3 Apr 2024 at 09:20, t.schneider--- via rsnapshot-discuss <rsn...@li...> wrote:
>> let me first clarify what is meant when I say "I cannot use rsync".
>>
>> This means that copying a directory to target server using any
>> rsync-option creates a directory with full size.
>>
>> For instance, I migrate daily.0 to target server, this has size 12GB,
>> and after this I migrate daily.1 to target server, and this has 12GB,
>> too.
>
> How do you "migrate"?
> How do you copy via rsync?
> Which OS and filesystems are on both sides?
>
> Best
> Martin
From: Martin S. <ma...@on...> - 2024-04-03 07:31:18

On Wed, 3 Apr 2024 at 09:20, t.schneider--- via rsnapshot-discuss <rsn...@li...> wrote:
> let me first clarify what is meant when I say "I cannot use rsync".
>
> This means that copying a directory to target server using any
> rsync-option creates a directory with full size.
>
> For instance, I migrate daily.0 to target server, this has size 12GB,
> and after this I migrate daily.1 to target server, and this has 12GB,
> too.

How do you "migrate"?
How do you copy via rsync?
Which OS and filesystems are on both sides?

Best
Martin
From: <t.s...@di...> - 2024-04-03 07:19:19

Hi,

let me first clarify what is meant when I say "I cannot use rsync".

This means that copying a directory to the target server using any rsync option creates a directory with full size.

For instance, I migrate daily.0 to the target server, this has size 12GB, and after this I migrate daily.1 to the target server, and this has 12GB, too. Imo the hardlinks are resolved.

On 2024-04-03 02:36, Peter Barker wrote:
> If rsync is filling the destination disk, is it possible that the
> destination file system cannot handle hard links?
From: Peter B. <pb...@ba...> - 2024-04-03 00:54:34

If rsync is filling the destination disk, is it possible that the destination file system cannot handle hard links?

3 Apr 2024 8:50:05 am David Cantrell <da...@ca...>:
> On 02/04/2024 14:20, t.schneider--- via rsnapshot-discuss wrote:
>
>> I cannot use rsync, any possible option is creating full directory
>> content.
>> And this is blowing up my destination device.
>
> I'm a bit surprised that you can't use rsync, because you can't use
> rsnapshot without it.
>
> You could use cpio or tar over ssh. There's an example here:
> https://bradthemad.org/tech/notes/cpio_directory.php
>
> --
> David Cantrell
From: David C. <da...@ca...> - 2024-04-02 22:08:35

On 02/04/2024 14:20, t.schneider--- via rsnapshot-discuss wrote:
> I cannot use rsync, any possible option is creating full directory content.
>
> And this is blowing up my destination device.

I'm a bit surprised that you can't use rsync, because you can't use rsnapshot without it.

You could use cpio or tar over ssh. There's an example here:
https://bradthemad.org/tech/notes/cpio_directory.php

--
David Cantrell
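[Editorial note: for the tar-over-ssh route, a sketch along these lines should carry the hard links across, since tar records links between members of the same archive. The target host and paths are hypothetical; <hostname> is a placeholder as elsewhere in the thread.]

  # Stream the whole snapshot tree through one tar archive; hard links
  # between daily.N/weekly.N/monthly.N are recorded in the archive and
  # recreated as links on extraction (-p keeps permissions).
  tar -C /backup/<hostname> -cf - . | ssh newserver 'tar -C /backup/<hostname> -xpf -'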
From: Scott H. <sc...@do...> - 2024-04-02 13:52:25

If you don't have rsync, the next-best option is something like GNU cp (such as "cp -al"). Whatever you are using to make the copy has to be capable of maintaining hardlinks, and wherever you are copying the structure to has to be capable of handling hardlinks.

It might be worthwhile to express what "copy these backups to another server" means in this context. The files have to get from place to place somehow, and that mechanism has to be able to notice and maintain the hardlinks.

-scott

On Tue, Apr 2, 2024 at 6:23 AM t.schneider--- via rsnapshot-discuss <rsn...@li...> wrote:
> I cannot use rsync, any possible option is creating full directory content.
>
> And this is blowing up my destination device.
From: <t.s...@di...> - 2024-04-02 13:20:56

I cannot use rsync, any possible option is creating full directory content.

And this is blowing up my destination device.

On 2024-03-29 13:22, David Keegel wrote:
> On Fri, Mar 29, 2024 at 09:49:40AM +0100, Dirk Heinrichs wrote:
>> t.schneider--- via rsnapshot-discuss:
>>
>>> How can I copy these backups to another server w/o blowing up the
>>> target directory, means any hard link remains a hard link?
>>
>> dd over ssh:
>> https://www.thegeekdiary.com/how-to-clone-linux-disk-partition-over-network-using-dd/
>
> I think Scott's suggestion of rsync -aHS is more generally applicable.
From: David K. <dj...@ke...> - 2024-03-29 12:37:32

On Fri, Mar 29, 2024 at 09:49:40AM +0100, Dirk Heinrichs wrote:
> t.schneider--- via rsnapshot-discuss:
>
>> How can I copy these backups to another server w/o blowing up the
>> target directory, means any hard link remains a hard link?
>
> dd over ssh:
> https://www.thegeekdiary.com/how-to-clone-linux-disk-partition-over-network-using-dd/

I think Scott's suggestion of rsync -aHS is more generally applicable.

--
David Keegel <dj...@ke...>