From: G.W. H. <ba...@ju...> - 2025-06-08 15:43:14

Hi there,

On Sun, 8 Jun 2025, Matthew Pounsett wrote:

> I'm not sure what I've done, but something I did in the last few
> days broke my pool size graphs.
> ...
> ...
> ...

It happens. :(

For something like this I'd get into the nitty-gritty, which I hope shouldn't be too difficult.

BackupPC runs as an endless loop, which in my installations is in the Perl script at /usr/local/BackupPC/bin/BackupPC but in your case might for example be in /some/other/directory/bin/BackupPC. It's actually "$InstallDir/bin/BackupPC"; the "$InstallDir" path is given in your /etc/BackupPC/config.pl file. In that script, the endless loop is called the "Main loop". It's short and sweet, with about eight things in it that it actually does, each of them being a separate call to a Perl sub or script.

The pool graphs are created by a call near the end of the "Main loop" to a sub called "Main_Check_Job_Messages", which unfortunately isn't so short nor so sweet. Main_Check_Job_Messages waits for EOF from the client connection and then does a bunch of stuff, including *scheduling* a call to be made to a Perl script "$BinDir/BackupPC_rrdUpdate" at some future time.

After the call has happened, your pool usage RRD file and graphs will theoretically be in the BackupPC log directory, which on my systems is /var/log/BackupPC/ but see $Conf{LogDir} in your config.pl. You should see that the timestamps on the .rrd and .png files are being updated at the expected times. Here's one of my systems as of this afternoon:

# ls -lrt /var/log/BackupPC/ | tail
-rw-r----- 1 backuppc backuppc  5715 Jun 7 01:00 LOG.1.z
-rw-r----- 1 backuppc backuppc  5693 Jun 8 01:00 LOG.0.z
-rw-r----- 1 backuppc backuppc  1793 Jun 8 01:33 UserEmailInfo.pl
-rw-r--r-- 1 backuppc backuppc 31084 Jun 8 01:33 poolUsage.rrd
-rw-r----- 1 backuppc backuppc  6298 Jun 8 01:33 poolUsage4.png
-rw-r----- 1 backuppc backuppc  8106 Jun 8 01:33 poolUsage52.png
-rw-r----- 1 backuppc backuppc 47571 Jun 8 15:00 status.pl.old
-rw-r----- 1 backuppc backuppc 47571 Jun 8 16:00 status.pl
-rw-r----- 1 backuppc backuppc     0 Jun 8 16:00 LOCK
-rw-r----- 1 backuppc backuppc 78046 Jun 8 16:00 LOG
#

As you can see, the .rrd file and the .png files were updated in the small hours of this morning, as expected on this system.

First I'd check that the image files that your BackupPC system has in $Conf{LogDir} are the same files which Apache is rendering. To check that Apache renders those .png files on the BackupPC Status page, you might for example replace them with more or less any suitably-sized .png files of your own. From your problem description I'm guessing that the files will not have been updated - if the image files *are* being updated, and they look sane - you can view them in just about any image viewer - you need to find out why (presumably) Apache isn't rendering them. :(

If the files are *not* being updated as expected then you could check that $BinDir/BackupPC_rrdUpdate is getting called, that it will run, that it can produce an .rrd file, that the .rrd file is sane and in a sane place, that the .png files are being created from it, and that the conditions for the scheduling above to actually get done are met (for example, are backups even completing? Check the logs, increase the logging verbosity perhaps, maybe even try adding some logging of your own to the scripts to find out). Somewhere along that route you may find the issue. It might need a bit more work to fix it, whatever it is.

Sorry, I have to go start cooking dinner now. :)

--
73,
Ged.
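
For anyone following the same route, a rough sketch of those checks as commands (the install and log paths are assumptions taken from the layout described above - substitute your own $InstallDir and $Conf{LogDir} - and running the updater by hand like this is an experiment, not a documented interface):

    # Is the scheduled updater actually being run?  Look for it in the main log.
    /usr/local/BackupPC/bin/BackupPC_zcat /var/log/BackupPC/LOG.0.z | grep -i rrd

    # Try invoking it directly as the BackupPC user and watch for errors.
    sudo -u backuppc /usr/local/BackupPC/bin/BackupPC_rrdUpdate

    # Then check whether the .rrd and .png files were just touched.
    ls -lrt /var/log/BackupPC/poolUsage*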
From: G.W. H. <ba...@ju...> - 2025-06-08 13:50:54

Hi there,

Apologies, I wrote this out and then forgot to send it. Late night. FWIW I agree with the consensus so far. :)

On Sat, 7 Jun 2025, Christian Völker wrote:

> I have a V4 pool where I had compression disabled for couple of months.
> Now I decided (after thinking twice ;)) to enable compression on the pool.
>
> Is there any chance to move the existing (non compressed) pool into the
> cpool?
>
> Or best to just wait another months until the pool-files are getting
> outdated?
> ...

Personally, I think I'd start again. Here's why: in the documentation at

http://piplus.local.jubileegroup.co.uk/BackupPC_Admin?action=view&type=docs#Overview

there's a section headed "Here is a more detailed discussion" which at point 5 says

"CompressLevel has toggled on/off between backups. This isn't well tested and it's very hard to support efficiently. ..."

So I'd probably rename my machines as far as BackupPC is concerned, and remove the old backups using the Web interface after gaining some confidence that they weren't needed.

Another option might be to work on one host at a time. For example (first, stop BackupPC to be on the safe side) delete 'host' from your config.pl, add the same host with a different IP/ID, then delete all the files in pc/'host'/ and finally restart BackupPC. I'd hope for the uncompressed files for 'host' which are no longer needed then to be removed (eventually, depending on config) by the nightly cleanups.

Whatever happens, I think if you try any of this then we on the mailing list would welcome notes on your experiences.

--
73,
Ged.
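
A rough sketch of that per-host route, assuming a stock layout with the host list in /etc/BackupPC/hosts, the pool under /var/lib/backuppc, a systemd service named "backuppc", and a hypothetical host called "oldname" (all of these are assumptions - adjust to your install):

    # Stop BackupPC first, to be on the safe side
    systemctl stop backuppc

    # Remove the old host entry and re-add the machine under a new name/ID,
    # e.g. edit the hosts file and rename "oldname" to "oldname-c"
    vi /etc/BackupPC/hosts

    # Drop the per-host tree for the old name so its uncompressed backups
    # become unreferenced and can be expired by the nightly cleanups
    rm -rf /var/lib/backuppc/pc/oldname

    systemctl start backuppc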
From: Christian V. <cvo...@kn...> - 2025-06-08 07:32:12

Hi,

thanks - somehow I expected something like this. And I agree the process you suggested would be too flaky... I will start with a fresh backup pool, emptying everything. For safety I will keep the original btrfs disk until the new pool has been populated with some stuff.

Thanks!

/KNEBB

On 08.06.25 at 03:49, bac...@ko... wrote:
> This probably won't be very easy and I don't know of any automatic way
> to do it.
> You could do it manually by doing something like the following:
> ...
From: Adam P. <pr...@lo...> - 2025-06-08 06:15:19

On Sat, 7 Jun 2025, Paul Fox wrote:

> I don't think Adam's image links will be useful to anyone but him, but
> his post reminds me that I, too, had duplicate graphs after migrating
> from V3 to V4. My conclusion was that one of the graphs, which I'd
> been used to, was actually provided by a Debian patch. And it was
> still there simply because the code to remove it no longer existed
> after the upgrade.
>
> See https://sourceforge.net/p/backuppc/mailman/backuppc-users/?viewmonth=202211&viewday=8

So, interestingly enough - the patches are still present in the Debian 4.4.0-8 packages, while they originate from BackupPC 3.3.1. The patches are removed in Debian 4.4.0-11, but that package is still only in testing.

> paul

..this probably does not help Matthew, just maybe a hint of where to find the rrd generation depending on the distro/package used?

Adam Pribyl

> > adam wrote:
> > Your description is way too long, but if you migrated the pool then new
> > BackupPC v4 uses a different way to generated a graphs.
> > ...
From: <bac...@ko...> - 2025-06-08 01:49:26

This probably won't be very easy and I don't know of any automatic way to do it. You could do it manually by doing something like the following:

1a. Write a program that uses zlib to compress a file and insert the appropriate backuppc header. This isn't too hard - you can examine similar code in the backuppc sources.

1b. Write a script to recurse through cpool, compress each file using the above routine at a given level of compression (typically 3). Check to make sure there is no existing copy in cpool before moving (assuming you already started using compression, so that you have a mixed pool and cpool now).

2. Recurse through *all* the attrib files in the pc directory and change 'compress' to 3 for all files in the attrib file (assuming you want to use the standard compression level). You could write a simple Perl script to do this.

3. Also, change the 'compress' entry in 'backupInfo' for each backup.

4. Similarly, change the 'compress' entry in each 'backup' file at the root of each host.

5a. Change the refCnt entries in each cpool/[0-9a-f][0-9a-f] directory.
5b. Similarly, change the refCnt entries in each backup root for each host.
    You could do both of the above brute force by running
    /usr/share/backuppc/bin/BackupPC_fsck -f -s

Given all the manipulations above, I would be worried that the process is "fragile" and could destroy invaluable backups. So unless someone knows of a well-tested, automatic way of doing this, I'm not sure I would recommend it.

I am not sure it is wise to switch now from uncompressed to compressed, as I'm not sure what happens if you re-backup a file that previously was stored as uncompressed. It may very well now create a new compressed copy, resulting in significant file duplication.

Christian Völker via BackupPC-users wrote at about 14:04:39 +0200 on Saturday, June 7, 2025:
 > Hi,
 >
 > I have a V4 pool where I had compression disabled for couple of months.
 > Now I decided (after thinking twice ;)) to enable compression on the pool.
 >
 > Is there any chance to move the existing (non compressed) pool into the
 > cpool?
 >
 > Or best to just wait another months until the pool-files are getting
 > outdated?
 > ...
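
For step 1a, a minimal sketch of the zlib side only, using Compress::Zlib. Everything BackupPC-specific is deliberately left out: whatever header or leading byte BackupPC expects in front of the deflate stream is an assumption you must copy from the BackupPC sources before trusting this near real pool data.

    #!/usr/bin/perl
    # Sketch only: compress one file at zlib level 3, as in step 1a above.
    # NOTE: the BackupPC-specific header is NOT written here - add it per
    # the BackupPC sources before using this on anything that matters.
    use strict;
    use warnings;
    use Compress::Zlib;

    my ($in, $out) = @ARGV;
    open(my $ifh, '<', $in)  or die "read $in: $!";
    open(my $ofh, '>', $out) or die "write $out: $!";
    binmode $ifh; binmode $ofh;

    my ($d, $status) = deflateInit(-Level => 3);
    die "deflateInit failed" unless $status == Z_OK;

    while (read($ifh, my $buf, 65536)) {
        my ($block, $st) = $d->deflate($buf);
        die "deflate failed" unless $st == Z_OK;
        print $ofh $block;
    }
    my ($tail, $st) = $d->flush();
    die "flush failed" unless $st == Z_OK;
    print $ofh $tail;
    close $ofh or die "close $out: $!";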
From: Matthew P. <ma...@co...> - 2025-06-07 20:36:42

On Sat, Jun 7, 2025 at 3:24 PM Adam Pribyl <pr...@lo...> wrote:

> Your description is way too long, but if you migrated the pool then new
> BackupPC v4 uses a different way to generated a graphs. I remember it was
> a bit fiddly to find out, on old V3 the the URL to image is like
>

Yeah, it has been working under v4 for a long time. The only migration I did this week was migrating the last remaining v3 cpool backups.

The image file is loading in the browser... but the image (and the RRD) aren't being updated on disk.
From: Paul F. <pg...@fo...> - 2025-06-07 20:20:46

I don't think Adam's image links will be useful to anyone but him, but his post reminds me that I, too, had duplicate graphs after migrating from V3 to V4. My conclusion was that one of the graphs, which I'd been used to, was actually provided by a Debian patch. And it was still there simply because the code to remove it no longer existed after the upgrade.

See https://sourceforge.net/p/backuppc/mailman/backuppc-users/?viewmonth=202211&viewday=8

paul

adam wrote:
> Your description is way too long, but if you migrated the pool then new
> BackupPC v4 uses a different way to generated a graphs. I remember it was
> a bit fiddly to find out, on old V3 the the URL to image is like
>
> https://server/backuppc/index.cgi?image=4
>
> on new V4:
> https://server/backuppc/index.cgi?action=view&type=poolUsage&num=4
>
> I can not find any note about how I made it to display new one, but on one
> of backuppc I do have pictures from both - old and new pool.
>
> It is possible that I had to delete the poolusage rrd or something too..
>
> Adam Prbyl
>
> On Sat, 7 Jun 2025, Matthew Pounsett wrote:
> ...

=----------------------
paul fox, pg...@fo... (arlington, ma, where it's 66.6 degrees)
From: Adam P. <pr...@lo...> - 2025-06-07 19:24:09

Your description is way too long, but if you migrated the pool then the new BackupPC v4 uses a different way to generate the graphs. I remember it was a bit fiddly to find out; on the old V3 the URL to the image is like

https://server/backuppc/index.cgi?image=4

on the new V4:

https://server/backuppc/index.cgi?action=view&type=poolUsage&num=4

I cannot find any note about how I made it display the new one, but on one of my backuppc installations I do have pictures from both - the old and the new pool.

It is possible that I had to delete the poolUsage rrd or something too..

Adam Pribyl

On Sat, 7 Jun 2025, Matthew Pounsett wrote:

> I'm not sure what I've done, but something I did in the last few days broke
> my pool size graphs. I've looked at a lot of things, and I'm not sure what
> may or may not be related, so this email is a bit of a huge infodump.
> ...
From: Matthew P. <ma...@co...> - 2025-06-07 18:29:12

On Sat, Jun 7, 2025 at 1:51 PM Paul Fox <pg...@fo...> wrote:

> Matthew Pounsett wrote:
> > Incidentally, what do people normally use to read compressed BackupPC log
> > files? The docs say they're compressed with zlib, and so does
> > /usr/bin/file, but gzip can't read them.
> > Back in the day 'uncompress' probably would have done it (although that
> > would be .Z, not .z), but uncompress seems to just be a link to gzip now,
> > at least on Debian. It took some searching but I eventually found
> > zlib-flate which can do it ... but this seems like a rather obscure tool.
>
> # sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zcat LOG.0.z

Ah, excellent. Thanks!
From: Paul F. <pg...@fo...> - 2025-06-07 17:50:22

Matthew Pounsett wrote:
 > Incidentally, what do people normally use to read compressed BackupPC log
 > files? The docs say they're compressed with zlib, and so does
 > /usr/bin/file, but gzip can't read them.
 > Back in the day 'uncompress' probably would have done it (although that
 > would be .Z, not .z), but uncompress seems to just be a link to gzip now,
 > at least on Debian. It took some searching but I eventually found
 > zlib-flate which can do it ... but this seems like a rather obscure tool.

# sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zcat LOG.0.z

=----------------------
paul fox, pg...@fo... (arlington, ma, where it's 67.7 degrees)
From: Matthew P. <ma...@co...> - 2025-06-07 17:16:26

Incidentally, what do people normally use to read compressed BackupPC log files? The docs say they're compressed with zlib, and so does /usr/bin/file, but gzip can't read them.

Back in the day 'uncompress' probably would have done it (although that would be .Z, not .z), but uncompress seems to just be a link to gzip now, at least on Debian. It took some searching but I eventually found zlib-flate, which can do it ... but this seems like a rather obscure tool.

# uncompress -c LOG.0.z | head
gzip: LOG.0.z: not in gzip format
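
If zlib-flate or BackupPC_zcat (mentioned elsewhere in the thread) isn't to hand, a plain zlib stream can also be unpacked with a Perl one-liner. This is a sketch that assumes the .z file really is a single bare zlib stream, as the docs suggest; it prints nothing if the stream doesn't decompress cleanly:

    # perl -MCompress::Zlib -0777 -ne 'print uncompress($_)' LOG.0.z | head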
From: Matthew P. <ma...@co...> - 2025-06-07 17:05:07

I'm not sure what I've done, but something I did in the last few days broke my pool size graphs. I've looked at a lot of things, and I'm not sure what may or may not be related, so this email is a bit of a huge infodump. Probably most of this is unrelated, but I'm being verbose to avoid missing something because I just _think_ it's not related.

My graphs haven't had new data since what I assume is the 4th, and the graph file and the rrd both have a last-mod time of 01:00 UTC on the 6th (as I'm writing this it is 15:30 on the 7th).

The graph is filled to the end of day 3, week 23. This looks like it's using strftime's %V to get the week, which would make that 23:59:59 UTC on the 4th. I then have nearly three empty days (graph updated, no data) after that, which should bring it up to sometime on the 7th, which looks like the present and would make sense if the only problem were that I just wasn't getting new data... but the mod time on the file that I think is being loaded is a day and a half ago. So that's confusing.

I'm going to attach the current graph, but I've tried to make the above description really verbose in case the list strips the image.

# ls -l ~backuppc/log/poolUsage*
-rw-r----- 1 backuppc backuppc  6420 Jun 6 01:07 /var/lib/backuppc/log/poolUsage4.png
-rw-r----- 1 backuppc backuppc  8439 Jun 6 01:07 /var/lib/backuppc/log/poolUsage52.png
-rw-r--r-- 1 backuppc backuppc 31112 Jun 6 01:07 /var/lib/backuppc/log/poolUsage.rrd

I ran BackupPC_migrateV3toV4 on the 3rd (started late in the UTC day, ran overnight with backuppc stopped, restarted backuppc on the 4th) to clear out the last of our old v3 backups (there were around 10 backups that it processed).

Since then BackupPC_nightly has been taking _ages_ to run. On the 5th I bumped up our MaxBackupPCNightlyJobs to 4 (for the 01:00 run on the 6th), thinking giving it half our cores would help (the other 4 for MaxBackups), but it just seemed to be heavily IO bound, so I returned that to its original 2 during the day on the 6th (for today's 01:00 run). Today, the two processes have been running for 14.5 hours so far. I am assuming the long run times for the last couple of days are a direct result of the migrateV3toV4 run, and presumably they should settle down again in 14 days (PoolSizeNightlyUpdatePeriod is 16).

Checking log files I haven't found any references (so no errors) to the poolUsage files.

I did find a lot of "BackupPC_refCountUpdate: missing pool file" log entries and errors related to V4 pool files with incorrect digests on the 5th and 7th. None at all of either on the 6th, and none before the 5th. Probably not related to the RRD/graph issue, but again mentioning it just in case.

Is this "normal" while cleaning up after a migrateV3toV4 run? Or maybe related to incomplete backups? I did kill some running backups this week while working on things.

2025-06-05 18:32:02 admin3 : BackupPC_refCountUpdate: missing pool file c0cc2020cd7e247781c0fdd893a48505 count 1
2025-06-05 18:57:20 admin1 : BackupPC_refCountUpdate: ERROR pool file /var/lib/backuppc/cpool/44/ec/45ecf1513b6ad26f10825b16de74832d has digest d41d8cd98f00b204e9800998ecf8427e instead of 45ecf1513b6ad26f10825b16de74832d

2025-06-07 01:44:39 admin1 : BackupPC_refCountUpdate: missing pool file 8095ad44fdb78424b9998de06f6ffedc count 1
2025-06-07 10:56:18 admin1 : BackupPC_refCountUpdate: ERROR pool file /var/lib/backuppc/cpool/ae/ca/aeca0c1090c0c08e947465d7b4fd6ca7 has digest d41d8cd98f00b204e9800998ecf8427e instead of aeca0c1090c0c08e947465d7b4fd6ca7

Searching for known reasons for a failure to update RRD or graph files, I mostly found just the obvious things .. check perms, check paths, make sure the right dependencies are installed, etc. None of those things have changed since the graphs were updating successfully.

The only thing I found out of the ordinary was a note from 4.0 alpha about upgrades and needing to convert the RRD file using rrd_2_v4.pl, but it looks like that no longer applies? I don't have that script, but I do have an old pool.rrd file from before the v4 migration, and poolUsage.rrd seems to have the extra DS. This doesn't seem relevant since the graph was working fine up until a couple of days ago, but I'm mentioning it just in case.

I do wonder if the very long nightly runs are somehow related to the rrd data not being updated? Or did running the V3toV4 migration break something I haven't found? That doesn't quite line up because I ran that on the 3rd/4th, but seem to have graph data for the end of the day on the 4th... but there may be other side effects I'm unaware of that would account for that.

I have made very few actual config changes this week. Here's what I've touched:
- changed ServerHost from the single-label hostname to the FQDN (probably shouldn't matter because ServerPort is -1)
- MaxBackupPCNightlyJobs changed from 2 to 4, returned to 2
- removed --one-file-system from RsyncArgs
- added an entry to the '*' list in BackupFilesExclude
- added some new hosts (the reason this whole saga started)

What have I not looked at which might explain the failure to update the RRD data for three days?
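
One quick way to separate "the RRD isn't being fed" from "the graphs aren't being redrawn" is to ask the RRD itself when it last accepted an update. A sketch, assuming rrdtool is installed and using the poolUsage.rrd path from the listing above:

    # rrdtool lastupdate /var/lib/backuppc/log/poolUsage.rrd
    # rrdtool info /var/lib/backuppc/log/poolUsage.rrd | grep -E 'last_update|ds\['

If the last update time matches the most recent nightly run, the data is arriving and only the .png generation or the web side needs attention; if not, the updater itself isn't running to completion.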
From: Christian V. <cvo...@kn...> - 2025-06-07 12:04:55

Hi,

I have a V4 pool where I had compression disabled for a couple of months. Now I decided (after thinking twice ;)) to enable compression on the pool.

Is there any chance to move the existing (non-compressed) pool into the cpool?

Or is it best to just wait another few months until the pool files become outdated?

[Note:
I used BTRFS with compression inside a VM. I prefer enabling it on the filesystem level, so I disabled compression for BackupPC initially. But as this is a VM running on a QNAP NAS with underlying ZFS, it does not make sense to have a CoW fs like BTRFS on top of another CoW fs (QNAP ZFS). So I decided to move the pool to ext4 instead of btrfs. But as the VM is using LUKS encryption I cannot enable ZFS compression, as it will not see any compression possibilities. So I want to enable it on BackupPC, and possibly move the existing uncompressed pool files to the compressed pool.]

So any ideas?

Thanks!

Christian
From: Les M. <les...@gm...> - 2025-06-04 15:54:40

On Wed, Jun 4, 2025 at 10:07 AM Christian Völker via BackupPC-users <bac...@li...> wrote:
>
> But to be honest what I haven't tested up to now is a full restore of a
> Linux box...

Since backuppc doesn't do bare metal restores, it is always a good exercise to go through the motions of getting at least a virtual machine to a point where you can restore to it. Then maybe do an 'rsync -an' between the source machine and the restored copy and look through the list of files that are different.

A long time ago I toyed with the idea of setting up something automatic using ReaR to make a bootable image for disaster recovery and using backuppc as the restore method, but never got farther than being able to do it manually. Conceptually there should be a way to rebuild the ReaR image after OS updates - or on a schedule - and have backuppc include it in the backed-up contents for the host. That way you could do a complete restore by extracting that image file to a USB, booting from it, and then using rsync or tar restore methods to put everything else back.

--
  Les Mikesell
   les...@gm...
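
A minimal sketch of that idea on the client side, assuming ReaR is installed and its output lands under a directory the host's BackupPC shares already cover; the file path, schedule and ReaR settings named here are illustrative assumptions, not a tested recipe:

    #!/bin/sh
    # Sketch of a weekly cron script (e.g. /etc/cron.weekly/rear-rescue)
    # on the client being backed up.  Rebuilds the ReaR rescue image so
    # the next BackupPC run picks up a fresh copy.  Assumes ReaR's
    # local.conf sets OUTPUT=ISO and an OUTPUT_URL under a path that
    # BackupPC backs up, e.g. file:///var/lib/rear/output
    rear -v mkrescue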
From: Christian V. <cvo...@kn...> - 2025-06-04 15:05:26

Hi,

oh, yes. I am restoring data (mostly a few small files) more or less regularly. Once a month at least, I tend to say.

But to be honest, what I haven't tested up to now is a full restore of a Linux box...

/Christian

On 04.06.25 at 16:54, G.W. Haywood wrote:
> Hi there,
>
> On Wed, 4 Jun 2025, Matthew Pounsett wrote:
>
>> ... probably this has been broken since our last upgrade which was ...
>> longer ago than I want to think about.
>
> There's an important lesson here. No apologies for repeating it.
>
> A backup really isn't a backup unless you've tested it - or at least
> that you test on some sort of schedule that the procedure is working
> and that you can recover some of the data you think you've backed up.
>
> Anyone care to contribute to a poll? When did you last test that you
> can recover data from BackupPC? Any data at all.
>
> Here it was four days ago, May 31st at 11:54 UTC.
>
From: G.W. H. <ba...@ju...> - 2025-06-04 14:54:17

Hi there,

On Wed, 4 Jun 2025, Matthew Pounsett wrote:

> ... probably this has been broken since our last upgrade which was ...
> longer ago than I want to think about.

There's an important lesson here. No apologies for repeating it.

A backup really isn't a backup unless you've tested it - or at least that you test on some sort of schedule that the procedure is working and that you can recover some of the data you think you've backed up.

Anyone care to contribute to a poll? When did you last test that you can recover data from BackupPC? Any data at all.

Here it was four days ago, May 31st at 11:54 UTC.

--
73,
Ged.
From: Matthew P. <ma...@co...> - 2025-06-03 14:20:07

On Mon, Jun 2, 2025 at 8:58 PM Guillermo Rozas <gui...@gm...> wrote:

> Hi,
>
> looks like you're backing up '/', but also have the '--one-file-system'
> option enabled. As probably most of the interesting folders below root are
> mounted, that option results in rsync ignoring all of them and you get
> empty backups. See https://github.com/backuppc/backuppc/issues/437
>

Ooof! Yeah, that's almost certainly it. That's a pernicious default. :-/

So probably this has been broken since our last upgrade which was ... longer ago than I want to think about.
From: Guillermo R. <gui...@gm...> - 2025-06-03 00:57:20

Hi,

looks like you're backing up '/', but also have the '--one-file-system' option enabled. As probably most of the interesting folders below root are mounted, that option results in rsync ignoring all of them and you get empty backups. See https://github.com/backuppc/backuppc/issues/437

Regards,
Guillermo

On Mon, Jun 2, 2025, 20:13 Matthew Pounsett <ma...@co...> wrote:

> I'm trying to track down why we seem to be getting empty backups on our
> backuppc server. What's weird is that we have no errors. There doesn't
> seem to be a debug setting for logs, and I think I've run out of things to
> look at.
> ...
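
A sketch of the two usual ways out of that trap, using the config keys already shown in the original post; the mount points listed below are purely illustrative assumptions - substitute whatever 'mount' reports on the client:

    # Option 1: keep backing up '/' and let rsync cross mount points
    # by removing '--one-file-system' from $Conf{RsyncArgs}.

    # Option 2: keep '--one-file-system' and list each mounted
    # filesystem explicitly as its own share:
    $Conf{RsyncShareName} = [
        '/',
        '/home',    # hypothetical mount point
        '/var',     # hypothetical mount point
    ];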
From: Matthew P. <ma...@co...> - 2025-06-02 23:10:37
|
I'm trying to track down why we seem to be getting empty backups on our backuppc server. What's weird is that we have no errors. There doesn't seem to be a debug setting for logs, and I think I've run out of things to look at. Some recent FULL backups have completed in around 10 seconds, which is impossible.

We're using rsync over ssh, and have a small selection of excluded directories in order to avoid some things that are either too big, too volatile, or already have redundancies. I can ssh from the backuppc server to the clients as the backuppc user (and I haven't seen any ssh errors, so no surprise that works). I'm also not getting the sort of errors I'd expect if sudo were broken, but just in case I opened up the sudoers rule on a target server to allow rsync with any arguments; no change in behaviour.

Here are the config options I can find that look like they might be relevant. I've also included the current log for the client I've been testing with, including a couple of attempted full backups. What am I missing here?

$Conf{XferMethod} = 'rsync';
$Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';
$Conf{RsyncBackupPCPath} = '/usr/libexec/backuppc-rsync/rsync_bpc';
$Conf{RsyncSshArgs} = [ '-e', '$sshPath -l backuppc' ];
$Conf{RsyncShareName} = [ '/' ];
$Conf{RsyncArgs} = [ '--super', '--recursive', '--protect-args', '--numeric-ids', '--perms', '--owner', '--group', '-D', '--times', '--links', '--hard-links', '--delete', '--delete-excluded', '--one-file-system', '--partial', '--log-format=log: %o %i %B %8U,%8G %9l %f%L', '--stats' ];
$Conf{RsyncArgsExtra} = [];
$Conf{RsyncFullArgsExtra} = [ '--checksum' ];
$Conf{RsyncIncrArgsExtra} = [];
$Conf{BackupFilesExclude} = { '*' => [ '/bigfs*', '/data', '/dev', '/mnt', '/pool*', '/proc', '/run', '/sys', '/var/lib/postgresql', '/var/tmp' ] };

2025-06-01 06:00:00 incr backup started for directory /
2025-06-01 06:00:13 incr backup 1895 complete, 3767 files, 195896461 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
2025-06-01 06:00:13 Removing unfilled backup 1877
2025-06-01 06:00:13 BackupPC_backupDelete: removing #1877
2025-06-01 06:00:13 BackupPC_backupDelete: No prior backup for merge
2025-06-01 06:01:03 BackupPC_refCountUpdate: host host.example.com got 0 errors (took 50 secs)
2025-06-01 06:01:03 Finished BackupPC_backupDelete, status = 0 (running time: 50 sec)
2025-06-02 06:00:01 incr backup started for directory /
2025-06-02 06:00:08 incr backup 1896 complete, 3767 files, 195896461 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
2025-06-02 06:00:08 Removing unfilled backup 1878
2025-06-02 06:00:08 BackupPC_backupDelete: removing #1878
2025-06-02 06:00:08 BackupPC_backupDelete: No prior backup for merge
2025-06-02 06:00:57 BackupPC_refCountUpdate: host host.example.com got 0 errors (took 48 secs)
2025-06-02 06:00:57 Finished BackupPC_backupDelete, status = 0 (running time: 49 sec)
2025-06-02 20:42:01 full backup started for directory /
2025-06-02 20:42:59 full backup 1897 complete, 3766 files, 144820510 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
2025-06-02 20:42:59 Removing unfilled backup 1880
2025-06-02 20:42:59 BackupPC_backupDelete: removing #1880
2025-06-02 20:42:59 BackupPC_backupDelete: No prior backup for merge
2025-06-02 20:43:48 BackupPC_refCountUpdate: host host.example.com got 0 errors (took 49 secs)
2025-06-02 20:43:48 Finished BackupPC_backupDelete, status = 0 (running time: 49 sec)
2025-06-02 21:04:47 full backup started for directory /
2025-06-02 21:04:50 full backup 1898 complete, 3766 files, 144820553 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other) |
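One way to narrow down what a ten-second "full" is actually seeing (a sketch, not part of the original post; the host name is the host.example.com from the log above and the options roughly mirror the config, so adjust for your site) is to drive the same ssh + sudo + rsync chain by hand as the backuppc user and count files:

    # From the BackupPC server, as the backuppc user. If ssh silently lands on
    # the wrong machine (a ClientNameAlias or DNS surprise), this shows it:
    ssh -l backuppc host.example.com hostname

    # Ask the client-side rsync how many entries it can see under / on that
    # filesystem; compare with the ~3,766 files the "full" backups report:
    ssh -l backuppc host.example.com sudo /usr/bin/rsync \
        --one-file-system --recursive --list-only / | wc -l

    # Cross-check against the filesystem itself:
    ssh -l backuppc host.example.com sudo find / -xdev | wc -l

If those counts are close to the 3,766 files in the log, the transfer itself is fine and the share really is that small from rsync's point of view (for example because the interesting data sits on other mounts skipped by --one-file-system); if they are much larger, the problem is more likely on the BackupPC side than in ssh or sudo.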
From: Steven B. <ste...@qu...> - 2025-06-02 14:38:23
|
Thanks everybody for the continued consideration of this question. It's encouraging that there seems to be some interest and appetite for a slightly more demonstrably maintained version of BackupPC that might provide releases in response to fixes for published CVEs in its dependencies, etc.

Since we are new to this community, we'll leave it to others to determine the best direction of travel for enabling new releases to be issued. But if something can be put in place, we would be interested in participating further with some minor developments (UI tweaks etc.) that we've been considering. In the first instance, though, a new release, just so that we can point to a 2025 version, would certainly help!

On 01/06/2025 02:07, bac...@ko... wrote:
> Indeed.
>
> I think the most urgent need for a "maintainer" now would be to
> validate and merge in existing patches as well as to potentially
> update the source relying on rsync (even though technically not
> necessary as pointed out by G.W. Haywood).
>
> [It's pretty easy to do as I have in the past pulled in new versions
> for my own private builds though one would of course need to test it
> again for each version to make sure nothing breaks]
>
> Beyond that I don't think much regular maintenance activity is
> required as the program is otherwise rock-solid.
>
> Having an ongoing maintainer would also at least serve as a point of
> reference and approval for any future patches -- whether they be
> updates, bug fixes, or code extensions.
>
> Mr. Haywood -- it is certainly very generous of you to step up and
> agree to maintain a "fork".
>
> Has anyone heard from Craig over the past couple of years because if
> he is around, it would be good to get his blessing. Even better if in
> lieu of Craig maintaining active ownership, he would invite
> G.W. Haywood plus/minus others to serve as co-maintainers of the
> existing branch rather than "forcing" a fork.
>
> Kenneth Porter wrote at about 16:27:57 -0700 on Friday, May 30, 2025:
>  > --On Friday, May 30, 2025 7:04 PM +0100 "G.W. Haywood"
>  > <ba...@ju...> wrote:
>  >
>  > > Maybe the best I could offer would be to fork the project on Github and
>  > > undertake to maintain the fork. Could that help you?
>  >
>  > It might be worthwhile to bring in the people who package BackupPC for
>  > various distros to create a "packaging" fork that merges all the patches
>  > that they have to do anyway.

-- 
Dr Steven J Benbow BSc MSc PhD CMath MIMA
Quintessa Ltd, The Hub, 14 Station Road, Henley-on-Thames, Oxfordshire, RG9 1AY, UK
Tel: 01491 636246  DD: 01491 630051  Web: http://www.quintessa.org
Quintessa Limited is an employee-owned company registered in England, Number 3716623.
Registered office: Quintessa Ltd, The Hub, 14 Station Road, Henley-on-Thames, Oxfordshire, RG9 1AY, UK |
From: Paul F. <pg...@fo...> - 2025-06-02 14:35:23
|
"G.W. Haywood" wrote: > Hi there, > > On Mon, 2 Jun 2025, Paul Fox wrote: > > > I've recently had an SSD drive start failing ... > > When I look at backuppc, I see that it detected 11 bad files ... > > is there a setting I could enable which would cause > > email to be sent when certain (or any), errors are detected? ... > > Something like this? > > https://github.com/backuppc/backuppc/wiki/How-to-setup-a-Success---Failure-notification-script Indeed! Thank you. It's not clear from the man page whether read errors on files would actually trigger DumpPostUserCmd, nor what env variables would hold that error count. But I'll take a look. paul =---------------------- paul fox, pg...@fo... (arlington, ma, where it's 64.4 degrees) |
From: G.W. H. <ba...@ju...> - 2025-06-02 14:23:54
|
Hi there,

On Mon, 2 Jun 2025, Paul Fox wrote:

> I've recently had an SSD drive start failing ...
> When I look at backuppc, I see that it detected 11 bad files ...
> is there a setting I could enable which would cause
> email to be sent when certain (or any) errors are detected? ...

Something like this?

https://github.com/backuppc/backuppc/wiki/How-to-setup-a-Success---Failure-notification-script

-- 

73,
Ged. |
From: <Mat...@gm...> - 2025-06-02 06:36:52
|
Hello,

I removed "--log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L" from $Conf{RsyncArgs} and the logging stops :)

Br
Matthias

On Sunday, 11.05.2025 at 01:17 +0200, Mat...@gm... wrote:
> Hello,
>
> Please - does someone have a hint?
>
> It is really strange, and my XferLOGs are several MB instead of some KB as in the past.
> Strange is that an XferLOG can start backing up the first shares without logging of received files
> and start logging only with the 2nd or 3rd share.
>
> How can I disable this logging information?
>
> Thanks in advance
> Matthias
>
> On Tuesday, 06.05.2025 at 11:15 +0200, Mat...@gm... wrote:
> > Hello,
> >
> > Sometimes backing up a share produces a lot of logging messages in the XferLOG. Can I disable
> > that?
> >
> > Running: /usr/libexec/backuppc-rsync/rsync_bpc --bpc-top-dir /var/lib/backuppc
> >   --bpc-host-name athlux --bpc-share-name root --bpc-bkup-num 4342 --bpc-bkup-comp 0
> >   --bpc-bkup-prevnum 4341 --bpc-bkup-prevcomp 0 --bpc-bkup-inode0 15023074 --bpc-log-level 0
> >   --bpc-attrib-new --numeric-ids --protect-args --perms --acls --xattrs --owner --group
> >   --times -D --links --hard-links --recursive --partial --one-file-system --checksum
> >   --timeout=21600 --contimeout=300 --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats
> >   --port=10120 --password-file=/var/lib/backuppc/pc/athlux/.rsyncdpw3108314
> >   --exclude=\*/lost+found/\* --exclude=\*/profile/\* --exclude=\*.nobackup
> >   --exclude=\*/Mail/SPAM\* --exclude=\*/Mail/Witze\* --exclude=\*/Mail/sent-mail
> >   --exclude=\*/Mail/outbox --exclude=\*/trash/\* --exclude=\*/Trash/\* --exclude=\*/Downloads/\*
> >   --exclude=\*/.kde/socket\* --exclude=\*/.kde/share/config/session/\*
> >   --exclude=\*/.xine/session\* --exclude=\*/.thumbnails/\* --exclude=\*/tmp/\* --exclude=\*.tmp
> >   --exclude=\*/temp/\* --exclude=\*.temp --exclude=\*.qic --exclude=\*.log --exclude=\*_log
> >   --exclude=\*.kpackage\* --exclude=.xsession-error\* --exclude=\*.slave-socket
> >   --exclude=\*.journal --exclude=\*.DCOPserver_\* --exclude=\*.std --exclude=\*.vmss
> >   --exclude=\*.vmdk\* --exclude=\*.nvram --exclude=\*.vmem --exclude=\*.vmsn --exclude=\*.vmx
> >   --exclude=\*.000 --exclude=/var/run/\* --exclude=/proc/\* --exclude=/var/log/atop/\*
> >   --exclude=/2ndSwap --exclude=NewVirtualDisk\*.vdi bac...@at...backup::root /
> > incr backup started for directory root
> > Xfer PIDs are now 3110059
> > This is the rsync child about to exec /usr/libexec/backuppc-rsync/rsync_bpc
> > Xfer PIDs are now 3110059,3110062
> > xferPids 3110059,3110062
> > log: recv >f..t...... rw-r----- 106, DEFAULT 0 etc/backuppc/LOCK
> > log: recv .d..t...... rwxr-xr-x 0, DEFAULT 4096 etc/cups
> > :
> >
> > Best regards
> > Matthias
|
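For anyone searching the archives for the same symptom: the change is just removing that one entry from $Conf{RsyncArgs}. The sketch below is abbreviated, so keep whatever other options your own config already has; only the commented-out line is the relevant part.

    $Conf{RsyncArgs} = [
        '--super', '--recursive', '--numeric-ids', '--perms', '--owner',
        '--group', '-D', '--times', '--links', '--hard-links', '--partial',
        '--one-file-system',
        # '--log-format=log: %o %i %B %8U,%8G %9l %f%L',  # removed: this is the
        #   option that writes the per-file "log: recv ..." lines to the XferLOG
        '--stats',
    ];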
From: Paul F. <pg...@fo...> - 2025-06-01 23:39:11
|
I've recently had an SSD drive start failing on one of my machines. I detected it when Chrome crashed as I started to type a URL, which I guess made it hit one of the bad files in its cache. When I look at backuppc, I see that it detected 11 bad files during its (very recent) full backup.

(If I browse that backup, the failed files seem to be filled from a previous backup. This surprises me a bit, since I'd have expected a full backup to be a literal snapshot of the disk, and not filled at all. But that's not really what my question is about.)

What I'm wondering: is there a setting I could enable which would cause email to be sent when certain (or any) errors are detected? It would have been nice to get an earlier warning of this failure. Even looking at the host summary page, I have to scroll down to the Xfer Error Summary to see these errors. Something at the top of the page would be nice.

paul

p.s. the drive is an M.2 drive in a 10-year-old, constantly used chromebox.

=----------------------
paul fox, pg...@fo... (arlington, ma, where it's 58.1 degrees) |
From: <bac...@ko...> - 2025-06-01 01:07:59
|
Indeed.

I think the most urgent need for a "maintainer" now would be to validate and merge in existing patches, as well as to potentially update the source relying on rsync (even though that is technically not necessary, as pointed out by G.W. Haywood).

[It's pretty easy to do, as I have in the past pulled in new versions for my own private builds, though one would of course need to test it again for each version to make sure nothing breaks.]

Beyond that I don't think much regular maintenance activity is required, as the program is otherwise rock-solid.

Having an ongoing maintainer would also at least serve as a point of reference and approval for any future patches -- whether they be updates, bug fixes, or code extensions.

Mr. Haywood -- it is certainly very generous of you to step up and agree to maintain a "fork".

Has anyone heard from Craig over the past couple of years? If he is around, it would be good to get his blessing. Even better if, in lieu of Craig maintaining active ownership, he would invite G.W. Haywood plus/minus others to serve as co-maintainers of the existing branch rather than "forcing" a fork.

Kenneth Porter wrote at about 16:27:57 -0700 on Friday, May 30, 2025:
 > --On Friday, May 30, 2025 7:04 PM +0100 "G.W. Haywood"
 > <ba...@ju...> wrote:
 >
 > > Maybe the best I could offer would be to fork the project on Github and
 > > undertake to maintain the fork. Could that help you?
 >
 > It might be worthwhile to bring in the people who package BackupPC for
 > various distros to create a "packaging" fork that merges all the patches
 > that they have to do anyway. |