From: G.W. H. <ba...@ju...> - 2025-07-23 14:07:42
Hi there,

On Tue, 22 Jul 2025, Matthew Pounsett wrote:

> On Fri, Jul 18, 2025 at 2:59 PM G.W. Haywood wrote:
>>
>> Are you saying that during the period in which you allow backups there
>> should be enough time to perform a backup for every client, but that a
>> client wasn't getting backed up despite that?
>
> Yes.
> ...
>
>> Since February 2020 I have had MaxBackups set at 1, and the blackout
>> period for all the clients set at Monday-Saturday from 07:30 to 23:30.
>
> No, that's not identical. If all of your hosts have the same blackout
> period, then none of them can stomp on the blackout periods of other
> hosts.

Sorry, I somehow lost sight of the fact that the problem host had a shorter blackout than the others. Even though the problem has been resolved, there are two reasons for writing now.

The first is that investigation can be started relatively painlessly. If you look around line 1760 in .../bin/BackupPC you should find sub HostSortCompare. Here are the comments at its head:

8<----------------------------------------------------------------------
# Compare function for host sort. Hosts with errors go first,
# sorted with the oldest errors first. The remaining hosts
# are sorted so that those with the oldest backups go first.
8<----------------------------------------------------------------------

If things aren't working this way then maybe something needs fixing. When hosts are added to the queue, sub HostSortCompare is called by sub QueueAllPCs (which is at around line 1875). In sub QueueAllPCs there's this bit of code:

8<----------------------------------------------------------------------
    foreach my $host ( sort HostSortCompare keys(%$Hosts) ) {
        $nSkip++ if ( QueueOnePC($host, $host, 'BackupPC', 'bg', 'auto') == 2 );
    }
    foreach my $dhcp ( @{$Conf{DHCPAddressRanges}} ) {
        for ( my $i = $dhcp->{first} ; $i <= $dhcp->{last} ; $i++ ) {
            my $ipAddr = "$dhcp->{ipAddrBase}.$i";
            $nSkip++ if ( QueueOnePC($ipAddr, $ipAddr, 'BackupPC', 'bg', 'dhcpPoll') == 2 );
        }
    }
8<----------------------------------------------------------------------

I don't have time to investigate this now. If you want to dig further, then I suggest that when you next set MaxBackups to '1', immediately before the first occurrence of "$nSkip++ if ( ..." you could add a log line such as

    print(LOG $bpc->timeStamp, "Attempting to queue host $host\n");

and immediately before the second occurrence a line something like

    print(LOG $bpc->timeStamp, "Attempting to queue IP $ipAddr\n");

You could obviously extend this to do more complex logging with a bit of Perl tinkering. If you don't have the time either, then maybe raise an issue on GitHub (WHAT AM I SAYING????) and put a link to this thread in there.

The second and, I think, more important reason for writing now is this:

On Wed, 23 Jul 2025, Adam Goryachev wrote:

> ...
> 4) Enable monitoring so that hosts with backups that are too old are
> notified so that you can resolve the issue, instead of finding out later
> after some potential failure
> ...

+1

In my config.pl I've changed $Conf{EMailNotifyOldBackupDays} from the default of 7 days to 1.2. I'm thinking about increasing it to 1.5. :)

This notification system does work. I have several hosts which don't get switched on very often, and I get almost daily emails about them. But it sounds like this notification didn't happen for Mr. Pounsett even though the host was not backed up for two weeks; is that right? If so, this needs attention.

--

73,
Ged.
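[Editorial note: the ordering that the HostSortCompare comments describe can be sketched outside of Perl. The following toy Python model is purely illustrative; the host records, field names, and ages are invented, not BackupPC's actual data structures.]

```python
# Toy model of the ordering documented in sub HostSortCompare:
# hosts with errors first (oldest error first), then the rest
# ordered with the oldest backup first. Ages are in days (hypothetical).
hosts = {
    "alpha": {"error_age": None, "backup_age": 2},  # healthy, recent backup
    "bravo": {"error_age": 5,    "backup_age": 1},  # has an old error
    "carol": {"error_age": 1,    "backup_age": 3},  # has a newer error
    "zulu":  {"error_age": None, "backup_age": 9},  # healthy, stale backup
}

def sort_key(name):
    h = hosts[name]
    if h["error_age"] is not None:
        return (0, -h["error_age"])   # errors group first, oldest error first
    return (1, -h["backup_age"])      # then oldest backup first

order = sorted(hosts, key=sort_key)
# order == ["bravo", "carol", "zulu", "alpha"]
```

If the queue really is built this way, the stale "zulu"-like host should sit ahead of every healthy host with a fresher backup, which is exactly what the log lines suggested above would confirm or refute.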
From: Adam G. <mai...@we...> - 2025-07-23 02:05:20
On 23/7/2025 00:35, Matthew Pounsett wrote:

> On Fri, Jul 18, 2025 at 5:51 PM Christian Völker via BackupPC-users
> <bac...@li...> wrote:
>
>> As far as I can say (and I see it here in my v4 setup) is: there is an
>> order in the host list to be backed up. If a scheduled time occurs,
>> BackupPC takes the host for the next backup where the last backup date
>> is the oldest. Of course, it checks then for some blackout periods. So
>> it might skip this host. But then it'll grab the next from the ordered
>> list of "last backup".
>
> This is not what I saw. I let the problem host get up to 8 days
> without a backup before I realized we had a problem, and I had to start
> manually queueing a backup every couple of days. BackupPC appeared to be
> grabbing whatever was at the top of the queue, not whatever had the
> least-recent last backup.
>
> Now that I'm no longer on MaxBackups=1, most backups are completed
> early in the day, so by the time I get to my desk the Current Queue is
> mostly empty, and it's harder to observe what its behaviour is.

I think you can validate the behaviour easily by viewing the queue page.

At each wakeup time, all hosts are added to the queue, seemingly in alphabetical order. If there is a spare "backup slot" (ie, MaxBackups is greater than the number currently in progress), then a host is taken from the top of the queue and evaluated as to whether a backup should start or not (blackout period, last backup time, etc). Repeat the above until backups in progress == MaxBackups, then pause and do not continue to process the queue until a backup completes.

So, if your host is sorted later in the list, and you are unable to complete all backups during the backup window, then the host will indeed never be backed up.
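[Editorial note: the starvation scenario described above is easy to reproduce in a toy model. This is plain Python, not BackupPC code; the hostnames, the one-backup-per-hour slot model, and the blackout hours are all invented for illustration. With MaxBackups=1, an alphabetical queue, and blacked-out hosts simply dropped, the host sorted last never gets a slot inside its allowed window.]

```python
# Toy scheduler: one backup slot per hour (MaxBackups = 1), hosts queued
# alphabetically at every wakeup, blacked-out hosts dropped from the queue.
def run_day(hosts, blackout):
    done = []
    for hour in range(24):                 # one wakeup per hour
        for h in sorted(hosts):            # alphabetical queue order
            if h in done:
                continue                   # already backed up today
            if hour in blackout.get(h, set()):
                continue                   # in blackout: dropped this pass
            done.append(h)                 # this host takes the hour's slot
            break
    return done

hosts = [f"host{c}" for c in "abcdefghij"] + ["zulu"]
# "zulu" may only back up between midnight and 08:00
backed_up = run_day(hosts, blackout={"zulu": set(range(8, 24))})
# hosta..hostj complete in hours 0-9; "zulu" is starved out entirely
```

Hosts a through h consume the whole of zulu's window, and by the time the queue reaches zulu the blackout has started, so a full free day of slots goes unused while zulu stays unbacked-up.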
From: Matthew P. <ma...@co...> - 2025-07-22 14:35:31
On Fri, Jul 18, 2025 at 5:51 PM Christian Völker via BackupPC-users <bac...@li...> wrote:

> As far as I can say (and I see it here in my v4 setup) is: there is an
> order in the host list to be backed up. If a scheduled time occurs,
> BackupPC takes the host for the next backup where the last backup date
> is the oldest. Of course, it checks then for some blackout periods. So
> it might skip this host. But then it'll grab the next from the ordered
> list of "last backup".

This is not what I saw. I let the problem host get up to 8 days without a backup before I realized we had a problem, and I had to start manually queueing a backup every couple of days. BackupPC appeared to be grabbing whatever was at the top of the queue, not whatever had the least-recent last backup.

Now that I'm no longer on MaxBackups=1, most backups are completed early in the day, so by the time I get to my desk the Current Queue is mostly empty, and it's harder to observe what its behaviour is.
From: Matthew P. <ma...@co...> - 2025-07-22 14:29:42
On Fri, Jul 18, 2025 at 2:59 PM G.W. Haywood <ba...@ju...> wrote:

> Is there anything in the logs which might shed any light?

Nothing that looked meaningful to me.

> Are you saying that during the period in which you allow backups there
> should be enough time to perform a backup for every client, but that a
> client wasn't getting backed up despite that?

Yes.

> If there was enough time then something seems wrong, but if the 8.1
> hours is too short I'd guess one way or another you'll have a problem.
> But I'd call it a configuration problem. (BTW I don't think 23.9 in
> the blackout times means 23:59. More like 23:54.)

Yeah, I was approximating. :) The 8-ish hours is not enough time to back up all hosts, but only the one host has that restriction. With MaxBackups=1, I think all hosts were getting backed up within about 16 hours.

The behaviour I was seeing is that there was a low probability that the host with the blackout period would end up near the top of the queue outside its blackout period. It's possible the problem is compounded by the fact that the end of its blackout period coincides with the first wakeup (the blackout period ends at ~midnight, but there is no 0h wakeup by default... the first wakeup is at 01:00).

What I was seeing is that for the first 8 hours of the day, when this host could be backed up, the queue was concentrating on other hosts. Then there'd be a long period at the end of the day where nothing was being backed up... because everything _except_ the host with the blackout period had been done by then, and the blacked-out host was... well... blacked out.

> Since February 2020 I have had MaxBackups set at 1, and the blackout
> period for all the clients set at Monday-Saturday from 07:30 to 23:30.
> This seems more or less identical to your setup yet I've never seen
> the symptoms you describe. It was a fresh V4 install.

No, that's not identical. If all of your hosts have the same blackout period, then none of them can stomp on the blackout periods of other hosts. And obviously your entire collection of hosts can be backed up in that period of time. Mine can't.
From: Christian V. <cvo...@kn...> - 2025-07-18 21:51:06
As far as I can say (and I see it here in my v4 setup): there is an order in the host list to be backed up. If a scheduled time occurs, BackupPC takes the host for the next backup where the last backup date is the oldest. Of course, it then checks for any blackout periods, so it might skip this host. But then it'll grab the next from the ordered list of "last backup".

So I am really surprised your host does not get backed up. It should start with that one if there have been no backups for a couple of days.

/KNEBB

On 18.07.25 at 20:57, G.W. Haywood wrote:

> Hi there,
>
> On Fri, 18 Jul 2025, Matthew Pounsett wrote:
>
>> For I/O preserving reasons that aren't really relevant to the list, I've
>> temporarily had my backup server on MaxBackups = 1 for a couple of weeks.
>> I also have a backup client that is on a tightly restricted BlackoutPeriod
>> of 8.0 through 23.9 (allowing backups from 23:59 through 08:00).
>>
>> The combination of these two things has resulted in the blackout server
>> never receiving an automatically queued backup for two straight weeks.
>> ...
>> This is my guess about what's going on ...
>> ...
>> Thoughts? Am I completely off base?
>
> Is there anything in the logs which might shed any light?
>
> Are you saying that during the period in which you allow backups there
> should be enough time to perform a backup for every client, but that a
> client wasn't getting backed up despite that?
>
> If there was enough time then something seems wrong, but if the 8.1
> hours is too short I'd guess one way or another you'll have a problem.
> But I'd call it a configuration problem. (BTW I don't think 23.9 in
> the blackout times means 23:59. More like 23:54.)
>
> Since February 2020 I have had MaxBackups set at 1, and the blackout
> period for all the clients set at Monday-Saturday from 07:30 to 23:30.
> This seems more or less identical to your setup, yet I've never seen
> the symptoms you describe. It was a fresh V4 install.
From: G.W. H. <ba...@ju...> - 2025-07-18 18:58:11
Hi there,

On Fri, 18 Jul 2025, Matthew Pounsett wrote:

> For I/O preserving reasons that aren't really relevant to the list, I've
> temporarily had my backup server on MaxBackups = 1 for a couple of weeks.
> I also have a backup client that is on a tightly restricted BlackoutPeriod
> of 8.0 through 23.9 (allowing backups from 23:59 through 08:00).
>
> The combination of these two things has resulted in the blackout server
> never receiving an automatically queued backup for two straight weeks.
> ...
> This is my guess about what's going on ...
> ...
> Thoughts? Am I completely off base?

Is there anything in the logs which might shed any light?

Are you saying that during the period in which you allow backups there should be enough time to perform a backup for every client, but that a client wasn't getting backed up despite that?

If there was enough time then something seems wrong, but if the 8.1 hours is too short I'd guess one way or another you'll have a problem. But I'd call it a configuration problem. (BTW I don't think 23.9 in the blackout times means 23:59. More like 23:54.)

Since February 2020 I have had MaxBackups set at 1, and the blackout period for all the clients set at Monday-Saturday from 07:30 to 23:30. This seems more or less identical to your setup, yet I've never seen the symptoms you describe. It was a fresh V4 install.

The host status page is on a tab in my browser. I keep it permanently open, and most mornings the first thing I'll do when I switch on my screen is check the 'Last Backup (days)' column on the host status page. It's very unusual for me to see anything unexpected there, but if I do it's usually a network problem, a host down, or something like that. I think the only times I have seen BackupPC fail to start the backup for a host are when I've dropped multiple gigabytes of junk in a silly place, so that a share that should take a few minutes to back up takes a couple of days, and nothing else gets a chance to be queued.

I wonder if there's something else perhaps, er, unusual in your config which might have a bearing on the issue. ISTR that you converted your backups from V3 to V4 quite recently. I'm not sure how thoroughly the convert process has been exercised, and I wonder if it's relevant. We probably should look at the entire config. I guess it's not an issue for you if you now have MaxBackups at some higher number, but if you have any qualms you could always install a V4 system from scratch on a spare machine, making a bare minimum of configuration changes so that if things go awry it's easier to nail down.

--

73,
Ged.
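[Editorial note: Ged's aside that 23.9 does not mean 23:59 is simple arithmetic. BlackoutPeriod hours are decimal fractions of an hour, so the minutes are the fractional part times 60. A quick check of the conversion, in plain Python:]

```python
def frac_hour_to_hhmm(h):
    """Convert a decimal hour like 23.9 to an HH:MM string."""
    hours = int(h)
    minutes = round((h - hours) * 60)
    return f"{hours:02d}:{minutes:02d}"

# 23.9 hours is 23:54, not 23:59; 23:59 would be roughly 23.983
print(frac_hour_to_hhmm(23.9))   # -> 23:54
print(frac_hour_to_hhmm(8.0))    # -> 08:00
print(frac_hour_to_hhmm(7.5))    # -> 07:30
```

So a blackout ending at 23.9 leaves a five-minute gap before midnight, and the host only becomes eligible again at the next wakeup after that.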
From: Matthew P. <ma...@co...> - 2025-07-17 18:15:37
For I/O-preserving reasons that aren't really relevant to the list, I've temporarily had my backup server on MaxBackups = 1 for a couple of weeks. I also have a backup client that is on a tightly restricted BlackoutPeriod of 8.0 through 23.9 (allowing backups from 23:59 through 08:00).

The combination of these two things has resulted in the blackout server never receiving an automatically queued backup for two straight weeks. We're back to regular MaxBackups now, and the other server will return to unrestricted hours soon, but I thought I'd bring this up in case there's a design problem with the scheduler that can be addressed.

This is my guess about what's going on... I assume someone in here knows more than I do about the scheduler and can correct or confirm my assumptions.

When BackupPC does a wakeup it adds every host to the queue, and doesn't appear to apply any kind of sorting (is it explicitly random order?). When the server gets under MaxBackups and decides to start a new backup, it appears to iterate over the current queue in the order listed, discarding hosts that are not due for backups at all as it finds them. The first host it finds that is due for a backup of any kind, it starts the backup and removes that host from the queue. If it encounters a host that is due for backup but in blackout, it discards/removes that host as if it were not due for backup.

The end result is that a server that has a blackout period might never get a backup, because the server will only execute a backup for it if, outside of its blackout period, there are no servers ahead of it in the queue that are due for backup. The problem does not require that there be too many backups to complete in a single BackupNightlyPeriod. All it needs is for there to be enough servers that there is a low probability of a server with a blackout period randomly appearing near the top of the queue outside its blackout hours.

Assuming I'm right about the mechanisms in play, there are two fixes I can think of (one can be changed in two places):

1a. Hosts are added to the queue in reverse order of their "Last Backup (days)" value from the Host Summary (larger numbers first).
1b. BackupPC processes the queue in Last Backup order, larger numbers first.
2. When picking from the queue, hosts that are due for backup but in blackout are skipped without being removed, leaving them in their queue position.

Thoughts? Am I completely off base?
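[Editorial note: proposed fixes 1b and 2 can be sketched together in a few lines. This is illustrative Python, not BackupPC code; the host names and last-backup ages are invented. The queue is ordered by last-backup age, and blacked-out hosts are skipped rather than removed, so they keep their position.]

```python
# Hypothetical last-backup ages in days, and a blackout set for "now".
last_backup_age = {"alpha": 0.5, "bravo": 1.2, "zulu": 8.0}
in_blackout = {"zulu"}          # pretend "zulu" is currently blacked out

# Fix 1b: process the queue oldest-backup-first.
queue = sorted(last_backup_age, key=last_backup_age.get, reverse=True)

# Fix 2: start the first eligible host, but leave blacked-out hosts queued.
def pick_next(queue, in_blackout):
    for h in queue:
        if h not in in_blackout:
            queue.remove(h)     # only the started host leaves the queue
            return h
    return None                 # everything eligible is blacked out

started = pick_next(queue, in_blackout)
# "bravo" starts; "zulu" stays at the head for its next eligible window
```

With removal-on-skip (the behaviour guessed at above), "zulu" would instead be dropped each pass and would have to win the race all over again at every wakeup.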
From: Dave B. <dav...@ho...> - 2025-07-15 20:10:33
It seems that I was the one who missed the obvious, as I had expected that BackupPC_migrateV3toV4 would have allowed me to access the backups via the GUI. Much thanx for continuing my education!

________________________________
From: Adam Goryachev via BackupPC-users <bac...@li...>
Sent: Tuesday, July 15, 2025 06:16
To: bac...@li... <bac...@li...>
Cc: Adam Goryachev <mai...@we...>
Subject: Re: [BackupPC-users] Problem accessing v3 backups

On 14/7/2025 07:45, Dave Bachmann wrote:

> It's obvious that I didn't fully understand how to use BackupPC_ls and had
> moved on to try other techniques. I had expected that one or the other
> would have eventually led to having a GUI interface that no longer said
> "Error: Directory XXXXX is empty", but instead listed the files available
> to be restored. Now that it seems that I'll be working at the command line
> exclusively, I'm in the process of trying to understand the usage and
> options of BackupPC_restore and BackupPC_zcat and expect that I'll have
> further questions coming.
>
> I appreciate your patience with one who, in this case at least, knows what
> he doesn't know - even if he uses a word that he first learned in boot
> camp many years ago to describe his ignorance.

This might be silly of me, but in V3 you should be able to just access the files from the CLI under the "pc" directory... For me, that is /var/lib/backuppc/pc/hostname/backup-num/f%2f/fetc/... The path/filename is slightly munged, but you should be able to work it out; special chars/spaces etc. will be "escaped", so "f%2f" is actually "/" and "fetc" is "etc". When you find the right file, use BackupPC_zcat to get the original content.

Hope that helps, or I've missed something really obvious...

> Dave
>
> ________________________________
> From: G.W. Haywood <ba...@ju...>
> Sent: Sunday, July 13, 2025 06:15
> To: bac...@li... <bac...@li...>
> Subject: Re: [BackupPC-users] Problem accessing v3 backups
>
> Hello again,
>
> On Sun, 13 Jul 2025, Dave Bachmann wrote:
>
>> I'm not sure that I understand how I would know which digests are
>> associated with a file that I was searching for. This, for example, is a
>> typical result:
>>
>>   $ ./BackupPC_zcat /srv/backuppc/cpool/aa/aa/abaa09668efeb993980a19a1f83e52ad
>>   VSS
>>   Detector.so???????? ???>%???=??{~??$
>>
>> I assume that this is one chunk of a file named Detector.so, but ...
>
> I was always told that 'assume' makes an 'ass' out of 'u' and 'me'. :)
>
> No, it isn't one chunk. It's the content of the whole file (and its name
> very likely isn't 'Detector.so').
>
>> clearly not the way to find and recreate all of the chunks of that file.
>
> No need for anything like that. The files are not stored in chunks;
> they're stored whole, but usually (hopefully) in a compressed form, as
> you've seen.
>
>> To repeat, what I *think* I need to do is to recreate the pool from the
>> cpool as the first step in order to be able to find a file named, for
>> example, "/home/common/data/tunes/A/Allman Brothers/Allman Brothers
>> Band/Dreams.mp3" on a specific host. Is there a BackupPC_?
>> script/function/setting/whatever to do that?
>
> Well, if you put it like that ... to repeat:
>
> 8<----------------------------------------------------------------------
> On Thu, 10 Jul 2025, G.W. Haywood wrote:
>> On Thu, 10 Jul 2025, Dave Bachmann wrote:
>>
>>> I am trying to see if I can recover a file that may have been deleted
>>> years ago and *may* be on a very old backup disk. ...
>>
>> Have you tried using the 'BackupPC_ls' script?
> 8<----------------------------------------------------------------------
>
> You could do worse than read the V4 documentation, particularly the part
> which begins "Here is a more detailed discussion:". :)
>
>> Much thanx for your continued help!
>
> You're very welcome. In these exchanges I often learn as much as, if not
> more than, the person I'm helping.
>
> --
>
> 73,
> Ged.

_______________________________________________
BackupPC-users mailing list
Bac...@li...
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
From: Adam G. <mai...@we...> - 2025-07-15 13:43:45
On 14/7/2025 07:45, Dave Bachmann wrote:

> It's obvious that I didn't fully understand how to use BackupPC_ls and
> had moved on to try other techniques. I had expected that one or the
> other would have eventually led to having a GUI interface that no longer
> said "Error: Directory XXXXX is empty", but instead listed the files
> available to be restored. Now that it seems that I'll be working at the
> command line exclusively, I'm in the process of trying to understand the
> usage and options of BackupPC_restore and BackupPC_zcat and expect that
> I'll have further questions coming.
>
> I appreciate your patience with one who, in this case at least, knows
> what he doesn't know - even if he uses a word that he first learned in
> boot camp many years ago to describe his ignorance.

This might be silly of me, but in V3 you should be able to just access the files from the CLI under the "pc" directory... For me, that is /var/lib/backuppc/pc/hostname/backup-num/f%2f/fetc/... The path/filename is slightly munged, but you should be able to work it out; special chars/spaces etc. will be "escaped", so "f%2f" is actually "/" and "fetc" is "etc". When you find the right file, use BackupPC_zcat to get the original content.

Hope that helps, or I've missed something really obvious...

> Dave
> ------------------------------------------------------------------------
> *From:* G.W. Haywood <ba...@ju...>
> *Sent:* Sunday, July 13, 2025 06:15
> *To:* bac...@li... <bac...@li...>
> *Subject:* Re: [BackupPC-users] Problem accessing v3 backups
>
> Hello again,
>
> On Sun, 13 Jul 2025, Dave Bachmann wrote:
>
>> I'm not sure that I understand how I would know which digests are
>> associated with a file that I was searching for. This, for example, is
>> a typical result:
>>
>>   $ ./BackupPC_zcat /srv/backuppc/cpool/aa/aa/abaa09668efeb993980a19a1f83e52ad
>>   VSS
>>   Detector.so???????? ???>%???=??{~??$
>>
>> I assume that this is one chunk of a file named Detector.so, but ...
>
> I was always told that 'assume' makes an 'ass' out of 'u' and 'me'. :)
>
> No, it isn't one chunk. It's the content of the whole file (and its
> name very likely isn't 'Detector.so').
>
>> clearly not the way to find and recreate all of the chunks of that file.
>
> No need for anything like that. The files are not stored in chunks;
> they're stored whole, but usually (hopefully) in a compressed form, as
> you've seen.
>
>> To repeat, what I *think* I need to do is to recreate the pool from
>> the cpool as the first step in order to be able to find a file
>> named, for example, "/home/common/data/tunes/A/Allman
>> Brothers/Allman Brothers Band/Dreams.mp3" on a specific host. Is
>> there a BackupPC_? script/function/setting/whatever to do that?
>
> Well, if you put it like that ... to repeat:
>
> 8<----------------------------------------------------------------------
> On Thu, 10 Jul 2025, G.W. Haywood wrote:
>> On Thu, 10 Jul 2025, Dave Bachmann wrote:
>>
>>> I am trying to see if I can recover a file that may have been
>>> deleted years ago and *may* be on a very old backup disk. ...
>>
>> Have you tried using the 'BackupPC_ls' script?
> 8<----------------------------------------------------------------------
>
> You could do worse than read the V4 documentation, particularly the
> part which begins "Here is a more detailed discussion:". :)
>
>> Much thanx for your continued help!
>
> You're very welcome. In these exchanges I often learn as much as, if
> not more than, the person I'm helping.
>
> --
>
> 73,
> Ged.
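[Editorial note: the name munging Adam describes follows a simple pattern visible in his examples: each path component gains a leading "f", and special characters are percent-escaped as two hex digits, so "f%2f" is the component "/" and "fetc" is "etc". A rough decoder, assuming only that escaping scheme; this is an illustration, not a BackupPC utility.]

```python
import re

def unmangle(component):
    """Decode one BackupPC v3 mangled path component, e.g. 'f%2f' -> '/'."""
    if not component.startswith("f"):
        raise ValueError("mangled components start with 'f'")
    # Replace each %xx escape with the character it encodes.
    return re.sub(r"%([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)),
                  component[1:])

print(unmangle("f%2f"))   # -> /
print(unmangle("fetc"))   # -> etc
```

Walking a path like /var/lib/backuppc/pc/hostname/backup-num/f%2f/fetc/ component by component through a decoder like this recovers the original /etc/... names, after which BackupPC_zcat decompresses the file contents.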
From: Dave B. <dav...@ho...> - 2025-07-13 21:46:09
It's obvious that I didn't fully understand how to use BackupPC_ls and had moved on to try other techniques. I had expected that one or the other would have eventually led to having a GUI interface that no longer said "Error: Directory XXXXX is empty", but instead listed the files available to be restored. Now that it seems that I'll be working at the command line exclusively, I'm in the process of trying to understand the usage and options of BackupPC_restore and BackupPC_zcat, and I expect that I'll have further questions coming.

I appreciate your patience with one who, in this case at least, knows what he doesn't know - even if he uses a word that he first learned in boot camp many years ago to describe his ignorance.

Dave

________________________________
From: G.W. Haywood <ba...@ju...>
Sent: Sunday, July 13, 2025 06:15
To: bac...@li... <bac...@li...>
Subject: Re: [BackupPC-users] Problem accessing v3 backups

Hello again,

On Sun, 13 Jul 2025, Dave Bachmann wrote:

> I'm not sure that I understand how I would know which digests are
> associated with a file that I was searching for. This, for example, is a
> typical result:
>
>   $ ./BackupPC_zcat /srv/backuppc/cpool/aa/aa/abaa09668efeb993980a19a1f83e52ad
>   VSS
>   Detector.so???????? ???>%???=??{~??$
>
> I assume that this is one chunk of a file named Detector.so, but ...

I was always told that 'assume' makes an 'ass' out of 'u' and 'me'. :)

No, it isn't one chunk. It's the content of the whole file (and its name very likely isn't 'Detector.so').

> clearly not the way to find and recreate all of the chunks of that file.

No need for anything like that. The files are not stored in chunks; they're stored whole, but usually (hopefully) in a compressed form, as you've seen.

> To repeat, what I *think* I need to do is to recreate the pool from
> the cpool as the first step in order to be able to find a file
> named, for example, "/home/common/data/tunes/A/Allman
> Brothers/Allman Brothers Band/Dreams.mp3" on a specific host. Is
> there a BackupPC_? script/function/setting/whatever to do that?

Well, if you put it like that ... to repeat:

8<----------------------------------------------------------------------
On Thu, 10 Jul 2025, G.W. Haywood wrote:
> On Thu, 10 Jul 2025, Dave Bachmann wrote:
>
>> I am trying to see if I can recover a file that may have been
>> deleted years ago and *may* be on a very old backup disk. ...
>
> Have you tried using the 'BackupPC_ls' script?
8<----------------------------------------------------------------------

You could do worse than read the V4 documentation, particularly the part which begins "Here is a more detailed discussion:". :)

> Much thanx for your continued help!

You're very welcome. In these exchanges I often learn as much as, if not more than, the person I'm helping.

--

73,
Ged.
From: G.W. H. <ba...@ju...> - 2025-07-13 13:16:13
|
Hello again,

On Sun, 13 Jul 2025, Dave Bachmann wrote:

> I'm not sure that I understand how I would know which digests are
> associated with a file that I was searching for. This, for example,
> is a typical result:
> $ ./BackupPC_zcat /srv/backuppc/cpool/aa/aa/abaa09668efeb993980a19a1f83e52ad
> VSS
> Detector.so???????? ???>%???=??{~??$
> I assume that this is one chunk of a file named Detector.so, but ...

I was always told that 'assume' makes an 'ass' out of 'u' and 'me'. :)
No, it isn't one chunk. It's the content of the whole file (and its
name very likely isn't 'Detector.so').

> clearly not the way to find and recreate all of the chunks of that file.

No need for anything like that. The files are not stored in chunks,
they're stored whole, but usually (hopefully) in a compressed form as
you've seen.

> To repeat, what I *think* I need to do is to recreate the pool from
> the cpool as the first step in order to be able to find a file
> named, for example, "/home/common/data/tunes/A/Allman
> Brothers/Allman Brothers Band/Dreams.mp3" on a specific host. Is
> there a BackupPC_? script/function/setting/whatever to do that?

Well if you put it like that ... to repeat:

8<----------------------------------------------------------------------
On Thu, 10 Jul 2025, G.W. Haywood wrote:
> On Thu, 10 Jul 2025, Dave Bachmann wrote:
>
>> I am trying to see if I can recover a file that may have been
>> deleted years ago and *may* be on a very old backup disk. ...
>> ...
>
> Have you tried using the 'BackupPC_ls' script?
8<----------------------------------------------------------------------

You could do worse than read the V4 documentation, particularly the
part which begins "Here is a more detailed discussion:". :)

> Much thanx for your continued help!

You're very welcome. In these exchanges I often learn as much as - if
not more than - the person I'm helping.

--
73, Ged.
|
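[Editorial aside, not part of the original message.] Ged's point about digest-named pool files can be made concrete. In the v4 pool layout each of the two directory levels appears to be the corresponding digest byte with its low bit cleared, which matches his observation that files under cpool/aa/aa all begin 'aaaa', 'aaab', 'abaa' or 'abab'. A minimal sketch (`pool_dir_for` is a hypothetical helper, not a BackupPC tool):

```shell
# Sketch: compute the v4 (c)pool subdirectory for a given digest, assuming
# each directory level is the matching digest byte with the low bit masked off.
pool_dir_for() {
    b1=$(printf '%s' "$1" | cut -c1-2)   # first digest byte, as hex text
    b2=$(printf '%s' "$1" | cut -c3-4)   # second digest byte, as hex text
    printf '%02x/%02x\n' $(( 0x$b1 & 0xfe )) $(( 0x$b2 & 0xfe ))
}

# The digest from Dave's example: 0xab & 0xfe = 0xaa, 0xaa & 0xfe = 0xaa
pool_dir_for abaa09668efeb993980a19a1f83e52ad   # -> aa/aa
```

This explains why a digest beginning "ab" was found under cpool/aa/aa in the thread.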
From: Dave B. <dav...@ho...> - 2025-07-12 19:25:03
|
I'm not sure that I understand how I would know which digests are associated with a file that I was searching for. This, for example, is a typical result:

$ ./BackupPC_zcat /srv/backuppc/cpool/aa/aa/abaa09668efeb993980a19a1f83e52ad
VSS
Detector.so̭������� ���>%���=��{~�晇$

I assume that this is one chunk of a file named Detector.so, but clearly not the way to find and recreate all of the chunks of that file.

To repeat, what I *think* I need to do is to recreate the pool from the cpool as the first step in order to be able to find a file named, for example, "/home/common/data/tunes/A/Allman Brothers/Allman Brothers Band/Dreams.mp3" on a specific host. Is there a BackupPC_? script/function/setting/whatever to do that?

Much thanx for your continued help!

________________________________
From: G.W. Haywood <ba...@ju...>
Sent: Friday, July 11, 2025 07:18
To: bac...@li... <bac...@li...>
Subject: Re: [BackupPC-users] Problem accessing v3 backups

Hi there,

On Fri, 11 Jul 2025, Dave Bachmann wrote:

> Any thoughts how I can access the data which surely seems to be
> there in one form or another?

You can use BackupPC_zcat - just give it the digest directly. For
example to cat a more or less random file here, logged in as the
backuppc user:

$ BackupPC_zcat 4b54f2036a7908351ea9e5e94e3b4c87 | head
module TZInfo
module Definitions
module Asia
module Bishkek
include TimezoneDefinition
timezone 'Asia/Bishkek' do |tz|
tz.offset :o0, 17904, 0, :LMT
tz.offset :o1, 18000, 0, :FRUT
tz.offset :o2, 21600, 0, :FRUT
$

--
73, Ged.
|
From: G.W. H. <ba...@ju...> - 2025-07-11 15:11:12
|
Hi there,

On Fri, 11 Jul 2025, Dave Bachmann wrote:

> Any thoughts how I can access the data which surely seems to be
> there in one form or another?

You can use BackupPC_zcat - just give it the digest directly. For
example to cat a more or less random file here, logged in as the
backuppc user:

$ BackupPC_zcat 4b54f2036a7908351ea9e5e94e3b4c87 | head
module TZInfo
module Definitions
module Asia
module Bishkek
include TimezoneDefinition
timezone 'Asia/Bishkek' do |tz|
tz.offset :o0, 17904, 0, :LMT
tz.offset :o1, 18000, 0, :FRUT
tz.offset :o2, 21600, 0, :FRUT
$

--
73, Ged.
|
From: Dale K. <da...@da...> - 2025-07-11 02:15:01
|
Further to Jeff Kosowsky's wonderful server side support for client VSS snapshots [1], and with the deprecation and removal of WMIC from new Windows 11 builds [2], I've gone ahead and updated the commands to use PowerShell instead of WMIC.

I haven't found a way to use Jeff's scripts with rsyncd, only rsync over ssh - and the required changes look too complicated - so I've switched to rsync. I found all this out the hard way, as I haven't needed to touch BackupPC for a number of years and only just got a new Windows desktop. So this has been tested in exactly this one scenario and "works for me" :)

I also updated the "mount" section for my use case, as I also saw another user on the list with the same issue [3] - where cygwin's "mount -m" doesn't show anything unless you have actually updated fstab. Most people don't, so I think using the default output of "mount" is more reliable.

[1] https://sourceforge.net/p/backuppc/mailman/message/37227904/
[2] https://techcommunity.microsoft.com/blog/windows-itpro-blog/wmi-command-line-wmic-utility-deprecation-next-steps/4039242
[3] https://sourceforge.net/p/backuppc/mailman/message/37228426/

# diff -Naru vss.pl_append vss.pl_append_new
--- vss.pl_append	2025-07-11 10:55:31.000000000 +1000
+++ vss.pl_append_new	2025-07-11 10:53:57.000000000 +1000
@@ -67,11 +67,12 @@
 #BASH_CREATESHADOW: Create single shadow for drive letter: $I
 my $bash_createshadow = <<'EOF';
 #Create shadow copy and capture shadow id
-{ SHADOWID="$(wmic shadowcopy call create Volume=${I}:\\ | sed -ne 's|[ \t]*ShadowID = "\([^"]*\).*|\1|p')" ; } 2> >(tail +2)
-#Note: redirection removes extra new line from stderr
+SHADOWID=$(powershell -c "Invoke-CimMethod -ClassName Win32_ShadowCopy -MethodName Create -Arguments @{ Volume = \"${I}:\\\" } | Select-Object -ExpandProperty ShadowID")
+SHADOWID=$(echo "$SHADOWID" | tr -d '\r\n')
 #Get shadow GLOBALROOT path from shadow id
-SHADOWPATH="$(wmic shadowcopy | awk -v id=$SHADOWID 'id == $8 {print $3}')"
+SHADOWPATH=$(powershell -c "Get-CimInstance -ClassName Win32_ShadowCopy | Where-Object { \$_.ID -eq \"$SHADOWID\" } | Select-Object -ExpandProperty DeviceObject")
+SHADOWPATH=$(echo "$SHADOWPATH" | tr -d '\r\n')
 #Create reparse-point link in shadowdir (since GLOBALROOT paths not readable by cygwin or rsync)
 SHADOWLINK="$(cygpath -w ${shadowdir})$I-$hosttimestamp"
@@ -85,7 +86,7 @@
 my $bash_loopcreateshadow = <<'EOF';
 [ -n "$shadows" ] && mkdir -p $shadowdir
 for I in $shadows; do
-    if ! [ -d "$(cygpath -u ${I}:)" ] || ! grep -qE "^${I^^}: \S+ ntfs " <(mount -m); then
+    if ! [ -d "$(cygpath -u ${I}:)" ] || ! grep -qE "^${I^^}: on \S+ type ntfs " <(mount); then
         echo "No such NTFS drive '${I}:' skipping corresponding shadow setup..."
         continue
     fi
@@ -102,11 +103,13 @@
 DRIVE=${SHADOWLINK##*\\}; DRIVE=${DRIVE%%-*}; DRIVE="${DRIVE^^}:"
 #Fsutil used to get the target reparse point which is the GLOBALROOT path of the shadow copy
-#NOTE: '\r' is used to remove trailing '^M' in output of fsutil
-SHADOWPATH=$(fsutil reparsepoint query $SHADOWLINK | sed -ne "s|^Print Name:[[:space:]]*\(.*\)\\\\\r|\1|p")
+SHADOWPATH=$(fsutil reparsepoint query $SHADOWLINK | sed -ne "s|^Print Name:[[:space:]]*\(.*\)\$|\1|p")
+SHADOWPATH=$(echo "$SHADOWPATH" | tr -d '\r\n' )
+SHADOWPATH=${SHADOWPATH%\\}
 #Get the shadow id based on the shadowpath
-SHADOWID="$(wmic shadowcopy | awk -v path=${SHADOWPATH//\\/\\\\} 'path == $3 {print $8}')"
+SHADOWID=$(powershell -c "Get-CimInstance -ClassName Win32_ShadowCopy | Where-Object { \$_.DeviceObject -eq \"$SHADOWPATH\" } | Select-Object -ExpandProperty ID")
+SHADOWID=$(echo "$SHADOWID" | tr -d '\r\n')
 echo " Deleting shadow for '$DRIVE' PATH=$SHADOWPATH; ID=$SHADOWID; LINK=$SHADOWLINK"
 #Delete the shadow copy
|
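[Editorial aside, not part of the original message.] One detail worth highlighting in the patch above: every capture of PowerShell (or fsutil) output is followed by `tr -d '\r\n'`. Windows tools emit CRLF line endings, and the stray carriage return left behind by command substitution would corrupt later string comparisons. A minimal illustration, with `printf` standing in for the PowerShell call:

```shell
# Simulate what $(powershell -c ...) captures under Cygwin: command
# substitution strips trailing newlines but NOT the carriage return,
# so "C:\r\n" comes back as "C:<CR>".
raw=$(printf 'C:\r\n')

# The patch's cleanup step: strip any CR/LF characters from the capture.
clean=$(echo "$raw" | tr -d '\r\n')

echo "[$clean]"   # -> [C:]
```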
From: Dave B. <dav...@ho...> - 2025-07-10 19:07:41
|
Much thanx for the quick response!

This is what I called "reasonable looking filenames" as found in cpool's aa/aa:

-r--r--r-- 1 backuppc backuppc 65 2023-10-27 10:34 abaa09668efeb993980a19a1f83e52ad
-r--r--r-- 1 backuppc backuppc 1320 2023-10-27 10:09 abaab70385aa9ba127a02878dd50a649

And at pc/host/<backupnumber>, this is what I have:

drwxr-x--- 10 backuppc backuppc 4096 2018-01-01 01:03 f%2fhome
-rw-rw---- 1 backuppc backuppc 0 2018-01-05 01:00 attrib_89709371bfc1111d0ab13fb461401f85
drwxr-x--- 134 backuppc backuppc 4096 2018-01-05 01:00 f%2fetc
drwxr-x--- 11 backuppc backuppc 4096 2018-01-05 01:04 f%2froot
-rw-r----- 1 backuppc backuppc 561 2018-01-05 01:04 backupInfo
drwxr-x--- 2 backuppc backuppc 12288 2018-01-05 01:04 refCnt

As I don't want to overwrite any backups on this ancient disk, all hosts have been auto-disabled and they have no log files. The server's log file today looked like this:

Contents of file /var/lib/backuppc/log/LOG, modified 2025-07-10 08:00:00

2025-07-10 01:00:01 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
2025-07-10 01:00:01 Running BackupPC_nightly -m -P 12 0 127 (pid=1251834)
2025-07-10 01:00:01 Running BackupPC_nightly -P 12 128 255 (pid=1251835)
2025-07-10 01:00:01 Next wakeup is 2025-07-10 02:00:00
2025-07-10 01:00:02 BackupPC_nightly now running BackupPC_refCountUpdate -m -s -c -P 12 -r 128-255
2025-07-10 01:00:02 BackupPC_nightly now running BackupPC_refCountUpdate -m -s -c -P 12 -r 0-127
2025-07-10 01:00:02 admin1 : __bpc_pidStart__ 1251857
2025-07-10 01:00:02 admin : __bpc_pidStart__ 1251858
2025-07-10 01:00:02 admin1 : BackupPC_refCountUpdate: missing pool file 800fc7992694fd8d405fb9e2bfae2fce count 11
2025-07-10 01:00:02 admin1 : BackupPC_refCountUpdate: missing pool file 8177f5548c9e0fff43b441518f34912a count 11
2025-07-10 01:00:02 admin1 : BackupPC_refCountUpdate: missing pool file 808da59f47929ea1c8f139aeb7f81ab3 count 1
2025-07-10 01:00:02 admin1 : BackupPC_refCountUpdate: missing pool file 80350ff3be63b92f0b1f53e04d4ba7a6 count 6
2025-07-10 01:00:02 admin1 : BackupPC_refCountUpdate: missing pool file 81146a7887f0d5ae4e9ac717068b62ae count 5
. . .
2025-07-10 01:00:22 admin1 : BackupPC_refCountUpdate: missing pool file ff18066981f5338926eb377d345edc88 count 2
2025-07-10 01:00:22 admin1 : BackupPC_refCountUpdate total errors: 348019
2025-07-10 01:00:22 admin1 : __bpc_pidEnd__ 1251857
2025-07-10 01:00:22 Finished admin1 (BackupPC_nightly -P 12 128 255)
2025-07-10 01:00:22 Pool nightly clean removed 0 files of size 0.00GB
2025-07-10 01:00:22 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max links), 1 directories
2025-07-10 01:00:22 Cpool nightly clean removed 0 files of size 0.00GB
2025-07-10 01:00:22 Cpool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max links), 1 directories
2025-07-10 01:00:22 Pool4 nightly clean removed 0 files of size 0.00GB
2025-07-10 01:00:22 Pool4 is -1298.44GB, 311675 files (0 repeated, 0 max chain, 340 max links), 16512 directories
2025-07-10 01:00:22 Cpool4 nightly clean removed 0 files of size 0.00GB
2025-07-10 01:00:22 Cpool4 is -31.59GB, 416002 files (0 repeated, 0 max chain, 2698 max links), 16512 directories
2025-07-10 01:00:22 Running BackupPC_rrdUpdate (pid=1251914)
2025-07-10 01:00:22 admin-1 : RRD updated: date 1752192000; cpoolKb 0.000000; total 38591622558.731445; poolKb 0.000000; pool4Kb -1329601096.000000; cpool4Kb -32346012.000000
2025-07-10 01:00:23 Finished admin-1 (BackupPC_rrdUpdate)

Any thoughts how I can access the data which surely seems to be there in one form or another?

Dave

________________________________
From: G.W. Haywood <ba...@ju...>
Sent: Thursday, July 10, 2025 06:22
To: bac...@li... <bac...@li...>
Subject: Re: [BackupPC-users] Problem accessing v3 backups

Hi there,

On Thu, 10 Jul 2025, Dave Bachmann wrote:

> I am trying to see if I can recover a file that may have been
> deleted years ago and *may* be on a very old backup disk. The
> backups were done with a v3 version of BackupPC and I've done what I
> thought was required to make that disk usable but it's still not
> working right. So far, I've tried all the suggestions I could find
> in the documentation and online yet still have the same problems - I
> can see the dates and info about all of the backups on each of the
> hosts as all of the log files seems to still be intact, but trying
> to access the contents always generates messages like: "Error:
> Directory /var/lib/backuppc/pc/XXXXXX/49 is empty" and, of course,
> it is empty.

Have you tried using the 'BackupPC_ls' script?

> The pool structure looks normal, but there are no files in
> pool/aa/aa for example

That's more or less as I'd expect if the backups are compressed. I
say 'more or less' because my uncompressed pool is entirely empty.

> while the Cpool structure is filled with
> reasonable looking filenames in folders like cpool/aa/aa.

Can you be more specific about the 'reasonable looking filenames'?

> I've tried running "BackupPC_migrateV3toV4 -av"

If you've successfully converted a V3 backup to a V4 backup then the
names of the files in .../cpool/aa/aa should be digests. The first
four characters of each name will be 'aaaa', 'aaab', 'abaa' or 'abab'.
What I would call reasonable-looking filenames will be under the tree
at .../pc/host/<backupnumber>/. Here's an example from my backups:

(I have aliased 'l' to 'ls -lrt' in my bash profile and the backup
server is not really called 'backupserver':)

backupserver:/var/lib/BackupPC$ >>> l
total 12
drwxr-x--- 16 backuppc backuppc 4096 Dec 1 2024 pc
drwxr-x--- 2 backuppc backuppc 4096 Jan 6 2025 pool
drwxr-x--- 130 backuppc backuppc 4096 Jul 10 07:58 cpool
backupserver:/var/lib/BackupPC$ >>> l pc/laptop3/1986/
total 36
-rw-r----- 1 backuppc backuppc 0 Jul 10 06:00 attrib_d7b4879008be7535abe378a001211f00
drwxr-x--- 7 backuppc backuppc 4096 Jul 10 06:00 inode
drwxr-x--- 223 backuppc backuppc 12288 Jul 10 06:09 fConfig
drwxr-x--- 6 backuppc backuppc 4096 Jul 10 06:09 fHomes
-rw-r----- 1 backuppc backuppc 583 Jul 10 06:16 backupInfo
drwxr-x--- 2 backuppc backuppc 12288 Jul 10 06:20 refCnt
backupserver:/var/lib/BackupPC$ >>> l pool/
total 0
piplus:/var/lib/BackupPC$ >>> ls cpool/aa/aa
aaaa01d2d42e7572fc4cf5ec83f700e3
aaaa7334a2d53adff96cb26bc800a815
... ... ...

--
73, Ged.
|
From: G.W. H. <ba...@ju...> - 2025-07-10 13:22:52
|
Hi there,

On Thu, 10 Jul 2025, Dave Bachmann wrote:

> I am trying to see if I can recover a file that may have been
> deleted years ago and *may* be on a very old backup disk. The
> backups were done with a v3 version of BackupPC and I've done what I
> thought was required to make that disk usable but it's still not
> working right. So far, I've tried all the suggestions I could find
> in the documentation and online yet still have the same problems - I
> can see the dates and info about all of the backups on each of the
> hosts as all of the log files seems to still be intact, but trying
> to access the contents always generates messages like: "Error:
> Directory /var/lib/backuppc/pc/XXXXXX/49 is empty" and, of course,
> it is empty.

Have you tried using the 'BackupPC_ls' script?

> The pool structure looks normal, but there are no files in
> pool/aa/aa for example

That's more or less as I'd expect if the backups are compressed. I
say 'more or less' because my uncompressed pool is entirely empty.

> while the Cpool structure is filled with
> reasonable looking filenames in folders like cpool/aa/aa.

Can you be more specific about the 'reasonable looking filenames'?

> I've tried running "BackupPC_migrateV3toV4 -av"

If you've successfully converted a V3 backup to a V4 backup then the
names of the files in .../cpool/aa/aa should be digests. The first
four characters of each name will be 'aaaa', 'aaab', 'abaa' or 'abab'.
What I would call reasonable-looking filenames will be under the tree
at .../pc/host/<backupnumber>/. Here's an example from my backups:

(I have aliased 'l' to 'ls -lrt' in my bash profile and the backup
server is not really called 'backupserver':)

backupserver:/var/lib/BackupPC$ >>> l
total 12
drwxr-x--- 16 backuppc backuppc 4096 Dec 1 2024 pc
drwxr-x--- 2 backuppc backuppc 4096 Jan 6 2025 pool
drwxr-x--- 130 backuppc backuppc 4096 Jul 10 07:58 cpool
backupserver:/var/lib/BackupPC$ >>> l pc/laptop3/1986/
total 36
-rw-r----- 1 backuppc backuppc 0 Jul 10 06:00 attrib_d7b4879008be7535abe378a001211f00
drwxr-x--- 7 backuppc backuppc 4096 Jul 10 06:00 inode
drwxr-x--- 223 backuppc backuppc 12288 Jul 10 06:09 fConfig
drwxr-x--- 6 backuppc backuppc 4096 Jul 10 06:09 fHomes
-rw-r----- 1 backuppc backuppc 583 Jul 10 06:16 backupInfo
drwxr-x--- 2 backuppc backuppc 12288 Jul 10 06:20 refCnt
backupserver:/var/lib/BackupPC$ >>> l pool/
total 0
piplus:/var/lib/BackupPC$ >>> ls cpool/aa/aa
aaaa01d2d42e7572fc4cf5ec83f700e3
aaaa7334a2d53adff96cb26bc800a815
... ... ...

--
73, Ged.
|
From: Gabriele <gab...@br...> - 2025-07-10 12:38:16
|
On 10/07/25 13:11, Gabriele wrote:

> I can't find the differences since $Conf{XferLogLevel}=2 only logs on
> *failed* backups, and full backup doesn't fail. How can I tell
> BackupPc to store XferLog also for good ones?

Never mind, a full backup just failed: the only difference is that during full backups --ignore-times is added.
|
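[Editorial aside, not part of the original message.] Gabriele's finding can be restated as a sketch: with the settings shown elsewhere in this thread, the v3 full-backup rsync invocation is the incremental invocation plus --ignore-times, which forces block checksums on every file instead of trusting size and mtime. The variable names here are illustrative only:

```shell
# Argument list copied from the failed-backup XferLog in this thread;
# a full backup appends --ignore-times to the same invocation.
incr_args='--numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive'
full_args="$incr_args --ignore-times"

echo "$full_args"
```

This is why, counter-intuitively, the incrementals and fulls here differ only by that single flag rather than by a different transfer mechanism.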
From: Gabriele <gab...@br...> - 2025-07-10 11:14:59
|
Hello,

Thanks for your fast reply!

On 10/07/25 09:59, G.W. Haywood wrote:

> Yikes! I think it's a decade since I last used version 3.something.
> You really should upgrade, there are good reasons for it in the docs.
> Some of them might even have a bearing on your problem.

I know it's old, but it's the one shipped with the OS, and upgrading means upgrading the OS too. Now that I've got a faster connection I'm planning to discontinue my old VM, moving a lot of stuff locally, and using a new VM just for SMTP and proxy. But time is short...

> It's not clear to me at this stage that you have grounds for blaming
> a cluster side change and/or networking stuff but anything's possible.

Because it stopped working after years. My first suspect was the growing storage size, so I did a very big clean-up, and now a full backup runs in 1h30m where before it was about 3h30m. But incremental backups are still not working, and in the past they worked with a bigger filesystem.

> I can't remember what a version 3 configuration looks like but I feel
> sure that you could configure all automatic backups to be fulls.

I will search further, but I haven't found anything so far.

> Presumably xxx.net is not really your host. It's much better to use
> "example.net" rather than a hostname for which you have no rights.

You're right, next time I will use example.net.

> My guess is that it's just taking a long time to calculate what needs
> to be put into an incremental backup. BackupPC V4 is much better than
> version 3 in this respect, it treats fulls and incrementals in a very
> different way.

Searching how to disable incremental backups I found this:

"Starting in 2.1.0, BackupPC supports optional checksum caching, which means the block and file checksums only need to be computed once for each file. This results in a significant performance improvement. This only works for compressed pool files. It is enabled by adding '--checksum-seed=32761', to $Conf{RsyncArgs} and $Conf{RsyncRestoreArgs}."

I will give it a try; maybe it can mitigate my problem. But I think it will only start to help after a number of failed incremental backups, hoping that BackupPC stores checksums even when a backup fails.

> I would expect the interesting stuff to be in the BackupPC server
> logs. Maybe try increasing the log verbosity in the configuration?
> If you set the configuration option
>
> $Conf{XferLogLevel}=2;

I'm already at log level 2, but yesterday I missed the XferLog:

Executing DumpPreUserCmd: /usr/bin/ssh -q -x -l root example.net /root/script/dbbkp.sh
dumping example1
dumping example2
dumping mailserver
dumping mantis
dumping ** mysql **
dumping nextcloud
skipping performance_schema
dumping phpmyadmin
dumping roundcube
incr backup started back to 2025-07-07 09:31:41 (backup #1125) for directory /
Running: /usr/bin/ssh -q -x -l root example.net /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive . /
Xfer PIDs are now 18241
Got remote protocol 31
Negotiated protocol version 28
Checksum seed is 1752123612
Got checksumSeed 0x686f48dc
Sent exclude: /proc
Sent exclude: /sys
Sent exclude: /var/lib/mysql
Sent exclude: /var/lib/php/sessions
Sent exclude: /mnt/data/mysql
Sent exclude: /tmp
Sent exclude: /run
Got file list: 431657 entries
Child PID is 18243
Xfer PIDs are now 18241,18243
Sending csums, cnt = 431657, phase = 0
Read EOF: Tried again: got 0 bytes
Child is aborting
Parent read EOF from child: fatal error!
Sending csums, cnt = 8394, phase = 1
Done: 0 files, 0 bytes
Got fatal error during xfer (Child exited prematurely)
Backup aborted (Child exited prematurely)

>> My suspects are on RsyncArgs config,
>
> What are your reasons for your suspicions?

My suspicions are driven by the difference between full and incremental backups, which differ only in how rsync is invoked. Apparently incrementals are heavier than fulls. Do you think that's correct, given the new XferLog just above? I can't find the differences since $Conf{XferLogLevel}=2 only logs *failed* backups, and full backups don't fail. How can I tell BackupPC to store the XferLog also for good ones?

> I should have thought '/dev' ought to be in there too, and what do you
> have in '/var/log'? As I tend to keep logs for a very long time I
> have a habit of excluding that too.

Good hint to remove /dev; I just did it, but unfortunately incrementals are still failing. /var/log is not so big:

# du -h --max-depth=1 /var/log/
1,3M /var/log/letsencrypt
100K /var/log/apt
64K /var/log/squid3
36K /var/log/mongodb
4,0K /var/log/ntpstats
12K /var/log/fsck
4,0K /var/log/news
4,0K /var/log/stunnel4
4,0K /var/log/unattended-upgrades
67M /var/log/nginx
64K /var/log/mysql
137M /var/log/

>> My questions are two:
>>
>> [1] How can I debug what's killing my incremental backups? Is it
>> possible to make them more lightweight, maybe letting them finish,
>> working on RsyncArgs or maybe elsewhere?

> If it isn't something as simple as adding '/dev' to BackupFilesExclude
> you could try excluding more directories, or, perhaps better, rather
> than backing up a share which contains the entire '/' you could define
> several shares, with one directory or a few directories from the VM's
> root in each share. Then maybe the problem will only appear in one of
> the shares and you can keep on paring down the backed up files until
> you get to the bottom of it.

Good hint, I will split my backup trying to find where it fails.

>> [2] If [1] fails, how can I completely disable incrementals,
>> scheduling only full backups?

> As I said it's a very long time since I used version 3 of BackupPC so
> I probably won't be able to help very much with this. Can you share
> your 'config.pl' file with us?
For sure, here you are:

# more config.pl |grep -v '#' |grep -v '^[[:space:]]*$'
$ENV{'PATH'} = '/bin:/usr/bin';
delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'};
$Conf{ServerHost} = 'backup';
chomp($Conf{ServerHost});
$Conf{ServerPort} = -1;
$Conf{ServerMesgSecret} = '';
$Conf{MyPath} = '/bin';
$Conf{UmaskMode} = 23;
$Conf{WakeupSchedule} = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 ];
$Conf{MaxBackups} = 4;
$Conf{MaxUserBackups} = 4;
$Conf{MaxPendingCmds} = 15;
$Conf{CmdQueueNice} = 10;
$Conf{MaxBackupPCNightlyJobs} = 2;
$Conf{BackupPCNightlyPeriod} = 1;
$Conf{MaxOldLogFiles} = 14;
$Conf{DfPath} = '/bin/df';
$Conf{DfCmd} = '$dfPath $topDir';
$Conf{SplitPath} = '/usr/bin/split';
$Conf{ParPath} = undef;
$Conf{CatPath} = '/bin/cat';
$Conf{GzipPath} = '/bin/gzip';
$Conf{Bzip2Path} = '/bin/bzip2';
$Conf{DfMaxUsagePct} = 95;
$Conf{TrashCleanSleepSec} = 300;
$Conf{DHCPAddressRanges} = [];
$Conf{BackupPCUser} = 'backuppc';
$Conf{TopDir} = '/var/lib/backuppc';
$Conf{ConfDir} = '/etc/backuppc';
$Conf{LogDir} = '/var/lib/backuppc/log';
$Conf{InstallDir} = '/usr/share/backuppc';
$Conf{CgiDir} = '/usr/share/backuppc/cgi-bin';
$Conf{BackupPCUserVerify} = '1';
$Conf{HardLinkMax} = 31999;
$Conf{PerlModuleLoad} = undef;
$Conf{ServerInitdPath} = undef;
$Conf{ServerInitdStartCmd} = '';
$Conf{FullPeriod} = '6.97';
$Conf{IncrPeriod} = '0.97';
$Conf{FullKeepCnt} = [ 6 ];
$Conf{FullKeepCntMin} = 1;
$Conf{FullAgeMax} = 90;
$Conf{IncrKeepCnt} = 6;
$Conf{IncrKeepCntMin} = 1;
$Conf{IncrAgeMax} = 30;
$Conf{IncrLevels} = [ 1 ];
$Conf{BackupsDisable} = 0;
$Conf{PartialAgeMax} = 3;
$Conf{IncrFill} = '0';
$Conf{RestoreInfoKeepCnt} = 10;
$Conf{ArchiveInfoKeepCnt} = 10;
$Conf{BackupFilesOnly} = {};
$Conf{BackupFilesExclude} = { '*' => [ '/proc', '/sys', '/mnt/nas' ] };
$Conf{BlackoutBadPingLimit} = 3;
$Conf{BlackoutGoodCnt} = 7;
$Conf{BlackoutPeriods} = [ { 'hourEnd' => '23.5', 'hourBegin' => 7, 'weekDays' => [ 1, 2, 3, 4, 5 ] } ];
$Conf{BackupZeroFilesIsFatal} = '0';
$Conf{XferMethod} = 'rsync';
$Conf{XferLogLevel} = 2;
$Conf{ClientCharset} = '';
$Conf{ClientCharsetLegacy} = 'iso-8859-1';
$Conf{SmbShareName} = [ 'C$' ];
$Conf{SmbShareUserName} = '';
$Conf{SmbSharePasswd} = '';
$Conf{SmbClientPath} = '/usr/bin/smbclient';
$Conf{SmbClientFullCmd} = '$smbClientPath \\\\$host\\$shareName $I_option -U $userName -E -d 1 -c tarmode\\ full -Tc$X_option - $fileList';
$Conf{SmbClientIncrCmd} = '$smbClientPath \\\\$host\\$shareName $I_option -U $userName -E -d 1 -c tarmode\\ full -TcN$X_option $timeStampFile - $fileList';
$Conf{SmbClientRestoreCmd} = '$smbClientPath \\\\$host\\$shareName $I_option -U $userName -E -d 5 -c tarmode\\ full -Tx -';
$Conf{TarShareName} = [ '/' ];
$Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host env LC_ALL=C $tarPath -c -v -f - -C $shareName+ --totals';
$Conf{TarFullArgs} = '$fileList+';
$Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
$Conf{TarClientRestoreCmd} = '$sshPath -q -x -l root $host env LC_ALL=C $tarPath -x -p --numeric-owner --same-owner -v -f - -C $shareName+';
$Conf{TarClientPath} = '/bin/tar';
$Conf{RsyncClientPath} = '/usr/bin/rsync';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
$Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
$Conf{RsyncShareName} = [ '/' ];
$Conf{RsyncdClientPort} = 873;
$Conf{RsyncdUserName} = '';
$Conf{RsyncdPasswd} = '';
$Conf{RsyncdAuthRequired} = '1';
$Conf{RsyncCsumCacheVerifyProb} = '0.01';
$Conf{RsyncArgs} = [ '--numeric-ids', '--perms', '--owner', '--group', '-D', '--links', '--hard-links', '--times', '--block-size=2048', '--recursive' ];
$Conf{RsyncArgsExtra} = [];
$Conf{RsyncRestoreArgs} = [ '--numeric-ids', '--perms', '--owner', '--group', '-D', '--links', '--hard-links', '--times', '--block-size=2048', '--relative', '--ignore-times', '--recursive' ];
$Conf{FtpShareName} = [ '' ];
$Conf{FtpUserName} = '';
$Conf{FtpPasswd} = '';
$Conf{FtpPassive} = '1';
$Conf{FtpBlockSize} = 10240;
$Conf{FtpPort} = 21;
$Conf{FtpTimeout} = 120;
$Conf{FtpFollowSymlinks} = '0';
$Conf{ArchiveDest} = '/tmp';
$Conf{ArchiveComp} = 'gzip';
$Conf{ArchivePar} = '0';
$Conf{ArchiveSplit} = 0;
$Conf{ArchiveClientCmd} = '$Installdir/bin/BackupPC_archiveHost $tarCreatePath $splitpath $parpath $host $backupnumber $compression $compext $splitsize $archiveloc $parfile *';
$Conf{SshPath} = '/usr/bin/ssh';
$Conf{NmbLookupPath} = '/usr/bin/nmblookup';
$Conf{NmbLookupCmd} = '$nmbLookupPath -A $host';
$Conf{NmbLookupFindHostCmd} = '$nmbLookupPath $host';
$Conf{FixedIPNetBiosNameCheck} = '0';
$Conf{PingPath} = '/bin/ping';
$Conf{PingCmd} = '$pingPath -c 1 $host';
$Conf{PingMaxMsec} = 80;
$Conf{CompressLevel} = 3;
$Conf{ClientTimeout} = 72000;
$Conf{MaxOldPerPCLogFiles} = 12;
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /root/script/dbbkp.sh';
$Conf{DumpPostUserCmd} = undef;
$Conf{DumpPreShareCmd} = undef;
$Conf{DumpPostShareCmd} = undef;
$Conf{RestorePreUserCmd} = undef;
$Conf{RestorePostUserCmd} = undef;
$Conf{ArchivePreUserCmd} = undef;
$Conf{ArchivePostUserCmd} = undef;
$Conf{UserCmdCheckStatus} = '0';
$Conf{ClientNameAlias} = undef;
$Conf{SendmailPath} = '/usr/sbin/sendmail';
$Conf{EMailNotifyMinDays} = '2.5';
$Conf{EMailFromUserName} = 'bac...@ex...';
$Conf{EMailAdminUserName} = 'gab...@ex...';
$Conf{EMailUserDestDomain} = '';
$Conf{EMailNoBackupEverSubj} = undef;
$Conf{EMailNoBackupEverMesg} = undef;
$Conf{EMailNotifyOldBackupDays} = '1.5';
$Conf{EMailNoBackupRecentSubj} = undef;
$Conf{EMailNoBackupRecentMesg} = undef;
$Conf{EMailNotifyOldOutlookDays} = 5;
$Conf{EMailOutlookBackupSubj} = undef;
$Conf{EMailOutlookBackupMesg} = undef;
$Conf{EMailHeaders} = 'MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
';
$Conf{CgiAdminUserGroup} = 'backuppc';
$Conf{CgiAdminUsers} = 'backuppc';
$Conf{CgiURL} = 'http://backup/backuppc/index.cgi';
$Conf{Language} = 'en';
$Conf{CgiUserHomePageCheck} = '';
$Conf{CgiUserUrlCreate} = 'mailto:%s';
$Conf{CgiDateFormatMMDD} = 2;
$Conf{CgiNavBarAdminAllHosts} = '1';
$Conf{CgiSearchBoxEnable} = '1';
$Conf{CgiNavBarLinks} = [ { 'name' => undef, 'lname' => 'Documentation', 'link' => '?action=view&type=docs' }, { 'name' => 'Wiki', 'lname' => undef, 'link' => 'http://backuppc.wiki.sourceforge.net' }, { 'lname' => undef, 'name' => 'SourceForge', 'link' => 'http://backuppc.sourceforge.net' } ];
$Conf{CgiStatusHilightColor} = { };
$Conf{CgiHeaders} = '<meta http-equiv="pragma" content="no-cache">';
$Conf{CgiImageDir} = '/usr/share/backuppc/image';
$Conf{CgiExt2ContentType} = {};
$Conf{CgiImageDirURL} = '/backuppc/image';
$Conf{CgiCSSFile} = 'BackupPC_stnd.css';
$Conf{CgiUserConfigEditEnable} = '1';
$Conf{CgiUserConfigEdit} = { 'ClientTimeout' => '1', 'RsyncdPasswd' => '1', 'TarClientPath' => '0', 'RestoreInfoKeepCnt' => '1', 'FtpShareName' => '1', 'RsyncdClientPort' => '1', 'DumpPostUserCmd' => '0', 'ArchivePreUserCmd' => '0', 'FullPeriod' => '1', 'RsyncArgsExtra' => '1', 'RsyncCsumCacheVerifyProb' => '1', 'NmbLookupCmd' => '0', 'ClientCharset' => '1', 'EMailNoBackupRecentSubj' => '1', 'ArchiveSplit' => '1', 'FtpPasswd' => '1', 'RestorePostUserCmd' => '0', 'RsyncdUserName' => '1', 'MaxOldPerPCLogFiles' => '1', 'UserCmdCheckStatus' => '0', 'IncrPeriod' => '1', 'TarFullArgs' => '1', 'SmbClientIncrCmd' => '0', 'ArchivePar' => '1', 'EMailNoBackupEverSubj' => '1', 'ClientCharsetLegacy' => '1', 'PartialAgeMax' => '1', 'EMailNoBackupRecentMesg' => '1', 'SmbShareName' => '1', 'EMailUserDestDomain' => '1', 'BackupFilesOnly' => '1', 'RsyncClientPath' => '0', 'TarClientRestoreCmd' => '0', 'ArchiveComp' => '1', 'FtpRestoreEnabled' => '1', 'RsyncClientCmd' => '0', 'ArchiveClientCmd' => '0', 'FtpPort' => '1', 'EMailOutlookBackupSubj' => '1', 'BackupsDisable' => '1', 'RsyncRestoreArgs' => '1', 'EMailNotifyOldBackupDays' => '1', 'BackupFilesExclude' => '1', 'XferLogLevel' => '1', 'TarIncrArgs' => '1', 'DumpPreShareCmd' => '0', 'NmbLookupFindHostCmd' => '0', 'FullAgeMax' => '1', 'EMailOutlookBackupMesg' => '1', 'BlackoutPeriods' => '1', 'FullKeepCnt' => '1', 'EMailHeaders' => '1', 'FullKeepCntMin' => '1', 'FtpFollowSymlinks' => '1', 'CompressLevel' => '1', 'EMailNoBackupEverMesg' => '1', 'ArchivePostUserCmd' => '0', 'PingCmd' => '0', 'EMailFromUserName' => '1', 'FtpTimeout' => '1', 'RsyncdAuthRequired' => '1', 'PingMaxMsec' => '1', 'SmbClientFullCmd' => '0', 'FtpBlockSize' => '1', 'ClientNameAlias' => '1', 'RestorePreUserCmd' => '0', 'RsyncClientRestoreCmd' => '0', 'IncrKeepCnt' => '1', 'IncrAgeMax' => '1', 'BackupZeroFilesIsFatal' => '1', 'FixedIPNetBiosNameCheck' => '1', 'EMailAdminUserName' => '1', 'RsyncArgs' => '1', 'DumpPostShareCmd' => '0', 'FtpUserName' => '1', 'ArchiveInfoKeepCnt' => '1', 'IncrLevels' => '1', 'XferMethod' => '1', 'EMailNotifyMinDays' => '1', 'BlackoutGoodCnt' => '1', 'IncrKeepCntMin' => '1', 'SmbShareUserName' => '1', 'BlackoutBadPingLimit' => '1', 'SmbSharePasswd' => '1', 'SmbClientRestoreCmd' => '0', 'RsyncShareName' => '1', 'IncrFill' => '1', 'TarClientCmd' => '0', 'EMailNotifyOldOutlookDays' => '1', 'TarShareName' => '1', 'DumpPreUserCmd' => '0', 'ArchiveDest' => '1' };
$Conf{Ping6Path} = undef;

Thanks a lot for your time, I really appreciate your help!

g4b0
|
From: G.W. H. <ba...@ju...> - 2025-07-10 07:59:46
Hi there,

On Wed, 9 Jul 2025, Gabriele wrote:

> I have a problem during remote VM backup, with BackupPc 3.3.2 ...

Yikes! I think it's a decade since I last used version 3.something. You
really should upgrade; there are good reasons for it in the docs. Some
of them might even have a bearing on your problem.

> ... The VM is running debian 8 (i know it's old... :), and I'm
> backing it up via rsync/ssh. It worked for years, but I guess that
> something changed cluster side, and since it's managed I've not got
> full control over networking stuff.

It's not clear to me at this stage that you have grounds for blaming a
cluster-side change and/or networking stuff, but anything's possible.

> The problem is that incremental backups stopped working 207 days ago,
> and since there are no fresh incrementals they have schedule
> precedence over full backups, so the only way I have to back up my
> data is doing manual full backups.

I can't remember what a version 3 configuration looks like, but I feel
sure that you could configure all automatic backups to be fulls.

> They simply always fail after 31/32 minutes with the following logs:
>
>     2025-07-09 07:00:10 Started incr backup on xxx.net (pid=3331, share=/)
>     2025-07-09 07:31:37 Backup failed on xxx.net (Child exited prematurely)
>     2025-07-09 08:00:00 Next wakeup is 2025-07-09 09:00:00

Presumably xxx.net is not really your host. It's much better to use
"example.net" rather than a hostname for which you have no rights.

> Full backup works like a charm in less than 1h 30m:

My guess is that it's just taking a long time to calculate what needs
to be put into an incremental backup. BackupPC V4 is much better than
version 3 in this respect; it treats fulls and incrementals in a very
different way.

> I've searched for logs VM side, but I didn't find anything interesting.

I would expect the interesting stuff to be in the BackupPC server logs.
Maybe try increasing the log verbosity in the configuration? If you set
the configuration option

    $Conf{XferLogLevel} = 2;

(or whatever passes for that in BackupPC V3.3.2 config) you might see
where (i.e. on what file) the incremental backups fail. I *have* seen
a backup hang when it tried to back up a ridiculously large error log
from the client's X server - and it took a while to find because it was
a hidden file - but that's the only time I've seen that happen.

> My suspicions are about the RsyncArgs config,

What are your reasons for your suspicions?

> but I can't figure out what the differences are between full and
> incremental backups:
> ...

In my installations, the only difference in the arguments between
incremental and full backups is the addition of '--checksum' for full
backups. But as this is version 4 that probably doesn't help very much.
However, with

    $Conf{XferLogLevel} = 1;

in my 'config.pl' the rsync_bpc command line is logged in the XferLOG.

> Following is the BackupPc configuration for this machine:
>
>     $Conf{PingMaxMsec} = 800;
>     $Conf{FullAgeMax} = 60;
>     $Conf{FullPeriod} = '6.97';
>     $Conf{BackupFilesExclude} = {
>       '*' => [
>         '/proc',
>         '/sys',
>         '/var/lib/mysql',
>         '/var/lib/php/sessions',
>         '/mnt/data/mysql',
>         '/tmp',
>         '/run'
>       ]
>     };
> ...

I should have thought '/dev' ought to be in there too, and what do you
have in '/var/log'? As I tend to keep logs for a very long time I have
a habit of excluding that too.

> My questions are two:
>
> [1] How can I debug what's killing my incremental backups? Is it
> possible to make them more lightweight, maybe letting them finish,
> by working on RsyncArgs or maybe elsewhere?

If it isn't something as simple as adding '/dev' to BackupFilesExclude
you could try excluding more directories, or, perhaps better, rather
than backing up a share which contains the entire '/' you could define
several shares, with one directory or a few directories from the VM's
root in each share. Then maybe the problem will only appear in one of
the shares and you can keep on paring down the backed-up files until
you get to the bottom of it.

> [2] If [1] fails, how can I completely disable incrementals,
> scheduling only full backups?

As I said, it's a very long time since I used version 3 of BackupPC so
I probably won't be able to help very much with this. Can you share
your 'config.pl' file with us?

--

73,
Ged.
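[Editor's note: a minimal sketch of what the suggestions above might look like in a V3 'config.pl'. The option names are standard BackupPC 3.x settings, but the share list, periods and exclude additions are illustrative assumptions, not values taken from Gabriele's actual setup.]

```perl
# Sketch only -- illustrative values, adjust to your own host.

# Raise transfer log verbosity so the XferLOG shows per-file progress
# and (with luck) the file on which the incremental dies.
$Conf{XferLogLevel} = 2;

# Add '/dev' (and optionally '/var/log') to the excludes already
# configured for every share.
push @{ $Conf{BackupFilesExclude}{'*'} }, '/dev', '/var/log';

# Instead of one share for the whole of '/', split the client into
# several shares so a failing area can be isolated (hypothetical list).
$Conf{RsyncShareName} = [ '/etc', '/home', '/var', '/mnt/data' ];

# One way to get fulls only: make the incremental period longer than
# the full period, so a full is always due before an incremental.
$Conf{FullPeriod} = '6.97';
$Conf{IncrPeriod} = '30.97';
```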
From: Dave B. <dav...@ho...> - 2025-07-10 00:46:35
I am trying to see if I can recover a file that may have been deleted
years ago and *may* be on a very old backup disk. The backups were done
with a v3 version of BackupPC, and I've done what I thought was required
to make that disk usable, but it's still not working right.

So far, I've tried all the suggestions I could find in the documentation
and online, yet I still have the same problems - I can see the dates and
info about all of the backups on each of the hosts, as all of the log
files seem to still be intact, but trying to access the contents always
generates messages like:

    Error: Directory /var/lib/backuppc/pc/XXXXXX/49 is empty

and, of course, it is empty. The pool structure looks normal, but there
are no files in pool/aa/aa for example, while the cpool structure is
filled with reasonable-looking filenames in folders like cpool/aa/aa.

Here is today's General Server Information - note the negative numbers:

    The server's PID is 1039976, on host XXXXX, version 4.4.0, started at 2025-07-09 12:24.
    This status was generated at 2025-07-09 17:10.
    The configuration was last loaded at 2025-07-09 12:24.
    PCs will be next queued at 2025-07-09 18:00.
    Other info:
        0 pending backup requests from last scheduled wakeup,
        0 pending user backup requests,
        0 pending command requests,
        Uncompressed pool:
            Pool is -1684.86+0.00GiB comprising 311675+0 files and 16512+1 directories (as of 2025-07-09 01:00),
            Pool hashing gives 0+0 repeated files with longest chain 0+0,
            Nightly cleanup removed 0+0 files of size 0.00+0.00GiB (around 2025-07-09 01:00),
        Compressed pool:
            Pool is -50.75+0.00GiB comprising 416002+0 files and 16512+1 directories (as of 2025-07-09 01:00),
            Pool hashing gives 0+0 repeated files with longest chain 0+0,
            Nightly cleanup removed 0+0 files of size 0.00+0.00GiB (around 2025-07-09 01:00),
        Pool file system was recently at 1% (2025-07-09 17:10), today's max is 1% (2025-07-09 01:00) and yesterday's max was 1%.
        Pool file system inode usage was recently at 1% (2025-07-09 17:10), today's max is 1% (2025-07-09 01:00) and yesterday's max was 1%.

I've tried running "BackupPC_migrateV3toV4 -av" and "BackupPC_nightly 0
255", both of which showed activity but no explicit errors. Similarly,
it didn't matter what PoolV3Enabled was set to.

I know I must have overlooked something simple, but I've run out of
ideas, so any thoughts or suggestions will be gratefully appreciated.

Dave
From: Gabriele <gab...@br...> - 2025-07-09 11:00:37
Hi all,

I have a problem during remote VM backup, with BackupPc 3.3.2 on
Raspbian 10. The VM is running debian 8 (i know it's old... :), and I'm
backing it up via rsync/ssh. It worked for years, but I guess that
something changed cluster side, and since it's managed I've not got full
control over networking stuff.

The problem is that incremental backups stopped working 207 days ago,
and since there are no fresh incrementals they have schedule precedence
over full backups, so the only way I have to back up my data is doing
manual full backups. They simply always fail after 31/32 minutes with
the following logs:

    2025-07-09 07:00:10 Started incr backup on xxx.net (pid=3331, share=/)
    2025-07-09 07:31:37 Backup failed on xxx.net (Child exited prematurely)
    2025-07-09 08:00:00 Next wakeup is 2025-07-09 09:00:00

Full backup works like a charm in less than 1h 30m:

    2025-07-07 09:31:39 User backuppc requested backup of xxx.net (xxx.net)
    2025-07-07 09:31:50 Started full backup on xxx.net (pid=23685, share=/)
    2025-07-07 10:00:01 Next wakeup is 2025-07-07 11:00:00
    2025-07-07 10:54:17 Finished full backup on xxx.net
    2025-07-07 10:54:17 Running BackupPC_link xxx.net (pid=30281)
    2025-07-07 10:54:53 Finished xxx.net (BackupPC_link xxx.net)
    2025-07-07 1:00:00 Next wakeup is 2025-07-07 12:00:00

I've searched for logs VM side, but I didn't find anything interesting.

My suspicions are about the RsyncArgs config, but I can't figure out
what the differences are between full and incremental backups:

    $Conf{RsyncArgs} = [
      '--numeric-ids',
      '--perms',
      '--owner',
      '--group',
      '-D',
      '--links',
      '--hard-links',
      '--times',
      '--block-size=2048',
      '--recursive'
    ];

Following is the BackupPc configuration for this machine:

    $Conf{PingMaxMsec} = 800;
    $Conf{FullAgeMax} = 60;
    $Conf{FullPeriod} = '6.97';
    $Conf{BackupFilesExclude} = {
      '*' => [
        '/proc',
        '/sys',
        '/var/lib/mysql',
        '/var/lib/php/sessions',
        '/mnt/data/mysql',
        '/tmp',
        '/run'
      ]
    };
    $Conf{IncrPeriod} = '0.97';
    $Conf{BlackoutPeriods} = [
      {
        'hourEnd' => '23.5',
        'hourBegin' => 8,
        'weekDays' => [ 0, 1, 2, 3, 4, 5, 6 ]
      }
    ];
    $Conf{IncrKeepCnt} = 15;

My questions are two:

[1] How can I debug what's killing my incremental backups? Is it
possible to make them more lightweight, maybe letting them finish, by
working on RsyncArgs or maybe elsewhere?

[2] If [1] fails, how can I completely disable incrementals, scheduling
only full backups?

Thanks a lot

g4b0
From: Craig B. <cba...@us...> - 2025-06-25 02:16:09
BackupPC community,

As you all know, I haven't been able to commit any time to supporting
BackupPC for the last several years.

I'm excited to announce that Ged Haywood, a long-time user and
participant on the BackupPC user list, has agreed to take over
supporting BackupPC. I'll try to help him when I can. He now has admin
privileges on the three github repositories. Hopefully some of the
outstanding PRs and easy fixes will get some attention now.

Thanks to Ged for taking this on! I appreciate everyone who has helped
over the years with developing features, PRs, testing or helping users.
Both of us would be happy to hear from anyone else who can also help.

Cheers,
Craig
From: Craig B. <cba...@us...> - 2025-06-15 18:13:39
Hi everyone,

I apologize for my long absence. Starting about 5 years ago I got very
busy with some other projects, and I haven't been able to spend any
time on BackupPC. Over the last 5 years I've mainly been developing in
Python, Rust and Go, and I haven't spent any time writing Perl code.

In any case, I appreciate everyone wanting to help maintain BackupPC.
While I can't devote time to maintaining it, I can perhaps help from
time to time. At a minimum, I'll make an effort to keep up with the
mail list, which as you know I haven't been doing for a few years now.

I am more than happy to add developers to the existing github repos, so
long as they are competent Perl programmers and know how to use github.
Some knowledge of C would be good too (for BackupPC::XS and rsync-bpc),
but is probably not required.

As a first step, someone should review the pending PRs to see if they
should be merged...

Cheers,
Craig

On Wed, Jun 11, 2025 at 7:58 AM G.W. Haywood <ba...@ju...> wrote:

> Hi there,
>
> On Wed, 11 Jun 2025, Michael Schumacher wrote:
>
> > looks like the BackupPC doesn't make it into the EPEL-10 repository.
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=2370641
> >
> > anybody interested (and capable) to take the challenge?
>
> AFAICT the list hasn't heard from Dr. Barratt for about three years.
>
> I wonder does anyone here know how to get in touch with him? His
> company (Intuitive Surgical) isn't forthcoming with contact details
> for its Chairman. I've searched, and come up with a very old atheros
> address plus old gmail and sourceforge addresses which I've cc'd here.
>
> I'm on record as being willing and capable of taking over maintenance
> of BackupPC, BackupPC::XS and rsync-bpc which if I'm not mistaken are
> the three packages needed for a working BackupPC version 4 install.
>
> Rather than fork a new repository I'd prefer to take over the existing
> Github account, so here's my shot at that.
>
> Dr. Barratt, please would you get in touch with me about this?
>
> If I get no reply in the next month I'll fork the three packages.
>
> --
>
> 73,
> Ged.
From: <to...@tu...> - 2025-06-15 05:24:18
On Sat, Jun 14, 2025 at 04:40:32PM -0500, Kim Scarborough wrote:

> > > Where were you planning on forking your source base, in that case?
> >
> > CPAN.
>
> I'm not anybody special, but I would strongly discourage you from
> doing this. Nobody is going to view the fork as "official" if you do,
> and very few people will want to contribute or even know how. Like it
> or not, Github is where it's at nowadays, and you want a platform that
> the most people are familiar with.

There you are, working for free for Microsoft.

Cheers
-- tomás
From: <bac...@ko...> - 2025-06-15 02:21:39
I am no github expert, and for a long time I resisted getting to know
it, but grudgingly I do appreciate its value, especially for projects
that benefit from multiple contributors. The most important elements in
my mind are:

1. The ability for potential contributors to push PRs that the
   maintainer(s) can then review and decide whether to include in the
   current branch. This makes it almost trivial to review and
   incorporate patches.

2. The ability to have multiple branches (especially a 'testing' branch
   for cutting-edge code that has not yet been sufficiently tested to
   qualify as a release).

3. The ability to see a clear history of changes, making it easy to see
   where things have been added or potentially broken.

4. The ability for individual contributors to create their own local
   "forks" to test and maintain their personal changes and, where
   appropriate, push them back as PRs to the main branch.

It takes a few hours maybe to learn, but the benefits for a
multiple-contributor, community project are huge and worthwhile IMHO.

G.W. Haywood wrote at about 17:33:12 +0100 on Saturday, June 14, 2025:

> Hi there,
>
> On Fri, 13 Jun 2025, Paul Fox wrote:
>
> > Where were you planning on forking your source base, in that case?
>
> CPAN.
>
> > Is there another source base I don't know about? (Quite possible,
> > but github seems to be where releases since 3.3.0 have come from.)
>
> I think the version on Github is what you'd call the canonical one,
> at least for the present.
>
> > I wouldn't hold out much hope of finding him:
>
> Someone sent me a telephone number privately. The sender didn't know
> for sure if it was correct but it appears that it is. An answering
> service responded, I left a message asking Dr. Barratt to take a look
> at the mailing list.
>
> > ... So you simply don't know how to use github.
>
> Not so much that I don't know how to use it (although I'm not really
> familiar with it) as that even the bits that do work are like pulling
> teeth and most of the time for me it doesn't work at all. For example
> I tried to respond to the post of "Two weeks ago" in issue #518 from
> Mr. Moisseev. In the post he said:
>
> "I don't see much benefit until we have someone actually willing to
> take on the maintainer role. Once there's a clear candidate, asking
> him about expanding the team would make sense."
>
> I wanted to reply with links to a couple of my posts to this mailing
> list (dated 17 September 2024 and 11 June 2025) in which I indicated
> my willingness to take on maintainership of BackupPC.
>
> This was the result when I tried to reply using Github:
>
> [quote]
> Open
> Status of the project
> #518
> Description
> @haarp
> haarp
> opened on Mar 10, 2024
>
> Hello,
>
> is BackupPC still maintained @craigbarratt ?
>
> I'm asking because this repo has not seen a commit, issue or issue
> comment by the author in two years now. The issue count keeps rising,
> and there are a couple of open PRs with no progress towards merging.
>
> There are some issues like #437 which can lead to surprises and
> problems, but would be easy to fix. Other issues like #252 show
> potential interoperability issues with some tools which used to work
> differently, showing BackupPC's aging progress.
>
> BackupPC is a great tool that sees a lot of use by people and
> organizations, with more popping up all the time. It would be a shame
> if it started getting buried under the sands of time.
>
> Cheers!
>
> Timeline cannot be loaded
>
> The timeline is currently unavailable due to a system error. Try
> reloading the page. Contact support if the problem persists.
> [/quote]
>
> The quoted post isn't even the one that I was trying to reply to.
>
> I really don't have time to spend trying to get around the excessively
> complex interactions between somebody's broken Web pages and somebody
> else's broken browsers.
>
> OTOH I've been using my mail client for decades with no problems. The
> tool just doesn't get in the way.
>
> --
>
> 73,
> Ged.
> _______________________________________________
> BackupPC-users mailing list
> Bac...@li...
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/