From: Falko T. <ne...@tr...> - 2025-09-28 18:17:01
|
G.W. Haywood wrote on 28.09.25 at 13:49:
> Comments welcome. I can't test on anything more recent than a Win7 VM
> which I have kicking around from the days - sorry, years - when I
> supported Windows installations, thankfully now long past.

That's what I did more than five years ago, too - but we didn't use the Samba client for Windows backups with BackupPC. We used this instead: https://www.michaelstowe.com/backuppc/ and I remember it working very well, especially because it used shadow copies. I think we were on Win10 in those days. Later we switched to the UrBackup client.

If you don't have recent Windows versions at hand ... it is usually possible to get Windows developer edition VMs. Ah, we did use this: https://github.com/xdissent/ievms/ Maybe some newer fixes are in the forks or pull requests, for example: https://github.com/xdissent/ievms/pull/343

Just my 2¢

Falko
From: Michael S. <mic...@me...> - 2025-09-28 17:44:44
|
On 2025-09-28 04:49, G.W. Haywood wrote: > Hi there, > > For many years BackupPC has offered the option of using using Samba's > smbclient to back up Windows SMB shares. > > Unfortunately it seems that in this use case there are multiple issues > with smbclient, see for example from January 2019: > > https://github.com/backuppc/backuppc/issues/252 > > Windows junctions can make it impossible to exclude things you want to > exclude; one particular problem with smbclient is that excludes don't > seem to work properly, and haven't done for some time. > > The 'regex' option for smbclient's tarmode is (a) broken since about > Samba 4.13, five years or more ago, and (b) deprecated in any case by > the Samba team since Samba 4.2 was released in March 2015 - so I don't > know if the option is ever likely to be fixed; one day it might simply > disappear. Although Samba is very actively developed, a regression > reported on the Samba bugzilla in November 2022: > > https://bugzilla.samba.org/show_bug.cgi?id=15229 > > has generated no response. That may be because the 'r' option is, as > noted, now deprecated by Samba. A 'wontfix' would have helped, but I > suspect that the lack of response from BackupPC developers to queries > in the past by Samba team members might figure to some extent in the > equation. For the avoidance of doubt, I'm now the BackupPC maintainer > and I want somehow to get this fixed for Windows users. > > So I got to wondering if anyone here is using CIFS to mount Windows 10 > or Windows 11 SMB shares for BackupPC to read? I'm sure it's possible > (apparently a lot of people using older NAS devices do it) but I don't > personally know anyone who's doing it. It needs some administration > to get it working because it's not enabled by default in later Windows > versions, largely I think because of the potential security issues. > > It seems to me that even though like anything else CIFS has security > issues, CIFS access could be restricted to the BackupPC server which > should eliminate the vast majority. Using a remote mount would allow > BackupPC to use tar instead of smbclient, thus eliminating the problem > with excludes and potentially a bunch of other issues using smbclient. > > Yet another alternative might be to use a different SMB client, but I > know nothing about such clients except that at least one exists. This > would probably require quite a lot of development in BackupPC which to > me makes it an unlikely candidate at least in the near term (i.e. the > next few years) unless someone here wants to step up and do the work. > > Comments welcome. I can't test on anything more recent than a Win7 VM > which I have kicking around from the days - sorry, years - when I > supported Windows installations, thankfully now long past. It's not just Windows junctions, but Windows file semantics that make smbclient (and CIFS) a second choice. There are at least a couple rsync/Windows solutions out there, both of which use shadow volumes to align with Windows' own backup paradigms. They also have the advantage of not requiring Windows shares at all, which (as you've noted) have fallen out of fashion. There have been a handful of people who reported mounting Windows shares locally to back those up via BackupPC, but I recall the experience being slow and brittle and I'd be a little surprised if anybody continued with it. |
From: G.W. H. <ba...@ju...> - 2025-09-28 11:49:58
|
Hi there,

For many years BackupPC has offered the option of using Samba's smbclient to back up Windows SMB shares.

Unfortunately it seems that in this use case there are multiple issues with smbclient, see for example from January 2019:

https://github.com/backuppc/backuppc/issues/252

Windows junctions can make it impossible to exclude things you want to exclude; one particular problem with smbclient is that excludes don't seem to work properly, and haven't done for some time.

The 'regex' option for smbclient's tarmode is (a) broken since about Samba 4.13, five years or more ago, and (b) deprecated in any case by the Samba team since Samba 4.2 was released in March 2015 - so I don't know if the option is ever likely to be fixed; one day it might simply disappear. Although Samba is very actively developed, a regression reported on the Samba bugzilla in November 2022:

https://bugzilla.samba.org/show_bug.cgi?id=15229

has generated no response. That may be because the 'r' option is, as noted, now deprecated by Samba. A 'wontfix' would have helped, but I suspect that the lack of response from BackupPC developers to queries in the past by Samba team members might figure to some extent in the equation. For the avoidance of doubt, I'm now the BackupPC maintainer and I want somehow to get this fixed for Windows users.

So I got to wondering if anyone here is using CIFS to mount Windows 10 or Windows 11 SMB shares for BackupPC to read? I'm sure it's possible (apparently a lot of people using older NAS devices do it) but I don't personally know anyone who's doing it. It needs some administration to get it working because it's not enabled by default in later Windows versions, largely I think because of the potential security issues.

It seems to me that even though like anything else CIFS has security issues, CIFS access could be restricted to the BackupPC server which should eliminate the vast majority. Using a remote mount would allow BackupPC to use tar instead of smbclient, thus eliminating the problem with excludes and potentially a bunch of other issues using smbclient.

Yet another alternative might be to use a different SMB client, but I know nothing about such clients except that at least one exists. This would probably require quite a lot of development in BackupPC which to me makes it an unlikely candidate at least in the near term (i.e. the next few years) unless someone here wants to step up and do the work.

Comments welcome. I can't test on anything more recent than a Win7 VM which I have kicking around from the days - sorry, years - when I supported Windows installations, thankfully now long past.

--

73,
Ged.
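For reference, a rough, untested sketch of what the remote-mount idea above could look like. The host name 'winhost', the credentials file, the mount point and the share path are all made up for illustration, and the $Conf settings are only an adaptation of the documented local-tar configuration - check config.pl for your version before relying on any of it.

8<----------------------------------------------------------------------
# On the BackupPC server: mount the Windows share read-only over SMB3.
# /etc/backuppc/winhost.cred holds username=/password= lines, mode 0600.
mkdir -p /mnt/winhost-c
mount -t cifs //winhost/C$ /mnt/winhost-c \
    -o credentials=/etc/backuppc/winhost.cred,ro,vers=3.0,iocharset=utf8

# In the per-host BackupPC config: back up the mounted path with tar
# instead of smbclient, so the normal tar excludes apply.  The client
# command is the usual one with the ssh part dropped, since the data is
# now local to the server.
$Conf{XferMethod}   = 'tar';
$Conf{TarShareName} = ['/mnt/winhost-c/Users'];
$Conf{TarClientCmd} = 'env LC_ALL=C $tarPath -c -v -f - -C $shareName+ --totals';
$Conf{BackupFilesExclude} = ['AppData/Local/Temp'];
8<----------------------------------------------------------------------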
From: Matthew P. <ma...@co...> - 2025-09-17 22:43:41
|
On Wed, Sep 17, 2025 at 10:36 AM G.W. Haywood <ba...@ju...> wrote: > Firstly, I'd be inclined to check the numbers in some way other than > by looking at the graph. :) It wouldn't be the first time that some > graph or other didn't perfectly reflect the underlying data. It may > be helpful to know exactly how much data we're talking about in the V3 > pool (as the graphs aren't terribly precise) and also how many files. > It would be good to eliminate somehow any possibility that BackupPC > isn't the culprit, since BackupPC's graph just shows you approximately > how much data is in the pool directories, it doesn't actually know how > it got there. > I can't find clear documentation about how to identify v3 pool files. It seems to be sort of implied in the docs that they are the single-character directories in cpool? I'm guessing based on the fact that the section on v4 attrib files only mentions two-character directories at the top of the tree. If that's the case then it's about 15% of the total (1.2G out of 7.7G). > Secondly, presumably you have > $Conf{PoolV3Enabled} = 1; > I do. In that case when it's doing a backup, BackupPC will of course use the > V3 pool and matching data found there. Like you, I wouldn't expect > *new* V3 pool/cpool data to be written but I haven't done any tests at > all to see what might happen under oddball circumstances[*]. I see > setting $Conf{PoolV3Enabled} = 1; in the nature of stop-gap measure, > and I recommend setting it to 0 ASAP. Of course that supposes that > all your V3 backups are expired or considered to be of no more use. > So I looked... the paths under `cpool/?` (single-character pool directories) with the most recent mod-times are: 1) directories (e.g. `cpool/f/6/0`) 2) most recently modified 105 days ago That is definitely within a few days of when I ran BackupPC_migrateV3toV4, if not that very day. So I think that confirms I'm not getting new v3 backups. Which really just leaves some kind of anomaly or variation in the data collection for the RRD or data extraction from the RRD. Again, I'm not really concerned about this, just curious what's up. If I find myself with all kinds of free time (HA!) maybe I'll dig into the data tables just to make sure which side of that line the issue is on. Thanks for the assist! Matt > |
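A quick way to check the numbers independently of the RRD graph, assuming (as guessed above) that the remaining V3 data lives in the single-character top-level directories under cpool while V4 uses two-character directories; the TopDir path is an example:

8<----------------------------------------------------------------------
cd /var/lib/BackupPC              # or wherever $Conf{TopDir} points
# size and file count of the old-style (V3) cpool directories
du -sch cpool/? | tail -1
find cpool/? -type f | wc -l
# for comparison, the V4 pool directories (two hex characters)
du -sch cpool/?? | tail -1
8<----------------------------------------------------------------------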
From: G.W. H. <ba...@ju...> - 2025-09-17 14:35:35
|
Hi there, On Wed, 17 Sep 2025, Matthew Pounsett wrote: Re: Why is my v3 pool growing? > ... the reported size of the v3 pool seems to rise and fall. Since > I don't think new v3 backups should be getting created?that would be > astonishing?I'm at a loss to explain it. And I'm really, really > curious what's up. I expect it to shrink over time as very-old > backups expire, and eventually just reach zero. I did not expect to > see it ever grow. > > Does anyone have an idea of what's happening here? Firstly, I'd be inclined to check the numbers in some way other than by looking at the graph. :) It wouldn't be the first time that some graph or other didn't perfectly reflect the underlying data. It may be helpful to know exactly how much data we're talking about in the V3 pool (as the graphs aren't terribly precise) and also how many files. It would be good to eliminate somehow any possibility that BackupPC isn't the culprit, since BackupPC's graph just shows you approximately how much data is in the pool directories, it doesn't actually know how it got there. Secondly, presumably you have $Conf{PoolV3Enabled} = 1; in your configuration file. In that case when it's doing a backup, BackupPC will of course use the V3 pool and matching data found there. Like you, I wouldn't expect *new* V3 pool/cpool data to be written but I haven't done any tests at all to see what might happen under oddball circumstances[*]. I see setting $Conf{PoolV3Enabled} = 1; in the nature of stop-gap measure, and I recommend setting it to 0 ASAP. Of course that supposes that all your V3 backups are expired or considered to be of no more use. If V3 backups still exist and you set $Conf{PoolV3Enabled} = 0; then I guess that the worst that will happen is that files get copied to the V4 pool instead of the existing V3 copies being used. That will take some extra time and probably storage space, so we're back to asking how much data (and perhaps how much free storage?), but that will only happen once. Any remaining V3 backups would *not* then be cleaned up by BackupPC itself, and would need to be deleted manually if required. It wouldn't be a huge stretch of the imagination to think that the V3 pool might sometimes be used when it's not expected. I'd want to know under what circumstances. To find out if this happens and if so when, if it were my system I'd probably for example run cron jobs to dump directory listings between backup runs, to view at my (ho-ho) leisure. Maybe I'd also add some extra logging statements to the BackupPC code, there are already a few which might give you some ideas, for example in sub ScanAndCleanV3Pool() in BackupPC_nightly. [*] For example hash collisions, system restarts, old tmpfiles? -- 73, Ged. |
From: Matthew P. <ma...@co...> - 2025-09-16 17:04:10
|
This is not a Problem...I'm just trying to understand some behaviour. A few months ago I ran the V3->V4 conversion because it appeared like that hadn't been done when our v4 upgrade was done. That got me brand new data in my graphs tracking the remaining v3 CPool. Yay! [big-thumbs-up] However, I've noticed that the reported size of the v3 pool seems to rise and fall. Since I don't think new v3 backups should be getting created—that would be astonishing—I'm at a loss to explain it. And I'm really, really curious what's up. I expect it to shrink over time as very-old backups expire, and eventually just reach zero. I did not expect to see it ever grow. Does anyone have an idea of what's happening here? I don't see any attachments in my email history for the list, so I'm assuming an image attachment would be stripped. Here's a URL to a screenshot to see what I mean: < https://www.dropbox.com/scl/fi/t4woykfalnr0h9yx5hwth/Screenshot-2025-09-16-at-12.01.18.png?rlkey=h4qfrfjahycyw7vacvgq6e5du&dl=0 > |
From: G.W. H. <ba...@ju...> - 2025-09-02 14:34:20
|
Hi there,

On Tue, 2 Sep 2025, Steve Richards wrote:

> > Not really recommended but if you just deleted the huge files from the
> > pool then you'd get some error messages when e.g. nightly checks were
> > run, but I think that should be about the extent of the inconvenience.
>
> Not sure I want to try that if it's not recommended but, if I did, how
> would I go about finding and deleting the files given that the historic
> guidance is now obsolete?

You can use the script BackupPC_ls to find the files in the pool. The script is in BackupPC's /bin/ directory. Below is an example, which I ran on our pool on backup server 'piplus' to list directory 'cups' in share 'Config' on our host 'alpha'. Years ago I configured the share 'Config' to point to the '/etc/' directory on this host - every host I back up has a BackupPC 'Config' share.

8<----------------------------------------------------------------------
piplus:# >>> su - backuppc
backuppc@piplus: $ /usr/local/BackupPC/bin/BackupPC_ls -h alpha -n 2004 -s Config cups
cups:
-rw-r--r--   0/0    27408 2024-09-27 13:34:52 cups/cups-browsed.conf (0b812d29a7f82f521a3c384ba53f831c)
-rw-r--r--   0/0    27303 2019-04-17 10:04:46 cups/cups-browsed.conf~ (b2cbc92bec253cfd9c7fadb2c018cd66)
-rw-r--r--   0/0     2923 2019-08-21 08:43:13 cups/cups-files.conf (39776f9574026852ef283113ac2485d8)
-rw-r-----   0/7     4669 2022-12-02 15:16:05 cups/cupsd.conf (6dfdf833d9ee3da91ecc0c46c415cf50)
-rw-r--r--   0/0     6402 2020-01-13 17:37:56 cups/cupsd.conf.O (1faa528b0f83af98e55510e5945e0298)
drwxr-xr-x   0/0        0 2019-08-21 08:43:13 cups/interfaces/
drwxr-xr-x   0/7        0 2020-12-18 15:33:01 cups/ppd/
-rw-------   0/7     2214 2025-08-31 00:00:42 cups/printers.conf (316f85ea58575570bdda0773e81d811c)
-rw-------   0/7     2214 2025-08-30 00:00:36 cups/printers.conf.O (316f85ea58575570bdda0773e81d811c)
-rw-r--r--   0/0      240 2020-01-13 17:38:05 cups/raw.convs (cb64c71ae2fd35a2ff07e54a6098eecf)
-rw-r--r--   0/0      211 2020-01-13 17:38:05 cups/raw.types (b507ca634c5be3c42db5700ec745ac0d)
-rw-r--r--   0/0      142 2022-12-02 15:17:37 cups/snmp.conf (47b8f1c3fecdc44e3d1fdee4b9eeb3f5)
drwx------   0/7        0 2023-05-03 10:36:28 cups/ssl/
-rw-r-----   0/7      388 2025-08-31 00:00:43 cups/subscriptions.conf (0692d940f7ea1701919938333a8a0ab8)
-rw-r-----   0/7       94 2025-08-31 00:00:09 cups/subscriptions.conf.O (b0b6799bfe8fea62737eb7deb2037bfa)
backuppc@piplus: $ ls -l /var/lib/BackupPC/cpool/30/6e/316f85ea58575570bdda0773e81d811c
-r--r--r-- 1 backuppc backuppc 880 Jul  9 01:32 /var/lib/BackupPC/cpool/30/6e/316f85ea58575570bdda0773e81d811c
backuppc@piplus: $ /usr/local/BackupPC/bin/BackupPC_zcat /var/lib/BackupPC/cpool/30/6e/316f85ea58575570bdda0773e81d811c
# Printer configuration file for CUPS
...
...
...
backuppc@piplus: $ logout
8<----------------------------------------------------------------------

As you can see BackupPC_ls gives the names of the files in the pool. The file names are hashes of the file content - hexadecimal numbers. For better efficiency of directory searches the files are stored in two levels of directories. Each level has 128 directories, named as 00, 02, 04, ... fa, fc, fe. As there are 128 (not 256) directories, the first four characters of each filename are converted to 'even' numbers and these are used as the path to the directory where the file will be stored. In my example I changed '31' to '30' and '6f' to '6e' and gave that path to BackupPC_zcat to uncompress the file to stdout.

It's all very straightforward when you get the hang of it, but do be careful please if you do this with non-text files, as sending binary data to a terminal can leave it in a weird state. I find that typing the 'reset' command will usually fix it. :)

> ... Would deleting all backups that included/referenced the files
> cause them to be cleaned out of the pool (and the space recovered)
> automatically ...

Yes, the nightly routines will do that. As they are time-consuming, BackupPC can be configured to check just a part of the pool each night so depending on your configuration options it may take a few days to recover all the space. See config.pl for details.

--

73,
Ged.
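The directory arithmetic described above can be reproduced in a couple of lines of shell, given a digest taken from BackupPC_ls output; the digest below is the printers.conf one from the session above, and the path it produces matches the one Ged passed to ls and BackupPC_zcat:

8<----------------------------------------------------------------------
digest=316f85ea58575570bdda0773e81d811c
d1=$(printf '%02x' $(( 0x${digest:0:2} & ~1 )))   # first byte rounded down to even
d2=$(printf '%02x' $(( 0x${digest:2:2} & ~1 )))   # second byte rounded down to even
echo /var/lib/BackupPC/cpool/$d1/$d2/$digest
# -> /var/lib/BackupPC/cpool/30/6e/316f85ea58575570bdda0773e81d811c
8<----------------------------------------------------------------------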
From: <Mat...@gm...> - 2025-09-01 15:43:30
|
Hello Steve,

There are some differences in storage layout between BackupPC V3 and V4, so you can't apply 20-year-old documentation 😁️. In BackupPC V4 all files are stored only once, in ../pool/??/??/ or in ../cpool/??/??/. You can delete such a file from the [c]pool, but you will then get "admin : BackupPC_refCountUpdate: missing pool file <hex-number>" in your log file.

What you have to do to remove the file (e.g. for bash):

1. find the name of the pool file

   hex=$(sudo -u backuppc $bpcBin/BackupPC_attribPrint /var/lib/backuppc/pc/<yourHost>/<bckNr>/f<share>/<path>/attrib_* | grep -A2 <yourFileName> | grep -A2 "{" | grep digest | awk '{print $3}')
   hex=${hex:1:${#hex}-3}
   hex1=$(printf "%02x" $(( 0x${hex:0:2}/2*2)))
   hex2=$(printf "%02x" $(( 0x${hex:2:2}/2*2)))

2. remove the pool file

   rm -f ../[c]pool/$hex1/$hex2/$hex

I've never done it - so be careful and test it :)

What I'd propose - and I'll do this soon - would be to find a small pool file and create a hardlink, so your 8GB file is linked to this small pool file and uses no additional space.

Br
Matthias

On Monday, 01.09.2025 at 11:37 +0100, Steve Richards wrote:
> I have accidentally allowed BackupPC to backup some very large files (3 x ~8GB) from one of my
> hosts. I don't need them backed up and would like to get rid of them from the backup storage. If I
> understand correctly that would mean removing stuff from BackupPC's pool system. Can I do that
> safely?
>
> I found a very old post (20+ years ago) about deleting from the pool but I know BackupPC has had
> at least one major revision in that time and I don't know whether the advice back then is still
> appropriate with current versions. The post says:
>
> # cd __TOPDIR__/pc/
> # find . -name *.mp3
> <you should see a list of your .mp3 files here. if it looks like a good list
> (i.e. you didn't typo something that will be painful to fix), go to the next
> step.>
> # find . -name *.mp3 -exec rm -f {} \;
>
> On my installation (v4.4.0) that doesn't seem to find any files matching the search pattern
> (*.raw in my case), or indeed any identifiable files, which makes me wonder if the pooling
> mechanism and structure might have changed. In any case I don't want to mess with anything low-
> level without up to date advice. The "rogue" files were first backed-up only a couple of days ago,
> so I would be happy to delete all backups from that date if that would help.
>
> Thanks.
> _______________________________________________
> BackupPC-users mailing list
> Bac...@li...
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/
From: Steve R. <bp...@bo...> - 2025-09-01 15:24:09
|
Thanks for the response. > Not really recommended but if you just deleted the huge files from the > pool then you'd get some error messages when e.g. nightly checks were > run, but I think that should be about the extent of the inconvenience. Not sure I want to try that if it's not recommended but, if I did, how would I go about finding and deleting the files given that the historic guidance is now obsolete? > You can safely delete entire backups using the Web interface. You can > safely use the script BackupPC_backupDelete for more control to delete > shares or directories, but not to the level of individual files. The > script is in BackupPC's /bin/ directory. Like most BackupPC scripts, > it should be run by user backuppc. If you run it with no arguments it > will give help on usage. To get the hang of using it, maybe try it on > a backup that you aren't especially fond of. OK, thanks. Would deleting all backups that included/referenced the files cause them to be cleaned out of the pool (and the space recovered) automatically, or is there a further step? |
From: G.W. H. <ba...@ju...> - 2025-09-01 15:03:09
|
Hi there,

On Mon, 1 Sep 2025, Steve Richards wrote:
https://sourceforge.net/p/backuppc/mailman/message/37695816/

> I have accidentally allowed BackupPC to backup some very large files (3
> x ~8GB) from one of my hosts. I don't need them backed up and would like
> to get rid of them from the backup storage. If I understand correctly
> that would mean removing stuff from BackupPC's pool system. Can I do
> that safely?

Not really recommended but if you just deleted the huge files from the pool then you'd get some error messages when e.g. nightly checks were run, but I think that should be about the extent of the inconvenience.

> I found a very old post (20+ years ago) about deleting from the pool but
> I know BackupPC has had at least one major revision in that time and I
> don't know whether the advice back then is still appropriate ...

Version 4 layout is very different from V3, I wouldn't rely on that.

> ... The "rogue" files were first backed-up only a couple of days
> ago, so I would be happy to delete all backups from that date if
> that would help.

You can safely delete entire backups using the Web interface. You can safely use the script BackupPC_backupDelete for more control to delete shares or directories, but not to the level of individual files. The script is in BackupPC's /bin/ directory. Like most BackupPC scripts, it should be run by user backuppc. If you run it with no arguments it will give help on usage. To get the hang of using it, maybe try it on a backup that you aren't especially fond of.

--

73,
Ged.
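A sketch of what that might look like on the command line. The host name and backup number are invented, and the option letters are assumptions based on the script's built-in usage text, so (as suggested above) run it with no arguments first and confirm them for your version:

8<----------------------------------------------------------------------
su - backuppc
# print the usage/help text first
/usr/local/BackupPC/bin/BackupPC_backupDelete

# then, for example, delete backup number 123 of host 'myhost'
# (or restrict it to one share with something like "-s /data")
/usr/local/BackupPC/bin/BackupPC_backupDelete -h myhost -n 123
8<----------------------------------------------------------------------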
From: Steve R. <bp...@bo...> - 2025-09-01 10:52:47
|
I have accidentally allowed BackupPC to backup some very large files (3 x ~8GB) from one of my hosts. I don't need them backed up and would like to get rid of them from the backup storage. If I understand correctly that would mean removing stuff from BackupPC's pool system. Can I do that safely?

I found a very old post (20+ years ago) about deleting from the pool but I know BackupPC has had at least one major revision in that time and I don't know whether the advice back then is still appropriate with current versions. The post says:

# cd __TOPDIR__/pc/
# find . -name *.mp3
<you should see a list of your .mp3 files here. if it looks like a good list
(i.e. you didn't typo something that will be painful to fix), go to the next
step.>
# find . -name *.mp3 -exec rm -f {} \;

On my installation (v4.4.0) that doesn't seem to find any files matching the search pattern (*.raw in my case), or indeed any identifiable files, which makes me wonder if the pooling mechanism and structure might have changed. In any case I don't want to mess with anything low-level without up to date advice. The "rogue" files were first backed-up only a couple of days ago, so I would be happy to delete all backups from that date if that would help.

Thanks.
From: G.W. H. <ba...@ju...> - 2025-08-26 18:08:26
|
Hi there,

On Tue, 26 Aug 2025, Way3.com Info wrote:

> ... Was that something setup via Backuppc or within FreeBSD?

More or less any Linux distribution can be set up to use one or more encrypted partition(s) or filesystem(s). The partitions/FSs on any given system don't all need to be encrypted - there can be advantages to having the bootable system on non-encrypted storage for example.

> Is there a way to encrypt with Backuppc?

BackupPC does not do any encryption. I'm not sure that you'd want to do it that way anyway. If you did, then BackupPC would need to have access to the encryption key at all times, which would increase the risk of it escaping. If you use a partition or filesystem encrypted by the OS, then the only time the key is used is when you type it at a terminal when the OS mounts the partition/FS for example at boot time. Software which makes use of the encrypted data does not need access to the encryption key, thus the risk of key compromise is reduced.

Of course none of this helps if the system itself is compromised.

--

73,
Ged.
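For anyone wanting to follow the same approach on Linux, a rough outline of putting the pool on a LUKS-encrypted partition is below. The device name and mount point are examples, and luksFormat destroys whatever is on the partition, so this only makes sense when setting up a new pool:

8<----------------------------------------------------------------------
# one-off setup: encrypt the partition, open it, create a filesystem
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 backuppc_pool        # passphrase typed at the console
mkfs.ext4 /dev/mapper/backuppc_pool
mount /dev/mapper/backuppc_pool /var/lib/BackupPC

# after every reboot the passphrase is needed once to re-open and mount:
cryptsetup open /dev/sdb1 backuppc_pool && \
    mount /dev/mapper/backuppc_pool /var/lib/BackupPC
8<----------------------------------------------------------------------

BackupPC itself then needs no knowledge of the key; it just sees an ordinary filesystem under its TopDir.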
From: Way3.com I. <in...@wa...> - 2025-08-25 21:56:08
|
Thanks! I am running on Rocky 8. Was that something setup via Backuppc or within FreeBSD? Is there a way to encrypt with Backuppc? From: Brad Alexander <st...@gm...> Sent: Friday, August 22, 2025 8:55 AM To: in...@wa...; General list for user discussion, questions and support <bac...@li...> Subject: Re: [BackupPC-users] Encryption On Fri, Aug 22, 2025 at 9:41 AM Way3.com Info <in...@wa... <mailto:in...@wa...> > wrote: I have a client that performs annual audits. They want to see the file on the live production server and then see the file my Backuppc server. Two questions: 1. How do you go about finding and showing the specific file on the Backuppc server? I would say from within the web interface. You could download it to somewhere on your system or another system and compare them. 2. Are the files encrypted on the Backuppc server? If not, is there a setting to encrypt the files? Not involved in backuppc development, but I run my backupoc install on FreeBSD, so I am running on an encrypted ZFS pool. When I was running it on linux, I was on an encrypted LUKS partition, so the pool was encrypted at rest. That is how I have traditionally done it. |
From: Les M. <les...@gm...> - 2025-08-22 17:11:42
|
On Fri, Aug 22, 2025 at 11:53 AM <bac...@ko...> wrote: > > Thanks. As I mentioned in a similar reply I have had similar > experiences if the machine stays dead... but I replaced it with new > identical HW and would like to keep the same name :) > Maybe you could rename the backup if you want to keep the snapshot of the old machine separate. -- Les Mikesell les...@gm... |
From: <bac...@ko...> - 2025-08-22 16:52:02
|
Thanks. As I mentioned in a similar reply I have had similar experiences if the machine stays dead... but I replaced it with new identical HW and would like to keep the same name :) G.W. Haywood wrote at about 14:58:36 +0100 on Friday, August 22, 2025: > Hi there, > 41;366;0c > On Fri, 22 Aug 2025, Ronald Colcernian wrote: > > On Wed, Aug 20, 2025, Matthias--- via BackupPC-users wrote: > >> On Wed, Aug 20, 2025, bac...@ko... wrote: > >> > >>> My device died and I am no longer to access the device or the data on > >>> the device. > >>> > >>> I would therefore like to save the last incremental as a "full" & > >>> "filled" backup so that I can mark it to "keep" it as the last valid > >>> backup. > >>> > >>> Is there any reliable way to convert the (last) incremental into a > >>> full backup? > >>> > >>> I imagine I could restore the backup to some free space on another > >>> device and then back it up again but that would require some hacking > >>> with device names, share names and path to make it "look" like a > >>> legitimate backup from the original device with consistent naming and > >>> paths. > >> > >> edit ../pc/<host>/backups and replace incr by full. > >> It will not change the content of your backup but you should be able to > >> avoid deletion of it. > >> > >> On the other hand - if you don't make new backups it will not be removed > >> anyway - as far as your > >> $Conf{IncrAgeMax} is not reached. > > > > If the backups on that host are disabled, the incrementals will hang around > > forever. > > > > That has been my experience. > > Here's an extract from my config.pl: > > 8<------------------------------------------------------------ > $Conf{IncrKeepCnt} = 6; > $Conf{IncrKeepCntMin} = 1; > $Conf{IncrAgeMax} = 30; > 8<------------------------------------------------------------ > > I have several machines which haven't been backed up for years as they > haven't been switched on in all that time, but which are still listed > in /etc/BackupPC/hosts. Picking one of them more or less at random: > > 8<------------------------------------------------------------ > This PC is used by backuppc. > Last email sent to backuppc was at 2025-08-22 05:14, subject "BackupPC: no recent backups on tornado". > Last status is state "idle" (no ping) as of 2025-08-22 14:00. > Last error is "no ping response from tornado". > Pings to tornado have failed 45258 consecutive times. > ... > ... > > Backup# Type Filled Level Start Date Duration/mins Age/days Server Backup Path > 25 incr yes 1 2020-02-20 22:05 7.7 2009.6 /var/lib/BackupPC/pc/tornado/25 > 24 incr no 1 2020-02-19 22:06 7.7 2010.6 /var/lib/BackupPC/pc/tornado/24 > 22 full yes 0 2020-02-17 22:05 135.1 2012.6 /var/lib/BackupPC/pc/tornado/22 > 16 full yes 0 2020-02-10 02:57 135.2 2020.4 /var/lib/BackupPC/pc/tornado/16 > 13 full yes 0 2019-12-06 20:08 156.5 2085.7 /var/lib/BackupPC/pc/tornado/13 > 6 full yes 0 2019-11-29 20:09 170.7 2092.7 /var/lib/BackupPC/pc/tornado/6 > 0 full yes 0 2019-11-22 13:12 1058.6 2100.0 /var/lib/BackupPC/pc/tornado/0 > 8<------------------------------------------------------------ > > Choosing the most recent backup because it was a filled incremental I > restored a fourteen-year-old notice from the UK tax office which was > last backed up in February 2020. It restored just fine. > > Jeff, I think that as long as the dead machine stays dead, you might > not actually need to do anything. :) > > -- > > 73, > Ged. > > > _______________________________________________ > BackupPC-users mailing list > Bac...@li... 
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > Wiki: https://github.com/backuppc/backuppc/wiki > Project: https://backuppc.github.io/backuppc/ |
From: <bac...@ko...> - 2025-08-22 15:56:35
|
Thanks - that has been my experience too. However, I am not disabling backups for that device as I replaced it with an identical one Ronald Colcernian wrote at about 21:00:12 -0400 on Thursday, August 21, 2025: > If the backups on that host are disabled, the incrementals will hang around > forever. > > That has been my experience. > > Rob C. > > On Wed, Aug 20, 2025, 14:04 Matthias--- via BackupPC-users < > bac...@li...> wrote: > > > edit ../pc/<host>/backups and replace incr by full. > > It will not change the content of your backup but you should be able to > > avoid deletion of it. > > > > On the other hand - if you don't make new backups it will not be removed > > anyway - as far as your > > $Conf{IncrAgeMax} is not reached. > > > > br > > Matthias > > > > Am Mittwoch, dem 20.08.2025 um 17:45 +0000 schrieb bac...@ko...: > > > My device died and I am no longer to access the device or the data on > > > the device. > > > > > > I would therefore like to save the last incremental as a "full" & > > > "filled" backup so that I can mark it to "keep" it as the last valid > > > backup. > > > > > > Is there any reliable way to convert the (last) incremental into a > > > full backup? > > > > > > I imagine I could restore the backup to some free space on another > > > device and then back it up again but that would require some hacking > > > with device names, share names and path to make it "look" like a > > > legitimate backup from the original device with consistent naming and > > > paths. > > > > > > Any ideas? > > > > > > > > > _______________________________________________ > > > BackupPC-users mailing list > > > Bac...@li... > > > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > > > Wiki: https://github.com/backuppc/backuppc/wiki > > > Project: https://backuppc.github.io/backuppc/ > > > > > > > > > > _______________________________________________ > > BackupPC-users mailing list > > Bac...@li... > > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > > Wiki: https://github.com/backuppc/backuppc/wiki > > Project: https://backuppc.github.io/backuppc/ > > > _______________________________________________ > BackupPC-users mailing list > Bac...@li... > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > Wiki: https://github.com/backuppc/backuppc/wiki > Project: https://backuppc.github.io/backuppc/ |
From: <bac...@ko...> - 2025-08-22 15:55:06
|
Are you sure this is "safe"? Matthias--- via BackupPC-users wrote at about 20:04:15 +0200 on Wednesday, August 20, 2025: > edit ../pc/<host>/backups and replace incr by full. > It will not change the content of your backup but you should be able to avoid deletion of it. > > On the other hand - if you don't make new backups it will not be removed anyway - as far as your > $Conf{IncrAgeMax} is not reached. > > br > Matthias > > Am Mittwoch, dem 20.08.2025 um 17:45 +0000 schrieb bac...@ko...: > > My device died and I am no longer to access the device or the data on > > the device. > > > > I would therefore like to save the last incremental as a "full" & > > "filled" backup so that I can mark it to "keep" it as the last valid > > backup. > > > > Is there any reliable way to convert the (last) incremental into a > > full backup? > > > > I imagine I could restore the backup to some free space on another > > device and then back it up again but that would require some hacking > > with device names, share names and path to make it "look" like a > > legitimate backup from the original device with consistent naming and > > paths. > > > > Any ideas? > > > > > > _______________________________________________ > > BackupPC-users mailing list > > Bac...@li... > > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > > Wiki: https://github.com/backuppc/backuppc/wiki > > Project: https://backuppc.github.io/backuppc/ > > > > > _______________________________________________ > BackupPC-users mailing list > Bac...@li... > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > Wiki: https://github.com/backuppc/backuppc/wiki > Project: https://backuppc.github.io/backuppc/ |
From: Brad A. <st...@gm...> - 2025-08-22 15:55:06
|
On Fri, Aug 22, 2025 at 9:41 AM Way3.com Info <in...@wa...> wrote: > I have a client that performs annual audits. They want to see the file on > the live production server and then see the file my Backuppc server. > > Two questions: > > 1. How do you go about finding and showing the specific file on the > Backuppc server? > > I would say from within the web interface. You could download it to somewhere on your system or another system and compare them. > 2. Are the files encrypted on the Backuppc server? If not, is there a > setting to encrypt the files? > Not involved in backuppc development, but I run my backupoc install on FreeBSD, so I am running on an encrypted ZFS pool. When I was running it on linux, I was on an encrypted LUKS partition, so the pool was encrypted at rest. That is how I have traditionally done it. |
From: Way3.com I. <in...@wa...> - 2025-08-22 14:39:53
|
I have a client that performs annual audits. They want to see the file on the live production server and then see the same file on my BackupPC server.

Two questions:

1. How do you go about finding and showing the specific file on the BackupPC server?

2. Are the files encrypted on the BackupPC server? If not, is there a setting to encrypt the files?
From: G.W. H. <ba...@ju...> - 2025-08-22 13:58:54
|
Hi there,

On Fri, 22 Aug 2025, Ronald Colcernian wrote:

> On Wed, Aug 20, 2025, Matthias--- via BackupPC-users wrote:
>> On Wed, Aug 20, 2025, bac...@ko... wrote:
>>
>>> My device died and I am no longer to access the device or the data on
>>> the device.
>>>
>>> I would therefore like to save the last incremental as a "full" &
>>> "filled" backup so that I can mark it to "keep" it as the last valid
>>> backup.
>>>
>>> Is there any reliable way to convert the (last) incremental into a
>>> full backup?
>>>
>>> I imagine I could restore the backup to some free space on another
>>> device and then back it up again but that would require some hacking
>>> with device names, share names and path to make it "look" like a
>>> legitimate backup from the original device with consistent naming and
>>> paths.
>>
>> edit ../pc/<host>/backups and replace incr by full.
>> It will not change the content of your backup but you should be able to
>> avoid deletion of it.
>>
>> On the other hand - if you don't make new backups it will not be removed
>> anyway - as far as your
>> $Conf{IncrAgeMax} is not reached.
>
> If the backups on that host are disabled, the incrementals will hang around
> forever.
>
> That has been my experience.

Here's an extract from my config.pl:

8<------------------------------------------------------------
$Conf{IncrKeepCnt} = 6;
$Conf{IncrKeepCntMin} = 1;
$Conf{IncrAgeMax} = 30;
8<------------------------------------------------------------

I have several machines which haven't been backed up for years as they haven't been switched on in all that time, but which are still listed in /etc/BackupPC/hosts. Picking one of them more or less at random:

8<------------------------------------------------------------
This PC is used by backuppc.
Last email sent to backuppc was at 2025-08-22 05:14, subject "BackupPC: no recent backups on tornado".
Last status is state "idle" (no ping) as of 2025-08-22 14:00.
Last error is "no ping response from tornado".
Pings to tornado have failed 45258 consecutive times.
...
...

Backup#  Type  Filled  Level  Start Date        Duration/mins  Age/days  Server Backup Path
25       incr  yes     1      2020-02-20 22:05  7.7            2009.6    /var/lib/BackupPC/pc/tornado/25
24       incr  no      1      2020-02-19 22:06  7.7            2010.6    /var/lib/BackupPC/pc/tornado/24
22       full  yes     0      2020-02-17 22:05  135.1          2012.6    /var/lib/BackupPC/pc/tornado/22
16       full  yes     0      2020-02-10 02:57  135.2          2020.4    /var/lib/BackupPC/pc/tornado/16
13       full  yes     0      2019-12-06 20:08  156.5          2085.7    /var/lib/BackupPC/pc/tornado/13
6        full  yes     0      2019-11-29 20:09  170.7          2092.7    /var/lib/BackupPC/pc/tornado/6
0        full  yes     0      2019-11-22 13:12  1058.6         2100.0    /var/lib/BackupPC/pc/tornado/0
8<------------------------------------------------------------

Choosing the most recent backup because it was a filled incremental I restored a fourteen-year-old notice from the UK tax office which was last backed up in February 2020. It restored just fine.

Jeff, I think that as long as the dead machine stays dead, you might not actually need to do anything. :)

--

73,
Ged.
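For a "cold" host like this, files can also be pulled out from the command line rather than the CGI. A sketch using BackupPC_tarCreate follows; the host and backup number are taken from the listing above, but the share name '/home' and the output paths are invented, and the exact options are worth confirming against the script's own usage text:

8<----------------------------------------------------------------------
su - backuppc
# write the whole share from backup 25 of host 'tornado' to a tar file
/usr/local/BackupPC/bin/BackupPC_tarCreate -h tornado -n 25 -s /home . \
    > /tmp/tornado-home.tar

# or extract straight into a scratch directory
/usr/local/BackupPC/bin/BackupPC_tarCreate -h tornado -n 25 -s /home . \
    | tar -x -C /tmp/tornado-restore
8<----------------------------------------------------------------------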
From: Ronald C. <rc...@dr...> - 2025-08-22 01:25:36
|
If the backups on that host are disabled, the incrementals will hang around forever. That has been my experience. Rob C. On Wed, Aug 20, 2025, 14:04 Matthias--- via BackupPC-users < bac...@li...> wrote: > edit ../pc/<host>/backups and replace incr by full. > It will not change the content of your backup but you should be able to > avoid deletion of it. > > On the other hand - if you don't make new backups it will not be removed > anyway - as far as your > $Conf{IncrAgeMax} is not reached. > > br > Matthias > > Am Mittwoch, dem 20.08.2025 um 17:45 +0000 schrieb bac...@ko...: > > My device died and I am no longer to access the device or the data on > > the device. > > > > I would therefore like to save the last incremental as a "full" & > > "filled" backup so that I can mark it to "keep" it as the last valid > > backup. > > > > Is there any reliable way to convert the (last) incremental into a > > full backup? > > > > I imagine I could restore the backup to some free space on another > > device and then back it up again but that would require some hacking > > with device names, share names and path to make it "look" like a > > legitimate backup from the original device with consistent naming and > > paths. > > > > Any ideas? > > > > > > _______________________________________________ > > BackupPC-users mailing list > > Bac...@li... > > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > > Wiki: https://github.com/backuppc/backuppc/wiki > > Project: https://backuppc.github.io/backuppc/ > > > > > _______________________________________________ > BackupPC-users mailing list > Bac...@li... > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > Wiki: https://github.com/backuppc/backuppc/wiki > Project: https://backuppc.github.io/backuppc/ > |
From: <Mat...@gm...> - 2025-08-20 18:45:20
|
Just to know: uid/gid, listet by BackupPC_ls are uid/gid of the original file on client side. It isn't uid/git of the file in the backup. br Matthias Am Mittwoch, dem 20.08.2025 um 17:34 +0200 schrieb Mat...@gm...: > Hello again :) > > * Patch https://github.com/backuppc/backuppc/issues/541 > - My patch is in github as Issue 541. I'm sure I missunderstand your "If no Github issue yet > exists, you could for example add your suggestion to > https://github.com/backuppc/backuppc/issues/5" > - I don't implement a "YetAnotherDumpPreUserCmd" because than I've to implement it in the GUI > and > in the Documentation too. On the other hand "DumpPreUserCmd" is just called sometime > earlier. I > believe it can affect other users only if they expect a limited time between > "DumpPreUserCmd" and > "DumpPreShareCmd". > - But if there would be a chance to get this patch into normal BackuPC source, I'd implement > it as > you asked for and add it to above Issue 541 > > The others just for giving you an answer. I know what I have to do next: > * I'll provide my script to https://github.com/backuppc/backuppc/wiki/How-to-find-which-backups- > reference-a-particular-pool-file, as soon as testing is finished. > * Patches: What you propose is (similar) what I'm done. 2 directories and running: > - diff -ruN ... > - patch -p0 ... > * > * "bpc_attrib_dirRead: can't open" for existing file: > - I have no 1014 in my /etc/passwd and no 544 in my /etc/group. But there are a lot of files, > readable by BackupPC_ls, with 1014/513 or 544/513. I don't know where this uid/gid are > coming > from. Before migration the files are owned by backuppc:backuppc (106/106) > - > - What I'm wondering about: > + /home/Backup4U/.ssh/authorized_keys has digest e86ae4879765b4579bb3deee0626e88b > + bpc_attrib_dirRead: can't open > /var/lib/backuppc/pool/74/fa/74faf6dde97ccc2439ee042541197853 > + That's another digest, probably another file or directory what have not been outlined? > - But don't worry about. I have a V3 backup from the already migrated V4 backup. I'll migrate > it to > V4 using ulimit -n 10000. I'm very sure the migrate job will not have any problems than. If > not - > it is a really old backup and I'll just remove it 🤭️ > * 11,000 out of 30,000,000: > - No I don't asked for your opinion. I find out that this missing files are from 2009 till > 2023. So > no relevant backup is affected. Just historical data. > * e2fsck: Me too. I don't have issues with ext4 since several years. Thanks to journaling🙏️. I > probably lost files several years ago. e2fsck repaired something and I removed the files from > Lost+Found because I'm not able to restore files from there🤣️ > > Have a great day > Matthias > > Am Dienstag, dem 19.08.2025 um 18:42 +0100 schrieb G.W. Haywood: > > Hi there, > > > > On Sun, 17 Aug 2025, Mat...@gm... wrote: > > > > > ... > > > ... > > > BTW1: > > > * is there a place where I could provide this (bash) script? > > > > The scripts section at https://github.com/backuppc/backuppc/wiki > > > > > * sometimes I'm developing patches to BackupPC perl scripts (applyable with patch -p0 --dry- > > > run > > > -- > > > reject-file=oops.rej <file.patch) and better documented than below. Is there a place where > > > I > > > could provide them? > > > > Github is the place. > > > > Being an Old School greybeard I'm very comfortable with patches, but I > > guess most people never use 'patch' any more. The latest thing is > > 'pull requests' on Github. 
Unfortunately more or less every time I > > try to use Github's User Interface it breaks in some new and to be > > frank very uninteresting way, so I tend to use git from the command > > line to do any code changes on Github (and gitk to look at history). > > I'm not above downloading two different versions of the whole thing as > > an archive, extracting them to a temporary directory, and then running > > 'diff -r -U3 ...' on the two subdirectories. > > > > The point buried beneath all this is that if you can recruit a few > > people to test your patches that will help enormously. Despite my own > > reservations about and dreadful experiences of Github I think you will > > get better traction if (1) you use Github's facilities to publish your > > changes for testing and (2) after you've made the changes available on > > Github you send a message to this mailing list announcing them. In my > > view it's much better to keep discussion on this mailing list than to > > try to use Github as some kind of forum. > > > > > - BackupPC_backupDuplicate can need some time and the client could go to sleep in the > > > meantime. A > > > patch to??BackupPC_dump move the call of DumpPreUserCmd before execution of > > > BackupPC_backupDuplicate and a users DumpPreUserCmd can disable hibernation on client > > > side. > > > - BackupPC_restore could need some time between calculation of $lastNum and using it for > > > RestoreInfo and RestoreLOG. A patch move the calculation behind the call of the > > > RestorePostUserCmd and if someone, like me??, is calling?BackupPC_restore from a > > > programm > > > several times in parallel for different shares and dirs of a host, each single call can > > > use > > > another?$lastNum. > > > > All good stuff. :) > > > > Your patch for BackupPC_restore is on Github at > > > > https://github.com/backuppc/backuppc/issues/541 > > > > I know that at some point I've at least seen something describing the > > rationale behind the BackupPC_backupDuplicate patch, but I have been > > unable to find it (for my TODO list:). Did you mention it on Github, > > or this mailing list or somewhere...? > > > > In general I'm much happier with changes which add functionality as an > > *option* and which won't affect existing users in any way unless they > > deliberately ask for the option. In both patches I'd worry much less > > if you added YetAnotherDumpPreUserCmd, so if YetAnotherDumpPreUserCmd > > is undefined there would be no change to the current operation. If no > > Github issue yet exists, you could for example add your suggestion to > > > > https://github.com/backuppc/backuppc/issues/5 > > > > which I want to look into more carefully when I can get to it. > > > > > BTW2: > > > ... > > > ... > > > -rwxrwx--- 1014/544 415 2008-12-14 14:12:10 /home/Backup4U/.ssh/authorized_keys > > > ... > > > ... I don't understand the second one because "/home/Backup4U/.ssh/authorized_keys" exists. > > > > To be able to make a backup of a file, the user 'backuppc' (or > > whatever you have set set in the variable $Conf{BackupPCUser} in > > /etc/BackupPC/config.pl) needs to be able to read the file when it > > tries to make the backup. Can your $Conf{BackupPCUser} read a file > > which has UID 1014 and GID 544, but no 'world' read permission? > > > > > I think the main reason of this corruptions in my system was an > > > insufficient maximum number of open file descriptors. As soon as I > > > recognized this and set "ulimit -n 10000" all remaining migrations > > > went well. 
> > > > Useful information. It seems to me that it should be possible to add > > a check in the conversion script to check ulimit, maybe warn about it. > > I added it to my TODO, but don't hold your breath it's low priority. > > > > > > It seems likely that your conversion of V3 backups to V4 backups did > > > > not go very well.? You said that there were 'issues', and that now the > > > > count of missing pool files is nearly two thousand.? (1.800 - is that > > > > right? in the UK we use comma , not decimal point . as the thousands > > > > separator).? You are right to want to investigate.? Are you able to > > > > recover a few files successfully?? Perhaps choose some at random, and > > > > some because they're big/small/valuable/new/old? > > > 1,800 missing pool files found last night, in 1/16 of the pool. In total it is more than > > > 11,000 > > > out > > > of 30,000,000. So only a few files are affected??. > > > > I'm not sure if you're asking if I suggested that 11,000 files is only > > "a few files". For the avoidance of doubt, I did not. I asked if you > > were able to restore a few files but I suggested choosing the files in > > a number of different ways to get a hopefully representative sample of > > your success rate, as a way of checking that your backup recovery can > > "sort of work" on a good day. Obviously it isn't an exhaustive test, > > but I'd probably try something like that before I tried something more > > exhaustive. > > > > > Probably because of filesystem corrections made by e2fsck in the > > > past or because of some aborted migrations, mentioned above. > > > > I have no idea what will happen to a V3 BackupPC pool if e2fsck was > > obliged to make corrections to it, but I wouldn't feel that I could > > trust it without making careful tests. My personal view is that the > > filesystem for your backup system must be completely beyond reproach, > > and if it starts to need maintenance of that kind then it's probably > > time to replace it unless there's some obvious explanation with an > > equally obvious and easy fix. There was a time, decades ago, when I > > spent many hours every year fixing filesystems, but in general they > > are all a lot more reliable nowadays and now I can't remember the last > > time I had to run any kind of filesystem fixing tool - even on the USB > > attached hard drives on the several Raspberry Pis which we use here. > > > |
From: <Mat...@gm...> - 2025-08-20 18:04:29
|
edit ../pc/<host>/backups and replace incr by full. It will not change the content of your backup but you should be able to avoid deletion of it. On the other hand - if you don't make new backups it will not be removed anyway - as far as your $Conf{IncrAgeMax} is not reached. br Matthias Am Mittwoch, dem 20.08.2025 um 17:45 +0000 schrieb bac...@ko...: > My device died and I am no longer to access the device or the data on > the device. > > I would therefore like to save the last incremental as a "full" & > "filled" backup so that I can mark it to "keep" it as the last valid > backup. > > Is there any reliable way to convert the (last) incremental into a > full backup? > > I imagine I could restore the backup to some free space on another > device and then back it up again but that would require some hacking > with device names, share names and path to make it "look" like a > legitimate backup from the original device with consistent naming and > paths. > > Any ideas? > > > _______________________________________________ > BackupPC-users mailing list > Bac...@li... > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users > Wiki: https://github.com/backuppc/backuppc/wiki > Project: https://backuppc.github.io/backuppc/ |
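If you try the edit suggested above, it is worth keeping a copy of the file first and checking the result. A sketch along these lines, where the backup number 123 and the path are examples and the assumption (worth eyeballing in your own file) is that the type sits in the second tab-separated field:

8<----------------------------------------------------------------------
cd /var/lib/BackupPC/pc/<host>
cp -a backups backups.orig
# change the type field of backup number 123 from incr to full
awk -F'\t' -v OFS='\t' '$1 == 123 { $2 = "full" } { print }' backups.orig > backups
diff backups.orig backups
8<----------------------------------------------------------------------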
From: <Mat...@gm...> - 2025-08-20 17:57:54
|
I found the reason for this mystery: /home/Backup4U/.ssh: -rwxrwx--- 1014/544 415 2008-12-14 14:12:10 /home/Backup4U/.ssh/authorized_keys (e86ae4879765b4579bb3deee0626e88b) bpc_attrib_dirRead: can't open /var/lib/backuppc/pool/74/fa/74faf6dde97ccc2439ee042541197853 /home/Backup4U/OBS: -rwxrwx--- 1014/544 5 2008-12-22 23:29:12 /home/Backup4U/OBS/.bash_history (f4b524261fce06c1fbd10b4681ad0b97) It is covering the /fhome/attrib_74faf6dde97ccc2439ee042541197853 file which is not in pool. Properly because of a failed migration because of my too small open file limits. Maybe BackupPC_ls could print this information 🤭️ Br Matthias Am Mittwoch, dem 20.08.2025 um 17:34 +0200 schrieb Mat...@gm...: > Hello again :) > > * Patch https://github.com/backuppc/backuppc/issues/541 > - My patch is in github as Issue 541. I'm sure I missunderstand your "If no Github issue yet > exists, you could for example add your suggestion to > https://github.com/backuppc/backuppc/issues/5" > - I don't implement a "YetAnotherDumpPreUserCmd" because than I've to implement it in the GUI > and > in the Documentation too. On the other hand "DumpPreUserCmd" is just called sometime > earlier. I > believe it can affect other users only if they expect a limited time between > "DumpPreUserCmd" and > "DumpPreShareCmd". > - But if there would be a chance to get this patch into normal BackuPC source, I'd implement > it as > you asked for and add it to above Issue 541 > > The others just for giving you an answer. I know what I have to do next: > * I'll provide my script to https://github.com/backuppc/backuppc/wiki/How-to-find-which-backups- > reference-a-particular-pool-file, as soon as testing is finished. > * Patches: What you propose is (similar) what I'm done. 2 directories and running: > - diff -ruN ... > - patch -p0 ... > * > * "bpc_attrib_dirRead: can't open" for existing file: > - I have no 1014 in my /etc/passwd and no 544 in my /etc/group. But there are a lot of files, > readable by BackupPC_ls, with 1014/513 or 544/513. I don't know where this uid/gid are > coming > from. Before migration the files are owned by backuppc:backuppc (106/106) > - > - What I'm wondering about: > + /home/Backup4U/.ssh/authorized_keys has digest e86ae4879765b4579bb3deee0626e88b > + bpc_attrib_dirRead: can't open > /var/lib/backuppc/pool/74/fa/74faf6dde97ccc2439ee042541197853 > + That's another digest, probably another file or directory what have not been outlined? > - But don't worry about. I have a V3 backup from the already migrated V4 backup. I'll migrate > it to > V4 using ulimit -n 10000. I'm very sure the migrate job will not have any problems than. If > not - > it is a really old backup and I'll just remove it 🤭️ > * 11,000 out of 30,000,000: > - No I don't asked for your opinion. I find out that this missing files are from 2009 till > 2023. So > no relevant backup is affected. Just historical data. > * e2fsck: Me too. I don't have issues with ext4 since several years. Thanks to journaling🙏️. I > probably lost files several years ago. e2fsck repaired something and I removed the files from > Lost+Found because I'm not able to restore files from there🤣️ > > Have a great day > Matthias > > Am Dienstag, dem 19.08.2025 um 18:42 +0100 schrieb G.W. Haywood: > > Hi there, > > > > On Sun, 17 Aug 2025, Mat...@gm... wrote: > > > > > ... > > > ... > > > BTW1: > > > * is there a place where I could provide this (bash) script? 
> >
> > The scripts section at https://github.com/backuppc/backuppc/wiki
> >
> > > * sometimes I'm developing patches to BackupPC perl scripts (applied with
> > >   patch -p0 --dry-run --reject-file=oops.rej <file.patch) and better documented than below.
> > >   Is there a place where I could provide them?
> >
> > Github is the place.
> >
> > Being an Old School greybeard I'm very comfortable with patches, but I
> > guess most people never use 'patch' any more. The latest thing is
> > 'pull requests' on Github. Unfortunately more or less every time I
> > try to use Github's User Interface it breaks in some new and, to be
> > frank, very uninteresting way, so I tend to use git from the command
> > line to do any code changes on Github (and gitk to look at history).
> > I'm not above downloading two different versions of the whole thing as
> > an archive, extracting them to a temporary directory, and then running
> > 'diff -r -U3 ...' on the two subdirectories.
> >
> > The point buried beneath all this is that if you can recruit a few
> > people to test your patches that will help enormously. Despite my own
> > reservations about and dreadful experiences of Github, I think you will
> > get better traction if (1) you use Github's facilities to publish your
> > changes for testing and (2) after you've made the changes available on
> > Github you send a message to this mailing list announcing them. In my
> > view it's much better to keep discussion on this mailing list than to
> > try to use Github as some kind of forum.
> >
> > > - BackupPC_backupDuplicate can take some time and the client could go to sleep in the
> > >   meantime. A patch to BackupPC_dump moves the call of DumpPreUserCmd before the execution
> > >   of BackupPC_backupDuplicate, so a user's DumpPreUserCmd can disable hibernation on the
> > >   client side.
> > > - BackupPC_restore can take some time between the calculation of $lastNum and using it for
> > >   RestoreInfo and RestoreLOG. A patch moves the calculation to after the call of the
> > >   RestorePostUserCmd, so if someone, like me, is calling BackupPC_restore from a program
> > >   several times in parallel for different shares and dirs of a host, each single call can
> > >   use a different $lastNum.
> >
> > All good stuff. :)
> >
> > Your patch for BackupPC_restore is on Github at
> >
> > https://github.com/backuppc/backuppc/issues/541
> >
> > I know that at some point I've at least seen something describing the
> > rationale behind the BackupPC_backupDuplicate patch, but I have been
> > unable to find it (for my TODO list:). Did you mention it on Github,
> > or this mailing list, or somewhere...?
> >
> > In general I'm much happier with changes which add functionality as an
> > *option* and which won't affect existing users in any way unless they
> > deliberately ask for the option. In both patches I'd worry much less
> > if you added YetAnotherDumpPreUserCmd, so if YetAnotherDumpPreUserCmd
> > is undefined there would be no change to the current operation. If no
> > Github issue yet exists, you could for example add your suggestion to
> >
> > https://github.com/backuppc/backuppc/issues/5
> >
> > which I want to look into more carefully when I can get to it.
> >
> > > BTW2:
> > > ...
> > > ...
> > > -rwxrwx--- 1014/544 415 2008-12-14 14:12:10 /home/Backup4U/.ssh/authorized_keys
> > > ...
> > > ... I don't understand the second one because "/home/Backup4U/.ssh/authorized_keys" exists.
> >
> > To be able to make a backup of a file, the user 'backuppc' (or
> > whatever you have set in the variable $Conf{BackupPCUser} in
> > /etc/BackupPC/config.pl) needs to be able to read the file when it
> > tries to make the backup. Can your $Conf{BackupPCUser} read a file
> > which has UID 1014 and GID 544, but no 'world' read permission?
> >
> > > I think the main reason for these corruptions in my system was an
> > > insufficient maximum number of open file descriptors. As soon as I
> > > recognized this and set "ulimit -n 10000" all remaining migrations
> > > went well.
> >
> > Useful information. It seems to me that it should be possible to add
> > a check in the conversion script to check ulimit, and maybe warn about
> > it. I added it to my TODO, but don't hold your breath - it's low priority.
> >
> > > > It seems likely that your conversion of V3 backups to V4 backups did
> > > > not go very well. You said that there were 'issues', and that now the
> > > > count of missing pool files is nearly two thousand. (1.800 - is that
> > > > right? In the UK we use a comma, not a decimal point, as the thousands
> > > > separator.) You are right to want to investigate. Are you able to
> > > > recover a few files successfully? Perhaps choose some at random, and
> > > > some because they're big/small/valuable/new/old?
> > > 1,800 missing pool files found last night, in 1/16 of the pool. In total it is more than
> > > 11,000 out of 30,000,000. So only a few files are affected.
> >
> > I'm not sure if you're asking if I suggested that 11,000 files is only
> > "a few files". For the avoidance of doubt, I did not. I asked if you
> > were able to restore a few files, but I suggested choosing the files in
> > a number of different ways to get a hopefully representative sample of
> > your success rate, as a way of checking that your backup recovery can
> > "sort of work" on a good day. Obviously it isn't an exhaustive test,
> > but I'd probably try something like that before I tried something more
> > exhaustive.
> >
> > > Probably because of filesystem corrections made by e2fsck in the
> > > past, or because of some aborted migrations, mentioned above.
> >
> > I have no idea what will happen to a V3 BackupPC pool if e2fsck was
> > obliged to make corrections to it, but I wouldn't feel that I could
> > trust it without making careful tests. My personal view is that the
> > filesystem for your backup system must be completely beyond reproach,
> > and if it starts to need maintenance of that kind then it's probably
> > time to replace it unless there's some obvious explanation with an
> > equally obvious and easy fix. There was a time, decades ago, when I
> > spent many hours every year fixing filesystems, but in general they
> > are all a lot more reliable nowadays and now I can't remember the last
> > time I had to run any kind of filesystem fixing tool - even on the USB
> > attached hard drives on the several Raspberry Pis which we use here.
> >
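Since a too-small open file limit keeps coming up as the likely cause of the failed migrations, here is a rough sketch of re-running the v3-to-v4 migration with a raised limit. The install path, the host name "myhost" and the exact options are assumptions from memory - check the script's usage output and your distribution's install location first:

    # show the current soft and hard open-file limits for the backuppc user
    sudo -u backuppc bash -c 'ulimit -Sn; ulimit -Hn'

    # raise the soft limit just for this one command and re-run the migration
    # (the hard limit must already allow 10000)
    sudo -u backuppc bash -c \
        'ulimit -n 10000 && /usr/share/backuppc/bin/BackupPC_migrateV3toV4 -h myhost'

The same ulimit line can go into whatever wrapper normally starts the migration, which is effectively what setting "ulimit -n 10000" above achieved.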
From: <bac...@ko...> - 2025-08-20 17:46:00
My device died and I am no longer able to access the device or the data on it.

I would therefore like to save the last incremental as a "full" & "filled" backup so that I can mark it to be kept as the last valid backup.

Is there any reliable way to convert the (last) incremental into a full backup?

I imagine I could restore the backup to some free space on another device and then back it up again, but that would require some hacking with device names, share names and paths to make it "look" like a legitimate backup from the original device with consistent naming and paths.

Any ideas?
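For what it's worth, the first half of that idea (getting the data out of the last backup onto free space) can usually be done with the stock BackupPC_tarCreate. A rough sketch, assuming a Debian-style install path, a host "myhost" and a share "/" - all placeholders:

    # write the most recent backup (-n -1) of share / for host myhost to a tar file;
    # the final "/" is the path within the share
    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
        -h myhost -n -1 -s / / > /mnt/scratch/myhost-last.tar

    # unpack as root so ownership and permissions survive
    sudo mkdir -p /mnt/scratch/myhost
    sudo tar -xpf /mnt/scratch/myhost-last.tar -C /mnt/scratch/myhost

Whether re-backing-up that tree under faked host and share names is worth the trouble is the open question; the suggestion elsewhere in this thread (relabel the existing incremental as "full" in pc/<host>/backups) avoids the renaming gymnastics entirely.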