From: G.W. H. <ba...@ju...> - 2026-04-24 15:55:34
Hi there,

Quoting an article here:

  https://www.openwall.com/lists/oss-security/2026/04/16/2

SentinelOne claims, in an AI-generated advisory here:

  https://www.sentinelone.com/vulnerability-database/cve-2026-41035/

that there is a vulnerability in rsync versions 3.0.1 to 3.4.1 which is (again, claimed to be) exploitable under certain circumstances. The currently released version of rsync-bpc is based on rsync 3.1.3 and could therefore be expected to suffer from this vulnerability.

There is indeed an issue, and the rsync authors have committed a fix for it here:

  https://github.com/RsyncProject/rsync/commit/bb0a8118c2d2ab01140bac5e4e327e5e1ef90c9c

The authors state that this fault is not exploitable. By default the configuration of rsync-bpc does not use the '--xattrs' option, which the AI thinks allows an exploit. In any case, I've made the same fix in the rsync-bpc code currently in development, and it will be released when rsync-bpc 3.4.1.0rc1 comes out (which I hope will be soon :).

For those of you who were wondering: at the moment I'm working on an occasional excessive memory usage issue. When that's fixed, rsync-bpc 3.4.1.0rc1 will be ready for release. At the moment I see no need to rush it.

--
73, Ged.

From: G.W. H. <ba...@ju...> - 2026-04-07 15:17:09
Hello again,

On Tue, 7 Apr 2026, Emmett Culley wrote:

> On 4/6/26 7:26 AM, G.W. Haywood wrote:
> > On Mon, 6 Apr 2026, Emmett Culley wrote:
> >
> > ...
> >> ... can only restore one file into an existing directory... both rsyncd and rsync methods and neither one can restore multiple files.
> >>
> >> ... google ... nothing ...
> >>
> >> Any suggestion would be appreciated.
> >
> > The place to start is probably the BackupPC documentation:
> >
> >   https://backuppc.github.io/backuppc/BackupPC.html
> >
> > especially this part:
> >
> > 8<----------------------------------------------------------------------
> >
> >  * Flexible restore options. Single files can be downloaded from any backup directly from the CGI interface. Zip or Tar archives for selected files or directories from any backup can also be downloaded from the CGI interface. Finally, direct restore to the client machine (using smb or tar) for selected files or directories is also supported from the CGI interface.
> >
> > 8<----------------------------------------------------------------------
> >
> > It looks to me like things are behaving as documented. ...
> ...
> This used to work flawlessly. Select a directory to restore, click on the restore buttons, files are restored on the host as expected.

Seems like the goal posts just moved - quite a distance. Your original question was "Why doesn't this work?" Your question now seems to have changed to "Why did something which used to work suddenly stop working?" I'm far from convinced that it did, but let's look into it.

When I first responded I tried *really* hard not to do the ESR thing and sound condescending or combative. I went against my instincts, which were to ask you exactly what software you're using and how you configured it. I wanted to ask you for operating system details and software version numbers, full copies of your configuration files and relevant extracts from your logs. All that does tend to sound, well, not very welcoming, but I'm asking now because there's no better way.

> What changed?

If something changed, it was you who changed it. You might need some help in figuring out how the change came about, and that's what we're all here for. First you need to tell us exactly what you've done, as until we have full information we can only guess. Right now my best guesses, if something really did change, are that either you've upgraded some software or you've changed some configuration. We need you to tell us what you did.

> Why would they support restore via SMB but not rsync? Makes no sense to me.

When a network share is mounted over SMB on a BackupPC host, the files can be copied from the remote share to the local (BackupPC) host using utilities like cp and (plain) tar. Files can be copied from the local host to a remote share by the same means. But without help, utilities like cp and tar can't copy remote files directly; they don't know how. That's why 'rcp' and 'rsh' came about, and later 'rsync'.

The waters unfortunately are already getting a bit muddy, because 'tar' knows how to cheat a little: it can use rsh to read/write remote tar archives, but not to read/write remote directories - that's where rsync comes into its own. If you use rsync over ssh, the commands you give effectively create a transparent network connection between the remote host and the local host *purely for that one connection*, so that effectively you're doing a *local* copy. The ssh connection just hides the fact that the data, instead of coming from a local file store, is coming from a remote one.

SMB does such remote access stuff for an entire remote directory, and once mounted it keeps it available for as long as you like, so you can use ordinary operating system tools like 'cp' and 'cat' on the files in that directory. That's a whole new level of (persistent) remote access, not like the z-over-ssh thing (where z can be almost anything - for example you can run 'cat' over ssh, or run an 'X' server over ssh, although there it just got a lot more complicated). NFS has a lot of the characteristics of SMB. These aren't always characteristics that you'd want, but that's another story. Suffice it to say that you can copy to/from NFS mounts in much the same way that you can with SMB (if, as always, you have things like permissions set up properly).

But if you copy using what I'll call 'plain' rsync - that is, rsync which is *not* running over ssh - then you're not doing the same thing at all. You're asking a bit of code which is network-aware to do the data transfer to and from the remote host for you. The rsync utility connects to a remote server and asks for permission for a connection. Simple operating system utilities aren't involved; they don't know how to do it. You can tell rsync to go fetch a whole bunch of files from a remote host, or write them and the entire directory hierarchy to the remote host. *Two* rsync programs run. One of them is on the remote, and of course is controlled by the remote's administrator. These two programs work together, transferring data from one to the other, to do the copying that needs to be done. When you use 'cp', for example, only one program is running. It only knows how to talk to the local OS.

I glossed over a couple of things, but does that make more sense?

In your posts of last July (Github issue #545) you mentioned that depending on the host being backed up you may use rsync over ssh or you may use rsyncd. Does that have a bearing on this?

--
73, Ged.

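For what it's worth, the three transfer styles Ged contrasts above can be sketched in a few illustrative command lines. This is only a sketch: the hostname, share/module names and paths below are invented for the example, not taken from anyone's configuration.

```shell
# rsync over ssh: the local rsync starts the remote rsync itself via
# ssh, so the pair behaves like one tunnelled, effectively local copy.
rsync -av -e ssh backuppc@client.example.com:/home/user/ /tmp/restore/

# "plain" rsync: the client contacts a standalone rsyncd on port 873
# and asks for a module defined in the remote /etc/rsyncd.conf.
rsync -av rsync://client.example.com/homes/user/ /tmp/restore/

# SMB: mount the share first; once mounted, ordinary local tools such
# as cp and cat work on it for as long as it stays mounted.
mount -t cifs //client.example.com/homes /mnt/share -o username=backup
cp -a /mnt/share/user/. /tmp/restore/
```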
From: Emmett C. <lst...@we...> - 2026-04-07 13:26:06
On 4/7/26 5:16 AM, jbk wrote:
> On 4/5/26 12:02 PM, Emmett Culley wrote:
>> Recently I attempted to restore a directory and all its files to a host. All that was restored was the top directory. None of the files or sub-directories were restored.
>>
>> The log indicated that zero files were restored.
>>
>> If I attempt to restore a single file, the file gets restored and the log tells me one file was restored.
>>
>> I am able to restore to a tar file and then restore the files in a directory, so I know the files are in the archive.
>>
>> It doesn't matter what host I try to restore, I can only restore one file into an existing directory. I use both rsyncd and rsync methods and neither one can restore multiple files.
>>
>> I searched via Google and can find nothing that clues me in to where to look.
>>
>> Any suggestion would be appreciated.
>>
>> Emmett
>
> Sorry not to think of this earlier, but it is important to understand: if your definition of a share is just "/", the root file system, then you will have no backups of files and folders in shares that are on a separate partition, such as /home if it is a mountpoint. It will back up the folder name that defines the mount but will not descend into it. You have to define /home as a separate share to back it up. This is done to avoid the case where the same network share is mounted on multiple machines or in multiple user directories, thus creating unnecessary churn backing up the same directory multiple times.

In all cases the files I need are backed up, and so are in the archive. I only back up machine-specific files, using defined shares to get only what I need to restore.

Emmett

From: Emmett C. <lst...@we...> - 2026-04-07 13:20:57
On 4/6/26 6:26 PM, jbk wrote:
> On 4/5/26 12:02 PM, Emmett Culley wrote:
>> Recently I attempted to restore a directory and all its files to a host. All that was restored was the top directory. None of the files or sub-directories were restored.
>>
>> The log indicated that zero files were restored.
>>
>> If I attempt to restore a single file, the file gets restored and the log tells me one file was restored.
>>
>> I am able to restore to a tar file and then restore the files in a directory, so I know the files are in the archive.
>>
>> It doesn't matter what host I try to restore, I can only restore one file into an existing directory. I use both rsyncd and rsync methods and neither one can restore multiple files.
>>
>> I searched via Google and can find nothing that clues me in to where to look.
>>
>> Any suggestion would be appreciated.
>>
>> Emmett
>
> I only get digests of replies to the list, so you may have gotten some help already. It would be good to know what version of BackupPC you are using. I'm using 4.4.0 on a Rocky 9.7 server. I just did a test restore of a month-old user directory of my laptop to an alternate path on this same machine, so as not to mix existing data. In the CGI I browsed to an existing backup # for the machine and found the top-level share /home, which is a mountpoint, and clicked on the square to reveal the user directories below it, with the option to select individual directories or all of them. In this case there is only one directory, because I'm the only user, but I selected all and then clicked Restore. This brought up another dialog that allowed me either to restore directly back to the source destination or to pick an alternate path, which I chose to do. Following the prompts, I successfully restored the user directory to the sub-directory, as per the log output here:
>
> .......................................................................
>
> Trimming / from remoteDir -> /data/jbkdat/tmp
> Wrote source file list to /var/lib/BackupPC//pc/lt14/.rsyncFilesFrom69427: /
> Running: /usr/bin/rsync_bpc --bpc-top-dir /var/lib/BackupPC/ --bpc-host-name
> lt14 --bpc-share-name /home --bpc-bkup-num 48 --bpc-bkup-comp 3 --bpc-bkup-merge
> 48/3/4 --bpc-log-level 1 --bpc-attrib-new -e /usr/bin/ssh\ -l\ backuppc
> --rsync-path=/usr/bin/sudo\ /usr/bin/rsync --recursive --super
> --protect-args --numeric-ids --perms --owner --group -D --times --links
> --hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L
> --stats --files-from=/var/lib/BackupPC//pc/lt14/.rsyncFilesFrom69427 / lt14:/data/jbkdat/tmp
> This is the rsync child about to exec /usr/bin/rsync_bpc
> [ skipped 5526 lines ]
>
> Number of files: 5,526 (reg: 4,515, dir: 996, link: 13, special: 2)
> Number of created files: 5,525 (reg: 4,515, dir: 995, link: 13, special: 2)
> Number of deleted files: 0
> Number of regular files transferred: 4,515
> Total file size: 1,532,213,410 bytes
>
> .............................................................................................
>
> To me it seems to work as intended. I would never do a full restore to the source directory, because there is too great a chance of applications getting out of sync. Where I have done full restores is when I move users to another machine, and that has worked well as long as UIDs are correctly created. You can't restore a top-level share in one step, but as demonstrated you can restore whole subdirectories below it, at least in this small test.
>
> --
> Jim KR

Thanks Jim,

That has always been my experience as well. I am on version 4.4.0. I upgraded to 4.x about three years ago, and all was working until recently. I am a developer and sometimes need to restore a client's site (usually WordPress) when they mess it up. This has always worked until now. One response I got suggested that restore shouldn't work with rsync; your response is a relief.

I'll keep looking.

Emmett

From: jbk <jb...@kj...> - 2026-04-07 12:17:11
On 4/5/26 12:02 PM, Emmett Culley wrote:
> Recently I attempted to restore a directory and all its files to a host. All that was restored was the top directory. None of the files or sub-directories were restored.
>
> The log indicated that zero files were restored.
>
> If I attempt to restore a single file, the file gets restored and the log tells me one file was restored.
>
> I am able to restore to a tar file and then restore the files in a directory, so I know the files are in the archive.
>
> It doesn't matter what host I try to restore, I can only restore one file into an existing directory. I use both rsyncd and rsync methods and neither one can restore multiple files.
>
> I searched via Google and can find nothing that clues me in to where to look.
>
> Any suggestion would be appreciated.
>
> Emmett

Sorry not to think of this earlier, but it is important to understand: if your definition of a share is just "/", the root file system, then you will have no backups of files and folders in shares that are on a separate partition, such as /home if it is a mountpoint. It will back up the folder name that defines the mount but will not descend into it. You have to define /home as a separate share to back it up. This is done to avoid the case where the same network share is mounted on multiple machines or in multiple user directories, thus creating unnecessary churn backing up the same directory multiple times.

--
Jim KR

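Jim's point about mountpoints maps directly onto the per-host BackupPC configuration. A minimal sketch follows; the filename and share list are examples only, but `$Conf{RsyncShareName}` is the relevant setting for the rsync transfer method:

```perl
# /etc/BackupPC/pc/myhost.pl -- hypothetical per-host config file.
# BackupPC does not descend into mountpoints, so every filesystem
# that needs backing up must appear as its own share.
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = ['/', '/home'];  # '/' alone would not descend into /home
```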
From: jbk <jb...@kj...> - 2026-04-07 01:42:12
On 4/5/26 12:02 PM, Emmett Culley wrote:
> Recently I attempted to restore a directory and all its files to a host. All that was restored was the top directory. None of the files or sub-directories were restored.
>
> The log indicated that zero files were restored.
>
> If I attempt to restore a single file, the file gets restored and the log tells me one file was restored.
>
> I am able to restore to a tar file and then restore the files in a directory, so I know the files are in the archive.
>
> It doesn't matter what host I try to restore, I can only restore one file into an existing directory. I use both rsyncd and rsync methods and neither one can restore multiple files.
>
> I searched via Google and can find nothing that clues me in to where to look.
>
> Any suggestion would be appreciated.
>
> Emmett

I only get digests of replies to the list, so you may have gotten some help already. It would be good to know what version of BackupPC you are using. I'm using 4.4.0 on a Rocky 9.7 server.

I just did a test restore of a month-old user directory of my laptop to an alternate path on this same machine, so as not to mix existing data. In the CGI I browsed to an existing backup # for the machine and found the top-level share /home, which is a mountpoint, and clicked on the square to reveal the user directories below it, with the option to select individual directories or all of them. In this case there is only one directory, because I'm the only user, but I selected all and then clicked Restore. This brought up another dialog that allowed me either to restore directly back to the source destination or to pick an alternate path, which I chose to do. Following the prompts, I successfully restored the user directory to the sub-directory, as per the log output here:

.......................................................................

Trimming / from remoteDir -> /data/jbkdat/tmp
Wrote source file list to /var/lib/BackupPC//pc/lt14/.rsyncFilesFrom69427: /
Running: /usr/bin/rsync_bpc --bpc-top-dir /var/lib/BackupPC/ --bpc-host-name
lt14 --bpc-share-name /home --bpc-bkup-num 48 --bpc-bkup-comp 3 --bpc-bkup-merge
48/3/4 --bpc-log-level 1 --bpc-attrib-new -e /usr/bin/ssh\ -l\ backuppc
--rsync-path=/usr/bin/sudo\ /usr/bin/rsync --recursive --super
--protect-args --numeric-ids --perms --owner --group -D --times --links
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L
--stats --files-from=/var/lib/BackupPC//pc/lt14/.rsyncFilesFrom69427 / lt14:/data/jbkdat/tmp
This is the rsync child about to exec /usr/bin/rsync_bpc
[ skipped 5526 lines ]

Number of files: 5,526 (reg: 4,515, dir: 996, link: 13, special: 2)
Number of created files: 5,525 (reg: 4,515, dir: 995, link: 13, special: 2)
Number of deleted files: 0
Number of regular files transferred: 4,515
Total file size: 1,532,213,410 bytes

.............................................................................................

To me it seems to work as intended. I would never do a full restore to the source directory, because there is too great a chance of applications getting out of sync. Where I have done full restores is when I move users to another machine, and that has worked well as long as UIDs are correctly created. You can't restore a top-level share in one step, but as demonstrated you can restore whole subdirectories below it, at least in this small test.

--
Jim KR

From: Emmett C. <lst...@we...> - 2026-04-06 14:37:43
On 4/6/26 7:26 AM, G.W. Haywood wrote:
> Hi there,
>
> On Mon, 6 Apr 2026, Emmett Culley wrote:
>
>> ...
>> If I attempt to restore a single file, the file gets restored and the log tells me one file was restored.
>
> :)
>
>> I am able to restore to a tar file and then restore the files in a directory, so I know the files are in the archive.
>
> :)
>
>> It doesn't matter what host I try to restore, I can only restore one file into an existing directory. I use both rsyncd and rsync methods and neither one can restore multiple files.
>>
>> ... google ... nothing ...
>>
>> Any suggestion would be appreciated.
>
> The place to start is probably the BackupPC documentation:
>
>   https://backuppc.github.io/backuppc/BackupPC.html
>
> especially this part:
>
> 8<----------------------------------------------------------------------
>
>  * Flexible restore options. Single files can be downloaded from any backup directly from the CGI interface. Zip or Tar archives for selected files or directories from any backup can also be downloaded from the CGI interface. Finally, direct restore to the client machine (using smb or tar) for selected files or directories is also supported from the CGI interface.
>
> 8<----------------------------------------------------------------------
>
> It looks to me like things are behaving as documented. It says that you can restore directories using SMB or tar, but it doesn't say that you can do it using rsync/rsyncd. My personal feeling is that in any case, rather than trying to restore directly from the Web interface, to avoid nasty surprises I'd *always* grab an archive and save it in some temporary directory. Then I could extract whatever files I needed, put them (carefully) wherever they needed to be, and then check that the ownerships, permissions etc. are as needed.

This used to work flawlessly. Select a directory to restore, click on the restore button, and the files are restored on the host as expected. What changed?

I will read the docs, but I doubt they will explain why restore from the web UI no longer works as it once did. Why would they support restore via SMB but not rsync? It makes no sense to me.

Emmett

From: G.W. H. <ba...@ju...> - 2026-04-06 14:26:54
Hi there,

On Mon, 6 Apr 2026, Emmett Culley wrote:

> ...
> If I attempt to restore a single file, the file gets restored and the log tells me one file was restored.

:)

> I am able to restore to a tar file and then restore the files in a directory, so I know the files are in the archive.

:)

> It doesn't matter what host I try to restore, I can only restore one file into an existing directory. I use both rsyncd and rsync methods and neither one can restore multiple files.
>
> ... google ... nothing ...
>
> Any suggestion would be appreciated.

The place to start is probably the BackupPC documentation:

  https://backuppc.github.io/backuppc/BackupPC.html

especially this part:

8<----------------------------------------------------------------------

 * Flexible restore options. Single files can be downloaded from any backup directly from the CGI interface. Zip or Tar archives for selected files or directories from any backup can also be downloaded from the CGI interface. Finally, direct restore to the client machine (using smb or tar) for selected files or directories is also supported from the CGI interface.

8<----------------------------------------------------------------------

It looks to me like things are behaving as documented. It says that you can restore directories using SMB or tar, but it doesn't say that you can do it using rsync/rsyncd. My personal feeling is that in any case, rather than trying to restore directly from the Web interface, to avoid nasty surprises I'd *always* grab an archive and save it in some temporary directory. Then I could extract whatever files I needed, put them (carefully) wherever they needed to be, and then check that the ownerships, permissions etc. are as needed.

--
73, Ged.

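Ged's "grab an archive and extract it somewhere safe first" workflow can be sketched with plain tar. The paths below are invented for the example; in practice the tar file would come from the CGI interface's download option rather than being built locally.

```shell
set -e
# stand-in for an archive downloaded from the BackupPC CGI interface
mkdir -p /tmp/bpc-demo/src /tmp/bpc-demo/scratch
echo "config v1" > /tmp/bpc-demo/src/app.conf
tar -cf /tmp/bpc-demo/restore.tar -C /tmp/bpc-demo/src .

# extract into a scratch directory first, preserving permissions (-p);
# inspect ownerships and modes before moving anything into place
tar -xpf /tmp/bpc-demo/restore.tar -C /tmp/bpc-demo/scratch
ls -l /tmp/bpc-demo/scratch
cat /tmp/bpc-demo/scratch/app.conf
```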
From: Emmett C. <lst...@we...> - 2026-04-05 16:20:30
Recently I attempted to restore a directory and all its files to a host. All that was restored was the top directory. None of the files or sub-directories were restored.

The log indicated that zero files were restored.

If I attempt to restore a single file, the file gets restored and the log tells me one file was restored.

I am able to restore to a tar file and then restore the files in a directory, so I know the files are in the archive.

It doesn't matter what host I try to restore, I can only restore one file into an existing directory. I use both rsyncd and rsync methods and neither one can restore multiple files.

I searched via Google and can find nothing that clues me in to where to look.

Any suggestion would be appreciated.

Emmett

From: G.W. H. <ba...@ju...> - 2026-03-23 14:43:53
Hi there,

On Sun, 22 Mar 2026, George King wrote:

> I'm not sure if it's considered a good approach compared with the other approaches mentioned, but I find DeltaCopy Server very easy and effective to use on my Windows box. Rsyncd on BackupPC.

Good point. Some observations:

1. For a number of reasons, as far as BackupPC is concerned, running rsync on Windows clients is far superior to using what effectively amount to kludges like SMB. But it *does* mean that you have to load software onto the Windows clients, which you don't have to do if you use smbclient and can accept some (considerable) tradeoffs, for example the very dodgy file excludes in smbclient:

  https://github.com/backuppc/backuppc/issues/252

----------------------------------------------------------------------

2. As I've never even downloaded DeltaCopy I don't feel qualified to comment on its usability or effectiveness. Basically I gather that it is versions (I don't know which versions, and that might be important) of Cygwin and rsync packaged for easy installation on Windows. These tools have commonly been installed by people here on the mailing list for backing up Windows boxes, and they seem to do the job. ISTR using them decades ago, most likely Cygwin version 1.something, but I can't remember where nor why. I can't remember any big problems with them, so I guess they worked well enough at the time.

----------------------------------------------------------------------

3. I quickly looked through the DeltaCopy documentation just now:

  http://www.aboutmyip.com/files/DeltaCopyManual.pdf

On page 7 of the PDF manual there's a suggestion that you may need to open port 873 on your firewall if you're backing up over the Internet. This recommendation alone makes me question the use of anything from the DeltaCopy stable. Nobody should ever do that, because (a) it opens up rsync to the world, which is a security nightmare you really don't want to experience (see for example CVE-2024-12084), and (b) the data transfers over the Internet would be plain text, which is of course another security nightmare, although possibly not as serious as some criminal getting remote code execution on your rsync server.

If you need backups to traverse the Internet, some form of encryption is in my view essential. My preference is for a VPN such as OpenVPN (which I use routinely for remote backups and which provides much more than just a backup path), but you can for example use rsync over ssh by setting it up in the BackupPC configuration. The encryption overhead may be significant. Using a VPN gives you the flexibility to offload that work to machines other than the backup server and its clients if it turns out to be necessary to make the backups proceed more quickly. It could be the difference between having a backup and not having one.

----------------------------------------------------------------------

4. The documentation gives me little idea which versions of Windows are supported by DeltaCopy. The page at

  http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp

gives under "System Requirements":

 * XP, 2000, 2003, 2008, Vista and Windows 7. We have not tested DeltaCopy on Win9x.
 * 10 MB hard disk
 * 64 MB ram
 * 1 GHz processor or better

which looks like it could use an update. It doesn't say, for example, that Cygwin dropped support for XP in 2015:

  https://sourceware.org/pipermail/cygwin-announce/2015-August/006392.html

that Cygwin dropped 32-bit support altogether in 2022:

  https://cygwin.com/pipermail/cygwin/2022-November/252542.html

nor that Cygwin support of Windows 7, Windows 8, Server 2008 R2 and Server 2012 was dropped in version 3.5 early in 2024:

  https://cygwin.com/pipermail/cygwin-announce/2024-February/011524.html

----------------------------------------------------------------------

5. One of the issues people come up against with Cygwin/rsync is that backing up open files can be problematic, because Windows restricts access to them. If it's essential to back up open files then people usually get around this using "shadow copies" of the data, see e.g.

  http://www.michaelstowe.com/backuppc/

which links to

  http://www.goodjobsucking.com/?p=62

giving a very clear account of one implementation. This seems to be a problem that has been fairly well put to bed.

----------------------------------------------------------------------

Other observations invited and very welcome.

HTH

--
73, Ged.

From: George K. <ge...@ki...> - 2026-03-21 19:38:09
I'm not sure if it's considered a good approach compared with the other approaches mentioned, but I find DeltaCopy Server very easy and effective to use on my Windows box. Rsyncd on BackupPC.

George

On 21/03/2026 15:27, Dan Johansson via BackupPC-users wrote:
> Hello Falko and Ged,
>
> Thanks for your feedback, that was the info I was looking for. I can now try to configure my backuppc to back up my sole wintendo laptop as well. :) Let's see how it goes...
>
> Regards,
>
> Dan
>
> On 18-03-2026 18:32, Falko Trojahn wrote:
>> Hello Dan,
>>
>> Dan Johansson via BackupPC-users wrote on 18.03.26 at 16:02:
>>
>>> Some time ago, someone posted a script (or part thereof) for backing up a Windows 11 host with BackupPC using native Windows tools.
>>>
>>> I can not find the post/script among my emails :(
>>>
>>> Could someone please tell me where I can find those scripts?
>>
>> Maybe you mean this?
>>
>> > That's what I did more than five years ago, too - but we didn't use samba client for Windows backups with BackupPC, but this:
>> >
>> > https://www.michaelstowe.com/backuppc/
>> >
>> > and I remember it working very well. Especially that it did use shadow copies. I think we used Win10 in those days.
>> >
>> > But later we switched to the Urbackup client.
>> >
>> > If you don't have recent Windows versions at hand ... usually it is possible to get some Windows developer edition VMs. Ah, we did use this: https://github.com/xdissent/ievms/
>> >
>> > Maybe some newer fixes are in the forks or pull requests, like: https://github.com/xdissent/ievms/pull/343
>> >
>> > Just my 2¢
>> > Falko
>>
>> Link to the archive is here:
>> https://sourceforge.net/p/backuppc/mailman/backuppc-users/?viewmonth=202509&viewday=28
>>
>> Another thread about Win 11 backup was here:
>> https://sourceforge.net/p/backuppc/mailman/backuppc-users/thread/DM6PR20MB344338A189BF8697A04E4CC8DD242%40DM6PR20MB3443.namprd20.prod.outlook.com/#msg58747558
>>
>> Best regards
>> Falko

From: Dan J. <bac...@dm...> - 2026-03-21 15:27:32
|
Hello Falko and Ged,

Thanks for your feedback, that was the info I was looking for. I can now try to configure my backuppc to backup my sole wintendo laptop as well. Let's see how it goes...

Regards,

Dan

On 18-03-2026 18:32, Falko Trojahn wrote:
> Hello Dan,
>
> Dan Johansson via BackupPC-users schrieb am 18.03.26 um 16:02:
>
>> Some time ago, someone posted a script (or part thereof) for backing up a Windows 11 host with BackupPC using native Windows tools.
>>
>> I can not find the post/script among my emails.
>>
>> Could someone please tell me where I can find those scripts?
>>
>
> maybe you mean this?
>
> > That's what I did more than five years ago, too - but we didn't use
> > samba client for Windows backups with BackupPC, but this:
> >
> > https://www.michaelstowe.com/backuppc/
> >
> > and I remember it working very well. Especially that it did
> > use shadow copies. I think we used Win10 these days.
> >
> > But later we switched to Urbackup client.
> >
> > If you don't have recent Windows versions at hand ...
> > Usually it is possible to get some Windows developer edition VMs.
> > Ah, we did use this: https://github.com/xdissent/ievms/
> >
> > Maybe some newer fixes are in the forks or pull requests like:
> > https://github.com/xdissent/ievms/pull/343
> >
> > Just my 2¢
> > Falko
>
> Link to the archive is here:
> https://sourceforge.net/p/backuppc/mailman/backuppc-users/?viewmonth=202509&viewday=28
>
> Another thread about Win 11 Backup was here:
> https://sourceforge.net/p/backuppc/mailman/backuppc-users/thread/DM6PR20MB344338A189BF8697A04E4CC8DD242%40DM6PR20MB3443.namprd20.prod.outlook.com/#msg58747558
>
> Best regards
> Falko

--
Dan Johansson,
***************************************************
This message is printed on 100% recycled electrons!
*************************************************** |
|
From: G.W. H. <ba...@ju...> - 2026-03-19 13:33:29
|
Hi there,

On Thu, 19 Mar 2026, Dan Johansson wrote:
> Some time ago, someone posted a script (or part thereof) for backing
> up a Windows 11 host with BackupPC using native Windows tools.
>
> I can not find the post/script among my emails.
>
> Could someone please tell me where I can find those scripts?

I'm not sure that you've given us enough information to identify any particular post, but in addition to the threads to which others have pointed (from March 2024 and September 2025), in July 2025 there was:

https://sourceforge.net/p/backuppc/mailman/message/59205241/

In addition there are a (very) few articles in the BackupPC Wiki at

https://github.com/backuppc/backuppc/wiki

for example

https://github.com/backuppc/backuppc/wiki/How-to-backup-a-Windows-machine-using-rsync-over-ssh-using-ssh-keys-for-authentication

If you find ways of backing up Win 11 which are easier and/or better than anything in the documentation so far mentioned in this thread, I'd be grateful if you could either add something to the Wiki or drop us a line on the list to let us know what information could usefully be added. I'd like to think that the Wiki will eventually become the go-to resource for things like this.

--
73,
Ged. |
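The Wiki article on rsync-over-ssh mentioned above boils down to key-based, non-interactive login from the BackupPC server to the client. A minimal sketch (the key path, user name and host name are placeholders, not taken from the thread):

```shell
# Run as the backuppc user on the BackupPC server.
# Generate a passphrase-less key pair for unattended backups:
ssh-keygen -t ed25519 -N '' -f "$HOME/.ssh/id_ed25519_backuppc"

# Install the public key on the client to be backed up
# ('backup@winclient' is a placeholder account/host):
ssh-copy-id -i "$HOME/.ssh/id_ed25519_backuppc.pub" backup@winclient

# Verify that a non-interactive login now works (BatchMode forbids
# password prompts, so this fails loudly if the key is not accepted):
ssh -i "$HOME/.ssh/id_ed25519_backuppc" -o BatchMode=yes backup@winclient true
```

The last step matters because BackupPC runs unattended; any password or host-key prompt will hang or fail the backup.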
|
From: Paul L. <pau...@gm...> - 2026-03-18 19:52:29
|
Exactly what I did. Worked without any issues.

Paul

On 18/03/2026 17:46, Falko Trojahn via BackupPC-users wrote:
> Hello Matthias,
>
> Matthias--- via BackupPC-users schrieb am 09.03.26 um 18:48:
>> chown -R backuppc:backuppc /var/log/backuppc was the missing link.
>>
>> Sorry for asking a few minutes too early.
> I'm a bit late, but ... next time: for experienced users it should
> also be possible to just change the uid/gid of the backuppc user in
> the new system's /etc/passwd and /etc/group files.
> Then the chown is only needed for the config files in e.g.
> /etc/backuppc. AFAIR I did it like this some time ago.
>
> For large volumes of the backuppc pool, the chown can take some
> time, and it is not always a good idea to touch all the files in
> the pool.
>
> F.
>
> _______________________________________________
> BackupPC-users mailing list
> Bac...@li...
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/ |
|
From: Falko T. <ne...@tr...> - 2026-03-18 17:52:11
|
Hello Dan,

Dan Johansson via BackupPC-users schrieb am 18.03.26 um 16:02:
> Some time ago, someone posted a script (or part thereof) for backing up
> a Windows 11 host with BackupPC using native Windows tools.
>
> I can not find the post/script among my emails.
>
> Could someone please tell me where I can find those scripts?
>

maybe you mean this?

> That's what I did more than five years ago, too - but we didn't use
> samba client for Windows backups with BackupPC, but this:
>
> https://www.michaelstowe.com/backuppc/
>
> and I remember it working very well. Especially that it did
> use shadow copies. I think we used Win10 these days.
>
> But later we switched to Urbackup client.
>
> If you don't have recent Windows versions at hand ...
> Usually it is possible to get some Windows developer edition VMs.
> Ah, we did use this: https://github.com/xdissent/ievms/
>
> Maybe some newer fixes are in the forks or pull requests like:
> https://github.com/xdissent/ievms/pull/343
>
> Just my 2¢
> Falko

Link to the archive is here:
https://sourceforge.net/p/backuppc/mailman/backuppc-users/?viewmonth=202509&viewday=28

Another thread about Win 11 Backup was here:
https://sourceforge.net/p/backuppc/mailman/backuppc-users/thread/DM6PR20MB344338A189BF8697A04E4CC8DD242%40DM6PR20MB3443.namprd20.prod.outlook.com/#msg58747558

Best regards
Falko |
|
From: Falko T. <ne...@tr...> - 2026-03-18 17:47:09
|
G.W. Haywood schrieb am 18.03.26 um 14:41:
> The fuss about malicious changes to code on Github has probably not
> escaped your attention.
>
> ...
>
> In addition I have also scanned these repositories for indicators of
> compromise that have been published in some of the incident reports,
> plus a couple more of my own devising.
>
> You'll be pleased to know that no indicator of compromise was found.

Thank you very much for your attention to these threats, and for caring about the security of this application. I really appreciate it.

Have a good time

F. |
|
From: Falko T. <ne...@tr...> - 2026-03-18 17:46:21
|
Hello Matthias,

Matthias--- via BackupPC-users schrieb am 09.03.26 um 18:48:
> chown -R backuppc:backuppc /var/log/backuppc was the missing link.
>
> Sorry for asking a few minutes too early.

I'm a bit late, but ... next time: for experienced users it should also be possible to just change the uid/gid of the backuppc user in the new system's /etc/passwd and /etc/group files. Then the chown is only needed for the config files in e.g. /etc/backuppc. AFAIR I did it like this some time ago.

For large volumes of the backuppc pool, the chown can take some time, and it is not always a good idea to touch all the files in the pool.

F. |
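Falko's suggestion - pointing the account back at its original numeric ids instead of chowning the whole pool - could look roughly like this. This is a sketch under the assumption that the restored pool tree still carries the old numeric uid/gid; all paths are the usual Debian ones and the commands need root with BackupPC stopped:

```shell
# Read the numeric owner of the restored pool tree; stat -c %u/%g prints
# the raw ids even if no current account maps to them.
OLD_UID=$(stat -c %u /var/lib/backuppc)
OLD_GID=$(stat -c %g /var/lib/backuppc)

# Re-point the backuppc account at those ids instead of chowning
# millions of pool files:
groupmod -g "$OLD_GID" backuppc
usermod  -u "$OLD_UID" -g "$OLD_GID" backuppc

# Only the config files installed by the new system still need fixing:
chown -R backuppc:backuppc /etc/backuppc
```

The trade-off is exactly the one Falko describes: `usermod`/`groupmod` touch two small files, whereas `chown -R` over a deduplicated pool rewrites metadata for every inode.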
|
From: Dan J. <bac...@dm...> - 2026-03-18 15:02:53
|
Hello,

Some time ago, someone posted a script (or part thereof) for backing up a Windows 11 host with BackupPC using native Windows tools.

I can not find the post/script among my emails.

Could someone please tell me where I can find those scripts?

Regards,

--
Dan Johansson,
***************************************************
This message is printed on 100% recycled electrons!
*************************************************** |
|
From: G.W. H. <ba...@ju...> - 2026-03-18 13:41:37
|
Hi all,

The fuss about malicious changes to code on Github has probably not escaped your attention.

In case you haven't noticed, I'm extremely cautious about commits to the BackupPC repositories, so I'd like to think that there's no risk at all that something like these attacks would succeed. But in any case I've compared the Github repositories for backuppc, backuppc-xs and rsync-bpc with local safe copies to check that no unauthorized changes have been made to the three BackupPC repositories.

In addition I have also scanned these repositories for indicators of compromise that have been published in some of the incident reports, plus a couple more of my own devising.

You'll be pleased to know that no indicator of compromise was found.

I plan that the next commit will be to rsync-bpc. It should be fairly soon, and it will be rather a big one as it will be changing from a base rsync version of 3.1.3 to a base of 3.4.1. I thought you'd like to be forewarned of the upcoming unusually large commit, so that in troubled times it doesn't raise undue suspicions.

I'd still be pleased to hear from anyone interested in testing the new rsync-bpc version. There's been quite a bit of testing since January, so the caveat that I made back then - that the new version might crash and burn - can now (I think:) be disregarded.

--
73,
Ged. |
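For anyone who wants to make a similar comparison of a checked-out tree against a known-good local copy, here is one simple way to do it (a sketch, not necessarily the method used above; both paths are placeholders):

```shell
# Record checksums of every file in a known-good copy of the source...
cd /srv/safe-copies/rsync-bpc            # placeholder path
find . -type f ! -path './.git/*' -print0 |
    sort -z | xargs -0 sha256sum > /tmp/rsync-bpc.sums

# ...then verify a freshly cloned tree against them.  Any modified or
# missing file makes sha256sum exit non-zero.
cd /tmp/rsync-bpc-clone                  # placeholder path
sha256sum -c --quiet /tmp/rsync-bpc.sums
```

This catches modified and deleted files; files *added* to the clone need a separate `diff -r` or file-list comparison, since `sha256sum -c` only checks the names it was given.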
|
From: <Mat...@gm...> - 2026-03-09 17:48:31
|
Hello - I'm happy again!

chown -R backuppc:backuppc /var/log/backuppc was the missing link.

Sorry for asking a few minutes too early.

Br
Matthias

Am Montag, dem 09.03.2026 um 18:17 +0100 schrieb Matthias--- via BackupPC-users:
> Hello,
>
> I destroyed my system (Debian Bullseye) and restored it from backup. During this the UID of
> backuppc has been changed.
> I "chown -R backuppc:backuppc" all files in /var/lib/backuppc, /usr/share/backuppc/lib,
> /usr/share/backuppc/bin and /etc/backuppc to backuppc:backuppc.
>
> But /etc/backuppc/pc is still owned by root:root.
> ls -l /usr/share/backuppc/bin/BackupPC
> -rwxr-xr-x 1 backuppc backuppc 89034 12. Apr 2021 BackupPC
>
> service backuppc start
> service backuppc status
> ● backuppc.service - BackupPC server
>      Loaded: loaded (/lib/systemd/system/backuppc.service; enabled; vendor preset: enabled)
>      Active: activating (auto-restart) (Result: exit-code) since Mon 2026-03-09 18:13:31 CET; 283ms ago
>     Process: 42138 ExecStart=/usr/share/backuppc/bin/BackupPC (code=exited, status=13)
>    Main PID: 42138 (code=exited, status=13)
>         CPU: 72ms
>
> What is going wrong? How do I find the root cause?
>
> Thanks in advance
> Matthias
>
> _______________________________________________
> BackupPC-users mailing list
> Bac...@li...
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/ |
|
From: <Mat...@gm...> - 2026-03-09 17:30:46
|
Hello,
I destroyed my system (Debian Bullseye) and restored it from backup. During this the UID of backuppc
has been changed.
I "chown -R backuppc:backuppc" all files in /var/lib/backuppc, /usr/share/backuppc/lib,
/usr/share/backuppc/bin and /etc/backuppc to backuppc:backuppc.
But /etc/backuppc/pc is still owned by root:root.
ls -l /usr/share/backuppc/bin/BackupPC
-rwxr-xr-x 1 backuppc backuppc 89034 12. Apr 2021 BackupPC
service backuppc start
service backuppc status
● backuppc.service - BackupPC server
Loaded: loaded (/lib/systemd/system/backuppc.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Mon 2026-03-09 18:13:31 CET; 283ms
ago
Process: 42138 ExecStart=/usr/share/backuppc/bin/BackupPC (code=exited, status=13)
Main PID: 42138 (code=exited, status=13)
CPU: 72ms
What is going wrong? How do I find the root cause?
Thanks in advance
Matthias
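As the follow-up in this thread shows, the failure was an ownership problem on /var/log/backuppc. For a service stuck in "activating (auto-restart)" like this, a generic first round of triage (standard systemd and coreutils commands, not specific to this report) is:

```shell
# See what the daemon itself printed before exiting:
journalctl -u backuppc.service -n 50 --no-pager

# BackupPC must be able to write its log directory; check who owns it
# and whether the backuppc user can actually write there:
stat -c '%U:%G %a %n' /var/log/backuppc
sudo -u backuppc test -w /var/log/backuppc && echo writable || echo NOT writable
```

If the second check prints "NOT writable", the `chown -R backuppc:backuppc /var/log/backuppc` from this thread is the fix.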
|
|
From: G.W. H. <ba...@ju...> - 2026-02-21 17:22:48
|
Hi there,
On Sat, 21 Feb 2026, Elio Coutinho wrote:
> I've reported on github an issue on BackupPC_backupDelete, in which it
> is unable to delete a folder inside a share. GWHAYWOOD said there's a
> new version of DirOps.pm that would help. I'd be happy to test it, if
> you would mind to share it.
It seems that there might be more than one issue in BackupPC issue
#557 on Github.
Firstly the BackupPC Web interface will show files and directories
which are not actually present in the pool directories (incremental
backups) because they have not changed from a previous backup. This
is for convenience and efficiency but it may lead to couterintuitive
behaviour. BackupPC_backupDelete won't delete something which isn't
actually there.
Secondly the code both in BackupPC_backupDelete and the modules which
support it could probably benefit from more testing in non-Unix/Linux
systems. Use of the forward slash character "/" in file and directory
names might well cause problems. At present much of the code assumes
that this is always the path separator character, and that otherwise
it will not appear. I am not sure if that is the case in the report
in issue #557. More information would be helpful.
The version of DirOps.pm which is attached is much more verbose if you
set $conf{XferLogLevel} to 4 or 5. Please treat this as experimental.
At LogLevel 4 it will log a message if you try to delete something
from a backup which is not actually present in that particular backup.
At LogLevel 5 it will in addition log what it is asked to delete and
confirm what it has deleted. It will show how it walks the directory
tree. It will log some failures which are *expected*, when it tries
to 'unlink' a directory without first checking that it is a directory.
This is an efficiency measure to avoid unnecessary stat() calls and is
not an indication that something is wrong.
As well as being used by BackupPC_backupDelete, the code in DirOps.pm
is used in routine operation by BackupPC whenever an old backup needs
to be removed. Consider the logging carefully. Backups may take much
longer if verbose logging is enabled, especially if a full backup is
deleted - because verbose logging may write more than one line to the
log for each deleted file. Not only may this mean that some backups,
that would have been expected to complete if verbose logging was not
enabled, do not in fact complete e.g. overnight, but also it may mean
that log files are very large so storage space comes under pressure.
I am running this version of DirOps.pm on a server here. It seems to
work OK, but it's early days for something with so many changes.
--
73,
Ged. |
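To try the more verbose logging described above, the knob is the standard $Conf{XferLogLevel} setting, e.g.:

```perl
# In /etc/backuppc/config.pl, or a per-host pc/<host>.pl override.
# Level 4: log attempts to delete entries not present in that backup.
# Level 5: additionally log every delete request/confirmation and the
#          directory-tree walk.  Expect much larger log files, per the
#          warning above.
$Conf{XferLogLevel} = 4;
```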
|
From: Elio C. <eco...@gm...> - 2026-02-20 15:25:27
|
Hi,

I've reported an issue on Github about BackupPC_backupDelete, in which it is unable to delete a folder inside a share. GWHAYWOOD said there's a new version of DirOps.pm that would help. I'd be happy to test it, if you wouldn't mind sharing it.

Thanks,
Elio |
|
From: Emmett C. <lst...@we...> - 2026-01-20 15:31:10
|
On 1/19/26 8:35 AM, G.W. Haywood wrote: > At this stage, rather than putting something on Github which may quite > possibly crash and burn, I'd prefer to email the source to volunteers. > > Any takers? > BackupPC is an important part of my system infrastructure, and so I'd like to help out. I am new to building on Linux, but have done a lot of C and C++ programming in the past. I don't have a lot of time but am willing to apply some of the time I do have on this. Emmett |
|
From: jbk <jb...@kj...> - 2026-01-20 14:05:53
|
On 1/19/26 11:35 AM, G.W. Haywood wrote:
> Hi there,
>
> As mentioned in another thread, I think the release of the next
> version of rsync-bpc - version 3.4.1.0rc1 - is approaching.
>
> As its version number implies, this version is based on the latest
> upstream rsync, version 3.4.1. That version contains a number of
> improvements plus a few fixes for vulnerabilities. Vulnerabilities
> have been found in supporting packages such as zlib and popt. For
> whatever reasons, many packages (not just rsync and BackupPC) bundled
> the code for these libraries in their own source distributions. The
> problem with doing that, of course, is that the bundled versions tend
> to get out of date, which is what happened for rsync, BackupPC-XS and
> rsync-bpc, although the resulting issues were fortunately not serious.
> It could easily have been otherwise.
>
> As for BackupPC-XS, the new version of rsync-bpc does away with the
> bundled zlib and popt code, instead relying on the system to provide
> these libraries. If the distribution's security teams and your update
> practice are up to snuff, then these paths to vulnerability should be
> closed off for good.
>
> This has been quite a journey and I'm asking for volunteers, to begin
> with, to sanity-check what I've done.
>
> The changes to the rsync-bpc code have been very extensive (think in
> terms of a context diff with a line count well into five digits) so I
> really do want to get people to put this thing under the microscope.
> It will help enormously if you're a proficient 'C' coder but it isn't
> an absolute requirement; just building and testing by anyone will be
> more than welcome. Running here, it's just done its first incremental
> backup after four fulls for just one share on just one machine. I'll
> be adding to that population in the coming days. So far I only built
> on one architecture (armhf). Anything could happen on a 64-bit box;
> the data structures in rsync are truly scary.
>
> There are still a couple of outstanding issues which I want to look
> into, and I'll be glad of any comments from users:
>
> 1. While testing on a box with usually ~3G spare memory, I noticed a
> tendency for one of the rsync-bpc processes to use it all up and then
> crash. After I'd removed most of my debugging (gigabyte+ log files)
> this seemed to go away, but I'm reminded of Github issue #547 and I
> think it needs a bit more investigation. Also in the coming days I
> plan to do that. I know where to look and what to log, I just don't
> yet know if having a bunch of 8 MByte file buffers in RAM is an issue.
>
> 2. The latest version of rsync offers extra options for checksumming,
> and that seemed to throw off the new rsync-bpc's use of MD5 for file
> comparisons. My very kludgy fix for that has been to add the rsync
> '--checksum-choice=md5' option to $Conf{RsyncArgs}, but I'd much prefer
> to look at modifying the code to force the use of the right checksums
> without requiring any changes to the configuration. At this stage I
> don't know what this might mean for people using e.g. 'openrsync' on
> the Mac - I don't know if the Mac's rsync recognizes the option - and
> I don't know if it will be an issue for people using very old versions
> of 'genuine' rsync on other clients. If there are Mac users Out There
> reading, *please* help me with the testing if you can. If your rsync
> version _is_ very old, I promise not to tell.
>
> At this stage, rather than putting something on Github which may quite
> possibly crash and burn, I'd prefer to email the source to volunteers.
>
> Any takers?

I'd be happy to give it a shot. You have my email so send a "tar.gz" when ready.

--
Jim KR
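The interim workaround described in point 2 of the quoted message would look like this in the server configuration (a sketch; '--checksum-choice' is a standard option of rsync 3.2 and later, so very old client rsyncs may reject it, as the message warns):

```perl
# In /etc/backuppc/config.pl: append to the existing argument list so
# that both ends agree on MD5 for block/file checksums.
push @{$Conf{RsyncArgs}}, '--checksum-choice=md5';
```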
|