Archive of list posts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2002 | | | | | | | | | | | | 15 |
| 2003 | 7 | 6 | 3 | 9 | 5 | 11 | 40 | 15 | 3 | 2 | 1 | |
| 2004 | 2 | 8 | 8 | 25 | 10 | 11 | 5 | 9 | 2 | 7 | 7 | 6 |
| 2005 | 6 | 17 | 9 | 3 | 4 | 11 | 42 | 33 | 13 | 14 | 19 | 7 |
| 2006 | 23 | 19 | 6 | 8 | 1 | 12 | 50 | 16 | 4 | 18 | 15 | 10 |
| 2007 | 10 | 13 | 1 | | 1 | 1 | 5 | 7 | 25 | 58 | 15 | 4 |
| 2008 | 2 | 13 | 3 | 10 | 1 | 4 | 4 | 23 | 21 | 7 | 3 | 12 |
| 2009 | 5 | 2 | | 7 | 6 | 25 | | 2 | 2 | 3 | 2 | 2 |
| 2010 | 1 | | 3 | 2 | 2 | | 3 | 3 | 2 | | | 3 |
| 2011 | 10 | 2 | 71 | 4 | 8 | | | 4 | 3 | 1 | 5 | 1 |
| 2012 | | | 2 | 2 | 15 | 1 | | 20 | | | 2 | |
| 2013 | 3 | 5 | | 4 | 2 | 11 | 12 | | 19 | 25 | 7 | 9 |
| 2014 | 6 | 2 | 7 | 5 | 1 | | 1 | 1 | 1 | 3 | 2 | 4 |
| 2015 | 8 | 2 | 3 | 4 | 3 | | 1 | | | | | 1 |
| 2016 | 16 | | 1 | 1 | 12 | 3 | | 5 | | 2 | | |
| 2017 | 2 | 6 | 68 | 18 | 8 | 1 | | 10 | 2 | 1 | 13 | 25 |
| 2018 | 18 | 2 | | 1 | 1 | | | | | | 3 | |
| 2019 | | | | | | | 1 | | | | | |
| 2020 | | 2 | | | | 1 | | 1 | 1 | 7 | | |
From: Craig B. <cba...@us...> - 2017-04-29 21:08:00
|
In addition to getting the exclude settings right, in 4.x if you add an
exclude for an already backed-up file or directory, the file or directory
won't be backed up (correct), but it won't get deleted on the server, so it
will still be visible in the most recent filled/full backup (incorrect).
If that's an issue for you, you can add the --delete-excluded option to
$Conf{RsyncArgs} to fix this. I pushed that change to git a few days ago,
and it will be in 4.1.2.
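For anyone applying that change by hand before 4.1.2, a minimal sketch of what
it looks like in config.pl (the other entries in $Conf{RsyncArgs} vary by
install, so treat everything except the added option as illustrative):

$Conf{RsyncArgs} = [
    # ... keep the existing rsync arguments for your install here ...
    '--delete-excluded',   # also remove server-side copies of newly excluded files
];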
Craig
On Sat, Apr 29, 2017 at 7:47 AM, Michael Stowe <ms...@ch...>
wrote:
> On 2017-04-29 08:05, Richard Shaw wrote:
> > Ok, getting a little closer... I changed the configuration to:
> >
> > $Conf{BackupFilesExclude} = {
> > '*' => [
> > '/home/*/.cache',
> > ''
> > ]
> > };
> >
> > And ran the dump from the command line and got:
> >
> > --exclude=/home/\*/.cache --exclude= <host>:/home/<user>/
> >
> > It seemed to generate two excludes but the second one seems to be
> > malformed with the space after the = and I'm not sure it's even
> > needed...
> >
> > Thanks,
> > Richard
>
> ... which begs the question, why did you include it? Why not just use
> the first exclude, which seems perfectly valid, i.e.
>
> $Conf{BackupFilesExclude} = {
> '*' => [
> '/home/*/.cache'
> ]
> };
>
> I'm genuinely baffled.
|
|
From: Michael S. <ms...@ch...> - 2017-04-29 15:19:45
|
On 2017-04-29 08:05, Richard Shaw wrote:
> Ok, getting a little closer... I changed the configuration to:
>
> $Conf{BackupFilesExclude} = {
> '*' => [
> '/home/*/.cache',
> ''
> ]
> };
>
> And ran the dump from the command line and got:
>
> --exclude=/home/\*/.cache --exclude= <host>:/home/<user>/
>
> It seemed to generate two excludes but the second one seems to be
> malformed with the space after the = and I'm not sure it's even
> needed...
>
> Thanks,
> Richard
... which begs the question, why did you include it? Why not just use
the first exclude, which seems perfectly valid, i.e.
$Conf{BackupFilesExclude} = {
'*' => [
'/home/*/.cache'
]
};
I'm genuinely baffled.
|
|
From: Richard S. <hob...@gm...> - 2017-04-29 13:05:25
|
Ok, getting a little closer... I changed the configuration to:
$Conf{BackupFilesExclude} = {
'*' => [
'/home/*/.cache',
''
]
};
And ran the dump from the command line and got:
--exclude=/home/\*/.cache --exclude= <host>:/home/<user>/
It seemed to generate two excludes but the second one seems to be malformed
with the space after the = and I'm not sure it's even needed...
Thanks,
Richard
|
|
From: Richard S. <hob...@gm...> - 2017-04-29 12:44:48
|
I'm trying to exclude .cache from all /home backups but can't quite seem to get
it right... Right now, out of desperation, I'm trying:

Share name => "*"
Exclude => "/.cache"

I thought that the "*" would match "/home/<user>" and then it would put it
together to be:

/home/<user>/.cache

But I just did an incremental backup and no luck...

Thanks,
Richard
|
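For reference, the configuration the replies above converge on pairs the '*'
share key with the full client-side path, which then shows up as an --exclude
on the dump command line:

$Conf{BackupFilesExclude} = {
    '*' => [
        '/home/*/.cache'
    ]
};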
|
From: Craig B. <cba...@us...> - 2017-04-08 18:19:26
|
Alexander,

Hmmm, good points. However, the move to github was only about a year ago and I
doubt many packages have been released based on github releases since then. So
while there might be a little bit of pain from the changes, no one has
complained (yet).

Craig

On Mon, Apr 3, 2017 at 7:44 AM, Alexander Moisseev <mo...@me...> wrote:
> On 03.04.2017 0:17, Craig Barratt wrote:
>> Yesterday I renamed all the tags in backuppc, backuppc-xs and rsync-bpc
>> from "vX_Y_Z" to "X.Y.Z" (eg, v4_1_1 -> 4.1.1). The reason from way back
>> is that CVS doesn't allow periods in tag names.
>
> Craig, when I asked about changing tags format to "X.Y.Z", I didn't mean
> changing existing tags.
> I believe changing something (like commits or tags) in a public repository
> is a terrible idea.
> At least this change has made sources unfetchable for current ports.
>
> Would you please rename them back (just commits previously tagged as
> "vX_Y_Z")?
> Or maybe if you like you can use both tags "vX_Y_Z" and "X.Y.Z" for each
> of those commits.
>
> Of course, "X.Y.Z" format for new releases is fine.
|
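A minimal sketch of the dual-tag idea Alexander suggests above, assuming the
new-style tags already exist (the commands are illustrative, not something
actually run in this thread):

git tag v4_1_1 4.1.1      # recreate the old-style tag on the same commit
git push origin v4_1_1    # publish it alongside the new 4.1.1 tag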
|
From: Craig B. <cba...@us...> - 2017-04-08 17:55:09
|
Bill,
/backuppc/cpool/0/0/0 is just one of the 4096 3.x pool directories (each of
the last three digits is a single hex digit, 0-9 or a-f). So to see the
total storage remaining in the 3.x pool you should do this:
du -csh /backuppc/cpool/?/?/?
I'm not sure why some of the 3.x files didn't get migrated. You could pick
one that has more than one link (eg: 00002564f9012849e45bfa1f4fd47578
above) and find its inode:
ls -li /backuppc/cpool/0/0/0/00002564f9012849e45bfa1f4fd47578
then look for other files that have that same inode (replace NNN with the
inode printed by ls -i):
find /backuppc -inum NNN -print
But given your point about 0/0/0 being quite small, it's unlikely this can
explain 3TB of extra usage, and I suspect the du command above won't show
more than a few MB.
So another path is to use du to find which directories are so large. For
example:
du -hs /backuppc
If that number is reasonable, then it must be something outside /backuppc
that is using so much space.
Next:
du -hs /backuppc/cpool /backuppc/pool /backuppc/pc
Are any of those close to 3TB? If so, do the du inside those directories
to narrow things down.
Is it possible your excludes aren't working after the 4.x transition? For
example, on some linux systems /var/log/lastlog is a sparse file, and
backing it up will create a huge (regular) file.
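If a sparse file like that turns out to be the culprit, one simple workaround
(illustrative only, not something decided in this thread) is to exclude it the
same way as any other path:

$Conf{BackupFilesExclude} = {
    '*' => [
        '/var/log/lastlog'
    ]
};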
You could also use find to look for single huge files, eg:
find /backuppc -size +1G -print
will list all files over 1G.
Craig
On Fri, Apr 7, 2017 at 1:08 AM, Bill Broadley <bi...@br...> wrote:
>
> On 04/05/2017 03:25 PM, higuita wrote:
> > Hi
> >
> > On Tue, 4 Apr 2017 23:04:44 -0700, Bill Broadley <bi...@br...>
> > wrote:
> >> -rw-r----- 1 backuppc backuppc 145 Feb 11 2015
> 00012f3df3fef9176f4a08f470d1f5e6
> > ^ |
> > This field is the number of hardlinks.
> > So you have entries >1, then you still have backups pointing to the v3
> pool.
>
> Odd.
>
> I ran the V3 to V4 migration script several times and it wasn't finding
> anything
> and running quickly. I was worried that my filesystem was corrupt
> somehow, it
> had been up for 360 some days. I umounted and fsck'd, not a single
> complaint.
>
> root@node1:/backuppc/cpool/0/0/0# ls -al | awk ' { print $2 } ' | grep -v
> "1" |
> wc -l
> 39
>
> I didn't have many with more than one link. The entire dir is small:
> root@node1:/backuppc/cpool/0/0/0# du -hs .
> 380K .
>
> I see similar elsewhere:
> root@node1:/backuppc/cpool/8/8/8# ls -al |wc -l; ls -al | awk ' { print
> $2 } ' |
> grep -v "1" | wc -l
> 59
> 31
>
> (59 files, 31 with links).
>
> I'm still seeing crazy disk usage, over 3TB, only about 650GB (total from
> the
> host status "full size" column) visible to backuppc.
>
> Keep in mind this happened with no changes to the server. 30 hosts backed
> up
> for a week or so, then suddenly much more disk is used. No host has larger
> backups, just a factor of 6 larger pool one night.
>
> I was using the v3 to v4 migration script from git since it wasn't in the
> release package yet (that's been fixed).
>
> I upgraded to backuppc 4.1.1, and the current versions of backuppc-xs and
> rsync-bpc. I ran the V3 to V4 migration script (now included in the
> release)
> again and it's doing some serious chewing (unlike before). It used to
> just fly
> through them all with "refCnt directory; skipping this backup".
>
> So maybe this will fix it.
|
|
From: Bill B. <bi...@br...> - 2017-04-07 08:08:58
|
On 04/05/2017 03:25 PM, higuita wrote:
> Hi
>
> On Tue, 4 Apr 2017 23:04:44 -0700, Bill Broadley <bi...@br...>
> wrote:
>> -rw-r----- 1 backuppc backuppc 145 Feb 11 2015
00012f3df3fef9176f4a08f470d1f5e6
> ^ |
> This field is the number of hardlinks.
> So you have entries >1, then you still have backups pointing to the v3 pool.
Odd.
I ran the V3 to V4 migration script several times and it wasn't finding anything
and running quickly. I was worried that my filesystem was corrupt somehow, it
had been up for 360 some days. I umounted and fsck'd, not a single complaint.
root@node1:/backuppc/cpool/0/0/0# ls -al | awk ' { print $2 } ' | grep -v "1" |
wc -l
39
I didn't have many with more than one link. The entire dir is small:
root@node1:/backuppc/cpool/0/0/0# du -hs .
380K .
I see similar elsewhere:
root@node1:/backuppc/cpool/8/8/8# ls -al |wc -l; ls -al | awk ' { print $2 } ' |
grep -v "1" | wc -l
59
31
(59 files, 31 with links).
I'm still seeing crazy disk usage, over 3TB, only about 650GB (total from the
host status "full size" column) visible to backuppc.
Keep in mind this happened with no changes to the server. 30 hosts backed up
for a week or so, then suddenly much more disk is used. No host has larger
backups, just a factor of 6 larger pool one night.
I was using the v3 to v4 migration script from git since it wasn't in the
release package yet (that's been fixed).
I upgraded to backuppc 4.1.1, and the current versions of backuppc-xs and
rsync-bpc. I ran the V3 to V4 migration script (now included in the release)
again and it's doing some serious chewing (unlike before). It used to just fly
through them all with "refCnt directory; skipping this backup".
So maybe this will fix it.
|
|
From: higuita <hi...@GM...> - 2017-04-05 22:26:14
|
Hi
On Tue, 4 Apr 2017 23:04:44 -0700, Bill Broadley <bi...@br...>
wrote:
> root@node1:/backuppc# ls -al cpool/0/0/0 | head -10
> total 384
> drwxr-x--- 2 backuppc backuppc 114688 Mar 28 21:58 .
> drwxr-x--- 18 backuppc backuppc 4096 Aug 24 2011 ..
> -rw-r----- 3 backuppc backuppc 32 Apr 23 2014 00002564f9012849e45bfa1f4fd47578
> -rw-r----- 2 backuppc backuppc 159 Apr 23 2014 000026b5ae9afbffa56382c6019dbfe1
> -rw-r----- 2 backuppc backuppc 40 Oct 29 2014 0000c32069243ef9cb6fd5113bd0891c
> -rw-r----- 1 backuppc backuppc 35 Feb 15 20:07 0000c34a3e2faf9ccf2c31c6ded1c849
> -rw-r----- 2 backuppc backuppc 55 Dec 28 22:00 0000c50fe18070ca0ae0d11aaed2f261
> -rw-r----- 2 backuppc backuppc 133 Apr 23 2014 0000feef14f3c8f589894b983456389a
> -rw-r----- 1 backuppc backuppc 145 Feb 11 2015 00012f3df3fef9176f4a08f470d1f5e6
The second field in that listing (right after the permissions) is the number of hardlinks.
If you have entries >1, then you still have backups pointing to the v3 pool.
Try to find what backups are still in V3 format
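One way to do that, along the lines Craig suggests elsewhere in this thread
(the file name below is just the example from the listing above): pick a pool
file with more than one link, note its inode, and see which backup trees still
reference it:

ls -li /backuppc/cpool/0/0/0/00002564f9012849e45bfa1f4fd47578
find /backuppc/pc -inum NNN -print    # replace NNN with the inode printed by ls -li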
Best regards
--
Naturally the common people don't want war... but after all it is the
leaders of a country who determine the policy, and it is always a
simple matter to drag the people along, whether it is a democracy, or
a fascist dictatorship, or a parliament, or a communist dictatorship.
Voice or no voice, the people can always be brought to the bidding of
the leaders. That is easy. All you have to do is tell them they are
being attacked, and denounce the pacifists for lack of patriotism and
exposing the country to danger. It works the same in every country.
-- Hermann Goering, Nazi and war criminal, 1883-1946
|
|
From: Bill B. <bi...@br...> - 2017-04-05 06:04:53
|
On 04/04/2017 10:59 PM, Craig Barratt wrote:
> Bill,
>
> Is the V3 pool empty? You can check by looking in, eg, TOPDIR/cpool/0/0/0. If
> there are files there, check that all of them have only 1 link.

root@node1:/backuppc# du -hs cpool/0/0/0
380K cpool/0/0/0
root@node1:/backuppc# find cpool/0/0/0 | wc -l
63
root@node1:/backuppc# ls -al cpool/0/0/0 | head -10
total 384
drwxr-x--- 2 backuppc backuppc 114688 Mar 28 21:58 .
drwxr-x--- 18 backuppc backuppc 4096 Aug 24 2011 ..
-rw-r----- 3 backuppc backuppc 32 Apr 23 2014 00002564f9012849e45bfa1f4fd47578
-rw-r----- 2 backuppc backuppc 159 Apr 23 2014 000026b5ae9afbffa56382c6019dbfe1
-rw-r----- 2 backuppc backuppc 40 Oct 29 2014 0000c32069243ef9cb6fd5113bd0891c
-rw-r----- 1 backuppc backuppc 35 Feb 15 20:07 0000c34a3e2faf9ccf2c31c6ded1c849
-rw-r----- 2 backuppc backuppc 55 Dec 28 22:00 0000c50fe18070ca0ae0d11aaed2f261
-rw-r----- 2 backuppc backuppc 133 Apr 23 2014 0000feef14f3c8f589894b983456389a
-rw-r----- 1 backuppc backuppc 145 Feb 11 2015 00012f3df3fef9176f4a08f470d1f5e6

Not sure how to check the 1 link thing.
|
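(As higuita's reply above notes, the link count is the second column of ls -l;
a quick way to list pool files that still have extra links is something like
the following, which is illustrative only, not a command used in this thread:

find /backuppc/cpool -type f -links +1 -print | head
)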
|
From: Craig B. <cba...@us...> - 2017-04-05 06:00:15
|
Bill,
Is the V3 pool empty? You can check by looking in, eg,
TOPDIR/cpool/0/0/0. If there are files there, check that all of them have
only 1 link.
Craig
On Tue, Apr 4, 2017 at 10:19 PM, Bill Broadley <bi...@br...> wrote:
> On 04/04/2017 10:16 PM, Craig Barratt wrote:
> > Bill,
> >
> > What is $Conf{PoolV3Enabled} set to?
>
> root@node1:/etc/backuppc# cat config.pl | grep PoolV3
> $Conf{PoolV3Enabled} = '0';
>
> I did that after the V3 to V4 migrate completed. I ran the migrate again
> and it
> didn't do anything and finished quickly. I restarted the backuppc daemon
> as well.
|
|
From: Bill B. <bi...@br...> - 2017-04-05 05:19:52
|
On 04/04/2017 10:16 PM, Craig Barratt wrote:
> Bill,
>
> What is $Conf{PoolV3Enabled} set to?
root@node1:/etc/backuppc# cat config.pl | grep PoolV3
$Conf{PoolV3Enabled} = '0';
I did that after the V3 to V4 migrate completed. I ran the migrate again and it
didn't do anything and finished quickly. I restarted the backuppc daemon as well.
|
|
From: Craig B. <cba...@us...> - 2017-04-05 05:17:04
|
Bill,
What is $Conf{PoolV3Enabled} set to?
Craig
On Tue, Apr 4, 2017 at 8:48 PM, Bill Broadley <bi...@br...> wrote:
> Two or so weeks ago I upgraded a backuppc 3.3 system to 4.0. Things
> generally
> worked pretty well. Pool was 500GB or so, 31 hosts, all using rsync+ssh
> on the
> client side and rsync-bpc on the server side.
>
> A larger setup with 40 hosts, 2.7TB pool, and backuppc-4.1.1 has had no
> problems.
>
> I haven't tinkered with it for a week:
> http://broadley.org/backuppc-pool.png
>
> But suddenly the pool is 100% full:
> $ df -h /backuppc/
> Filesystem Size Used Avail Use% Mounted on
> /dev/md1 3.6T 3.4T 984M 100% /backuppc
>
> I poked around a bit without finding anything obvious.
>
> Any thoughts on this being worth tracking down? Or should I just upgrade to
> backuppc 4.1.1 (btw the main backuppc page still lists 4.0) and related
> rsync-bpc and friends?
>
> I'm running du -x on that partition, but I think it's got 55M files so it's
> going to take awhile. I'm hoping feeding that to xdu or similar will show
> something obvious.
>
> I did stop/start the backuppc daemon, so it's not a hung file handle from the
> main
> backuppc process.
|
|
From: Bill B. <bi...@br...> - 2017-04-05 03:48:51
|
Two or so weeks ago I upgraded a backuppc 3.3 system to 4.0. Things generally
worked pretty well. Pool was 500GB or so, 31 hosts, all using rsync+ssh on the
client side and rsync-bpc on the server side.

A larger setup with 40 hosts, 2.7TB pool, and backuppc-4.1.1 has had no
problems.

I haven't tinkered with it for a week:
http://broadley.org/backuppc-pool.png

But suddenly the pool is 100% full:
$ df -h /backuppc/
Filesystem Size Used Avail Use% Mounted on
/dev/md1 3.6T 3.4T 984M 100% /backuppc

I poked around a bit without finding anything obvious.

Any thoughts on this being worth tracking down? Or should I just upgrade to
backuppc 4.1.1 (btw the main backuppc page still lists 4.0) and related
rsync-bpc and friends?

I'm running du -x on that partition, but I think it's got 55M files so it's
going to take awhile. I'm hoping feeding that to xdu or similar will show
something obvious.

I did stop/start the backuppc daemon, so it's not a hung file handle from the
main backuppc process.
|
|
From: Alexander M. <mo...@me...> - 2017-04-03 14:44:37
|
On 03.04.2017 0:17, Craig Barratt wrote:
> Yesterday I renamed all the tags in backuppc, backuppc-xs and rsync-bpc from
> "vX_Y_Z" to "X.Y.Z" (eg, v4_1_1 -> 4.1.1). The reason from way back is that
> CVS doesn't allow periods in tag names.

Craig, when I asked about changing tags format to "X.Y.Z", I didn't mean
changing existing tags.

I believe changing something (like commits or tags) in a public repository is a
terrible idea. At least this change has made sources unfetchable for current
ports.

Would you please rename them back (just commits previously tagged as "vX_Y_Z")?
Or maybe if you like you can use both tags "vX_Y_Z" and "X.Y.Z" for each of
those commits.

Of course, "X.Y.Z" format for new releases is fine.
|
|
From: Raoul B. <ra...@bh...> - 2017-04-03 13:27:15
|
On 2017-04-02 23:17, Craig Barratt wrote:
> Yesterday I renamed all the tags in backuppc, backuppc-xs and rsync-bpc
> from "vX_Y_Z" to "X.Y.Z" (eg, v4_1_1 -> 4.1.1).
> The reason from way back is that CVS doesn't allow periods in tag names.
>
> I also just created a new branch in rsync-bpc called 3.0.9 (the rsync
> version it is based upon).
> 3.0.9 is the stable branch (current release is 3.0.9.6).
> The master is now based on rsync 3.1.2.
> It's only minimally tested, so I don't recommend the master for broad
> use yet.

Thanks Craig! I've followed this change and renamed my DEBIAN branch to
https://github.com/raoulbhatia/rsync-bpc/tree/3.0.9.6-DEBIAN .

Raoul
--
DI (FH) Raoul Bhatia M.Sc.
E-Mail. ra...@bh...
Tel. +43 699 10132530
|
|
From: Craig B. <cba...@us...> - 2017-04-02 21:17:31
|
Yesterday I renamed all the tags in backuppc, backuppc-xs and rsync-bpc from
"vX_Y_Z" to "X.Y.Z" (eg, v4_1_1 -> 4.1.1). The reason from way back is that CVS
doesn't allow periods in tag names.

I also just created a new branch in rsync-bpc called 3.0.9 (the rsync version
it is based upon). 3.0.9 is the stable branch (current release is 3.0.9.6). The
master is now based on rsync 3.1.2. It's only minimally tested, so I don't
recommend the master for broad use yet.

Craig
|
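For packagers following along, a sketch of how to track the new stable branch,
assuming the upstream repository lives under the backuppc organization on
GitHub (the URL is an assumption, not quoted from this message):

git clone https://github.com/backuppc/rsync-bpc.git
cd rsync-bpc
git checkout 3.0.9    # stable branch, based on rsync 3.0.9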
|
From: Craig B. <cba...@us...> - 2017-03-30 16:22:22
|
BackupPC 4.1.1 <https://github.com/backuppc/backuppc/releases/tag/4.1.1> has
been released on Github. BackupPC 4.1.1 is a bug fix release. There are several
minor bug fixes listed below.

Craig

* Merged pull requests: #77, #78, #79, #82
* Added missing BackupPC_migrateV3toV4 to makeDist (issue #75) reported by
  spikebike.
* Fixed divide-by-zero in progress % report in BackupPC_migrateV3toV4 (issue
  #75) reported by spikebike.
* In lib/BackupPC/Lib.pm, if Socket::getaddrinfo() doesn't exist (ie, an old
  version of Socket.pm), then default to ipv4 ping.
* Updates to configure.pl to make config-path default be based on config-dir
  (#79), prepended config-path with dest-dir, fixing a config.pl merge bug
  affecting $Conf{PingPath} reported by Richard Shaw, and a few other fixes.
* Updated required version of BackupPC::XS to 0.53 and rsync_bpc to 3.0.9.6.
* Minor changes to systemd/src/init.d/gentoo-backuppc from sigmoidal (#82).
* Added RuntimeDirectory to systemd/src/backuppc.service.
* Use the scalar form of getpwnam() in lib/BackupPC/CGI/Lib.pm and
  lib/BackupPC/Lib.pm
|
|
From: Craig B. <cba...@us...> - 2017-03-30 02:14:53
|
Bill,
My previous statement wasn't correct. In V4, each directory in a backup
tree consumes 2 inodes, one for the directory and the other for the (empty)
attrib file. In V3, each directory in a backup tree consumes 1 inode for
the directory, and everything else is hardlinked, including the attrib
file.
So when you migrate a V3 backup, the number of inodes to store the backup
trees will double, as you observe. The pool inode usage shouldn't change
much, but with lots of backups the former number dominates.
In a new V4 installation the inode usage will be somewhat lower, since in
V4 incrementals don't store the entire backup tree (just the directories
that have changes get created). In a series of backups where the directory
contents change every backup, including the pool file, V4 will use 3 inodes
per backup directory (directory, attrib file, pool file), while V3 will use
2 (directory, {attrib, pool} linked). So the inode usage is 1.5 - 2x.
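As a rough worked example (round numbers, not figures from this thread): a pool
with 10 million backup-tree directories needs about 10M extra inodes under V3
(one per directory, with attrib files hardlinked into the pool) but about 20M
under V4 (each directory plus its own attrib file), which is consistent with
the roughly 2x jump reported elsewhere in this thread (about 25.3M to 51.8M
inodes).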
I'll add a mention of the inode usage to the documentation.
Craig
On Wed, Mar 29, 2017 at 6:27 PM, Bill Broadley <bi...@br...> wrote:
> On 03/29/2017 06:13 PM, Craig Barratt wrote:
> > Bill,
> >
> > Sure, I agree that multiple hardlinks only consume one inode. Each v3
> pool file
> > (when first encountered in a v3 backup) should get moved to the new v4
> pool. So
> > that shouldn't increase the number of inodes. The per-directory backup
> storage
> > in v4 should be more efficient; I'd expect one less inode per v4
> directory. v4
> > does add some reference count files per backup (128), but that's
> rounding error.
> >
> > Can you look in the V3 pool? Eg, is $TOPDIR/cpool/0/0/0 empty?
>
> Yes:
> root@fs1:/backuppc/cpool/0/0/0# ls -al
> total 116
> drwxr-x--- 2 backuppc backuppc 110592 Mar 28 01:01 .
> drwxr-x--- 18 backuppc backuppc 4096 Jan 24 2013 ..
> root@fs1:/backuppc/cpool/0/0/0#
>
>
> > It could be it
> > didn't get cleaned if you turned off the V3 pool before BackupPC_nightly
> ran the
> > next time. If so, I'd expect the old v3 pool is full of v3 attrib
> files, each
> > with one link (ie, not used any longer).
>
> Currently:
> root@fs1:~# df -i /backuppc/
> Filesystem Inodes IUsed IFree IUse% Mounted on
> /dev/md3 238071808 53427304 184644504 23% /backuppc
>
> It was around 11% with V3, it's now 23% and still agrees with the plot I
> made.
>
> I've not added hosts, changed the number of incremental or full backups,
> or made
> any other changes that should increase the inode count.
>
> As each backup (like #1405) was migrated the directory would be renamed to
> .old,
> migrated, and then removed. So there would be a steep increase in inodes,
> and a
> drop, but never to the original number.
>
> I have two backuppc servers, each with different pools of clients, they
> both
> went from approximately 11% of the filesystem's inodes being used to 22%.
>
|
|
From: Bill B. <bi...@br...> - 2017-03-30 01:27:46
|
On 03/29/2017 06:13 PM, Craig Barratt wrote:
> Bill,
>
> Sure, I agree that multiple hardlinks only consume one inode. Each v3 pool file
> (when first encountered in a v3 backup) should get moved to the new v4 pool. So
> that shouldn't increase the number of inodes. The per-directory backup storage
> in v4 should be more efficient; I'd expect one less inode per v4 directory. v4
> does add some reference count files per backup (128), but that's rounding error.
>
> Can you look in the V3 pool? Eg, is $TOPDIR/cpool/0/0/0 empty?

Yes:
root@fs1:/backuppc/cpool/0/0/0# ls -al
total 116
drwxr-x--- 2 backuppc backuppc 110592 Mar 28 01:01 .
drwxr-x--- 18 backuppc backuppc 4096 Jan 24 2013 ..
root@fs1:/backuppc/cpool/0/0/0#

> It could be it
> didn't get cleaned if you turned off the V3 pool before BackupPC_nightly ran the
> next time. If so, I'd expect the old v3 pool is full of v3 attrib files, each
> with one link (ie, not used any longer).

Currently:
root@fs1:~# df -i /backuppc/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md3 238071808 53427304 184644504 23% /backuppc

It was around 11% with V3, it's now 23% and still agrees with the plot I made.

I've not added hosts, changed the number of incremental or full backups, or
made any other changes that should increase the inode count.

As each backup (like #1405) was migrated, the directory would be renamed to
.old, migrated, and then removed. So there would be a steep increase in inodes,
and a drop, but never to the original number.

I have two backuppc servers, each with different pools of clients, and they
both went from approximately 11% of the filesystem's inodes being used to 22%.
|
|
From: Craig B. <cba...@us...> - 2017-03-30 01:14:25
|
Bill, Sure, I agree that multiple hardlinks only consume one inode. Each v3 pool file (when first encountered in a v3 backup) should get moved to the new v4 pool. So that shouldn't increase the number of inodes. The per-directory backup storage in v4 should be more efficient; I'd expect one less inode per v4 directory. v4 does add some reference count files per backup (128), but that's rounding error. Can you look in the V3 pool? Eg, is $TOPDIR/cpool/0/0/0 empty? It could be it didn't get cleaned if you turned off the V3 pool before BackupPC_nightly ran the next time. If so, I'd expect the old v3 pool is full of v3 attrib files, each with one link (ie, not used any longer). Craig On Wed, Mar 29, 2017 at 5:57 PM, Bill Broadley <bi...@br...> wrote: > On 03/29/2017 05:32 PM, Craig Barratt wrote: > > Bill, > > > > That's interesting data. > > > > I'm not sure why the inode use goes up. Has it stayed at the higher > level after > > BackupPC_nightly has run? > > Yes. > > > Is the old V3 pool now empty? > > Yes, V3-V4 migration doesn't say anything, no errors, finishes quickly, > and I > disabled the V3 pool in the config.pl. The daily report/chart doesn't > show any V3. > > I didn't think hardlinks consumed inodes. So a file hardlinked 5 times > has 5 > directory entries, but only one inode. > > Is it possible backuppc consumes 2? > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > BackupPC-devel mailing list > Bac...@li... > List: https://lists.sourceforge.net/lists/listinfo/backuppc-devel > Wiki: http://backuppc.wiki.sourceforge.net > Project: http://backuppc.sourceforge.net/ > |
|
From: Bill B. <bi...@br...> - 2017-03-30 00:57:54
|
On 03/29/2017 05:32 PM, Craig Barratt wrote:
> Bill,
>
> That's interesting data.
>
> I'm not sure why the inode use goes up. Has it stayed at the higher level
> after BackupPC_nightly has run?

Yes.

> Is the old V3 pool now empty?

Yes, V3-V4 migration doesn't say anything, no errors, finishes quickly, and I
disabled the V3 pool in the config.pl. The daily report/chart doesn't show any
V3.

I didn't think hardlinks consumed inodes. So a file hardlinked 5 times has 5
directory entries, but only one inode.

Is it possible backuppc consumes 2?
|
|
From: Craig B. <cba...@us...> - 2017-03-30 00:32:57
|
Bill, That's interesting data. I'm not sure why the inode use goes up. Has it stayed at the higher level after BackupPC_nightly has run? Is the old V3 pool now empty? Craig On Mon, Mar 27, 2017 at 3:24 AM, Bill Broadley <bi...@br...> wrote: > > I had a 2.7TB pool for 38 hosts, actual size after deduplication and > compression. > > I just upgraded to 4.0.1 and did a V3 to V4 pool migration. > > The storage penalty was pretty small, about 2% or 52GB. > > The inode overhead was substantial, just over a factor of 2. In my case > it was > 25348763 inodes (around 11% of the ext4 default) to 51809068 (around 22% > of the > default). > > If you are upgrading you might want to ensure that your V3 inode usage is > less > than 50% of what the filesystem is capable of. > > The above migration tool 52 hours or so on a RAID 5 of 4 disks on a server > that's a few years old. > > To watch I graphed it over time: > http://broadley.org/bill/backuppc.png > > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > BackupPC-devel mailing list > Bac...@li... > List: https://lists.sourceforge.net/lists/listinfo/backuppc-devel > Wiki: http://backuppc.wiki.sourceforge.net > Project: http://backuppc.sourceforge.net/ > |
|
From: Craig B. <cba...@us...> - 2017-03-29 22:38:20
|
Holger,
Thanks for the explanation. I'm happy to remove the [2] and use scalar
context.
Craig
On Wed, Mar 29, 2017 at 3:06 PM, Holger Parplies <wb...@pa...> wrote:
> Hi,
>
> Richard Shaw wrote on 2017-03-29 13:38:16 -0500 [[BackupPC-devel]
> Inherited patch question]:
> > [...]
> > $ cat rpmbuild/BackupPC/SOURCES/BackupPC-4.0.0-fix-shadow-access.patch
> > [...]
> > - && $> != (my $uid = (getpwnam($Conf{BackupPCUser}))[2]) ) {
> > + && $> != (my $uid = (getpwnam($Conf{BackupPCUser}))) ) {
> > [...]
> > What's the effect of removing the [2] from these?
>
> well, in theory (and practise, at least on my local system here) getpwnam
> returns something like 'split /:/, $passwd_line' in list context and the
> uid in scalar context. The third element (index [2]) of the split would
> also be the uid, which explains why the two lines can be equivalent, even
> though they seem very different.
>
> From the *name* of the patch, I would guess that there might be a
> potential
> problem on systems with shadow passwords in some cases, though I can't see
> one here on my system. I could *imagine* though, that there might be
> systems
> that differ.
>
> A closer look reveals the following:
>
> % perl -e 'my @p = getpwnam "foo"; print ">", (join ",", @p),
> "<\n";'
> foo,x,1234,1234,,,Holger Parplies,/home/foo,/bin/tcsh
> # perl -e 'my @p = getpwnam "foo"; print ">", (join ",", @p),
> "<\n";'
> foo,<my-hashed-password>,1234,1234,,,Holger
> Parplies,/home/foo,/bin/tcsh
>
> (no, my user name is not "foo" and my uid is not 1234 ;-), so my Perl (or
> rather getpwnam(3)) merges in the shadow password, privilege permitting.
> Although I can't find any hint in the documentation, I could imagine that
> the attempt to do so could trigger unwanted behaviour (e.g. an audit log or
> even termination of the process) under some security systems, depending on
> how the determination of "privilege permitting" might be implemented.
>
> In any case, I would *hope* that the scalar context case would be slightly
> more efficient, because the unneeded information in the additional array
> elements not corresponding to /etc/passwd fields ($quota, $comment,
> $expire)
> does not need to be retrieved.
>
> For an explanation of the getpwnam function look at 'perldoc -f getpwuid'
> (strangely, 'perldoc -f getpwnam' is not very helpful, at least on some
> systems ;-).
>
> Regards,
> Holger
|
|
From: Holger P. <wb...@pa...> - 2017-03-29 22:06:15
|
Hi,
Richard Shaw wrote on 2017-03-29 13:38:16 -0500 [[BackupPC-devel] Inherited patch question]:
> [...]
> $ cat rpmbuild/BackupPC/SOURCES/BackupPC-4.0.0-fix-shadow-access.patch
> [...]
> - && $> != (my $uid = (getpwnam($Conf{BackupPCUser}))[2]) ) {
> + && $> != (my $uid = (getpwnam($Conf{BackupPCUser}))) ) {
> [...]
> What's the effect of removing the [2] from these?
well, in theory (and practise, at least on my local system here) getpwnam
returns something like 'split /:/, $passwd_line' in list context and the
uid in scalar context. The third element (index [2]) of the split would
also be the uid, which explains why the two lines can be equivalent, even
though they seem very different.
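In other words, the two lines in the patch boil down to the following (a
minimal illustration, with $user standing in for $Conf{BackupPCUser}):

my $uid = (getpwnam($user))[2];   # list context: take the third field, the uid
my $uid = getpwnam($user);        # scalar context: returns the uid directly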
From the *name* of the patch, I would guess that there might be a potential
problem on systems with shadow passwords in some cases, though I can't see
one here on my system. I could *imagine* though, that there might be systems
that differ.
A closer look reveals the following:
% perl -e 'my @p = getpwnam "foo"; print ">", (join ",", @p), "<\n";'
foo,x,1234,1234,,,Holger Parplies,/home/foo,/bin/tcsh
# perl -e 'my @p = getpwnam "foo"; print ">", (join ",", @p), "<\n";'
foo,<my-hashed-password>,1234,1234,,,Holger Parplies,/home/foo,/bin/tcsh
(no, my user name is not "foo" and my uid is not 1234 ;-), so my Perl (or
rather getpwnam(3)) merges in the shadow password, privilege permitting.
Although I can't find any hint in the documentation, I could imagine that
the attempt to do so could trigger unwanted behaviour (e.g. an audit log or
even termination of the process) under some security systems, depending on
how the determination of "privilege permitting" might be implemented.
In any case, I would *hope* that the scalar context case would be slightly
more efficient, because the unneeded information in the additional array
elements not corresponding to /etc/passwd fields ($quota, $comment, $expire)
does not need to be retrieved.
For an explanation of the getpwnam function look at 'perldoc -f getpwuid'
(strangely, 'perldoc -f getpwnam' is not very helpful, at least on some
systems ;-).
Regards,
Holger
|
|
From: Richard S. <hob...@gm...> - 2017-03-29 18:38:24
|
I'm working on the 4.1.0 release for Fedora and I've been working through
all the patches I inherited, most of which aren't necessary anymore.
Here's one that I'm not sure what effect it actually has:
$ cat rpmbuild/BackupPC/SOURCES/BackupPC-4.0.0-fix-shadow-access.patch
--- a/lib/BackupPC/CGI/Lib.pm
+++ b/lib/BackupPC/CGI/Lib.pm
@@ -143,7 +143,7 @@ sub NewRequest
# Verify we are running as the correct user
#
if ( $Conf{BackupPCUserVerify}
- && $> != (my $uid = (getpwnam($Conf{BackupPCUser}))[2]) ) {
+ && $> != (my $uid = (getpwnam($Conf{BackupPCUser}))) ) {
ErrorExit(eval("qq{$Lang->{Wrong_user__my_userid_is___}}"), <<EOF);
This script needs to run as the user specified in \$Conf{BackupPCUser},
which is set to $Conf{BackupPCUser}.
--- a/lib/BackupPC/Lib.pm
+++ b/lib/BackupPC/Lib.pm
@@ -127,7 +127,7 @@ sub new
#
if ( !$noUserCheck
&& $bpc->{Conf}{BackupPCUserVerify}
- && $> != (my $uid = (getpwnam($bpc->{Conf}{BackupPCUser}))[2]) ) {
+ && $> != (my $uid = (getpwnam($bpc->{Conf}{BackupPCUser}))) ) {
print(STDERR "$0: Wrong user: my userid is $>, instead of $uid"
. " ($bpc->{Conf}{BackupPCUser})\n");
print(STDERR "Please su $bpc->{Conf}{BackupPCUser} first\n");
What's the effect of removing the [2] from these?
Thanks,
Richard
|