From: Les M. <les...@gm...> - 2016-01-10 05:38:21
|
On Sat, Jan 9, 2016 at 6:25 PM, Michael <m.b...@im...> wrote:
>
> My current backup pool is ~ 12 machine. 11 on Linux and 1 windows
> machine. My backup machine is a 3TB Lacie-Cloudbox, with 256 MB memory.
> Some of you might say that 256 MB is not enough. Actually I've even seen
> posts on the net saying that you would need a server with several GB
> RAM. This is just insane. A typical PC in my pool has ~600k files.
> Representing each of them with a 256-bit hash, that's basically 20MB of
> data to manage for each backup. Of course you need some metadata, etc,
> but I see no reason why you need GB of memory to manage that.
And yet your complaint is that your server is slow... With a
reasonable amount of RAM, much of the directory structure, inodes, and
the next parts of files currently being read will already be in cache
when you need them and writes will be substantially buffered.
Otherwise you'll wait for the disk head to bounce around and always be
in the wrong place.
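For scale, the "20MB" figure Michael quotes can be checked with a quick back-of-the-envelope script (a sketch using only the numbers from the quoted mail; real BackupPC memory use also covers directory metadata, caches, and the Perl interpreter itself):

```perl
#!/usr/bin/perl
# Rough arithmetic behind the "20MB of data" claim quoted above:
# ~600k files, each represented by a 256-bit (32-byte) digest.
use strict;
use warnings;

my $files      = 600_000;
my $hash_bytes = 32;    # 256 bits
printf "%.1f MB of raw hash data\n", $files * $hash_bytes / 1e6;    # ~19.2 MB
```

As Les points out, though, extra RAM mostly buys filesystem cache rather than hash storage, which is where the speed difference comes from.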
--
Les Mikesell
les...@gm...
|
|
From: <bac...@ko...> - 2016-01-10 04:37:40
|
Michael wrote at about 01:25:27 +0100 on Sunday, January 10, 2016:

> Hello,
>
> I've been testing BackupPC 4.0.0alpha3 for 1 year now, for backing up 12
> home machines, and to be honest, I'm quite unhappy with it.
> To my opinion, it is completely unreliable, you have to regularly check
> whether backups are done correctly, and most of the time you can't do a
> backup without at least an error. And it's awfully slow.

I have been using it for almost 10 years without problems... it's reliable and stable as anything. Just because other solutions don't tell you they have errors doesn't mean there aren't errors... they just ignore them...

> If I would participate to the development of BPC, I would make more
> changes to the architecture. I think that the changes from 3.0 to 4.0
> are very promising, but not enough. The first thing to do is to trash
> rsync/rsyncd and use a client-side sync mechanism (like unison). Then
> throw away all Perl code and rewrite in C. Also add a timestamp to log
> files because debugging BPC failures without timestamps is just a f***
> nightmare. And finally make it much more reliable and resistant to
> connection issues or interrupt.

i.e., I want to create a totally new backup program using a totally different language and methodology... Great... nothing is stopping you from creating your own backup program, but I doubt you will find much willingness here to create a completely different backup program from scratch...
|
|
From: Michael <m.b...@im...> - 2016-01-10 00:25:38
|
Hello,

I've been testing BackupPC 4.0.0alpha3 for 1 year now, for backing up 12 home machines, and to be honest, I'm quite unhappy with it. In my opinion, it is completely unreliable: you have to regularly check whether backups are done correctly, and most of the time you can't do a backup without at least an error. And it's awfully slow.

The big advantage of BPC (besides being free and open-source, of course) is that it manages backups of multiple machines in a single pool, hence saving space.

My current backup pool is ~12 machines: 11 on Linux and 1 Windows machine. My backup machine is a 3TB Lacie-Cloudbox with 256 MB of memory. Some of you might say that 256 MB is not enough. Actually, I've even seen posts on the net saying that you would need a server with several GB of RAM. This is just insane. A typical PC in my pool has ~600k files. Representing each of them with a 256-bit hash, that's basically 20MB of data to manage for each backup. Of course you need some metadata, etc., but I see no reason why you need GBs of memory to manage that.

If I were to participate in the development of BPC, I would make more changes to the architecture. I think that the changes from 3.0 to 4.0 are very promising, but not enough. The first thing to do is to trash rsync/rsyncd and use a client-side sync mechanism (like unison). Then throw away all Perl code and rewrite in C. Also add a timestamp to log files, because debugging BPC failures without timestamps is just a f*** nightmare. And finally, make it much more reliable and resistant to connection issues or interrupts.

What I like in BPC:
- Mutualization of backups in a single pool
- Clean interface
- Free and open-source!

What I hate in BPC:
- BPC seemingly spending more time in backupref_count, fsck or whatever than in doing actual file transfer.
- Seeing "rsync: read error: Connection reset by peer" in my client log, followed by even more fsck or whatever on the server for ages.
- Not resilient to interruption, making it very inefficient and unreliable.
- No timestamps in server logs!
- Mostly unhelpful logs.

What I would love to see in BPC:
- Possibility to move the processing (delta) to the client.
- More efficient maintenance, less overhead processing.
- Flawless execution on a 256MB memory server.

Some ideas:
- Use a client-side sync and delta detection mechanism (like unison or duplicity)
- Use ZFS

My gripes and wishes for 2016

Michaël

On 01/07/2016 12:25 AM, Adam Goryachev wrote:
> Hi,
>
> I've been a long time user of backuppc (couple of years at least), and
> in general it works really well and I'm mostly happy with the current
> status. However, since I upgraded to the 4.0.0alpha3 last year, I've had
> a number of minor issues (some more serious than others, like failing to
> backup unchanged files, or saying the backup has failed even though it
> succeeded). So far, I've not lost data due to any issue, and that is a
> plus, but I'm very concerned that eventually, one of these problems will
> cause actual data loss (as in, backup failed, something else caused data
> loss like failed RAID array, and then can't recover from backup).
>
> I'd like to know if there is any current person or organisation doing
> development work on BackupPC, and/or interested in doing that? I'm
> considering to fork the project, and try to debug/fix the remaining
> issues in BPC 4, but at the same time, I'm very busy, and am not really
> a "proper" coder, so working on such a large project will be difficult.
>
> With the right group of developers, this could work (as in, a small work
> load for each person, but at least better maintenance/development
> efforts). My concerns are:
> 1) Without ongoing development/maintenance, new versions of OS or perl
> or whatever will cause breakages, while manual/minor patches or config
> changes might solve these, over time it will become more of a nightmare.
> 2) The point of using a "standard" open source product is that we all
> get the advantage of experience (ie, more users finding problems), and
> improvements/patches. I could have built (probably never as good as the
> current BPC) my own solution.
>
> So, are you interested in developing/contributing?
> What is the current status/plans around BPC?
> Do you have any patches that are not applied to either v3 or v4 releases?
> Thoughts/discussions?
>
> PS, BPC is an excellent product, and I greatly appreciate all the time
> and effort that has been invested into it, I would ideally like to see
> it continue under the leadership of Craig, he has done an amazing
> development job so far. I really really do not want to see it basically
> waste away, with people moving to other products simply because it is
> unmaintained, and has a few small problems (which is where I currently
> stand, either I move to another product, or I start working harder on
> the current one).
>
> Regards,
> Adam
|
|
From: François <ai...@gm...> - 2016-01-07 10:56:06
|
On 7 January 2016 at 00:25, Adam Goryachev <mai...@we...> wrote:
> Hi,

Hi,

> I've been a long time user of backuppc (couple of years at least), and
> in general it works really well and I'm mostly happy with the current
> status. However, since I upgraded to the 4.0.0alpha3 last year, I've had
> a number of minor issues (some more serious than others, like failing to
> backup unchanged files, or saying the backup has failed even though it
> succeeded).

I've been using version 3 for years, and obviously it just works. Although it has performance issues, so I've been considering using v4. I'm not the one who set that up, so I'm not fluent with backuppc. However, I was told that version 4 was interesting. That's why I've created repositories on GitHub to host the code, because I couldn't find an official DVCS:

- https://github.com/fser/BackupPC
- https://github.com/fser/rsync-bpc
- https://github.com/fser/BackupPC-XS
- and a last one: https://github.com/fser/backuppc-debian-package

My first issue was to get .deb packages, and IIRC, one of those 3 was an issue for automating. But someone on GitHub did the work to produce deb packages. I had to have a look, but didn't find time yet, and now I've lost the project issue.

> So, are you interested in developing/contributing?
> What is the current status/plans around BPC?
> Do you have any patches that are not applied to either v3 or v4 releases?
> Thoughts/discussions?

So what I mean is: yes, I'm interested, but like all of you I don't have much time. As you may have understood, I'm more into v4 than v3, which is already available in the wild, so I think it would be more interesting to have packages and automation to let people easily use backuppc4. By the way, I don't remember if I made the same call here that you did, but I'm glad it makes things move along!

--
François
|
|
From: Joe B. <jo...@ts...> - 2016-01-07 10:00:15
|
Hi,

Happy New Year to all :-)

I am willing to help and think that this is a good idea. I am totally convinced that this project deserves it: @Craig, you have done an incredible job. I am a "proper" programmer and very busy too. I am using V4 for my own personal use, and we have all our official company and production backups on V3. I would prefer to push V4 forward. I will help and support as best I can.

Joe
TSolucio

On 07/01/16 09:52, Alexander Moisseev wrote:
> On 07.01.16 2:25, Adam Goryachev wrote:
>> So, are you interested in developing/contributing?
> People on the list now and then are asking about a repository where they could contribute. I believe a few people will be involved if such a repository is created and maintained.
>
> I am maintaining the sysutils/backuppc and sysutils/backuppc-devel FreeBSD ports. At some point I realized I need a repository just to keep tracking patches between Craig's timeouts (your concern #1). Of course I am rather interested in contributing to a community repository (your concern #2).
>
> I am not a "proper" coder and busy too. I'd like to contribute on an occasional basis.
>
> Actually, I am using v3 and not considering a move to v4 while it is unmaintained. I think with v3 it will be easier to fix on my own potential breakages related to external changes (OS, Perl, ...).
> Also, as far as I know the policies of some OSes prohibit including non-stable (alpha, beta) software in the distribution. So a lot of users will be stuck with v3 for a long time.
>
>> What is the current status/plans around BPC?
> Unmaintained/abandoned - I am really sorry, but no feedback from the developer for more than a year.
>
>> Do you have any patches that are not applied to either v3 or v4 releases?
> So, some patches are there:
>
> https://github.com/moisseev/BackupPC
>
> This test script illustrates potential unexpected loss of a few backups after modifying BackupPC settings:
>
> https://gist.github.com/moisseev/d5a8a499a7b69b1f0428
>
> --
> Alexander

--
Un saludo
Joe
TSolucio
|
|
From: Alexander M. <mo...@me...> - 2016-01-07 08:52:20
|
On 07.01.16 2:25, Adam Goryachev wrote:
>
> So, are you interested in developing/contributing?

People on the list now and then are asking about a repository where they could contribute. I believe a few people will be involved if such a repository is created and maintained.

I am maintaining the sysutils/backuppc and sysutils/backuppc-devel FreeBSD ports. At some point I realized I need a repository just to keep tracking patches between Craig's timeouts (your concern #1). Of course I am rather interested in contributing to a community repository (your concern #2).

I am not a "proper" coder and busy too. I'd like to contribute on an occasional basis.

Actually, I am using v3 and not considering a move to v4 while it is unmaintained. I think with v3 it will be easier to fix on my own potential breakages related to external changes (OS, Perl, ...). Also, as far as I know the policies of some OSes prohibit including non-stable (alpha, beta) software in the distribution. So a lot of users will be stuck with v3 for a long time.

> What is the current status/plans around BPC?

Unmaintained/abandoned - I am really sorry, but no feedback from the developer for more than a year.

> Do you have any patches that are not applied to either v3 or v4 releases?

So, some patches are there:

https://github.com/moisseev/BackupPC

This test script illustrates potential unexpected loss of a few backups after modifying BackupPC settings:

https://gist.github.com/moisseev/d5a8a499a7b69b1f0428

--
Alexander
|
|
From: Adam G. <mai...@we...> - 2016-01-06 23:26:00
|
Hi,

I've been a long time user of backuppc (couple of years at least), and in general it works really well and I'm mostly happy with the current status. However, since I upgraded to the 4.0.0alpha3 last year, I've had a number of minor issues (some more serious than others, like failing to backup unchanged files, or saying the backup has failed even though it succeeded). So far, I've not lost data due to any issue, and that is a plus, but I'm very concerned that eventually, one of these problems will cause actual data loss (as in, backup failed, something else caused data loss like failed RAID array, and then can't recover from backup).

I'd like to know if there is any current person or organisation doing development work on BackupPC, and/or interested in doing that? I'm considering to fork the project, and try to debug/fix the remaining issues in BPC 4, but at the same time, I'm very busy, and am not really a "proper" coder, so working on such a large project will be difficult.

With the right group of developers, this could work (as in, a small work load for each person, but at least better maintenance/development efforts). My concerns are:
1) Without ongoing development/maintenance, new versions of OS or perl or whatever will cause breakages, while manual/minor patches or config changes might solve these, over time it will become more of a nightmare.
2) The point of using a "standard" open source product is that we all get the advantage of experience (ie, more users finding problems), and improvements/patches. I could have built (probably never as good as the current BPC) my own solution.

So, are you interested in developing/contributing?
What is the current status/plans around BPC?
Do you have any patches that are not applied to either v3 or v4 releases?
Thoughts/discussions?

PS, BPC is an excellent product, and I greatly appreciate all the time and effort that has been invested into it, I would ideally like to see it continue under the leadership of Craig, he has done an amazing development job so far. I really really do not want to see it basically waste away, with people moving to other products simply because it is unmaintained, and has a few small problems (which is where I currently stand, either I move to another product, or I start working harder on the current one).

Regards,
Adam

--
Adam Goryachev
Website Managers
www.websitemanagers.com.au
|
|
From: Robby <rob...@gm...> - 2015-12-30 16:55:01
|
Hello,

I'm running BackupPC 4 and I'm just wondering if it is on purpose that we run "RefCountUpdate" twice (once without Fsck and Check, and once with both) on every backup? (Line 1257 and following)

#
# Update reference counts - first apply the deltas.
#
# Then, as a sanity/debug check, run a full fsck and check.
#
RefCountUpdate(0, 0);
RefCountUpdate(1, 1);
|
|
From: Stefan F. <st...@ar...> - 2015-07-04 21:44:24
|
In many host configuration files I have an override for EMailAdminUserName.
What happens now is that backuppc's sysadmin mail is sent to the user
specified in EMailAdminUserName of the last host I added.
If you have a look in BackupPC_sendEmail starting from line 181:
>foreach my $host ( sort(keys(%Status)) ) {
> #
> # read any per-PC config settings (allowing per-PC email settings)
> #
> $bpc->ConfigRead($host);
> %Conf = $bpc->Conf();
The problem is that after the loop is done $Conf{EMailAdminUserName} will
contain the username specified as an override in the host configuration
processed last and not the value from the main configuration anymore. But
you use it down in line 386:
>if ( $adminMesg ne "" && $Conf{EMailAdminUserName} ne "" ) {
> my $headers = $Conf{EMailHeaders};
> $headers .= "\n" if ( $headers !~ /\n$/ );
> $adminMesg = <<EOF;
>To: $Conf{EMailAdminUserName}
>Subject: BackupPC administrative attention needed
>$headers
>${adminMesg}Regards,
>PC Backup Genie
>EOF
> SendMail($adminMesg);
>}
I inserted a line to read the main configuration again after the loop (see
attached patch). That fixes the issue for me but since I'm not familiar with
Perl I wonder if that's the correct way to do it.
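Since the attached patch isn't visible in the archive, here is a minimal sketch of the fix Stefan describes (an assumption based on his description: it presumes that BackupPC::Lib's ConfigRead, called with no host argument, re-reads the main configuration):

```perl
# In BackupPC_sendEmail, after the per-host loop that calls
# $bpc->ConfigRead($host): restore the main configuration so the
# admin message below uses the global EMailAdminUserName again.
# (Sketch; assumes ConfigRead() with no argument reloads the
# main config file.)
$bpc->ConfigRead();
%Conf = $bpc->Conf();
```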
Best,
Stefan |
|
From: Stephen <st...@em...> - 2015-05-13 21:57:23
|
I'm running BackupPC 4.0.0alpha3 and recently had the occasion to perform a
restore via rsync.
I attempted to restore the /etc/apache2 directory on a server to
/tmp/apache2 on a different server for reference. The share name is '/' on
the server. The /tmp/apache2 directory existed on the destination.
The restore reported success, however restored no files.
In the logs, I saw the following:
-----
Trimming /etc from filesList
Wrote source file list to /srv/BackupPC/pc/my.server/.rsyncFilesFrom9567: /apache2
Running: /usr/local/bin/rsync_bpc --bpc-top-dir /srv/BackupPC
--bpc-host-name my.server --bpc-share-name / --bpc-bkup-num 32
--bpc-bkup-comp 3 --bpc-bkup-merge 32/3/4 --bpc-log-level 1 -e /usr/bin/ssh
-q -c arcfour -x -l backuppc --rsync-path=sudo /usr/bin/rsync --recursive
--super --protect-args --numeric-ids --perms --owner --group -D --times
--links --hard-links --delete --partial --log-format=log: %o %i %B %8U,%8G
%9l %f%L --stats -v --files-from=/srv/BackupPC/pc/my.server/.rsyncFilesFrom9567 /
my.server:/tmp/
This is the rsync child about to exec /usr/local/bin/rsync_bpc
sending incremental file list
rsync_bpc: link_stat "/apache2" failed: No such file or directory (2)
...snip...
rsync_bpc exited with benign status 23 (5888)
-----
It seems that in $installdir/lib/BackupPC/Xfer/Rsync.pm, $srcDir is set to
"/", never changed, and passed as an argument to rsync_bpc. Yet the
filenames in the "files-from" file are modified to strip any parent
directories. Therefore $srcDir needs a different value if the restored
files' paths aren't absolute relative to the root of the backup.
The included patch (inline and attached) fixes this problem for me; I've
performed a few restores to original and different destinations and it
appears to work as expected.
I don't hack BackupPC code all the time, so Craig may wish to do a sanity
check; there may be a more correct fix.
# diff -u Rsync.pm.orig Rsync.pm.new
--- Rsync.pm.orig 2015-05-13 14:17:04.825460437 -0400
+++ Rsync.pm.new 2015-05-13 17:26:23.413745286 -0400
@@ -102,6 +102,7 @@
for ( my $i = 0 ; $i < @{$t->{fileList}} ; $i++ ) {
$t->{fileList}[$i] = substr($t->{fileList}[$i], length($t->{pathHdrSrc}));
}
+ $srcDir=$t->{pathHdrSrc} if ($t->{pathHdrSrc});
$t->{XferLOG}->write(\"Trimming $t->{pathHdrSrc} from filesList\n");
}
|
|
From: Alexander M. <mo...@me...> - 2015-05-07 09:06:52
|
The proposed patch fixes the full-backup exponential expiry algorithm. The current algorithm can lose backups after modifying configuration settings or making manual backups. A short explanation is here:

https://gist.github.com/moisseev/d5a8a499a7b69b1f0428#comment-1442896

patches:
v4 https://github.com/moisseev/BackupPC/commit/c9a265c78969d1cbad4e95ca9ecfcba0618ec3c8
v3 https://github.com/moisseev/BackupPC/compare/a5dabe7...BackupFullExpire-v3

--
Alexander
|
|
From: Alexander M. <mo...@me...> - 2015-05-03 17:02:16
|
20.01.2015 5:38, Craig Barratt wrote:
> The deprecated defined(@Backups) is already fixed in 3.3.1

It is mentioned in the ChangeLog, but it is not actually fixed in 3.3.1.

--
Alexander
|
|
From: Alexander M. <mo...@me...> - 2015-04-30 14:51:15
|
30.04.2015 15:02, Henrik Genssen wrote:
> I would expect a 404, as the content I requested was not found.
> The content of the resulting HTML page is fine and any browser would show it up even on a 404 state.

A quick patch is here: https://gist.github.com/moisseev/f27aaa9a771b3d66830e

The patch sets the 404 status code in the HTTP header for all objects returned on error. That is not strictly correct, but it should work for your needs. The diff was made against the installed FreeBSD package backuppc-3.3.0_7. This part of the 3.1.0 code shouldn't differ except for line numbers.

--
Alexander
|
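The idea behind such a patch can be sketched as a CGI header change (illustrative only, not the exact code in the gist): emit a `Status: 404` line before the error page, so scripted clients can distinguish "not found" from success while browsers still render the page.

```perl
# Sketch: in the CGI error path, send a 404 status header before the
# human-readable error page. The page body is illustrative.
print "Status: 404 Not Found\r\n";
print "Content-Type: text/html; charset=utf-8\r\n\r\n";
print "<html><body><h1>File not found in the selected backup</h1></body></html>\n";
```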
|
From: Henrik G. <hg...@me...> - 2015-04-30 12:03:04
|
I would expect a 404, as the content I requested was not found. The content of the resulting HTML page is fine and any browser would show it up even on a 404 state.

regards,
Henrik

>date: 29.04.2015 18:15:15
>from: "Alexander Moisseev" <mo...@me...>
>subject: Re: [BackupPC-devel] Status Result codes for direct restore
>
>29.04.2015 18:26, Henrik Genssen wrote:
>
>> If I now write a script constructing the URLs for my missing files to download them from a backup, this works of course, too :-)
>> But if the file does not exist in the given backup, I get a HTTP Result code of 200 with a HTML Page telling me, that the file was not found. Does this make sense?
>
>What result code were you expecting?
>You got 200 OK since you successfully got a web page.
>
>--
>Alexander
|
|
From: Alexander M. <mo...@me...> - 2015-04-29 16:15:20
|
29.04.2015 18:26, Henrik Genssen wrote:
> If I now write a script constructing the URLs for my missing files to download them from a backup, this works of course, too :-)
> But if the file does not exist in the given backup, I get a HTTP Result code of 200 with a HTML Page telling me, that the file was not found. Does this make sense?

What result code were you expecting? You got 200 OK since you successfully got a web page.

--
Alexander
|
|
From: Henrik G. <hg...@me...> - 2015-04-29 15:36:52
|
Hi,

I am using backuppc 3.1.0. If I download a single file from a backup directly, I get the file downloaded in its binary form fine using the web interface. If I now write a script constructing the URLs for my missing files to download them from a backup, this works of course, too :-) But if the file does not exist in the given backup, I get an HTTP result code of 200 with an HTML page telling me that the file was not found. Does this make sense? Is it just related to my old backuppc version?

regards
Henrik
|
|
From: higuita <hi...@GM...> - 2015-03-10 23:54:09
|
Hi
> > I want to execute a remote script to do a backup. The backup
> > will not be store in backuppc, so all i need is to execute a script
> > and save the status and the script log.
> >
> > Right now i workaround with tar, i replace the tar command
> > with a script that will execute my command and then tars the log, so
(...)
> You should be able to use the DumpPreUserCmd for that, then perhaps
> back up a directory containing the log of the operation. You'd get
> scheduling and the ability to see failures in one place.
This is also a workaround, just like mine with tar :)
They work, but I think a simple "exec" backup type could be
useful for more people, and should be simple to implement as almost
everything is already there.
The fun of it is that it is very powerful and flexible, and can be
used and abused in ways we can't even imagine.
> > Restore could also be a script. The script status code and
> > stdout and stderr should be captured and logged by backuppc. No files
> > are transfered to backuppc, so the size for this type of backups could
> > always be zero in the status (or show it inside a [], to flag a remote
> > backup)
>
> Not sure how you would control what was restored, though.
Just execute the restore script. backuppc will just have the
logs of the "exec" backup, so the user will also have to create a
restore script for what he wants. A CD/DVD will require one thing, a
remote rsynced HD another, and yet another for any other special
backup program.
Maybe the remote backup script could report a list of
files stored in the log and the restore could also take a file list
as argument... but this should be optional and maybe with a regexp
option to extract that file list (as each script could generate different
logs)
Either way, the only required thing is the restore script
and one option for the backup number and/or date of the selected backup.
More elaborate backups and restores require the admin to properly
configure the scripts.
Thanks
higuita
--
Naturally the common people don't want war... but after all it is the
leaders of a country who determine the policy, and it is always a
simple matter to drag the people along, whether it is a democracy, or
a fascist dictatorship, or a parliament, or a communist dictatorship.
Voice or no voice, the people can always be brought to the bidding of
the leaders. That is easy. All you have to do is tell them they are
being attacked, and denounce the pacifists for lack of patriotism and
exposing the country to danger. It works the same in every country.
-- Hermann Goering, Nazi and war criminal, 1883-1946
|
|
From: Les M. <les...@gm...> - 2015-03-04 05:03:38
|
On Tue, Mar 3, 2015 at 8:47 PM, higuita <hi...@gm...> wrote:
> Hi
>
> I love backuppc, is very flexible and works very well... but i
> miss one feature: Exec backup type
>
> I want to execute a remote script to do a backup. The backup will
> not be store in backuppc, so all i need is to execute a script and save
> the status and the script log.
>
> Right now i workaround with tar, i replace the tar command
> with a script that will execute my command and then tars the log, so
> backuppc can see the backup status and a user can checkout the log.
> It works, but is a pain to have to play with tar command, the full
> and incremental options to do what i need.
>
> A simple ssh $host $script $options would be perfect, where
> the $options get their parameters from the full and incremental
> backup.
You should be able to use the DumpPreUserCmd for that, then perhaps
back up a directory containing the log of the operation. You'd get
scheduling and the ability to see failures in one place.
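Les's suggestion could look roughly like this in a per-host config.pl (a hypothetical sketch: the script path and log directory are made up, while DumpPreUserCmd, UserCmdCheckStatus and RsyncShareName are standard BackupPC settings):

```perl
# Run the remote backup script before each dump...
$Conf{DumpPreUserCmd}     = '$sshPath -q -x -l root $host /usr/local/bin/run-backup.sh';
# ...fail the whole backup if the script fails...
$Conf{UserCmdCheckStatus} = 1;
# ...and then back up only the directory holding the script's log.
$Conf{RsyncShareName}     = ['/var/log/run-backup'];
```

This gives you scheduling, failure reporting, and a browsable log in one place, at the cost of storing a (tiny) backup of the log directory.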
> Restore could also be a script. The script status code and
> stdout and stderr should be captured and logged by backuppc. No files
> are transfered to backuppc, so the size for this type of backups could
> always be zero in the status (or show it inside a [], to flag a remote
> backup)
Not sure how you would control what was restored, though.
--
Les Mikesell
les...@gm...
|
|
From: higuita <hi...@GM...> - 2015-03-04 02:47:15
|
Hi
I love backuppc, it is very flexible and works very well... but I
miss one feature: an "exec" backup type.
I want to execute a remote script to do a backup. The backup will
not be store in backuppc, so all i need is to execute a script and save
the status and the script log.
Right now I work around this with tar: I replace the tar command
with a script that executes my command and then tars the log, so
backuppc can see the backup status and a user can check out the log.
It works, but it is a pain to have to play with the tar command and
the full and incremental options to do what I need.
A simple ssh $host $script $options would be perfect, where
the $options get their parameters from the full and incremental
backup. Restore could also be a script. The script status code and
stdout and stderr should be captured and logged by backuppc. No files
are transfered to backuppc, so the size for this type of backups could
always be zero in the status (or show it inside a [], to flag a remote
backup)
With this, one could use backuppc as the only interface for
all backups. This is especially useful on remote machines with very slow
networks, where one must back up to a local HD/NAS. This could also open
the door for users to use tapes, DVD burners and interact with other
backup software.
Please consider adding this.
Best regards,
higuita
--
Naturally the common people don't want war... but after all it is the
leaders of a country who determine the policy, and it is always a
simple matter to drag the people along, whether it is a democracy, or
a fascist dictatorship, or a parliament, or a communist dictatorship.
Voice or no voice, the people can always be brought to the bidding of
the leaders. That is easy. All you have to do is tell them they are
being attacked, and denounce the pacifists for lack of patriotism and
exposing the country to danger. It works the same in every country.
-- Hermann Goering, Nazi and war criminal, 1883-1946
From: Kenneth P. <sh...@se...> - 2015-02-26 13:29:17
FYI, 0.72 was pushed to the Red Hat distros (EPEL, Fedora) and the io_error flag bug was closed. Those using these distros should be able to pull the new package with yum. <https://bugzilla.redhat.com/show_bug.cgi?id=1177212>
From: Steven De W. <bac...@st...> - 2015-02-08 13:41:34
Hi Craig,
First off, thank you for the excellent BackupPC software.
I've been a (personal) user of the software for a few years already
and really like it.
Recently I've deployed a new instance at my dad's home and I've used
the latest 4.0.0alpha3. At my place, I'm still running v3.x, so I
had never run into this issue before.
BackupPC runs on a Linux machine and I'm trying to back up a Windows
laptop, which is running Cygwin with SSH & rsyncd on a DHCP IP address.
As such, I'm using nmblookup to figure out the HostIP that belongs to
the Host.
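For context, this dynamic-address lookup is driven by the nmblookup settings in config.pl; a sketch of the relevant options (verify against the defaults shipped with your version):

```perl
# Sketch only -- check your version's config.pl for the exact defaults.
$Conf{NmbLookupCmd}         = '$nmbLookupPath -A $host';
$Conf{NmbLookupFindHostCmd} = '$nmbLookupPath $host';
```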
I've noticed however (in the bad XferLOG), that when rsync_bpc is
started, it is provided with the 'host' variable to connect to,
whereas it should of course connect to the 'hostIP'.
I did some searching in the settings and afterwards turned to the Perl
code and have fixed my issue in the lib/BackupPC/Xfer/Rsync.pm file.
I'm not much of a programmer, but below is the output of a diff
between the original source and my changes (in folder
BackupPC-4.0.0alpha3/lib/BackupPC/Xfer):
# diff Rsync.pm Rsync.pm-new
147c147
< "$conf->{RsyncdUserName}\@$t->{host}::$remoteDir");
---
> "$conf->{RsyncdUserName}\@$t->{hostIP}::$remoteDir");
370c370
< "$conf->{RsyncdUserName}\@$t->{host}::$shareName",
---
> "$conf->{RsyncdUserName}\@$t->{hostIP}::$shareName",
I hope this provides you with the necessary information to fix the
issue in the source code.
Regards,
Steven
From: Tobias F. <to...@co...> - 2015-01-20 21:09:27
On Tuesday, 2015-01-20 at 02:38 +0000, Craig Barratt wrote:
> On Sun Jan 18 2015 at 11:00:50 AM Alexander Moisseev
> <mo...@me...> wrote:
> > Craig, would you take a look at those patches:
> >
> > BackupPC v3
> > https://github.com/moisseev/BackupPC/commits/master-v3
> >
> > BackupPC v4
> > https://github.com/moisseev/BackupPC/commits/master-v4
> >
> > Add $Conf{CgiDateFormatMMDD} that allows setting the file size
> > format in the backup browse table.
> > https://github.com/moisseev/BackupPC/commits/cgi-file-size-v3
>
> Thanks for reminding me of these. The deprecated defined(@Backups)
> is already fixed in 3.3.1 and 4.0.0alpha3. The two other fixes (minor
> typo in en.pm and adding the =encoding directive to BackupPC.pod) will
> be in the next 3.x release. All your 4.0.0alpha3 patches are in my
> source tree for 4.x.
>
> I'll likely add your cgi-file-size patch to 4.0.0, without adding
> another configuration option. I don't think I will include it in the
> next 3.x release (I'm trying to keep 3.x changes to bug fixes at this
> point.)
>
> Thanks,
> Craig

Hi Craig, did you also see my patch regarding the hooks for NightlyRun?
What is your feeling about this feature?

--
Tobi
From: Alexander M. <mo...@me...> - 2015-01-20 06:29:59
> I'll likely add your cgi-file-size patch to 4.0.0, without adding
> another configuration option.

I think it would be better if the user could choose at least between
two formats: SI prefixes (a shorter, human-readable form) and a
thousands separator (a more visual representation, easier to compare or
search when properly aligned). Maybe it should be offered through some
CGI control element rather than a configuration option.

--
Alexander
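To illustrate the two renderings under discussion for, say, 123456789 bytes (a rough sketch only; BackupPC's CGI would do this in Perl, this just shows the formats):

```shell
# SI-prefixed short form, e.g. "123.5 MB":
human() {
    awk -v n="$1" 'BEGIN {
        split("B KB MB GB TB", u, " ")
        i = 1
        while (n >= 1000 && i < 5) { n /= 1000; i++ }
        printf "%.1f %s\n", n, u[i]
    }'
}

# Thousands-separated form, easier to compare when right-aligned:
grouped() {
    printf '%s\n' "$1" | rev | sed 's/[0-9]\{3\}/&,/g' | rev | sed 's/^,//'
}

human 123456789      # -> 123.5 MB
grouped 123456789    # -> 123,456,789
```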
From: Craig B. <cba...@us...> - 2015-01-20 02:38:39
On Sun Jan 18 2015 at 11:00:50 AM Alexander Moisseev <mo...@me...>
wrote:
> Craig, would you take a look at those patches:
>
> BackupPC v3
> https://github.com/moisseev/BackupPC/commits/master-v3
>
> BackupPC v4
> https://github.com/moisseev/BackupPC/commits/master-v4
>
> Add $Conf{CgiDateFormatMMDD} that allows setting the file size format
> in the backup browse table.
> https://github.com/moisseev/BackupPC/commits/cgi-file-size-v3

Thanks for reminding me of these. The deprecated defined(@Backups) is
already fixed in 3.3.1 and 4.0.0alpha3. The two other fixes (minor typo
in en.pm and adding the =encoding directive to BackupPC.pod) will be in
the next 3.x release. All your 4.0.0alpha3 patches are in my source
tree for 4.x.

I'll likely add your cgi-file-size patch to 4.0.0, without adding
another configuration option. I don't think I will include it in the
next 3.x release (I'm trying to keep 3.x changes to bug fixes at this
point.)

Thanks,
Craig
From: Alexander M. <mo...@me...> - 2015-01-18 19:00:34
Craig, would you take a look at those patches:

BackupPC v3
https://github.com/moisseev/BackupPC/commits/master-v3

BackupPC v4
https://github.com/moisseev/BackupPC/commits/master-v4

Add $Conf{CgiDateFormatMMDD} that allows setting the file size format
in the backup browse table.
https://github.com/moisseev/BackupPC/commits/cgi-file-size-v3

--
Thanks in advance,
Alexander