| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2002 | | | | | | | | | | | | 15 |
| 2003 | 7 | 6 | 3 | 9 | 5 | 11 | 40 | 15 | 3 | 2 | 1 | |
| 2004 | 2 | 8 | 8 | 25 | 10 | 11 | 5 | 9 | 2 | 7 | 7 | 6 |
| 2005 | 6 | 17 | 9 | 3 | 4 | 11 | 42 | 33 | 13 | 14 | 19 | 7 |
| 2006 | 23 | 19 | 6 | 8 | 1 | 12 | 50 | 16 | 4 | 18 | 15 | 10 |
| 2007 | 10 | 13 | 1 | | 1 | 1 | 5 | 7 | 25 | 58 | 15 | 4 |
| 2008 | 2 | 13 | 3 | 10 | 1 | 4 | 4 | 23 | 21 | 7 | 3 | 12 |
| 2009 | 5 | 2 | | 7 | 6 | 25 | | 2 | 2 | 3 | 2 | 2 |
| 2010 | 1 | | 3 | 2 | 2 | | 3 | 3 | 2 | | | 3 |
| 2011 | 10 | 2 | 71 | 4 | 8 | | | 4 | 3 | 1 | 5 | 1 |
| 2012 | | | 2 | 2 | 15 | 1 | | 20 | | | 2 | |
| 2013 | 3 | 5 | | 4 | 2 | 11 | 12 | | 19 | 25 | 7 | 9 |
| 2014 | 6 | 2 | 7 | 5 | 1 | | 1 | 1 | 1 | 3 | 2 | 4 |
| 2015 | 8 | 2 | 3 | 4 | 3 | | 1 | | | | | 1 |
| 2016 | 16 | | 1 | 1 | 12 | 3 | | 5 | | 2 | | |
| 2017 | 2 | 6 | 68 | 18 | 8 | 1 | | 10 | 2 | 1 | 13 | 25 |
| 2018 | 18 | 2 | | 1 | 1 | | | | | | 3 | |
| 2019 | | | | | | | 1 | | | | | |
| 2020 | | 2 | | | | 1 | | 1 | 1 | 7 | | |
|
From: Juergen H. <jue...@un...> - 2016-06-16 20:58:25
|
Hello Eduardo, Thank you for your initiative re backuppc on windows. I suggest that you use the ML at <bac...@li...> for communicating with other people who are interested in this issue. That list is still the platform for discussion on backuppc development. I am automatically receiving your comments to github since I opened the issue on windows integration, but I am afraid that some other contributors may not be aware of your comments/contributions. @members of the backuppc-devel list: when the switch to github was decided, we underestimated the potential for bumpy communication between the ML at SourceForge and state-change information at github. I see 2 solutions: (1) do as I did just now and ask contributors of comments in github to use the SourceForge ML for communications, and (2) find an automated solution. Is it possible to add bac...@li... to the list of recipients that automatically receive mail when a state-change of the wiki occurs? I am not sufficiently knowledgeable with github technology to put something like this in place. Juergen |
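One way to get the automated solution Juergen asks about is a small webhook relay: GitHub can POST issue and comment events to a URL, and a script at that URL can forward a summary to the list. The sketch below is only illustrative — the list address, the sendmail path, and the choice of JSON fields are assumptions, not an existing BackupPC tool.

```perl
#!/usr/bin/perl
# Hypothetical CGI relay: GitHub (Settings > Webhooks) POSTs issue/comment
# events here; the script mails a short summary to the mailing list.
use strict;
use warnings;
use JSON::PP;

read(STDIN, my $body, $ENV{CONTENT_LENGTH} // 0);
my $e = eval { decode_json($body) } || {};
my $subject = sprintf("[github] %s: %s",
    $e->{action} // 'event',
    $e->{issue}{title} // $e->{repository}{full_name} // 'unknown');
open(my $mail, '|-', '/usr/sbin/sendmail -t') or die "sendmail: $!";
print $mail "To: backuppc-devel\@lists.sourceforge.net\n",
            "Subject: $subject\n\n",
            ($e->{comment}{body} // $e->{issue}{body} // ''), "\n",
            ($e->{issue}{html_url} // ''), "\n";
close($mail);
print "Content-Type: text/plain\r\n\r\nok\n";
```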
|
From: Benjamin L. <ben...@nw...> - 2016-06-01 19:24:43
|
Hi, There has been no update in a record amount of time. This software looks abandoned on sourceforge as well as by the packaging person "Bernard Johnson" from the fedoraproject. His email bjo...@sy... is fake and unresponsive. The company at symetrix.co (not com) doesn't know him. He has completely disappeared. Is there a migration to GitHub in progress? I left a note there: https://github.com/backuppc/backuppc/issues/19 Regards, Benjamin Lefoul |
|
From: Mauro C. <mc...@mc...> - 2016-05-25 07:43:22
|
Hi, as discussed earlier, I would be glad to help porting wiki content to github. We understood there is a DB dump somewhere and, possibly, a tarball from Craig. Can I have access to either? The tarball would be preferred, but I think we should start from somewhere ASAP. A similar consideration applies to the possible integration of Craig's latest v4 CVS. Is it available? If so, either I or Lars will be glad to port it to github to provide everyone with a solid base from which to move on. Regards Mauro |
|
From: Alexander M. <mo...@me...> - 2016-05-19 11:13:17
|
On 19.05.16 12:52, Lars Tobias Skjong-Børsting wrote: > On 19/05/16 11:34, Alexander Moisseev wrote: > >> IMHO yes, but I'd stop making any changes until we get feedback from Craig and interested contributors and come to some decision. > > Do I understand you correctly if you think I shouldn't force-push master > back to the 3.x.x right now, but wait for an answer from Craig and others? > I think you should eventually, but you can do it any time before you change anything else. My point was: do not make any other changes in the repository (except the issue tracker) until we get Craig's reply on the v4 sources and the branch model is finally settled. |
|
From: Lars T. Skjong-B. <li...@re...> - 2016-05-19 09:52:17
|
On 19/05/16 11:34, Alexander Moisseev wrote: > IMHO yes, but I'd stop making any changes until we get feedback from Craig and interested contributors and come to some decision. Do I understand you correctly if you think I shouldn't force-push master back to the 3.x.x right now, but wait for an answer from Craig and others? -- Best regards, Lars Tobias |
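For context, the force-push being discussed amounts to resetting master to the last 3.x.x commit and overwriting the published branch. A rough sketch, with the commit id as a placeholder:

```sh
git checkout master
git reset --hard <last-3.x.x-commit>   # drop the v4-tarball merge locally
git push --force origin master         # rewrite the published master branch
# anyone who already pulled the merge must reset or re-clone afterwards
```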
|
From: Alexander M. <mo...@me...> - 2016-05-19 09:34:25
|
On 19.05.16 11:30, Lars Tobias Skjong-Børsting wrote: > On 19/05/16 10:04, Alexander Moisseev wrote: >> On 19.05.16 10:31, Lars Tobias Skjong-Børsting wrote: >>> >>> I'll branch out 3.x, then, and merge in 4.x to master. >>> >> Do you really think it was a good idea to merge in the `master` v3 CVS with the v4 release tarball (which is not "real" source code)? > > We don't have the real CVS tree, so it's all we have. > I believe Craig has some kind of source. Let's wait for his answer. There are make files in v3 that make substitutions in the CVS sources in order to build the release tarball. I believe v4 follows the same approach. So we shouldn't simply merge source and release code. > I'm sorry if this was a bad move. :( Do you think we should remove it? > IMHO yes, but I'd stop making any changes until we get feedback from Craig and interested contributors and come to some decision. |
|
From: Lars T. Skjong-B. <li...@re...> - 2016-05-19 08:30:45
|
On 19/05/16 10:04, Alexander Moisseev wrote: > On 19.05.16 10:31, Lars Tobias Skjong-Børsting wrote: >> >> I'll branch out 3.x, then, and merge in 4.x to master. >> > Do you really think it was a good idea to merge in the `master` v3 CVS with the v4 release tarball (which is not "real" source code)? We don't have the real CVS tree, so it's all we have. I'm sorry if this was a bad move. :( Do you think we should remove it? -- Best regards, Lars Tobias |
|
From: Alexander M. <mo...@me...> - 2016-05-19 08:04:10
|
On 19.05.16 10:31, Lars Tobias Skjong-Børsting wrote: > > I'll branch out 3.x, then, and merge in 4.x to master. > Do you really think it was a good idea to merge in the `master` v3 CVS with the v4 release tarball (which is not "real" source code)? |
|
From: Lars T. Skjong-B. <li...@re...> - 2016-05-19 07:31:35
|
Hi Craig, On 19/05/16 08:32, Craig Barratt wrote: > However, I'd prefer 4.x to be the master branch, since that will have > the most development going forward. I'll branch out 3.x, then, and merge in 4.x to master. > Also, my github user name is craigbarratt. I've added you to the organization. -- Best regards, Lars Tobias |
|
From: Alexander M. <mo...@me...> - 2016-05-19 06:48:04
|
On 19.05.16 9:32, Craig Barratt wrote: > Agreed; I misunderstood the setup. > > However, I'd prefer 4.x to be the master branch, since that will have the most development going forward. Totally agreed. Is there 4.x CVS code anywhere? Would you provide access to it for @larstobi so he could import it to the github repository? For now the v4.0.0 branch is just the latest release tarball. |
|
From: Craig B. <cba...@us...> - 2016-05-19 06:32:47
|
Agreed; I misunderstood the setup. However, I'd prefer 4.x to be the master branch, since that will have the most development going forward. Also, my github user name is craigbarratt. Craig On Wed, May 18, 2016 at 11:16 PM, Alexander Moisseev <mo...@me...> wrote: > On 18.05.16 21:03, David Cramblett wrote: > > > > That looks like 3.x, while rsync-bpc and backuppc-xs are 4.x. Maybe we > should have backuppc3 for 3.x, and backuppc is 4.x? > > > No, it will break workflow. We definitely should keep both as branches of > the same project backuppc/backuppc. > IMHO we should keep rsync-bpc and backuppc-xs as separate projects (at > least for now). > > We can change branch names as Craig suggested (but IMHO it is > questionable), but we should do it ASAP. Renaming a branch is merely one > command, but it is really deleting a branch, followed by pushing a new one > with a new name. So it has implications. For example, if you have opened > pull requests for the branch, the pull requests will be removed. > > What should be done ASAP: > 1. Finally choose branching model and branch names. > 2. Import 4.x CVS code (if it exists). |
|
From: Alexander M. <mo...@me...> - 2016-05-19 06:16:31
|
On 18.05.16 21:03, David Cramblett wrote: > > That looks like 3.x, while rsync-bpc and backuppc-xs are 4.x. Maybe we should have backuppc3 for 3.x, and backuppc is 4.x? > No, it will break workflow. We definitely should keep both as branches of the same project backuppc/backuppc. IMHO we should keep rsync-bpc and backuppc-xs as separate projects (at least for now). We can change branch names as Craig suggested (but IMHO it is questionable), but we should do it ASAP. Renaming a branch is merely one command, but it is really deleting a branch, followed by pushing a new one with a new name. So it has implications. For example, if you have opened pull requests for the branch, the pull requests will be removed. What should be done ASAP: 1. Finally choose branching model and branch names. 2. Import 4.x CVS code (if it exists). |
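Alexander's point that a remote "rename" is really a delete plus a push under the new name can be made concrete; with hypothetical branch names it looks roughly like this:

```sh
git branch -m 4.x master        # rename the branch locally
git push origin master          # publish it under the new name
git push origin --delete 4.x    # remove the old remote branch
# pull requests opened against the old branch name are lost when it is deleted
```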
|
From: Rick L <ri...@li...> - 2016-05-18 18:27:05
|
Excellent news. +1 to you David C. On 05/18/2016 02:03 PM, David Cramblett wrote: > > David & team, > > Thanks for reaching out and gathering interested parties. > > Yes, as Stephen passed along, unfortunately my day job has kept me > really busy over the last couple of years. So I simply haven't had > any time to devote to BackupPC, and I haven't even had time to keep up > with the mail lists either. Most of 4.0 development happened when I > was able to take a few months off between jobs in 2013. > > It would be great to get help in moving things forward again. Having > some people contribute in various areas would definitely help, and > motivate me to find some time to work on it again. The current 4.0 is > generally pretty stable and works well (albeit I haven't been watching > the mail list). As you noted, FTP is the one area that's not > finished. The original FTP code was contributed and it has some > significant issues. I suspect FTP is rarely used - we could simply > drop it for 4.0, or I can revisit my latest code (not yet checked in) > to see how much more there is to do? > > So, yes, help with the project would be great. > > Amazingly, I've never used github - just goes to show how out of touch > I've become :(. But I agree moving all the development to github > makes sense. Actually, it looks like that's started to happen, eg: > https://github.com/backuppc. That looks like 3.x, while rsync-bpc and > backuppc-xs are 4.x. Maybe we should have backuppc3 for 3.x, and > backuppc is 4.x? > > In terms of contributions, all kinds can be very helpful (feel free to > augment this and repost to the user or developer lists if you want): > > * perusing the mail lists and finding bug reports that can be > concisely documented and/or replicated. There are also feature > requests that are worth keeping track of. I last did this over > the xmas holidays in 2014/15, which led to the last release of > 3.3.1 in Jan 2015. So I'm almost 1.5 years behind on doing this, > and as more time goes by I feel like I need even more time to catch up > * actually fixing and patching bugs > * testing (given the big user base, it became more nerve wracking > doing new releases given how little time I had left to do testing...) > * releases (actually generating releases, posting announcements etc) > * writing code for new releases and features (mainly perl, but 4.0 > has significant portions of C code) > * updating rsync-bpc to be based on the most recent rsync; can be > done mostly by applying diffs, but some hand work required, > especially if rsync has changed a lot. > * refreshing cygwin-rsyncd for 3.x users > * wiki needs a complete restart (somewhere I have tarballs of the > old wiki, but it was relatively disorganized) > * documentation > * user support (there's been a great group of people selflessly > providing user support, which has been one of the most important > parts of the project's success, and one of the main things that > kept me motivated to keep working on the project) > > The more people can do the better! I'd be happy to try to commit the > time to finish off 4.0 (help there would be great; even just testing > help), and I would definitely be interested in reviewing patches and > other code submissions (eg: for every release I carefully look at the > diffs from the prior release). > > Craig > > On Tue, 17 May 2016, David Cramblett wrote: > > Craig, > > I'm writing to you on behalf of the active users of the BackupPC > mailing list. 
We have been tossing around the idea of helping out > with the BackupPC > project for the last year or so. In the recent days, critical mass > has finally been achieved and we have some volunteers stepping up. > > We have been discussing how we could help and what's needed to > keep BackupPC supported with future operating system updates, > third party software > libraries, etc. As well as getting v4 out of beta, it works quite > well (and I have seen a very nice performance boost!). > > However, we're all in agreement, the first order of business was > to reach out to you and see if you're even interested in the help? > > Stephen Joyce shared with us that he had communicated with you > recently offering some help. Stephen said you had let him know you > were still very > interested in your project, but that time was tight right now with > other obligations. He also said he has access to the BackupPC SF > site and was > hoping to apply some patches he was aware of, and do something > about the wiki being shut down by SF. He also let us know that you > were holding off > on v4 mostly due to an implementation of the FTP client. > > With all that in mind: > > 1) Are you interested in some assistance with the BackupPC project? > > 2) If yes. We were also interested in porting the project over to > GitHub. GitHub provides us with some more modern tools and ease of > collaboration, > I'm betting your familiar. Would this be something you would be > okay with? We would expect that you would be a project > administrator on GitHub if > the project was migrated. > > Most important to all of us, is that anything we do, is with your > blessing. We appreciate all the work you have put into BackupPC > over the years, > and the benefits we have received from that effort. If you are > interested in accepting our assistance, we are happy to provide > assistance under > whatever format you would like. If you would like to continue to > control releases, features, etc., we would be happy to provide > help under whatever > model you prefer. We by no means are attempting to hijack the > BackupPC project, nor do we want a bunch of BackupPC forks to pop > up that don't meet > the high quality standards you have met in the past. > > I have cc'd Stephen Joyce, Mauro Condarelli, and Lars Tobias > Skjong-Børsting on this communication. Stephen obviously has been > in communication with > you about the project and has access to the SF site. Mauro and > Lars have stepped up to help get the community of developers > organized and on the > right track in the event that your amenable to the assistance. > > I hope this message finds you well and we look forward to your > response, > > Regards, > > David > > > -- > David Cramblett > > > -- > David Cramblett > > > ------------------------------------------------------------------------------ > Mobile security can be enabling, not merely restricting. Employees who > bring their own devices (BYOD) to work are irked by the imposition of MDM > restrictions. Mobile Device Manager Plus allows you to control only the > apps on BYO-devices by containerizing them, leaving personal data untouched! > https://ad.doubleclick.net/ddm/clk/304595813;131938128;j > > > _______________________________________________ > BackupPC-devel mailing list > Bac...@li... > List: https://lists.sourceforge.net/lists/listinfo/backuppc-devel > Wiki: http://backuppc.wiki.sourceforge.net > Project: http://backuppc.sourceforge.net/ |
|
From: David C. <da...@fu...> - 2016-05-18 18:04:01
|
David & team, Thanks for reaching out and gathering interested parties. Yes, as Stephen passed along, unfortunately my day job has kept me really busy over the last couple of years. So I simply haven't had any time to devote to BackupPC, and I haven't even had time to keep up with the mail lists either. Most of 4.0 development happened when I was able to take a few months off between jobs in 2013. It would be great to get help in moving things forward again. Having some people contribute in various areas would definitely help, and motivate me to find some time to work on it again. The current 4.0 is generally pretty stable and works well (albeit I haven't been watching the mail list). As you noted, FTP is the one area that's not finished. The original FTP code was contributed and it has some significant issues. I suspect FTP is rarely used - we could simply drop it for 4.0, or I can revisit my latest code (not yet checked in) to see how much more there is to do? So, yes, help with the project would be great. Amazingly, I've never used github - just goes to show how out of touch I've become :(. But I agree moving all the development to github makes sense. Actually, it looks like that's started to happen, eg: https://github.com/backuppc. That looks like 3.x, while rsync-bpc and backuppc-xs are 4.x. Maybe we should have backuppc3 for 3.x, and backuppc is 4.x? In terms of contributions, all kinds can be very helpful (feel free to augment this and repost to the user or developer lists if you want): - perusing the mail lists and finding bug reports that can be concisely documented and/or replicated. There are also feature requests that are worth keeping track of. I last did this over the xmas holidays in 2014/15, which led to the last release of 3.3.1 in Jan 2015. So I'm almost 1.5 years behind on doing this, and as more time goes by I feel like I need even more time to catch up - actually fixing and patching bugs - testing (given the big user base, it became more nerve wracking doing new releases given how little time I had left to do testing...) - releases (actually generating releases, posting announcements etc) - writing code for new releases and features (mainly perl, but 4.0 has significant portions of C code) - updating rsync-bpc to be based on the most recent rsync; can be done mostly by applying diffs, but some hand work required, especially if rsync has changed a lot. - refreshing cygwin-rsyncd for 3.x users - wiki needs a complete restart (somewhere I have tarballs of the old wiki, but it was relatively disorganized) - documentation - user support (there's been a great group of people selflessly providing user support, which has been one of the most important parts of the project's success, and one of the main things that kept me motivated to keep working on the project) The more people can do the better! I'd be happy to try to commit the time to finish off 4.0 (help there would be great; even just testing help), and I would definitely be interested in reviewing patches and other code submissions (eg: for every release I carefully look at the diffs from the prior release). Craig On Tue, 17 May 2016, David Cramblett wrote: Craig, > > I'm writing to you on behalf of the active users of the BackupPC mailing > list. We have been tossing around the idea of helping out with the BackupPC > project for the last year or so. In the recent days, critical mass has > finally been achieved and we have some volunteers stepping up. 
> > We have been discussing how we could help and what's needed to keep > BackupPC supported with future operating system updates, third party > software > libraries, etc. As well as getting v4 out of beta, it works quite well > (and I have seen a very nice performance boost!). > > However, we're all in agreement, the first order of business was to reach > out to you and see if you're even interested in the help? > > Stephen Joyce shared with us that he had communicated with you recently > offering some help. Stephen said you had let him know you were still very > interested in your project, but that time was tight right now with other > obligations. He also said he has access to the BackupPC SF site and was > hoping to apply some patches he was aware of, and do something about the > wiki being shut down by SF. He also let us know that you were holding off > on v4 mostly due to an implementation of the FTP client. > > With all that in mind: > > 1) Are you interested in some assistance with the BackupPC project? > > 2) If yes. We were also interested in porting the project over to GitHub. > GitHub provides us with some more modern tools and ease of collaboration, > I'm betting your familiar. Would this be something you would be okay with? > We would expect that you would be a project administrator on GitHub if > the project was migrated. > > Most important to all of us, is that anything we do, is with your > blessing. We appreciate all the work you have put into BackupPC over the > years, > and the benefits we have received from that effort. If you are interested > in accepting our assistance, we are happy to provide assistance under > whatever format you would like. If you would like to continue to control > releases, features, etc., we would be happy to provide help under whatever > model you prefer. We by no means are attempting to hijack the BackupPC > project, nor do we want a bunch of BackupPC forks to pop up that don't meet > the high quality standards you have met in the past. > > I have cc'd Stephen Joyce, Mauro Condarelli, and Lars Tobias > Skjong-Børsting on this communication. Stephen obviously has been in > communication with > you about the project and has access to the SF site. Mauro and Lars have > stepped up to help get the community of developers organized and on the > right track in the event that your amenable to the assistance. > > I hope this message finds you well and we look forward to your response, > > Regards, > > David > > > -- > David Cramblett -- David Cramblett |
|
From: Ludovic D. <ld...@de...> - 2016-04-15 13:45:50
|
Hi ! (Debian bug URL https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=820963 ) Yes, a full fix won't be easy because smbclient output has changed since samba 4.2. For example in 4.1.x the following code was present in samba/source3/client/clitar.c in do_atar(): if (tar_noisy) { DEBUG(0, ("%12.0f (%7.1f kb/s) %s\n", (double)finfo.size, finfo.size / MAX(0.001, (1.024*this_time)), finfo.name)); } clitar.c has been rewritten in 4.2+... -- Ludovic Drolez. http://www.aopensource.com - The Android Open Source Portal http://www.drolez.com - Personal site - Linux and Free Software |
|
From: Bernd S. <si...@ne...> - 2016-03-05 14:24:41
|
Hello, I am currently building BackupPC 3.3.1 rpms for openSuSE and SLES12 using build.opensuse.org... Therefore, I would like to fix a warning from rpmlint... rpmlint complains about an incorrect address for the Free Software Foundation, Inc. Indeed - the current address differs from the address in the files.. I would like to provide a patch - and would be glad if it is accepted. How are patches submitted? Is it ok to submit the output of a 'git status' command? Best Regards, Bernd |
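On how patches are usually submitted: 'git status' only lists which files changed, so a diff or a formatted patch is more useful. One possible workflow, assuming the GitHub repository discussed elsewhere in this thread:

```sh
git clone https://github.com/backuppc/backuppc.git
cd backuppc
git checkout -b fix-fsf-address
# edit the files carrying the outdated FSF address, then:
git commit -a -m "Update FSF address to current one (rpmlint warning)"
git format-patch origin/master   # writes 0001-*.patch, suitable for mailing
# alternatively, push the branch to a personal fork and open a pull request
```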
|
From: Adam G. <mai...@we...> - 2016-01-18 02:54:33
|
OK, so I've spent forever (years) suffering from this bug, but I've spent a bit more time on it, and might have some insight. Firstly, this seems to happen semi-randomly during backups, so eventually the backup will complete (usually), though it seems related to either the backup client and/or the number of files that the client has. I get logs like this in my /var/log/messages:

Jan 17 23:34:31 keep kernel: [11513620.906447] rsync_bpc[27253]: segfault at 7fe37aefd428 ip 00000000004473af sp 00007ffd49086e40 error 4 in rsync_bpc[400000+75000]
Jan 18 00:35:01 keep kernel: [11517256.388512] rsync_bpc[27472]: segfault at 7fcaa53f3428 ip 00000000004473af sp 00007ffede0afdd0 error 4 in rsync_bpc[400000+75000]
Jan 18 01:05:12 keep kernel: [11519069.903776] rsync_bpc[27607]: segfault at 7f747bbf5428 ip 00000000004473af sp 00007fffe8b03b40 error 4 in rsync_bpc[400000+75000]
Jan 18 01:35:09 keep kernel: [11520869.888899] rsync_bpc[27860]: segfault at 7f7a06240428 ip 00000000004473af sp 00007ffebad15f60 error 4 in rsync_bpc[400000+75000]
Jan 18 02:04:56 keep kernel: [11522659.795284] rsync_bpc[28086]: segfault at 7f0088f10428 ip 00000000004473af sp 00007ffe07298520 error 4 in rsync_bpc[400000+75000]
Jan 18 02:40:48 keep kernel: [11524814.507776] rsync_bpc[28340]: segfault at 7f048c215428 ip 00000000004473af sp 00007fff569b9e20 error 4 in rsync_bpc[400000+75000]
Jan 18 03:04:41 keep kernel: [11526249.846662] rsync_bpc[28562]: segfault at 7fb36e61e428 ip 00000000004473af sp 00007ffdcb018d20 error 4 in rsync_bpc[400000+75000]
Jan 18 03:40:53 keep kernel: [11528425.088184] rsync_bpc[28795]: segfault at 7f1cb1b9b428 ip 00000000004473af sp 00007fff75d291a0 error 4 in rsync_bpc[400000+75000]
Jan 18 04:05:13 keep kernel: [11529887.025178] rsync_bpc[29045]: segfault at 7f94898d8428 ip 00000000004473af sp 00007fff3d6a1dd0 error 4 in rsync_bpc[400000+75000]
Jan 18 04:37:06 keep kernel: [11531803.020965] rsync_bpc[29275]: segfault at 7f83b84b3428 ip 00000000004473af sp 00007ffcb8962ed0 error 4 in rsync_bpc[400000+75000]
Jan 18 05:11:09 keep kernel: [11533848.516550] rsync_bpc[29531]: segfault at 7f18f296a428 ip 00000000004473af sp 00007ffe3f582680 error 4 in rsync_bpc[400000+75000]
Jan 18 05:47:10 keep kernel: [11536013.450327] rsync_bpc[29921]: segfault at 7fd986392428 ip 00000000004473af sp 00007ffe07aacb60 error 4 in rsync_bpc[400000+75000]
Jan 18 06:04:47 keep kernel: [11537071.297055] rsync_bpc[30127]: segfault at 7f0dd13f3428 ip 00000000004473af sp 00007fff977dd350 error 4 in rsync_bpc[400000+75000]
Jan 18 13:15:05 keep kernel: [11562928.034694] rsync_bpc[1224]: segfault at 7f6923390428 ip 00000000004473af sp 00007fff7d94c8f0 error 4 in rsync_bpc[400000+75000]
Jan 18 13:30:57 keep kernel: [11563881.316870] rsync_bpc[1322]: segfault at 7f8a9f83b428 ip 00000000004473af sp 00007fff9b9d9850 error 4 in rsync_bpc[400000+75000]

I've found an informative post: http://stackoverflow.com/questions/2549214/interpreting-segfault-messages and this pointed me to this command:

addr2line -e /usr/local/bin/rsync_bpc -fCi 0x00000000004473af
bpc_attrib_fileCopyOpt
/usr/src/rsync-bpc-3.0.9.3/backuppc/bpc_attrib.c:284

Looking at the file I see this function:

273 /*
274  * Copy all the attributes from fileSrc to fileDest. fileDest should already have a
275  * valid allocated fileName and allocated xattr hash. The fileDest xattr hash is
276  * emptied before the copy, meaning it is over written.
277  *
278  * If overwriteEmptyDigest == 0, an empty digest in fileSrc will not overwrite fileDest.
279  */
280 void bpc_attrib_fileCopyOpt(bpc_attrib_file *fileDest, bpc_attrib_file *fileSrc, int overwriteEmptyDigest)
281 {
282     if ( fileDest == fileSrc ) return;
283
284     fileDest->type = fileSrc->type;
285     fileDest->compress = fileSrc->compress;
286     fileDest->mode = fileSrc->mode;
287     fileDest->isTemp = fileSrc->isTemp;
288     fileDest->uid = fileSrc->uid;
289     fileDest->gid = fileSrc->gid;
290     fileDest->nlinks = fileSrc->nlinks;
291     fileDest->mtime = fileSrc->mtime;
292     fileDest->size = fileSrc->size;
293     fileDest->inode = fileSrc->inode;

Looking at line 284 we see that this is the first time we try to read from the object fileSrc. I suspect that somehow fileSrc is either invalid, doesn't exist, etc, and therefore that is why we are getting this error. I'm guessing that it has some value, as otherwise we should see a much smaller number in the "at" value from the logs (similar to the OP in the stackoverflow message). So, is there a simple way to make sure fileSrc is "valid" before trying to read it and potentially causing a crash? I'd like to add some extra logs/debug before the crash to try and find the cause and hopefully fix it. There are only two places where this function is called:

/*
 * Copy all the attributes from fileSrc to fileDest. fileDest should already have a
 * valid allocated fileName and allocated xattr hash. The fileDest xattr hash is
 * emptied before the copy, meaning it is over written.
 */
void bpc_attrib_fileCopy(bpc_attrib_file *fileDest, bpc_attrib_file *fileSrc)
{
    if ( fileDest == fileSrc ) return;
    bpc_attrib_fileCopyOpt(fileDest, fileSrc, 1);
}

This is just passing the exact same variable it received, so we will need to trace it back another step... I guess if there is an easy method to test if it is valid, I can add that test to each function before calling bpc_attrib_fileCopy and hopefully eventually work out what is wrong with it. I guess my C skills are quite rusty (non-existent really), so if anyone is able to assist, I'd be very happy, even if it is just a clue on the right way to debug/find the problem. Regards, Adam -- Adam Goryachev Website Managers www.websitemanagers.com.au |
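A minimal way to add the extra checking Adam asks about is a guard in the wrapper shown above. This is only a sketch against the quoted bpc_attrib.c, not a tested fix: it can only catch a NULL fileSrc, whereas a pointer to freed or corrupted memory would still crash, so its main value is narrowing down which caller passes a bad pointer.

```c
#include <stdio.h>

/* Hypothetical diagnostic wrapper; types and the called function come from
 * the quoted bpc_attrib.c/bpc_attrib.h, which this sketch assumes is included. */
void bpc_attrib_fileCopy(bpc_attrib_file *fileDest, bpc_attrib_file *fileSrc)
{
    if ( !fileDest || !fileSrc ) {
        fprintf(stderr, "bpc_attrib_fileCopy: BOTCH: dest=%p src=%p\n",
                (void *)fileDest, (void *)fileSrc);
        return;
    }
    if ( fileDest == fileSrc ) return;
    bpc_attrib_fileCopyOpt(fileDest, fileSrc, 1);
}
```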
|
From: Les M. <les...@gm...> - 2016-01-12 15:23:46
|
On Tue, Jan 12, 2016 at 2:40 AM, François <ai...@gm...> wrote:
> On Tuesday, 12 January 2016, Adam Goryachev
> <mai...@we...> wrote:
>
>> However, lets say a disaster strikes, isn't it better to
>> recover the first 300MB of the file rather than nothing?
>
> I don't agree. 1Gb is most likely a binary file which would be corrupted if
> a part is missing. But anyway, I kind of like the idea of partial transfer.
>
Rsync itself does have the option to save partial transfers for
restarts so it might not be that hard to add, but then you add
overhead to look for the fragments and clean them up.
--
Les Mikesell
les...@gm...
|
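For reference, the rsync feature Les mentions is exposed through the --partial/--partial-dir options; the paths below are only illustrative:

```sh
rsync -a --partial --partial-dir=.rsync-partial /data/bigfile.img backup:/srv/dest/
# --partial keeps a partially transferred file instead of deleting it on interrupt;
# --partial-dir stores the fragment out of the way so the next run can resume from it.
```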
|
From: François <ai...@gm...> - 2016-01-12 08:40:20
|
On Tuesday, 12 January 2016, Adam Goryachev < mai...@we...> wrote: > However, lets say a disaster strikes, isn't it better to > recover the first 300MB of the file rather than nothing? I don't agree. 1Gb is most likely a binary file which would be corrupted if a part is missing. But anyway, I kind of like the idea of partial transfer. -- -- François |
|
From: Les M. <les...@gm...> - 2016-01-11 23:50:20
|
On Mon, Jan 11, 2016 at 3:59 PM, Michael <m.b...@im...> wrote:
> Of course, the main problem is the small amount of memory. Looking
> in the dmesg logs, I could spot regularly OOM messages, the kernel killing
> the backuppc_dump process, etc. Now it is a bit unfair of me blaming BPC
> when actually the main culprit is the lack of memory. But the thing is
> that BPC is quite unhelpful in these situations.
Not sure anything is helpful - or can be - in the OOM-killer
situation. The process in question doesn't get much of a chance to
tell you what happened.
> Server logs are mostly
> useless, no timestamps, and there is no attempt to restart again, but
> instead BPC goes into a long process of counting references, etc, meaning
> most of the server time is spent in (apparently) unproductive tasks.
On a suitable platform, Linux tends to be reliable so it's not
surprising that programmers don't spend a lot of time dealing with
cases that shouldn't happen.
> Initially BPC was a "mere" wrapper around rsync. First duplicating a
> complete hierarchy with hard-links, then rsync'ing over it. It had the
> advantage of simplicity but is very slow to maintain, and impossible to
> duplicate.
No, v3 has rsync completely implemented in perl so that it can
maintain the archive copies compressed while chatting block checksums
against a native remote rsync reading uncompressed files. And it
only handled the older rsync protocol that required the entire
directory to be transferred first and held in RAM for the duration of
the run. Given the way perl stores variables, this isn't pretty, but
then again RAM is cheap.
> Now the trend in 4.0alpha was to move to a custom C
> implementation of rsync, where hierarchy only stores attrib files. I
> think that we can improve the maintenance phase further (ref counting,
> backup deletion...) by flattening this structure into a single linear
> file, and by listing once for all the references in a given backup,
> possibly with caching of references per directory. Directory entries
> would be more like git objects, attaching a name to a reference along
> with some metadata.
Git does some interesting things - but I'm not all that convinced
checking in a whole machine would be a win compared to what rsync
does.
> This means integrating further with the inner
> working of rsync. It would be fully compliant with rsync from the client
> side. But refcounting and backup deletion should then be equivalent
> to sorting and finding duplicate/unique entries, which can be very
> efficient. Even on my Lacie sorting a 600k-line file with 32B random
> hash entries takes only a couple seconds.
That kind of boils down to a question of how much work you want to do
to save a few dollars worth of RAM. Or even another box to do the
work for you over NFS or iscsi to your storage server.
> - Client-side sync
>
> Sure, this must be an optional feature, and I agree this is not the
> priority. Many clients will still simply run rsyncd or rsync/ssh. But
> the client-side sync would allow to detect hard links more efficiently.
> It will also decrease memory usage on the server (see rsync faq). Then
> it opens up a whole new set of optimization, delta-diff on multiple
> files...
I've always considered it one of the main attractions of BPC that it
does not require any client side setup beyond ssh keys which you
normally need anyway.
> *** Regarding writing in C
>
> Ok, I'm not a perl fan. But I agree, it is useful for stuff where
> performance does not matter, for website interface, etc. But I would
> rewrite in C the ref counting part and similar.
It's not 'performance' that is bad for many/most things where you are
dealing with network and disk activity. It just needs (much) more
RAM. And on most platforms that is easy to accommodate. That's not
to say it can't be improved, but you are going to trade expensive
human time to save a bit of cheap hardware.
--
Les Mikesell
les...@gm...
|
|
From: Adam G. <mai...@we...> - 2016-01-11 23:22:49
|
On 12/01/16 08:59, Michael wrote:
> Hi Stephen,
> Hi all,
>
> Thanks for your feedback and sharing your experience.
> Here some clarification on my side.
>
>
> *** Regarding BPC being not reliable.
>
> I don't deny that BPC works very well in many situations, and can sustain
> heavy load etc. But in my case it was not the flawless setup I imagined at
> first. Of course, the main problem is the small amount of memory. Looking
> in the dmesg logs, I could spot regularly OOM messages, the kernel killing
> the backuppc_dump process, etc. Now it is a bit unfair of me blaming BPC
> when actually the main culprit is the lack of memory. But the thing is
> that BPC is quite unhelpful in these situations. Server logs are mostly
> useless, no timestamps, and there is no attempt to restart again, but
> instead BPC goes into a long process of counting references, etc, meaning
> most of the server time is spent in (apparently) unproductive tasks.
> Again, the main culprit is the platform, and actually BPC never lost any
> data (afaik), and always recovered somehow. Still, there are some traces
> of corruption in the db (like warning about reference being equal to -1
> instead of 0), indicating that maybe BPC is not atomic.
I think adding timestamps to logs is not a significant problem, and
shouldn't be difficult to do. However, which log entries deserve a
timestamp? Every single one? Let's assume a timestamp in this format
20160112-094440 ("YYYYMMDD-HHMMSS "), that is just 16 bytes per line,
plus some small overhead to lookup and format the data. For a logfile of
3 million files, that's 48MB of timestamps you just added to an already
massive log file. Maybe we could add a timestamp every minute or every x
minutes? However, then you make the log harder to parse because each
line is not consistent.... Maybe start a thread on this specific issue,
and lets discuss the options, and see what most people think is the most
useful.
> *** Regarding the 256MB requirement.
>
> Admittedly this is very demanding and very far off most feedbacks I've
> seen on the net. Stephen's setup of a recycled Dell PE 1950 (8 cores, 16
> GB RAM) seems more typical than my poor lacie-cloudbox with 256MB. But
> when I monitor memory usage, for instance when doing a full backup of
> 600k-file client, BPC dump/rsync processes consume consistently around
> 100MB memory (htop/smem). Looking at rsync page, they say that rsync 3.0+
> should consume 100bytes/file, so a total of 60MB for rsync. So I don't see
> any blocking point why BPC would not fit in a 256-MB memory budget. Of
> course it will be slower, but it must work. This WE I again spent some
> time tuning down the Lacie-Cloudbox, stripping away all useless processes,
> like those hungry python stuff. Now, in idle, the Lacie has 200MB free
> physical + 200MB free swap space, and in that setup BPC worked for 2 days
> w/o crash doing a full 55GB, 650k files backup, in 2 hours, 7.5MB/s
> (almost no changes, hence the very high speed of course). For now I
> disabled all my machines but a few, and will enable the remaining ones one
> by one. I have good hope that it will work again. Now, my wishes would be
> to restore some other services, but this will likely require increasing
> the swap space.
I don't think main development of BPC should be setup around what is
essentially an embedded platform. However, if there is some random piece
of BPC which is allocating memory where it isn't needed, then definitely
that can be looked at. So far, what I have seen BPC failing on is a
single directory with a large number of files (ie, 977386 currently
which has not succeeded in some time, unfortunately, the application
requires all files in a single directory). However, scaling the
requirements with the hardware is not a bad goal, even better if minimal
hardware can backup any target with the only effect being a longer time
to complete. Do we *really* need the entire list of files in RAM? Isn't
that the point of the newer rsync which doesn't need to pre-load the
entire list of files? Can't we just process "one file at a time" with a
look-ahead of 1000 files (seems to be what rsync does already)?
I expect this type of work to be a lot more complicated/involved. Don't
expect a lot of help, as few people are going to be interested in this
goal. Developers tend to scratch their own itches...
> *** Regarding BPC being slow
>
> I only give BPC 256MB, so I shouldn't expect too much regarding performance.
> That I fully agree. However when I say it is slow, I mean it is slow even
> if I take into account that fact. Transfer speed is ok-ish; it uses rsync
> at its best, which requires some heavy processing sometimes. But I don't
> understand why it needs so much time for the remaining tasks (ref
> counting, etc). I'm actually convinced (perhaps naively) that this can be
> significantly improved. See further down.
I agree, I think v4 is doing refcounts that are not needed. I saw a
email recently (last few months) noting that v4 will refcount *all*
backups for the host, instead of only the backup that was just
completed. The current host I'm working on takes hours just for the
refcnt after a backup, this also likely involves a LOT of random I/O as
well.
> *** Regarding "trashing" rsync
> + @kosowsky.org about designing a totally new backup program
>
> My statement was... too brutal I guess ;-) I fully agree with Stephen's
> comment. And I don't want to create a new program from scratch. rsync is
> one of the best open-source sync program. Unison and duplicity are
> basically using rsync internally. I do think however that BPC can be
> significantly improved.
I think what I'd like to see here is the ability to add some
"intelligence" to the client side. Whether we need a BPC client or not,
I'm not sure, but currently BPC doesn't seem to "continue" a backup
well. Some cloud sync apps seem better at doing small incremental
uploads which eventually will get a consistent confirmed backup. This
includes backing up really large single files, BPC will discard and
re-start the file, instead of knowing that "this" is only half the file,
and that it should continue on the next backup.
1) Consider a file 1GB in size, on the first time BPC sees the file, it
starts a full transfer and manages to download 300MB before a network
hiccup, or timeout happens. BPC can add the file to the pool, and save
the file into the partial backup, marking the file as incomplete.
However, lets say a disaster strikes, isn't it better to recover the
first 300MB of the file rather than nothing?
2) In the scenario where BPC has a complete/valid backup of this 1GB file,
but it has changed. BPC/rsync starts to transfer the file, we complete
the changes in the first 300MB of the file before the same network
hiccup/timeout. Again, why not keep the file, and mark as incomplete?
Next time, rsync will quickly skip the first 300MB, and continue the
backup of the rest of the file. In a disaster, you have the choice to
restore the incomplete file from the partial backup, or the complete
file from the previous backup, or both and then forensically
examine/deal with the differences to potentially recover a bunch of data
you may not otherwise have had access to.
> - Flatten the backup hierarchy
>
> Initially BPC was a "mere" wrapper around rsync. First duplicating a
> complete hierarchy with hard-links, then rsync'ing over it. It had the
> advantage of simplicity but is very slow to maintain, and impossible to
> duplicate. Now the trend in 4.0alpha was to move to a custom C
> implementation of rsync, where hierarchy only stores attrib files. I
> think that we can improve the maintenance phase further (ref counting,
> backup deletion...) by flattening this structure into a single linear
> file, and by listing once for all the references in a given backup,
> possibly with caching of references per directory. Directory entries
> would be more like git objects, attaching a name to a reference along
> with some metadata. This means integrating further with the inner
> working of rsync. It would be fully compliant with rsync from the client
> side. But refcounting and backup deletion should then be equivalent
> to sorting and finding duplicate/unique entries, which can be very
> efficient. Even on my Lacie sorting a 600k-line file with 32B random
> hash entries takes only a couple seconds.
Wouldn't that require loading all 600k lines into memory? What if you
had 100 million entries or 100 billion? I think by the time you get to
looking at that, you are better off using a proper DB to store that
data, they are much better designed to handle sorting/random access of
data that some flat text file.
This might be something better looked at in BPC v5, as it's likely to be
a fairly large architectural change. I'd need to read a lot more about
the v4 specific on-disk formats to comment further...
> - Client-side sync
>
> Sure, this must be an optional feature, and I agree this is not the
> priority. Many clients will still simply run rsyncd or rsync/ssh. But
> the client-side sync would allow to detect hard links more efficiently.
> It will also decrease memory usage on the server (see rsync faq). Then
> it opens up a whole new set of optimization, delta-diff on multiple
> files...
Yes, also, it mostly works well as simply "another" BPC protocol that
can sit alongside tar/rsync/smb/etc... However, finding the developers
to work on this, and then maintain it in the long term? A *nix client
may not be so difficult, but a windows client might be more useful but
harder.... A definite project all by itself!
> *** Regarding writing in C
>
> Ok, I'm not a perl fan. But I agree, it is useful for stuff where
> performance does not matter, for website interface, etc. But I would
> rewrite in C the ref counting part and similar.
I suppose the question is how much performance improvement will this get
you? It is possible to embed C within perl, and possible to pre-compile
a perl script into a standalone executable. So certainly re-writing
sections in C is not impossible. For example, you want to re-write the
ref counting part, I suspect this is mostly disk I/O constrained rather
than CPU/code constrained, so I doubt you would see any real performance
improvement. I expect the best way to improve performance on this part
is to improve/fix the algorithm, and then translate that improvement
into the code.
eg, as was reported (by someone else that I can't recall right now) if
you have 100 backups saved for this host, and you finish a new backup
(whether completed or partial) then you redo the refcnt for all 101
backups. If this was changed to only redo the refcnt for the current
backup, then you are 100 times faster. Better than any improvement by
changing the language.
Finally, I wonder whether we will have more (or less) people able (and
actually doing it) to contribute code if it is written in perl or C? I
suspect the pragmatic process will be to keep it in perl and just patch
the things needed. Over time, some performance reliant components could
be re-written into C and embedded into the existing perl system.
Eventually, the final step could be taken to convert the remaining
portions into C.
Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au
|
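The timestamp format Adam sketches is cheap to produce from Perl; the helper below is purely illustrative (it is not BackupPC's actual logging routine):

```perl
use POSIX qw(strftime);

# Hypothetical helper: prefix each log line with "YYYYMMDD-HHMMSS " (16 bytes).
sub log_line {
    my ($fh, $msg) = @_;
    print {$fh} strftime("%Y%m%d-%H%M%S ", localtime()), $msg, "\n";
}

log_line(\*STDERR, "full backup started for directory full");
```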
|
From: Michael <m.b...@im...> - 2016-01-11 21:59:24
|
Hi Stephen, Hi all, Thanks for your feedback and sharing your experience. Here is some clarification on my side.
*** Regarding BPC being not reliable.
I don't deny that BPC works very well in many situations, and can sustain heavy load etc. But in my case it was not the flawless setup I imagined at first. Of course, the main problem is the small amount of memory. Looking in the dmesg logs, I could spot regularly OOM messages, the kernel killing the backuppc_dump process, etc. Now it is a bit unfair of me blaming BPC when actually the main culprit is the lack of memory. But the thing is that BPC is quite unhelpful in these situations. Server logs are mostly useless, no timestamps, and there is no attempt to restart again, but instead BPC goes into a long process of counting references, etc, meaning most of the server time is spent in (apparently) unproductive tasks. Again, the main culprit is the platform, and actually BPC never lost any data (afaik), and always recovered somehow. Still, there are some traces of corruption in the db (like warnings about a reference being equal to -1 instead of 0), indicating that maybe BPC is not atomic.
*** Regarding the 256MB requirement.
Admittedly this is very demanding and very far off most feedback I've seen on the net. Stephen's setup of a recycled Dell PE 1950 (8 cores, 16 GB RAM) seems more typical than my poor lacie-cloudbox with 256MB. But when I monitor memory usage, for instance when doing a full backup of a 600k-file client, BPC dump/rsync processes consume consistently around 100MB memory (htop/smem). Looking at the rsync page, they say that rsync 3.0+ should consume 100 bytes/file, so a total of 60MB for rsync. So I don't see any blocking point why BPC would not fit in a 256-MB memory budget. Of course it will be slower, but it must work. This WE I again spent some time tuning down the Lacie-Cloudbox, stripping away all useless processes, like those hungry python stuff. Now, in idle, the Lacie has 200MB free physical + 200MB free swap space, and in that setup BPC worked for 2 days w/o crash doing a full 55GB, 650k files backup, in 2 hours, 7.5MB/s (almost no changes, hence the very high speed of course). For now I disabled all my machines but a few, and will enable the remaining ones one by one. I have good hope that it will work again. Now, my wish would be to restore some other services, but this will likely require increasing the swap space.
*** Regarding BPC being slow
I only give BPC 256MB, so I shouldn't expect too much regarding performance. That I fully agree. However when I say it is slow, I mean it is slow even if I take into account that fact. Transfer speed is ok-ish; it uses rsync at its best, which requires some heavy processing sometimes. But I don't understand why it needs so much time for the remaining tasks (ref counting, etc). I'm actually convinced (perhaps naively) that this can be significantly improved. See further down.
*** Regarding "trashing" rsync + @kosowsky.org about designing a totally new backup program
My statement was... too brutal I guess ;-) I fully agree with Stephen's comment. And I don't want to create a new program from scratch. rsync is one of the best open-source sync programs. Unison and duplicity are basically using rsync internally. I do think however that BPC can be significantly improved.
- Flatten the backup hierarchy
Initially BPC was a "mere" wrapper around rsync. First duplicating a complete hierarchy with hard-links, then rsync'ing over it.
It had the advantage of simplicity but is very slow to maintain, and impossible to duplicate. Now the trend in 4.0alpha was to move to a custom C implementation of rsync, where hierarchy only stores attrib files. I think that we can improve the maintenance phase further (ref counting, backup deletion...) by flattening this structure into a single linear file, and by listing once for all the references in a given backup, possibly with caching of references per directory. Directory entries would be more like git objects, attaching a name to a reference along with some metadata. This means integrating further with the inner working of rsync. It would be fully compliant with rsync from the client side. But refcounting and backup deletion should then be equivalent to sorting and finding duplicate/unique entries, which can be very efficient. Even on my Lacie sorting a 600k-line file with 32B random hash entries takes only a couple seconds. - Client-side sync Sure, this must be an optional feature, and I agree this is not the priority. Many clients will still simply run rsyncd or rsync/ssh. But the client-side sync would allow to detect hard links more efficiently. It will also decrease memory usage on the server (see rsync faq). Then it opens up a whole new set of optimization, delta-diff on multiple files... *** Regarding writing in C Ok, I'm not a perl fan. But I agree, it is useful for stuff where performance does not matter, for website interface, etc. But I would rewrite in C the ref counting part and similar. Kind regards, Michaël |
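The sort-based reference handling Michael describes can be pictured with ordinary shell tools over flat files holding one content digest per line (the file names are made up); GNU sort does an external merge sort, so the lists do not have to fit in RAM:

```sh
sort all-backups.digests | uniq -c > pool.refcnt            # per-digest reference counts
comm -23 <(sort -u deleted-backup.digests) \
         <(sort -u remaining-backups.digests) > prunable    # digests used only by the
                                                            # backup being deleted
```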
|
From: Gandalf C. <gan...@gm...> - 2016-01-10 21:13:01
|
2016-01-10 14:12 GMT+01:00 Gandalf Corvotempesta <gan...@gm...>: > is everything ok and working as expected? Because having backups > running more than 38 hours is not normal: > http://pastebin.com/raw/nXuUF9sA > > srv1 dump is running from 16/01/09 @ 0:26 Now I can see this in the log file:
2016-01-08 19:26:48 Created directory /var/backups/backuppc/pc/x/refCnt
2016-01-08 19:26:48 full backup started for directory full
2016-01-09 21:07:03 full backup 0 complete, 4181687 files, 4181687 bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-10 00:21:40 Aborting backup up after signal INT
2016-01-10 08:10:48 incr backup started for directory full
2016-01-10 10:27:51 Got fatal error during xfer (rsync_bpc exited with benign status 24 (6144))
2016-01-10 19:23:47 full backup started for directory full
Why is a new full started again? In config I have:
$Conf{FullPeriod} = 27.97;
$Conf{FullKeepCnt} = 1;
$Conf{FullKeepCntMin} = 1;
$Conf{FullAgeMax} = 60;
$Conf{IncrPeriod} = 0.97;
$Conf{IncrKeepCntMin} = 7;
$Conf{IncrAgeMax} = 35;
$Conf{IncrKeepCnt} = 31;
$Conf{FillCycle} = 0;
$Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23];
$Conf{BlackoutPeriods} = [ { hourBegin => 7.0, hourEnd => 23.5, weekDays => [1, 2, 3, 4, 5, 6, 7], }, ];
It should create 1 full every 27.97 days and never back up during the day (blackout from 07:00 to 23:30 every day). Why was a new full started at 19:23:47? It's wrong twice: once for running in a blackout hour and once for running a second full regardless of the configured period. |
|
From: Stephen <st...@em...> - 2016-01-10 14:29:00
|
Hi Michael, I find your report useful and interesting. However my experience is a bit different from yours. On Sat, 9 Jan 2016, Michael wrote: > Hello, > > I've been testing BackupPC 4.0.0alpha3 for 1 year now, for backing up 12 > home machines, and to be honest, I'm quite unhappy with it. > To my opinion, it is completely unreliable, you have to regularly check > whether backups are done correctly, and most of the time you can't do a > backup without at least an error. And it's awfully slow. The big > advantage of BPC (besides being free and open-source of course) is to > manage backup of multiple machines in a single pool, hence saving space. I've been using v4 in production for almost 10 months, and I disagree. I have found v4 to be very stable and useful. I have two v4 servers (along with at least 3 older v3 servers) and the largest v4 install backs up 96 hosts - 534 full backups of total size 3495.74GiB (prior to pooling and compression), - 2563 incr backups of total size 18496.62GiB (prior to pooling and compression). I find v4's speed to be better than v3 and do not see any more errors than I did with v3 respecting xfers, bad files, etc. In fact, I can only find 1 instance in my logs and that's due to backing up an open file. Now with either v3 or v4, if you try to back up the wrong files you'll encounter lots of pain (and errors). Are you excluding special files? The exact list may vary somewhat on your clients' distro and your site policy. For Ubuntu at my site, I currently exclude /proc, /sys, /tmp, /var/tmp, /var/cache/openafs, /var/cache/apt/archives, /var/log/lastlog, /var/lib/mlocate, /var/spool/torque/spool, /home, /afs, /scratch*, /not_backed_up*, /vicep*, /srv*, /spare*, /media, /mnt* For select hosts, /home is backed up separately, using ClientNameAlias. My speeds vary from over 100MiB/s (when backing up a few new sparse files) to 0.51MiB/s (a tiny incremental where it took a while to determine there was nothing to do). Average for a 3GB full seems to be about 9MiB/s. Notice that I'm not saying BPC v4 doesn't have bugs. I've found a couple of them and reported one - with a possible solution - to the -devel list. But any new software is likely to have bugs and this is reflected in the fact that v4 is still alpha. You're supposed to be *testing* v4 at this point. I'm using it in production because I think its pros outweigh its cons and I'm willing to hack the code (or otherwise suffer consequences) when I encounter bugs. > My current backup pool is ~ 12 machine. 11 on Linux and 1 windows > machine. My backup machine is a 3TB Lacie-Cloudbox, with 256 MB memory. > Some of you might say that 256 MB is not enough. Actually I've even seen > posts on the net saying that you would need a server with several GB > RAM. This is just insane. A typical PC in my pool has ~600k files. > Representing each of them with a 256-bit hash, that's basically 20MB of > data to manage for each backup. Of course you need some metadata, etc, > but I see no reason why you need GB of memory to manage that. You probably don't want to hear it, but the Cloudbox probably is your bottleneck. A colleague once experimented with BackupPC on a Synology Disk Station. It worked. But it was quite slow. We eventually turned the Synology into an iSCSI target hanging off a commodity PC with 2GB of RAM and the speed increased a lot (unfortunately I didn't benchmark it as I knew it was a win-win to separate the CPU from the storage and didn't hesitate to do so). 

My current server for the 96 hosts is a recycled Dell PE 1950 (8 cores, 16 GB RAM) backing up to a Dell PE 2950 (4 cores, 8 GB RAM) via 4Gbit FC. BPC runs on the 1950; the 2950 is just a storage server. Clients are mostly on 1Gbit ethernet. Server and clients are Ubuntu.

> If I were to participate in the development of BPC, I would make more
> changes to the architecture. I think that the changes from 3.0 to 4.0
> are very promising, but not enough. The first thing to do is to trash
> rsync/rsyncd and use a client-side sync mechanism (like unison).

I think an *optional* client-side sync mechanism (like unison), implemented as an additional xfer option, is interesting - especially if an end user can manually initiate a backup or restore via a client interface (a la CrashPlan, but hopefully without the Java dependency). However, I'm bothered by a recommendation to "trash rsync/rsyncd". There's *zero* reason to eliminate those xfer methods, and I think doing so would immediately make any fork much less likely to succeed.

> Then throw away all the Perl code and rewrite it in C.

This is the direction that v4 is headed in, but I think the use of C should be judicious. There's little reason for some parts of the code (cgi, etc.) to be in C.

Just my two cents.

Cheers,
Stephen |
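
As a rough illustration of the kind of exclude list Stephen describes above (a sketch under assumptions, not his actual config - the file names and paths below are examples to adapt to your own install and site policy):

    # Hypothetical per-host config file (e.g. the host's .pl file under the
    # BackupPC config's pc/ directory). With the rsync/rsyncd xfer methods
    # these entries are passed to the client as --exclude patterns.
    $Conf{BackupFilesExclude} = {
        '/' => [
            '/proc', '/sys', '/tmp', '/var/tmp',
            '/var/cache/apt/archives', '/var/log/lastlog',
            '/var/lib/mlocate', '/media', '/mnt',
            '/home',    # backed up separately via the entry below
        ],
    };

    # A second, hypothetical host entry (e.g. myhost-home.pl) pointing at
    # the same machine via ClientNameAlias and backing up only /home:
    # $Conf{ClientNameAlias} = 'myhost';
    # $Conf{RsyncShareName}  = ['/home'];

Getting this list right is a large part of keeping xfer errors and backup times down, which is the point Stephen is making.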
|
From: Joe B. <jo...@ts...> - 2016-01-10 11:32:23
|
Hi all,

I think that we shouldn't get into exact development details so soon. Our first decision is to see whether it is viable to continue dedicating time to this project or not, and whether there are enough people and enough interest. Once we decide that, we can discuss how to go about it: the version to start from, problems, enhancements, ...

I see a clear need for it, but I would like to know Craig's position.

@Craig, you are the creator of the product, so I would like to know your position before doing anything. I understand that you have moved on to other things and have stopped dedicating time to BackupPC, but is that a permanent situation? Do you plan on continuing development in the future? Can we count on you if we decide to continue ourselves? What do you think about the future of BackupPC, and how would you continue if you had the time/resources to do so?

I would also like to set a deadline on those questions. Craig hasn't been responsive in the past months, so if, let's say, by the end of this week (15th Jan) Craig hasn't answered, I would propose starting an email vote on the version to fork. I suppose we all agree on managing the project on GitHub, so our next step would be to decide who starts that project, and after that I would go for a more or less agreed list of things to do. From there it should be easy to just follow the issues and pull requests, and we can discuss how to make some noise on social networks or similar to create awareness about the project.

I am open for discussion. Have a nice day :-)

Joe
TSolucio

On 10/01/16 at 01:25, Michael wrote:
> Hello,
>
> I've been testing BackupPC 4.0.0alpha3 for a year now, backing up 12
> home machines, and to be honest, I'm quite unhappy with it.
> In my opinion, it is completely unreliable: you have to regularly check
> whether backups are done correctly, and most of the time you can't do a
> backup without at least one error. And it's awfully slow. The big
> advantage of BPC (besides being free and open-source, of course) is that it
> manages backups of multiple machines in a single pool, hence saving space.
>
> My current backup pool is ~12 machines: 11 on Linux and 1 Windows
> machine. My backup machine is a 3TB Lacie-Cloudbox with 256 MB of memory.
> Some of you might say that 256 MB is not enough. Actually, I've even seen
> posts on the net saying that you would need a server with several GB of
> RAM. This is just insane. A typical PC in my pool has ~600k files.
> Representing each of them with a 256-bit hash, that's basically 20MB of
> data to manage for each backup. Of course you need some metadata, etc.,
> but I see no reason why you need GBs of memory to manage that.
>
> If I were to participate in the development of BPC, I would make more
> changes to the architecture. I think that the changes from 3.0 to 4.0
> are very promising, but not enough. The first thing to do is to trash
> rsync/rsyncd and use a client-side sync mechanism (like unison). Then
> throw away all the Perl code and rewrite it in C. Also add timestamps to
> log files, because debugging BPC failures without timestamps is just a f***
> nightmare. And finally make it much more reliable and resistant to
> connection issues or interruption.
>
> What I like in BPC:
> - Pooling backups from all machines in a single pool
> - Clean interface
> - Free and open-source!
>
> What I hate in BPC:
> - BPC seemingly spending more time in backupref_count, fsck or whatever
> than in doing actual file transfer.
> - Seeing "rsync: read error: Connection reset by peer" in my client log, > followed by even more fsck whatever on the server for ages. > - Not resiliant to interruption, making it very inefficient and unreliable. > - no timestamps in server logs! > - Mostly unhelpful logs. > > What I would love to see in BPC: > - Possibility to move the processing (delta) to the client. > - More efficient maintenance, less overhead processing. > - Flawless execution on a 256MB memory server. > > Some ideas: > - Use client-side sync and delta detection mechanism (like unison or > duplicity) > - Use ZFS > > My gripes and wishes for 2016 > Michaël > > > On 01/07/2016 12:25 AM, Adam Goryachev wrote: >> Hi, >> >> I've been a long time user of backuppc (couple of years at least), and >> in general it works really well and I'm mostly happy with the current >> status. However, since I upgraded to the 4.0.0alpha3 last year, I've had >> a number of minor issues (some more serious than others, like failing to >> backup unchanged files, or saying the backup has failed even though it >> succeeded). So far, I've not lost data due to any issue, and that is a >> plus, but I'm very concerned that eventually, one of these problems will >> cause actual data loss (as in, backup failed, something else caused data >> loss like failed RAID array, and then can't recover from backup). >> >> I'd like to know if there is any current person or organisation doing >> development work on BackupPC, and/or interested in doing that? I'm >> considering to fork the project, and try to debug/fix the remaining >> issues in BPC 4, but at the same time, I'm very busy, and am not really >> a "proper" coder, so working on such a large project will be difficult. >> >> With the right group of developers, this could work (as in, a small work >> load for each person, but at least better maintenance/development >> efforts). My concerns are: >> 1) Without ongoing development/maintenance, new versions of OS or perl >> or whatever will cause breakages, while manual/minor patches or config >> changes might solve these, over time it will become more of a nightmare. >> 2) The point of using a "standard" open source product is that we all >> get the advantage of experience (ie, more users finding problems), and >> improvements/patches. I could have built (probably never as good as the >> current BPC) my own solution. >> >> So, are you interested in developing/contributing? >> What is the current status/plans around BPC? >> Do you have any patches that are not applied to either v3 or v4 releases? >> Thoughts/discussions? >> >> PS, BPC is an excellent product, and I greatly appreciate all the time >> and effort that has been invested into it, I would ideally like to see >> it continue under the leadership of Craig, he has done an amazing >> development job so far. I really really do not want to see it basically >> waste away, with people moving to other products simply because it is >> unmaintained, and has a few small problems (which is where I currently >> stand, either I move to another product, or I start working harder on >> the current one). >> >> Regards, >> Adam >> > > > > ------------------------------------------------------------------------------ > Site24x7 APM Insight: Get Deep Visibility into Application Performance > APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month > Monitor end-to-end web transactions and take corrective actions now > Troubleshoot faster and improve end-user experience. Signup Now! 
|
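
On the request above for timestamps in the server logs: purely as an illustration of the kind of prefix Michael is asking for (a minimal sketch, not BackupPC's actual logging code), something along these lines in Perl would do it:

    # Illustrative sketch only. Prefix every log message with a local
    # timestamp before writing it out.
    use POSIX qw(strftime);

    sub log_msg {
        my ($msg) = @_;
        my $ts = strftime("%Y-%m-%d %H:%M:%S", localtime);
        print STDERR "$ts $msg\n";
    }

    log_msg("full backup started for directory full");
    # prints e.g.: 2016-01-10 19:23:47 full backup started for directory full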