rsyncrypto-devel Mailing List for rsync friendly file encryption
From: Shachar S. <sh...@sh...> - 2019-11-09 16:52:25

On 09/11/2019 16:37, hg...@an... wrote:
> I see three issues:
>
> Encrypting your files makes it more likely you'll totally lose them by losing the encryption keys. After a successful natural-disaster-caused recovery of rsyncrypto-encrypted files, I decided what I was storing was not sensitive enough to justify that danger.

I think that's the only point you made that I actually disagree with. Rsyncrypto was built so that you can recover your entire data set using a single private key that need not be updated. Just make sure you store that key securely (say, on a disk-on-key in a safe, or whatever), and you can lose the entire local data set and still recover everything.

You can even post the private key, password protected, to the same place you back everything else up.

> Not having an on the fly mode ...

but, on the other hand

> To where do you backup your data offsite? rsync.net is great, but relatively expensive at rest compared to object store cloud offerings like AWS S3's lower classes and Glacier, Backblaze B2, Azure's archival offerings, etc. Rclone would appear to be the equivalent program for those, with its own limitations, including greater bandwidth use.

But an on-the-fly mode would, pretty much by necessity, be incompatible with current rsync. This means that even fewer backup providers would be eligible.

When originally written, rsyncrypto was meant to be the technological side of a backup service I intended to run. On-the-fly encryption would have been acceptable there (but would have reduced its usability for everyone else). The power of open source is that the technology lives on where the business has failed, but I guess even that has its limits.

> Ah, that brings up a 4th: rsync/rsyncrypto shines for files that have small portions changing, like log files, but today for many if not most users that's minuscule compared to media files that don't get changed. Compare to people with only a few computers to back up not finding deduplication compelling, because storage and bandwidth costs and capacities have changed so much.

Like I said, the world has moved on. I accept it. It's why you're not seeing new versions coming out.

Shachar

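A minimal sketch of the "post the private key, password protected, to the same place you back everything else up" idea above. None of this is from the thread: the file names (backup.key, backup.crt), the remote path, and the decrypt invocation (which mirrors the encrypt calls quoted later in this archive, with -d and the private key in place of the certificate) are assumptions.

    # Assumption: backup.key / backup.crt are the RSA key pair used with rsyncrypto.
    # Add a passphrase to the private key before it leaves the machine.
    openssl rsa -aes256 -in backup.key -out backup.key.protected

    # Ship the protected key and the certificate along with the rest of the backup
    # (placeholder host and path).
    rsync -av backup.key.protected backup.crt user@backuphost:/backup/keys/

    # Disaster recovery: recover the plaintext key, then decrypt the backup.
    # The rsyncrypto arguments below are assumed to mirror the encrypt calls
    # seen elsewhere in this archive.
    openssl rsa -in backup.key.protected -out backup.key
    rsyncrypto -d -r /backup/enc ./restored ./keys backup.key
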
From: David D. <dd...@gm...> - 2019-11-09 16:31:36

Thanks so much for the thoughtful response. The extra time to re-encrypt everything locally is not a big problem for me, so I will keep your workaround in mind should I find my FUSE-based solution unworkable. I do agree with your assessment of the many backup solutions that perform snapshots and only transfer those snapshots off site.

As for my needs, I started looking at encfs and ended up with gocryptfs. Both are FUSE-based tools, and both offer a "reverse" mode which makes it possible to create an encrypted, file-by-file view of my unencrypted backup directory to feed into rsync for offsite replication. Thus I don't have any issue with snapshots and infinite growth; it is a rather direct drop-in replacement for rsyncrypto.

The only disadvantage I see to gocryptfs is that a change to any file will likely result in the whole file being copied during rsync, because of the encryption algorithm used. However, the files where only a small part changes tend to be small enough that this should be rather minor. Larger files, in my particular use, are almost always static.

Another potential advantage of gocryptfs is that I can create an unencrypted view of the backup to recover individual files: sshfs-mount the offsite backup drive, then run gocryptfs (in non-reverse mode) to create an unencrypted view of the encrypted backup. I would do this for a full recovery, but it could come in handy to access a single file.

I say these things not to poo-poo rsyncrypto; it is a great project and I used it successfully for many years. Rather, I want to suggest that using FUSE to create encrypted/unencrypted views of source data is, I think, superior to taking a file and producing an encrypted/unencrypted copy of it, and in line with the Unix philosophy of single-purpose tools. If I were to put work into a new release of rsyncrypto, I would move in the direction of a FUSE approach like encfs and gocryptfs, but improve on them by using the rsync-friendly encryption techniques that make rsyncrypto shine.

Best,
David Diepenbrock

On Sat, Nov 9, 2019 at 8:00 AM Shachar Shemesh <sh...@sh...> wrote:
> [...]

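A minimal sketch of the gocryptfs reverse-mode workflow David describes above. This is not taken from the thread; the directories, the remote host, and the exact flags are placeholders and should be checked against the gocryptfs and sshfs documentation.

    # One-time: initialise reverse mode over the existing plaintext backup tree
    gocryptfs -init -reverse ~/backup-plain

    # Mount an encrypted, file-by-file view of the plaintext data
    mkdir -p ~/backup-cipher
    gocryptfs -reverse ~/backup-plain ~/backup-cipher

    # Replicate the encrypted view offsite with plain rsync
    rsync -a --delete ~/backup-cipher/ user@offsite:/backup/

    # Single-file restore: mount the remote copy with sshfs, then decrypt it
    # with gocryptfs in normal (non-reverse) mode
    mkdir -p ~/remote-cipher ~/restore-plain
    sshfs user@offsite:/backup ~/remote-cipher
    gocryptfs ~/remote-cipher ~/restore-plain
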
From: <hg...@an...> - 2019-11-09 14:56:39

> From: Shachar Shemesh <sh...@sh...>
> Date: Saturday, November 09, 2019 7:59 AM
>
> [...]
>
> In summary: I have moved on. People are not interested in what rsyncrypto has to offer, and I accept that. I wish I understood that, however, as it seems to me to be a genuinely superior solution (though, admittedly, more clunky) to what people are actually choosing.
>
> Rsyncrypto can be made better. The file name can be stored, encrypted, inside the file to allow recovering the file map. I could think of a system that would integrate rsyncrypto and rsync, so that the files could be encrypted on the fly, saving local storage. I don't think those will change rsyncrypto's adoption in any significant ways, so I don't spend my time on them.
>
> Shachar

I see three issues:

Encrypting your files makes it more likely you'll totally lose them by losing the encryption keys. After a natural disaster forced a (successful) recovery of rsyncrypto-encrypted files, I decided what I was storing was not sensitive enough to justify that danger.

Not having an on-the-fly mode ... it can be viewed as an additional backup for your data, but otherwise it's a pretty big deal if you have a lot to back up. And high-capacity disk drives are now engineered with heads so close to the surfaces that they come with yearly maximum total read/write budgets, which are quite modest compared to the capacities drives are getting up to. Unless you use an "already backing it up, and size and modification date have not changed" heuristic, reading the entirety of the files every day is *bad*.

To where do you back up your data offsite? rsync.net is great, but relatively expensive at rest compared to object-store cloud offerings like AWS S3's lower classes and Glacier, Backblaze B2, Azure's archival offerings, etc. Rclone would appear to be the equivalent program for those, with its own limitations, including greater bandwidth use.

Ah, that brings up a fourth: rsync/rsyncrypto shines for files that have small portions changing, like log files, but today, for many if not most users, that is minuscule compared to media files that don't get changed. Compare to people with only a few computers to back up not finding deduplication compelling, because storage and bandwidth costs and capacities have changed so much.

- Harold

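Harold's "size and date modified have not changed" heuristic is what rsync's default quick check already does, and it appears to be what the --changed flag seen in the rsyncrypto invocations elsewhere in this archive provides; a small illustration with placeholder paths:

    # rsync's default "quick check" skips files whose size and modification time
    # match the copy on the receiver, so unchanged data is not re-read in full
    # on every run.
    rsync -a /backup/enc/ user@offsite:/backup/enc/

    # rsyncrypto's --changed flag (used in the invocations quoted later in this
    # archive) appears to apply the same idea on the encryption side.
    rsyncrypto -r --changed /data /backup/enc /backup/keys backup.crt
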
From: Shachar S. <sh...@sh...> - 2019-11-09 14:00:20

On 08/11/2019 19:00, David Diepenbrock wrote:
> I may have discovered another bug, and I apologize in advance for not digging into it more, and for polluting this thread. When I ran on top of my rather old encrypted copy, after compiling with the delete bug fixed, I noticed that the size was *significantly* larger than it should be, on the order of double or more what I was expecting. Now I had cleaned out the source dir quite a bit, so I suspect that not everything was properly deleted. A quick check at the file counts showed the encrypted directory had nearly 4x the file count of the source dir.

That may be a bug, indeed. I'll try to have a look at it.

With that said, please note that there is a very easy workaround. Just delete all of the encrypted files, leaving only the key files (which are very small), and then re-encrypt your data. You will have just the relevant files, and they will still be rsyncable.

> Unfortunately, this made it a no-go for me to resume using rsyncrypto. For now I've migrated to using a FUSE encrypted filesystem instead (gocryptfs in my case, since I needed something with reverse mode for feeding into rsync). With the FUSE option it means I don't need to preserve an encrypted copy of the data on disk, which rather significantly outweighs any of the drawbacks. As such, I can't see myself moving back to rsyncrypto anytime soon, unless someone can point out something I might be missing?

Here's my understanding of how FUSE-based solutions work: the file system keeps track of what changed, encrypts the delta, and sends it over. This keeps the changes small and gives you, in many cases, bandwidth efficiency similar to rsyncrypto [1].

Here's that system's downside. Personally, I find it so bad as to make the system unworkable, but most people don't seem to care: you can never free space on the backup storage device. Every single intermediate backup is potentially crucial for a correct restore. There are two ways, and only two ways, to keep your system up to date: either resign yourself to your remote backup folder getting bigger and bigger as time moves on, or periodically re-sync the whole data set. For a system designed to keep bandwidth low, I find that unacceptable.

When I say this to people, the common answer is that neither bandwidth nor storage are that expensive these days [2]. I find this answer near-sighted, as it only considers the "backup" side of the equation. There is a very serious "restore" side to consider.

An encrypted backup should, ideally, have just one single point of failure, which is the encryption key (for rsyncrypto, even that's not true: you store the symmetric keys locally in the key files, and you have your RSA master key). FUSE-based solutions have another failure point: every delta produced and encrypted *must* find its way to the backup storage and remain there. If one such delta fails to be stored, reliable restore is impossible from that point onwards. If one update goes missing, the whole backup becomes unreliable.

Compare this to rsyncrypto's failure mode (the worst of which you experienced): if rsyncrypto loses track of a file, it will re-encrypt that file. This wastes storage, but is otherwise harmless. Since rsyncrypto requires very little state, it is also very cheap to recover from this problem (you need to re-encrypt everything locally, but you do not need to re-upload everything).

In summary: I have moved on. People are not interested in what rsyncrypto has to offer, and I accept that. I wish I understood it, however, as rsyncrypto seems to me a genuinely superior solution (though, admittedly, a clunkier one) to what people are actually choosing.

Rsyncrypto can be made better. The file name can be stored, encrypted, inside the file to allow recovering the file map. I could think of a system that would integrate rsyncrypto and rsync, so that the files could be encrypted on the fly, saving local storage. I don't think those would change rsyncrypto's adoption in any significant way, so I don't spend my time on them.

Shachar

1 - There are certain types of changes for which this method doesn't save bandwidth. If you take a large file and *add* one byte at its beginning, with rsyncrypto+rsync you will have to re-sync about 8KB of data, whereas with a FUSE-based solution you'd have to retransmit the entire file. As I said above, most people don't seem to find that painful in this day and age.

2 - There is no bound on how much excessive storage is used, and no simple workaround to regain that lost storage, except by re-uploading the whole data set. Please remember you complained about 4x data usage.

From: David D. <dd...@gm...> - 2019-11-08 17:00:43

I may have discovered another bug, and I apologize in advance for not digging into it more, and for polluting this thread. When I ran on top of my rather old encrypted copy, after compiling with the delete bug fixed, I noticed that the size was *significantly* larger than it should be, on the order of double or more what I was expecting. Now I had cleaned out the source dir quite a bit, so I suspect that not everything was properly deleted. A quick check at the file counts showed the encrypted directory had nearly 4x the file count of the source dir.

Unfortunately, this made it a no-go for me to resume using rsyncrypto. For now I've migrated to using a FUSE encrypted filesystem instead (gocryptfs in my case, since I needed something with reverse mode for feeding into rsync). With the FUSE option it means I don't need to preserve an encrypted copy of the data on disk, which rather significantly outweighs any of the drawbacks. As such, I can't see myself moving back to rsyncrypto anytime soon, unless someone can point out something I might be missing?

Best,
David Diepenbrock

On Thu, Nov 7, 2019 at 4:59 AM Shachar Shemesh <sh...@sh...> wrote:
> [...]

From: Shachar S. <sh...@sh...> - 2019-11-07 11:16:48

On 06/11/2019 16:30, David Diepenbrock wrote:
> Hi Shachar and others,
>
> A few years back I was using rsyncrypto as part of a simple DIY off-site backup solution. I had to stop using it because I lost access to the remote backup system. However, I'm setting it back up again and I ran into the recursion-on-delete bug fixed in commit r612 (which I fixed independently before I realized it had already been discovered and fixed!). The presence of that bug is a no-go for me, and I suspect many others. Can a new release be created with that bug fix in place?

Hi David,

Yes, it seems like it is high time I did.

It will probably take me a couple of weeks to get to do it (I am currently traveling). Please, if you don't hear from me by the end of this month, ping me again on the list.

Thank you for enjoying rsyncrypto,

Shachar

From: David D. <dd...@gm...> - 2019-11-06 14:30:42

Hi Shachar and others,

A few years back I was using rsyncrypto as part of a simple DIY off-site backup solution. I had to stop using it because I lost access to the remote backup system. However, I'm setting it back up again and I ran into the recursion-on-delete bug fixed in commit r612 (which I fixed independently before I realized it had already been discovered and fixed!). The presence of that bug is a no-go for me, and I suspect many others. Can a new release be created with that bug fix in place?

Best,
David Diepenbrock

From: <sou...@co...> - 2018-10-08 05:52:19

On 08/10/18 05:54, Shachar Shemesh wrote:
> I want to clarify the report. You are not complaining about someone erasing the name translation file during encryption. You are complaining about someone erasing one of the *source* files. Is that correct?

Correct, this is about a *source* file that is erased while rsyncrypto is running. The other situation where there is a problem is when the *filelist* contains a file which does not exist (which could have been caused by someone erasing the *source* file, or by the filelist being badly created).

> Erasing the name translation file or one of the encrypted files during encryption is not a bug rsyncrypto is supposed to handle. No one should be touching those during the encryption phase.

Agreed, that scenario is not an rsyncrypto bug; that is a user error/bug.

> Erasing a source file is something that rsyncrypto should, absolutely, be able to handle without failing outright.

Here I also fully agree: rsyncrypto should be able to handle this.

> Thank you,
> Shachar

Thank you for working on and supporting this.

Johan

> On 07/10/18 11:14, sou...@co... wrote:
> > [...]

From: Shachar S. <sh...@sh...> - 2018-10-08 03:54:31

I want to clarify the report. You are not complaining about someone erasing the name translation file during encryption. You are complaining about someone erasing one of the *source* files. Is that correct?

Erasing the name translation file or one of the encrypted files during encryption is not a bug rsyncrypto is supposed to handle. No one should be touching those during the encryption phase. Erasing a source file is something that rsyncrypto should, absolutely, be able to handle without failing outright.

Thank you,
Shachar

On 07/10/18 11:14, sou...@co... wrote:
> [...]

From: <sou...@co...> - 2018-10-07 08:14:27

On 07/10/18 09:40, Shachar Shemesh wrote:
> [...]
>
> Essentially, you are asking that if you delete the file rsyncrypto is creating, that it not get deleted.
>
> Am I missing something? Can you explain the real-world aim you are trying to achieve that causes this condition to trigger?

Hello Shachar,

if during a run there are hundreds of additional files encrypted and synced, and there is one file that went missing causing the lstat error, rsyncrypto treats this lstat error as a fatal error and doesn't update the translation file.

In addition, these hundreds of additional files are now encrypted and can't be matched back to their original filenames. Also, at the next run, rsyncrypto will create an additional copy of these extra files.

What I'm asking for is that rsyncrypto not handle this as a fatal error, and at the end of the run still update the translation file when the lstat error occurred.

I discovered the issue when running rsyncrypto on my home directory while still working on my box, which sometimes causes a file to get deleted.

The problem also happens when using --filelist: if there is an error in that filelist and it names a file that does not exist, no translation file is saved. Here is the output of such a run:

./rsyncrypto -vv --changed --trim=0 --name-encrypt=./repo-encrypt-filename --ne-nesting=3 --filelist=file-list . ./repo-encrypted-dir ./SRCDIR.KEYS ./rckey.crt
Encrypting file: ./repo-src-dir2/orig-file3
Encrypting file: ./repo-src-dir2/orig-file4
Error in encryption of ./repo-src-dir2/orig-file5: stat failed(././repo-src-dir2/orig-file5): No such file or directory
Exit code delayed from previous errors

A fix that I've found is to change the end of main.cpp to the following (and move the declaration of rsa_key to a higher level):

    } catch( const rscerror &err ) {
        std::cerr<<err.error()<<std::endl;
        ret=1;
        if( encrypt && EXISTS(nameenc) ) {
            // Write the (possibly changed) filelist back to the file
            filemap::write_map(FILENAME(nameenc));
            // Encrypt the filelist file itself
            file_encrypt(FILENAME(nameenc), autofd::combine_paths(FILENAME(dst), FILEMAPNAME).c_str(),
                    autofd::combine_paths(FILENAME(key), FILEMAPNAME).c_str(), rsa_key,
                    NULL );
        }
    }

    return ret;
}

Probably some extra logic should be added so that the translation file is only written in case of an lstat error, not in case of other errors.

regards,

Johan

From: Shachar S. <sh...@sh...> - 2018-10-07 07:40:42

On 06/10/18 11:51, sou...@co... wrote:
> Hello Shachar,
>
> hereby a scenario to reproduce many times. I've created a set-up where you can reproduce it numerous times, very quickly.
>
> In the scenario, I have a huge number of files in a directory. While rsyncrypto is running, I suspend it with Ctrl-Z, delete all files in the directory, then continue by bringing rsyncrypto back to the foreground with the 'fg' command.
>
> To be able to reproduce it time after time, I'm keeping a copy of the original big directory in repo-source-dir-copy

Hello Johan,

I understand the scenario you describe. I just don't understand how rsyncrypto is supposed to survive it.

As with many other programs, rsyncrypto expects the files it needs not to shift around *while* it is manipulating them.

Essentially, you are asking that if you delete the file rsyncrypto is creating, that it not get deleted.

Am I missing something? Can you explain the real-world aim you are trying to achieve that causes this condition to trigger?

Thank you,

Shachar

From: <sou...@co...> - 2018-10-06 08:52:01

Hello Shachar,

hereby a scenario to reproduce many times. I've created a set-up where you can reproduce it numerous times, very quickly.

In the scenario, I have a huge number of files in a directory. While rsyncrypto is running, I suspend it with Ctrl-Z, delete all files in the directory, then continue by bringing rsyncrypto back to the foreground with the 'fg' command.

To be able to reproduce it time after time, I'm keeping a copy of the original big directory in repo-source-dir-copy.

Set-up of the environment in a test directory:

mkdir repo-source-dir
mkdir repo-encrypted-dir
mkdir SRCDIR.KEYS
dd bs=4k count=2 if=/dev/urandom of=./repo-source-dir-copy/orig-file
let a=1
cd repo-source-dir-copy
while true; do cp orig-file copy-file-$a; let a=a+1; echo $a; done
# interrupt when sufficient files created, like 600-700
cd ..
dd bs=4k count=2 if=/dev/urandom of=./repo-source-dir/orig-file2
dd bs=4k count=2 if=/dev/urandom of=./repo-source-dir/orig-file3

./rsyncrypto -vvr --changed --trim=0 \
    --name-encrypt=./repo-encrypt-filename --ne-nesting=3 \
    ./repo-source-dir \
    ./repo-encrypted-dir \
    ./SRCDIR.KEYS \
    ./rckey.crt

# now 2 files in repo-encrypt-filename
#
# strings repo-encrypt-filename
# /73B2387C10C76099CDADBC8BE8EB908B ./repo-source-dir/orig-file3
# /B06C02944DC76FCC72B038EEB0FCA02E ./repo-source-dir/orig-file2

=============

Then, to reproduce the problem:

# add many files to repo-source-dir
ln ./repo-source-dir-copy/* ./repo-source-dir

# execute a new backup
./rsyncrypto -vvr --changed --trim=0 \
    --name-encrypt=./repo-encrypt-filename --ne-nesting=3 \
    ./repo-source-dir \
    ./repo-encrypted-dir \
    ./SRCDIR.KEYS \
    ./rckey.crt

# suspend while it is running with Ctrl-Z
#
# Encrypting ./repo-source-dir/copy-file-753
# Encrypting ./repo-source-dir/copy-file-629
# ^Z
# [1]+ Stopped ./rsyncrypto -vvr --changed --trim=0 --name-encrypt=./repo-encrypt-filename --ne-nesting=3 ./repo-source-dir ./repo-encrypted-dir ./SRCDIR.KEYS ./rckey.crt

rm repo-source-dir/*

# bring back to the foreground with 'fg'
#
# lstat failed(./repo-source-dir/copy-file-12): No such file or directory
# lstat failed(./repo-source-dir/copy-file-178): No such file or directory
# lstat failed(./repo-source-dir/copy-file-296): No such file or directory
# lstat failed(./repo-source-dir/copy-file-70): No such file or directory
# lstat failed(./repo-source-dir/copy-file-456): No such file or directory
# lstat failed(./repo-source-dir/copy-file-580): No such file or directory
# lstat failed(./repo-source-dir/copy-file-606): No such file or directory
# Exit code delayed from previous errors

# check the number of filenames in repo-encrypt-filename
#
# strings repo-encrypt-filename
# /73B2387C10C76099CDADBC8BE8EB908B ./repo-source-dir/orig-file3
# /B06C02944DC76FCC72B038EEB0FCA02E ./repo-source-dir/orig-file2

==============================

If you need to test again, start over at the 'ln' command.

regards,

Johan

On 05/10/18 20:48, Shachar Shemesh wrote:
> [...]

From: Shachar S. <sh...@sh...> - 2018-10-05 18:48:17

On 16/09/18 17:36, sou...@co... wrote:
> Dear,
>
> I'm running rsyncrypto with option --name-encrypt=translation_file to encrypt filenames. The file translation_file is in most cases updated at the end of the run.
>
> Currently, the translation_file is not updated at the end of the run. There is an error message: 'lstat failed(/dir/dir/filename): No such file or directory'
>
> The error is caused by the fact that between rsyncrypto reading the directory and then attempting to access the file, the file has been deleted.
>
> So I end up with many encrypted files, but no updated translation_file. How can it be forced that translation_file is written out, even in the case of an lstat error?
>
> regards,
>
> Johan

Hello Johan,

I'm trying to wrap my brain around the sequence of events. Can you give me reproduction instructions for this problem? What steps do I need to take in order to see it on my system?

Thanks,
Shachar

From: Shachar S. <sh...@sh...> - 2018-10-04 03:20:18

On 25/09/18 09:13, sou...@co... wrote:
> Hello Shachar,
>
> as requested, reminder sent to the list

Thank you. Working on it.

Shachar

From: <sou...@co...> - 2018-09-25 06:14:07

Hello Shachar,

as requested, reminder sent to the list.

regards,

Johan

On 17/09/18 21:32, Shachar Shemesh wrote:
> Hello Johan,
>
> On 16/09/18 17:36, sou...@co... wrote:
> > [...]
>
> It sounds like a bug. I will have to look into it.
>
> Sadly, I'm in the middle of an extremely busy time right now, so it will take me a little while. Please, and I do mean it, if you do not hear from me within a week, please please please send me (to the list) a reminder.
>
> Thank you for your report,
> Shachar

From: Shachar S. <sh...@sh...> - 2018-09-17 19:50:44

Hello Johan,

On 16/09/18 17:36, sou...@co... wrote:
> Dear,
>
> I'm running rsyncrypto with option --name-encrypt=translation_file to encrypt filenames. The file translation_file is in most cases updated at the end of the run.
>
> Currently, the translation_file is not updated at the end of the run. There is an error message: 'lstat failed(/dir/dir/filename): No such file or directory'
>
> The error is caused by the fact that between rsyncrypto reading the directory and then attempting to access the file, the file has been deleted.
>
> So I end up with many encrypted files, but no updated translation_file. How can it be forced that translation_file is written out, even in the case of an lstat error?
>
> regards,
>
> Johan

It sounds like a bug. I will have to look into it.

Sadly, I'm in the middle of an extremely busy time right now, so it will take me a little while. Please, and I do mean it, if you do not hear from me within a week, please please please send me (to the list) a reminder.

Thank you for your report,
Shachar

From: <sou...@co...> - 2018-09-16 14:54:46

Dear,

I'm running rsyncrypto with option --name-encrypt=translation_file to encrypt filenames. The file translation_file is in most cases updated at the end of the run.

Currently, the translation_file is not updated at the end of the run. There is an error message: 'lstat failed(/dir/dir/filename): No such file or directory'

The error is caused by the fact that between rsyncrypto reading the directory and then attempting to access the file, the file has been deleted.

So I end up with many encrypted files, but no updated translation_file. How can it be forced that the translation_file is written out, even in the case of an lstat error?

regards,

Johan

From: Shachar S. <sh...@sh...> - 2017-09-06 19:01:54

On 04/09/17 10:53, Julien Métairie wrote:
> Hi Shachar,
>
> Thank you for this fix. I can't test it right now and I use it on a Debian server, where only Debian packages are used. I downgraded to v1.12 (using the Jessie repo) but be sure I will be happy to switch to v1.14 as soon as I can.
> Once again, many thanks for your work.
>
> Regards,
> Julien

I've just finished uploading the new version to the Debian repository. If you have unstable in your sources (which is not a smart thing to do with a server), you should get the new version automatically in about half an hour. If not, you can download the deb file from https://packages.debian.org/unstable/rsyncrypto

Shachar

From: Julien M. <ru...@ru...> - 2017-09-04 07:54:09

On 03/09/2017 at 17:35, Shachar Shemesh wrote:
> [...]
> Can you please apply the following patch and let me know if it solves the problem for you?
> [...]

Hi Shachar,

Thank you for this fix. I can't test it right now, and I use it on a Debian server, where only Debian packages are used. I downgraded to v1.12 (using the Jessie repo), but be sure I will be happy to switch to v1.14 as soon as I can.

Once again, many thanks for your work.

Regards,
Julien

From: Shachar S. <sh...@sh...> - 2017-09-03 18:47:53

Hello everybody,

Version 1.14 has been released, and is available for download from the web site. The main update is a fix to the --filelist and --trim combination, which did not work well in version 1.13.

Share and enjoy.

Shachar

From: Shachar S. <sh...@sh...> - 2017-09-03 15:35:43

On 29/08/17 10:47, Julien Métairie wrote:
> Hi everybody,
>
> I just upgraded from rsyncrypto v1.12 to v1.13, and I am stuck with the --trim argument.
>
> Here is my cmdline:
> rsyncrypto --delete --delete-keys --changed --trim 2 --filelist list.txt . /mnt/backup/ /home/ruliane/keyfiles/ /home/ruliane/backup.crt
>
> Content of list.txt:
> /home/ruliane/nas/Logiciels/Keys.txt
> /home/ruliane/nas/Multimedia/Clips/
>
> I obtain the following:
> /mnt/backup/Clips/file1.avi
> /mnt/backup/nas/Logiciels/Keys.txt
>
> It seems that the --trim option does not give the same result with folders and files. I expected the following:
> /mnt/backup/nas/Multimedia/Clips/file1.avi
> /mnt/backup/nas/Logiciels/Keys.txt
>
> My cmdline worked well with rsyncrypto v1.12. Since I upgraded to v1.13, according to the manpage, I added the dot "." as a first argument. What should I do to get the old behavior?

Hello Julien,

Can you please apply the following patch and let me know if it solves the problem for you?

Index: file.cpp
===================================================================
--- file.cpp    (revision 603)
+++ file.cpp    (working copy)
@@ -151,7 +151,7 @@
             if( VERBOSE(1) )
                 std::cerr<<opname<<" directory: "<<srcname<<std::endl;
 
-            real_dir_encrypt( src.c_str(), trim_offset, dst_dir, key_dir, rsa_key, op, opname, dstnameop,
+            real_dir_encrypt( src.c_str(), 0, dst_dir, key_dir, rsa_key, op, opname, dstnameop,
                     keynameop );
         }
     } catch( const delayed_error & ) {

Thanks,
Shachar

From: Shachar S. <sh...@sh...> - 2017-09-01 19:07:31

On 29/08/17 10:47, Julien Métairie wrote:
> It seems that the --trim option does not give the same result with folders and files. I expected the following:
> /mnt/backup/nas/Multimedia/Clips/file1.avi
> /mnt/backup/nas/Logiciels/Keys.txt
>
> My cmdline worked well with rsyncrypto v1.12. Since I upgraded to v1.13, according to the manpage, I added the dot "." as a first argument. What should I do to get the old behavior?

Nothing. It's a bug in rsyncrypto. It appears that when a file list specifies a directory name, the trim is calculated twice. I'll have a look at it and, hopefully, post a fix.

Thank you for the report.

Shachar

From: <ru...@ru...> - 2017-08-29 08:05:28

Hi everybody,

I just upgraded from rsyncrypto v1.12 to v1.13, and I am stuck with the --trim argument.

Here is my cmdline:

rsyncrypto --delete --delete-keys --changed --trim 2 --filelist list.txt . /mnt/backup/ /home/ruliane/keyfiles/ /home/ruliane/backup.crt

Content of list.txt:

/home/ruliane/nas/Logiciels/Keys.txt
/home/ruliane/nas/Multimedia/Clips/

I obtain the following:

/mnt/backup/Clips/file1.avi
/mnt/backup/nas/Logiciels/Keys.txt

It seems that the --trim option does not give the same result with folders and files. I expected the following:

/mnt/backup/nas/Multimedia/Clips/file1.avi
/mnt/backup/nas/Logiciels/Keys.txt

My cmdline worked well with rsyncrypto v1.12. Since I upgraded to v1.13, according to the manpage, I added the dot "." as the first argument. What should I do to get the old behavior?

Regards,
Julien

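To make the expected behaviour concrete: as the messages in this thread imply, --trim N drops the first N components of each source path before recreating it under the destination. A small illustration using the paths from the report above; the cut pipeline is only a stand-in for the trimming rsyncrypto does internally.

    # --trim 2 is expected to drop the leading /home/ruliane (two components):
    echo /home/ruliane/nas/Multimedia/Clips/file1.avi | cut -d/ -f4-
    # -> nas/Multimedia/Clips/file1.avi, i.e. /mnt/backup/nas/Multimedia/Clips/file1.avi
    # The v1.13 bug discussed above trimmed directory entries from the file list
    # twice, yielding /mnt/backup/Clips/file1.avi instead.
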
From: Colin S. <Col...@io...> - 2016-11-30 15:53:16

This does seem to work fine over sshfs, with the data volume sent being pretty much the change size. Though I can't see any way in rsyncrypto to restore part of the tree?

A number of people also seem to use standard rsync to an encfs mount which is itself mounted over sshfs, for my style of application, in case anyone is interested.

Thanks

Colin

On Wed, 2016-11-23 at 10:02 +0000, Colin Simpson wrote:
> [...]

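A minimal sketch of the sshfs arrangement Colin reports above. It is not taken from the thread: the host, mount point, and key/certificate paths are placeholders, and the rsyncrypto argument order simply follows the invocations quoted elsewhere in this archive.

    # Mount the remote backup area locally over SSH
    mkdir -p /mnt/backup
    sshfs user@backuphost:/srv/backup /mnt/backup

    # Encrypt straight onto the sshfs mount; --changed (as used elsewhere in
    # this archive) skips files that have not been modified since the last run
    rsyncrypto -r --changed /data /mnt/backup/enc /var/lib/rsyncrypto/keys backup.crt

    # Unmount when done
    fusermount -u /mnt/backup
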
From: Colin S. <Col...@io...> - 2016-11-23 10:02:21

I'll try a test and see if I can measure the approximate bandwidth used.

Thanks

Colin

On Mon, 2016-11-21 at 23:42 +0200, Shachar Shemesh wrote:
> On 21/11/2016 19:42, Colin Simpson wrote:
> > So backing up via SSH would be my preferred method. The suggested way from the previous thread was to use rsyncrypto to another drive and sync that via rsync. But I just don't have a spare 8TB lying around.
> >
> > Has it been tried to run this via, say, sshfs? Would that work?
>
> I have no idea how sshfs works, so I don't know. If I were to guess, I'd say that files that have not changed would work (because rsyncrypto doesn't touch them), but files that changed only a little might pay the full bandwidth.
>
> Another complication is that rsyncrypto isn't, strictly speaking, one pass. The file header is written only after the encrypted file is written. This has its reasons, rooted in the use of an external utility for the compression (gzip) and the need not to pass it garbage past the end of the file. They are not good enough reasons, so I had planned a new version of rsyncrypto that would be able to work as a filter. Had that been around, you could filter it into librsync and get what you wanted. Unfortunately, I don't know what decade I'll manage to get around to actually doing it.
>
> Either way, it is worth a try, and let us know what the results are.
>
> Shachar
