rsyncrypto-devel Mailing List for rsync friendly file encryption (Page 2)
Brought to you by: thesun
Message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2005 |     |     | 2   | 2   | 7   | 5   | 12  | 29  | 6   | 5   | 18  | 4   |
| 2006 | 13  | 3   |     | 5   | 6   | 8   |     | 1   | 3   | 2   | 23  | 2   |
| 2007 | 47  | 4   | 4   |     |     | 8   | 2   |     | 6   |     | 24  | 17  |
| 2008 | 4   | 22  | 25  | 19  | 76  | 34  | 18  | 2   |     | 4   |     | 3   |
| 2009 |     | 13  |     | 3   |     |     | 9   | 7   | 2   | 3   |     | 4   |
| 2010 |     | 4   |     | 3   | 3   |     |     |     |     |     |     |     |
| 2011 |     |     |     |     |     | 2   | 2   |     |     |     |     |     |
| 2012 |     |     |     |     |     | 7   |     |     |     |     | 3   | 1   |
| 2014 |     |     | 14  |     |     |     |     |     |     |     |     | 1   |
| 2015 |     | 6   | 2   |     |     | 4   |     |     | 4   | 1   |     |     |
| 2016 |     |     |     |     |     |     |     | 1   |     |     | 5   |     |
| 2017 |     |     |     |     |     |     |     | 1   | 5   |     |     |     |
| 2018 |     |     |     |     |     |     |     |     | 3   | 7   |     |     |
| 2019 |     |     |     |     |     |     |     |     |     |     | 7   |     |
From: Shachar S. <sh...@sh...> - 2016-11-21 21:42:14

On 21/11/2016 19:42, Colin Simpson wrote:
> I'm still here, well looking at this as a very useful project.
>
> I see historically this has been asked before. Basically I'd like to use this to back up my large local server (approx 8TB) to a cloud service with this much storage (that I can't really trust). Which I guess is the intended use of rsyncrypto, with the added benefit of any changes being basically the only thing sent.
>
> So backing up via SSH would be my preferred method. The suggested way from the previous thread was to use rsyncrypto to another directory and sync this via rsync. But I just don't have a spare 8TB lying around.
>
> Has it been tried to run this via, say, sshfs? Would that work?

I have no idea how sshfs works, so I don't know. If I were to guess, I'd say that files that have not changed would work (because rsyncrypto doesn't touch them), but files that changed only a little might pay the full bandwidth.

Another complication is that rsyncrypto isn't, strictly speaking, one pass. The file header is written only after the encrypted file is written. This has its reasons, rooted in the use of an external utility for the compression (gzip) and the need not to pass it garbage past the end of the file. They are not good enough reasons, so I had planned a new version of rsyncrypto that would be able to work as a filter. Had that been around, you could pipe it to librsync and get what you wanted. Unfortunately, I don't know what decade I'll manage to get around to actually doing it.

Either way, it is worth a try, and let us know what the results are.

Shachar
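The "not one pass" point above can be sketched in a few lines of Python. This is purely an illustration: the function `write_encrypted`, the 4-byte header, and the payload are invented for this example and are not rsyncrypto's real format. The point is that a header-written-last layout requires a seekable output, which is exactly what a one-pass streaming filter (or a naive pipe) cannot rely on.

```python
import tempfile

def write_encrypted(out, payload: bytes, header: bytes):
    """Hypothetical header-last writer: body first, then seek back for the header.

    This layout cannot be produced through a one-pass pipe: seek() on a pipe
    fails, so the output must be a real, seekable file.
    """
    out.seek(len(header))   # reserve room for the header
    out.write(payload)      # the encrypted body is known first
    out.seek(0)
    out.write(header)       # the header is only written afterwards

with tempfile.TemporaryFile() as f:
    write_encrypted(f, b"ciphertext...", b"HDR!")
    f.seek(0)
    assert f.read() == b"HDR!ciphertext..."
```

A filter-mode rsyncrypto, as planned above, would have to emit the header first, which is why it requires a format change.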
From: Colin S. <Col...@io...> - 2016-11-21 17:55:31

I'm still here, well looking at this as a very useful project.

I see historically this has been asked before. Basically I'd like to use this to back up my large local server (approx 8TB) to a cloud service with this much storage (that I can't really trust). Which I guess is the intended use of rsyncrypto, with the added benefit of any changes being basically the only thing sent.

So backing up via SSH would be my preferred method. The suggested way from the previous thread was to use rsyncrypto to another directory and sync this via rsync. But I just don't have a spare 8TB lying around.

Has it been tried to run this via, say, sshfs? Would that work?

Thanks

Colin

________________________________
This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original.
From: Shachar S. <sh...@sh...> - 2016-11-14 19:33:49

Hello anyone still sticking around,

I just uploaded version 1.13 of rsyncrypto to the web site. The main difference is that rsyncrypto now supports (and, in fact, requires) OpenSSL version 1.1.0 or later to work. There is also a bunch of stuff developed over the 8 years (oh my god!) since the previous release that never made it into an official release.

There is also some bad news. Because Visual Studio 2015 does not have a nice time importing the Visual Studio 2008 project files, at this point in time rsyncrypto does not compile on Windows. If there is anyone here who is willing to help with this, I would really appreciate it.

That's all for now. I now return you to the blessed silence.

Shachar
From: Vladimir B. <vb...@gm...> - 2015-10-05 08:30:17

Hi all,

JFYI, I've created scripts to use rsyncrypto with rsnapshot and rsync: https://github.com/vbotka9/rcb

The purpose is to test the consistency of the loop:

* rsnapshot
* store some meta-data of the snapshot [1]
* rsyncrypto encrypt snapshot
* rsync encrypted snapshot to remote backup
* rsyncrypto decrypt snapshot
* restore the data from the snapshot and compare it with the original data

[1] empty directories, links, file attributes (owner, group, mode, time), not stored by rsyncrypto

The scripts were tested on FreeBSD 10.2 and Ubuntu 14.04 as described in the NOTES:
https://github.com/vbotka9/rcb/blob/master/NOTES.freebsd
https://github.com/vbotka9/rcb/blob/master/NOTES.ubuntu

I'd appreciate any comments, advice, suggestions etc.

Thank you. Cheers,
-vlado

--
Vladimír Botka
From: Vladimir B. <vb...@gm...> - 2015-09-28 12:59:28

Hi Shachar, Guillaume, all

On Sat, 26 Sep 2015 15:57:16 +0200 Guillaume Friloux <gui...@fr...> wrote:
> Le 2015/09/26 14:49, Shachar Shemesh a écrit :
>> On 26/09/15 15:32, Vladimir Botka wrote:
>>> I run command [1] in FreeBSD and see gzip error [2]. Would it be possible to help me? Thank you.
>>
>> In the past, --rsyncable wasn't part of vanilla gzip. It is possible that FreeBSD doesn't include it. I have no idea if it made it into vanilla gzip or not. Almost all Linux distributions include it.
>
> It has never made it upstream.
> But Vladimir should be able to rebuild gzip by enabling rsyncable:
> https://www.freshports.org/archivers/gzip/

Thank you very much for the comments and advice! Gzip from ports works as expected.

For the record, in FreeBSD 10.0:
* Gzip from the ports compiles by default with --rsyncable (/usr/local/bin/gzip).
* Gzip included in the system doesn't (/usr/bin/gzip).

Cheers,
-vlado

--
Vladimir Botka
From: Guillaume F. <gui...@fr...> - 2015-09-26 14:15:39

Le 2015/09/26 14:49, Shachar Shemesh a écrit :
> On 26/09/15 15:32, Vladimir Botka wrote:
>> Hello,
>>
>> I run command [1] in FreeBSD and see gzip error [2]. Would it be possible to help me? Thank you.
>
> In the past, --rsyncable wasn't part of vanilla gzip. It is possible that FreeBSD doesn't include it. I have no idea if it made it into vanilla gzip or not. Almost all Linux distributions include it.

It has never made it upstream.
But Vladimir should be able to rebuild gzip by enabling rsyncable:
https://www.freshports.org/archivers/gzip/

> Using rsyncrypto without it is rather pointless. There is no point in giving up security in order to get rsync friendly cypher texts if the compression then goes ahead and makes them non-rsyncable. I also wouldn't recommend using rsyncrypto without compression, as it has some security assumptions that revolve around high entropy input. Using uncompressed files with rsyncrypto is below the security threshold I would recommend.
>
> The only solution I can offer is to compile your own version of gzip, with the rsyncable flag.
>
> Shachar
>
> _______________________________________________
> Rsyncrypto-devel mailing list
> Rsy...@li...
> https://lists.sourceforge.net/lists/listinfo/rsyncrypto-devel
From: Shachar S. <sh...@sh...> - 2015-09-26 13:09:05

On 26/09/15 15:32, Vladimir Botka wrote:
> Hello,
>
> I run command [1] in FreeBSD and see gzip error [2]. Would it be possible to help me? Thank you.

In the past, --rsyncable wasn't part of vanilla gzip. It is possible that FreeBSD doesn't include it. I have no idea if it made it into vanilla gzip or not. Almost all Linux distributions include it.

Using rsyncrypto without it is rather pointless. There is no point in giving up security in order to get rsync-friendly cypher texts if the compression then goes ahead and makes them non-rsyncable. I also wouldn't recommend using rsyncrypto without compression, as it has some security assumptions that revolve around high-entropy input. Using uncompressed files with rsyncrypto is below the security threshold I would recommend.

The only solution I can offer is to compile your own version of gzip, with the rsyncable flag.

Shachar
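To find out whether a given system's gzip accepts the flag, one can probe it directly rather than wait for an encryption run to fail. A hedged sketch (the function name `gzip_supports_rsyncable` is invented for this example; it simply checks the exit status of a trial invocation):

```python
import subprocess

def gzip_supports_rsyncable(gzip_path: str = "gzip") -> bool:
    """Return True if `gzip_path --rsyncable` is accepted (exit status 0)."""
    try:
        result = subprocess.run(
            [gzip_path, "--rsyncable", "-c"],
            input=b"probe",
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
    except FileNotFoundError:
        # No gzip binary at that path at all.
        return False
    return result.returncode == 0
```

On a stock BSD system this would typically report False for /usr/bin/gzip and True for a ports-built /usr/local/bin/gzip, matching the observation in the thread.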
From: Vladimir B. <vb...@gm...> - 2015-09-26 12:32:52

Hello,

I run command [1] in FreeBSD and see gzip error [2]. Would it be possible to help me? Thank you.

[1] rsyncrypto -r /backup /backup.enc keys backup.crt

[2] gzip: unrecognized option `--rsyncable'
    FreeBSD gzip 20111009
    usage: gzip [-123456789acdfhklLNnqrtVv] [-S .suffix] [<file>

Cheers,
-vlado

--
Vladimír Botka
From: Shachar S. <sh...@sh...> - 2015-06-17 06:24:10

On 17/06/2015 02:11, Maarten Bodewes wrote:
> Hi rsyncrypto devs,
>
> I've tried reading the source code but I cannot see if there is any signature or MAC added to the ciphertext. Is it possible that this protocol is vulnerable to padding oracle attacks (in addition to changes to the ciphertext / plaintext)? Or am I mistaken about that?

My home internet connection is fried at the moment. It will take me a couple of days to give you a properly researched answer. In a nutshell, I will say this:

* I was not previously aware of the padding oracle attack. Off the top of my head, the attack's premise seems counter to how rsyncrypto is typically used, but I'm open to hearing differing opinions.
* There is no signature protecting the entire file. I'll elaborate when I'm not at work (in a couple of days, I hope).
* If memory serves me right, the padding is not checked. This also violates the premise that POA relies upon. Then again, it might be an opening to a whole host of other problems I'm unaware of.

Feel free to chime in. I always appreciate constructive feedback.

> Is there any clear protocol description that would show how the ciphertext is constructed together?

There is http://rsyncrypto.lingnu.com/index.php/Algorithm. If you find it lacking, please tell me what more you need, and I'll try to add it. Also, please check out the future plans, as they contain some known weaknesses and my plans for how to address them.

Shachar
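For context on what is being asked above: a MAC over the ciphertext is the standard defence against undetected tampering (and it closes padding-oracle-style channels, since forged ciphertexts are rejected before any padding is examined). Below is a generic encrypt-then-MAC sketch; it is not part of rsyncrypto, the thread's point being precisely that no such tag exists in the current format:

```python
import hashlib
import hmac

TAG_LEN = 32  # SHA-256 digest size

def seal(ciphertext: bytes, mac_key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag computed over the ciphertext."""
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_sealed(blob: bytes, mac_key: bytes) -> bytes:
    """Verify the trailing tag; raise ValueError on any modification."""
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("MAC verification failed")
    return ciphertext

key = b"k" * 32
blob = seal(b"ciphertext bytes", key)
assert open_sealed(blob, key) == b"ciphertext bytes"
```

Note the constant-time `hmac.compare_digest`: a naive `==` comparison could itself become a timing oracle.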
From: Maarten B. <maa...@gm...> - 2015-06-16 23:12:02

Hi rsyncrypto devs,

I've tried reading the source code but I cannot see if there is any signature or MAC added to the ciphertext. Is it possible that this protocol is vulnerable to padding oracle attacks (in addition to changes to the ciphertext / plaintext)? Or am I mistaken about that?

Is there any clear protocol description that would show how the ciphertext is constructed?

Regards,
Maarten
From: Shachar S. <sh...@sh...> - 2015-06-16 10:52:15

On 16/06/2015 12:54, compiling entropy wrote:
> After reading through the man pages and the available documentation a few times, my understanding is that rsyncrypto works by generating a symmetric key for each file you're encrypting, and saving that symmetric key to a file. [...]
>
> I think I'm missing something about how rsyncrypto works, because in the model of understanding I just described, you ought to be able to decrypt files using only the keyfile or the private key. [...] What is the public key used for during decryption, or what am I missing?

No, you're not missing anything. This requirement is not, algorithmically, necessary.

When you're decrypting with the symmetric key available, rsyncrypto uses the public key in order to know how much of the file's header to skip. In other words, all it actually needs from your public key is how many bits it is. Since the first part of the file is the symmetric key, encrypted using the public key, the key's length is needed in order to know how much to skip.

Of course, in retrospect, I could have stored that information inside the symmetric key file and made the usage simpler. I'm hoping to, some day, get around to working on rsyncrypto again, and this will definitely go in there.

Unfortunately, rsyncrypto's current file format makes it impossible to encrypt using a stream algorithm. Since this is a kinda important change, the changes planned for rsyncrypto 2.0 are breaking changes. As that's the case, I will not do another breaking change (changing the format of the symmetric key file) for so little gain.

If this restriction is a problem for the deployment type you are planning, you can simply generate some random public key for the decryption machine. So long as it is the same length as the original encryption key, everything should work fine.

Shachar
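The header-skipping logic described above can be illustrated in a few lines. This is an illustration only, not rsyncrypto's actual parser; the assumption that the encrypted symmetric key occupies exactly one RSA-sized block at the start of the file follows from the description in this message:

```python
def skip_encrypted_key(blob: bytes, public_key_bits: int) -> bytes:
    """Skip the leading RSA-encrypted symmetric key.

    The only fact needed from the public key is its length: a key of N bits
    produces an N/8-byte leading block, so a reader that already holds the
    plaintext symmetric key just jumps past it.
    """
    header_len = public_key_bits // 8
    return blob[header_len:]

# A 2048-bit key means a 256-byte header before the actual ciphertext.
blob = bytes(256) + b"actual ciphertext"
assert skip_encrypted_key(blob, 2048) == b"actual ciphertext"
```

This also makes the "generate some random public key of the same length" workaround above plausible: only the length, never the key material, enters the calculation on this path.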
From: compiling e. <com...@gm...> - 2015-06-16 09:54:06

After reading through the man pages and the available documentation a few times, my understanding is that rsyncrypto works by generating a symmetric key for each file you're encrypting, and saving that symmetric key to a file. It then encrypts the file using that symmetric key, encrypts the symmetric key with a public key, and concatenates the encrypted symmetric key to the encrypted file. The purpose of this is so that each file can be encrypted with a different key, but even if you lose the symmetric key file, the data can be decrypted by using the private key (the private key decrypts the symmetric key in the file, and the symmetric key decrypts the file itself).

I think I'm missing something about how rsyncrypto works, because in the model of understanding I just described, you ought to be able to decrypt files using only the keyfile or the private key. While I've seen that you can decrypt the file using just the private key, use of the key file in decryption requires that you also provide the public key. I'm confused as to why this is. It seems as though if you have the symmetric key already, you could just decrypt the data and disregard the embedded copy of the same symmetric key. What is the public key used for during decryption, or what am I missing?
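The hybrid layout described in the question can be modelled with a toy sketch. XOR stands in for the real ciphers purely for illustration (rsyncrypto actually uses a block cipher for the file body and RSA for the key wrap), and none of these function names come from its source:

```python
import os

KEY_LEN = 16

def toy_encrypt(plaintext: bytes, recipient_key: bytes):
    """Per-file symmetric key; body encrypted with it; wrapped key prepended."""
    sym_key = os.urandom(KEY_LEN)
    body = bytes(b ^ sym_key[i % KEY_LEN] for i, b in enumerate(plaintext))
    # XOR with the recipient key stands in for RSA-encrypting the symmetric key.
    wrapped = bytes(a ^ b for a, b in zip(sym_key, recipient_key))
    return wrapped + body, sym_key  # sym_key may also be saved to a key file

def toy_decrypt_with_private(blob: bytes, recipient_key: bytes) -> bytes:
    """Decryption path needing only the blob plus the (private) key."""
    sym_key = bytes(a ^ b for a, b in zip(blob[:KEY_LEN], recipient_key))
    return bytes(b ^ sym_key[i % KEY_LEN] for i, b in enumerate(blob[KEY_LEN:]))

recipient = os.urandom(KEY_LEN)
blob, saved_key = toy_encrypt(b"file contents", recipient)
assert toy_decrypt_with_private(blob, recipient) == b"file contents"
```

In this model the question's puzzle is visible: a holder of `saved_key` could decrypt `blob[KEY_LEN:]` directly, yet still needs to know where the body starts, which is the answer given in the reply above.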
From: Shachar S. <sh...@sh...> - 2015-03-09 15:38:42

On 09/03/15 03:55, Zurd wrote:
> Hi, thanks for the information, I understand now that rsyncrypto is a preparation step. Only problem is that if you want to make a backup of 800 GB, you'll need to copy that 800 GB and encrypt it (with rsyncrypto), then rsync that. Can work fine for small backups, just not for large backups.

You are right. There were plans to have rsyncrypto be a filter (i.e. encrypt stdin to stdout) and thus be a plugin into rsync, avoiding this issue. Right now, I do not have the bandwidth to work on it (nor on quite a few other things I would like to do with rsyncrypto).

Shachar
From: Zurd <zu...@gm...> - 2015-03-09 01:55:50

Hi, thanks for the information, I understand now that rsyncrypto is a preparation step. Only problem is that if you want to make a backup of 800 GB, you'll need to copy that 800 GB and encrypt it (with rsyncrypto), then rsync that. Can work fine for small backups, just not for large backups.

On Tue, Feb 3, 2015 at 2:56 PM, Shachar Shemesh <sh...@sh...> wrote:
> On 30/12/14 07:28, Zurd wrote:
>> The original rsync has an option --backup that works with --backup-dir. When they are used, all the modified and/or deleted files are put in the --backup-dir argument.
>>
>> When using the --delete option at the same time, the destination folder is then always kept as a perfect copy of the source folder. And the --backup-dir folder has all the modified/deleted files in it.
>>
>> An example of that command would be:
>> rsync --archive --delete --backup --backup-dir=/home/user/backup-2014-12-30/ /home/user/backup-source/ /home/user/backup-destination/
>>
>> Does rsyncrypto have an option like that?
>
> rsyncrypto does not have an option like that. Truth be told, rsyncrypto was mostly designed to be a preparation step before performing rsync, and rsync's options cover these needs quite nicely.
>
> Hope this helps,
> Shachar
From: Shachar S. <sh...@sh...> - 2015-02-04 19:05:02

On 04/02/15 11:58, Guillaume Friloux wrote:
> Ok, did the tests without --inplace, it saves the BW, but completely nullifies the benefits of ZFS snapshots.

I will point out that this discussion has moved beyond the realm of rsyncrypto and into rsync turf. However: using rsync, you can compare with files in one directory but write the results to another directory. If you do that, you can create a secondary directory of just the files that changed. Use "cat" to copy them back to the original location, and hopefully that would salvage your ZFS usage. As I said, however, this is more rsync than rsyncrypto related.

As a side note, please don't disable gzip in rsyncrypto. That feature was meant solely for the tests. Certain entropy assumptions behind the cryptanalysis of rsyncrypto do not hold when the entropy of the file is low. In other words, when rsyncrypto does not compress, it is less secure as an encryption.

Shachar
From: Guillaume F. <gui...@fr...> - 2015-02-04 09:59:31

Ok, did the tests without --inplace, it saves the BW, but completely nullifies the benefits of ZFS snapshots. This is disturbing.

Le 2015/02/03 21:54, Guillaume Friloux a écrit :
> Hello, thanks for answering.
>
> I have to use --inplace to limit writes on the ZFS dataset, otherwise each snapshot will use the total file size instead of only the diff. In my env, I don't do local copies, I send over SSH to a BSD host.
>
> I will redo all the tests without --inplace to see if it does better for rsync (but it won't be a real solution for my ZFS vol).
>
> I am using /dev/urandom only because I wanted a simple test case, but the problem occurs with real files, like Outlook PST files, PPT files and so on.
>
> I intentionally did not use gzip because gzip itself also produces some problems here with the files, and I do use --rsyncable, or tell rsyncrypto to use gzip (I encounter the problem with both methods).
>
> You're saying rsyncrypto uses gzip, but I did give --gzip=nullgzip, which is a bash script calling cat, so no compression should be done. Or is rsyncrypto adding compression over what gzip did?
>
> Le 2015/02/03 20:54, Shachar Shemesh a écrit :
>> rsyncrypto compresses as part of the encryption. You obviously did not notice this, as you were using /dev/random as your source, and hence producing uncompressible files. This is also the reason (at least part of it) that the encrypted files were not the same size.
>>
>> On 03/02/15 10:34, Guillaume Friloux wrote:
>>> The only thing I can see is that between file1.iso.enc and file2.iso.enc the filesize dropped a little, and between file2.iso.enc and file3.iso.enc it is higher, but I have no idea if this can be related...
>>
>> You are using rsync with --inplace. In that mode, rsync cannot reuse blocks that were already overwritten in the destination file. When the new file is bigger than the old one, you are overwriting the data you would reuse while transferring, severely limiting rsync's ability to optimize your transfer. If you remove --inplace, you will see that rsync has no problem optimizing your encrypted files, no matter the size changes.
>>
>> You have not asked your gzip question, but I am guessing it is either the same issue there, or you forgot to pass it the --rsyncable flag.
>>
>> Shachar
From: Guillaume F. <gui...@fr...> - 2015-02-03 20:55:56

Hello, thanks for answering.

I have to use --inplace to limit writes on the ZFS dataset, otherwise each snapshot will use the total file size instead of only the diff. In my env, I don't do local copies, I send over SSH to a BSD host.

I will redo all the tests without --inplace to see if it does better for rsync (but it won't be a real solution for my ZFS vol).

I am using /dev/urandom only because I wanted a simple test case, but the problem occurs with real files, like Outlook PST files, PPT files and so on.

I intentionally did not use gzip because gzip itself also produces some problems here with the files, and I do use --rsyncable, or tell rsyncrypto to use gzip (I encounter the problem with both methods).

You're saying rsyncrypto uses gzip, but I did give --gzip=nullgzip, which is a bash script calling cat, so no compression should be done. Or is rsyncrypto adding compression over what gzip did?

Le 2015/02/03 20:54, Shachar Shemesh a écrit :
> rsyncrypto compresses as part of the encryption. You obviously did not notice this, as you were using /dev/random as your source, and hence producing uncompressible files. This is also the reason (at least part of it) that the encrypted files were not the same size.
>
> On 03/02/15 10:34, Guillaume Friloux wrote:
>> The only thing I can see is that between file1.iso.enc and file2.iso.enc the filesize dropped a little, and between file2.iso.enc and file3.iso.enc it is higher, but I have no idea if this can be related...
>
> You are using rsync with --inplace. In that mode, rsync cannot reuse blocks that were already overwritten in the destination file. When the new file is bigger than the old one, you are overwriting the data you would reuse while transferring, severely limiting rsync's ability to optimize your transfer. If you remove --inplace, you will see that rsync has no problem optimizing your encrypted files, no matter the size changes.
>
> You have not asked your gzip question, but I am guessing it is either the same issue there, or you forgot to pass it the --rsyncable flag.
>
> Shachar
From: Shachar S. <sh...@sh...> - 2015-02-03 20:15:09

On 30/12/14 07:28, Zurd wrote:
> The original rsync has an option --backup that works with --backup-dir. When they are used, all the modified and/or deleted files are put in the --backup-dir argument.
>
> When using the --delete option at the same time, the destination folder is then always kept as a perfect copy of the source folder. And the --backup-dir folder has all the modified/deleted files in it.
>
> An example of that command would be:
> rsync --archive --delete --backup --backup-dir=/home/user/backup-2014-12-30/ /home/user/backup-source/ /home/user/backup-destination/
>
> Does rsyncrypto have an option like that?

rsyncrypto does not have an option like that. Truth be told, rsyncrypto was mostly designed to be a preparation step before performing rsync, and rsync's options cover these needs quite nicely.

Hope this helps,
Shachar
From: Shachar S. <sh...@sh...> - 2015-02-03 20:10:08

rsyncrypto compresses as part of the encryption. You obviously did not notice this, as you were using /dev/random as your source, and hence producing uncompressible files. This is also the reason (at least part of it) that the encrypted files were not the same size.

On 03/02/15 10:34, Guillaume Friloux wrote:
> The only thing I can see is that between file1.iso.enc and file2.iso.enc the filesize dropped a little, and between file2.iso.enc and file3.iso.enc it is higher, but I have no idea if this can be related...

You are using rsync with --inplace. In that mode, rsync cannot reuse blocks that were already overwritten in the destination file. When the new file is bigger than the old one, you are overwriting the data you would reuse while transferring, severely limiting rsync's ability to optimize your transfer. If you remove --inplace, you will see that rsync has no problem optimizing your encrypted files, no matter the size changes.

You have not asked your gzip question, but I am guessing it is either the same issue there, or you forgot to pass it the --rsyncable flag.

Shachar
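The --inplace limitation described here can be shown with a toy model. This is a deliberately crude sketch, not rsync's actual rolling-checksum algorithm: the idea is only that, when the target is rewritten in place, a block can be reused locally only if its copy in the old file has not already been overwritten by the time the write cursor needs it.

```python
def count_reusable_inplace(old_blocks, new_blocks):
    """Toy model: blocks reusable when the target file is rewritten in place.

    The write cursor moves left to right; old data before the cursor is gone,
    so only blocks still intact at or after the cursor can be copied locally.
    """
    buf = list(old_blocks)
    reused = 0
    for i, wanted in enumerate(new_blocks):
        if wanted in buf[i:]:   # still intact ahead of the write cursor
            reused += 1         # reuse locally, no transfer needed
        buf[i] = wanted         # writing destroys the old block at position i
    return reused

# Swapping two leading blocks: "A" is destroyed before it can be reused,
# so only 2 of the 3 blocks can be taken from the local file.
assert count_reusable_inplace(list("ABC"), list("BAC")) == 2
```

Without --inplace the old file stays intact for the whole transfer (the output goes to a temporary file), so every matching block remains available, which is why dropping the flag restores the expected speedup.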
From: Guillaume F. <gui...@fr...> - 2015-02-03 08:53:12

Hello everyone!

I am having an issue with the backup of a few files here, taking more space than needed on my ZFS dataset. After some digging, I found the issue is primarily caused by both gzip and rsyncrypto. Here I will only discuss the rsyncrypto part making rsync fail to back up efficiently.

Suppose you make 2 files of 450MB, with only 50MB that changed, in the middle of the file (no deleted or added data, not even moved). To create a test case, here is what I made:

dd if=/dev/urandom of=begin.iso bs=1M count=100
dd if=/dev/urandom of=end.iso bs=1M count=300
dd if=/dev/urandom of=middle1.iso bs=1M count=50
dd if=/dev/urandom of=middle2.iso bs=1M count=50

Let's build our two files:

cat begin.iso middle1.iso end.iso >file1.iso
cat begin.iso middle2.iso end.iso >file2.iso

So we end up with two files of identical size, but a 50MB diff somewhere inside:

-rw-r--r-- 1 kuri users 471859200 2 févr. 14:55 file1.iso
-rw-r--r-- 1 kuri users 471859200 2 févr. 14:55 file2.iso

I now encrypt them with rsyncrypto:

rsyncrypto --gzip=nullgzip file1.iso{,.enc} backup.{keys,crt}
rsyncrypto --gzip=nullgzip file2.iso{,.enc} backup.{keys,crt}

The first noticeable thing I see is that they don't come out the same size once encrypted:

-rw-r--r-- 1 root root 472063316 2 févr. 14:55 file1.iso.enc
-rw-r--r-- 1 root root 472062484 2 févr. 14:55 file2.iso.enc

Now if I copy the original files using rsync, I get interesting I/O work:

[kuri:~/tmp/random] $ rsync --progress -av --inplace --no-whole-file -i file1.iso test/file.iso
sending incremental file list
>f+++++++++ file1.iso
471,859,200 100% 208.71MB/s 0:00:02 (xfr#1, to-chk=0/1)
sent 471,974,500 bytes received 35 bytes 188,789,814.00 bytes/sec
total size is 471,859,200 speedup is 1.00

[kuri:~/tmp/random] $ rsync --progress -av --inplace --no-whole-file -i file2.iso test/file.iso
sending incremental file list
>f..t...... file2.iso
471,859,200 100% 135.90MB/s 0:00:03 (xfr#1, to-chk=0/1)
sent 52,543,948 bytes received 152,118 bytes 8,107,087.08 bytes/sec
total size is 471,859,200 speedup is 8.95

Now I copy the encrypted files:

[kuri:~/tmp/random] $ rsync --progress -av --inplace --no-whole-file -i file1.iso.enc test/file.iso.enc
sending incremental file list
>f+++++++++ file1.iso.enc
472,063,316 100% 180.86MB/s 0:00:02 (xfr#1, to-chk=0/1)
sent 472,178,659 bytes received 35 bytes 134,908,198.29 bytes/sec
total size is 472,063,316 speedup is 1.00

[kuri:~/tmp/random] $ rsync --progress -av --inplace --no-whole-file -i file2.iso.enc test/file.iso.enc
sending incremental file list
>f.st...... file2.iso.enc
472,062,484 100% 111.87MB/s 0:00:04 (xfr#1, to-chk=0/1)
sent 52,608,319 bytes received 152,188 bytes 9,592,819.45 bytes/sec
total size is 472,062,484 speedup is 8.95

So it worked perfectly on this test, but sometimes it fails to do a proper diff, so let's make another test file:

dd if=/dev/urandom of=middle3.iso bs=1M count=50
cat begin.iso middle3.iso end.iso >file3.iso
rsyncrypto --gzip=nullgzip file3.iso{,.enc} backup.{keys,crt}

Let's look at the files:

-rw-r--r-- 1 kuri users 471859200 2 févr. 14:55 file1.iso
-rw-r--r-- 1 kuri users 471859200 2 févr. 14:55 file2.iso
-rw-r--r-- 1 kuri users 471859200 3 févr. 09:07 file3.iso
-rw-r--r-- 1 root root 472063316 2 févr. 14:55 file1.iso.enc
-rw-r--r-- 1 root root 472062484 2 févr. 14:55 file2.iso.enc
-rw-r--r-- 1 root root 472062932 3 févr. 09:07 file3.iso.enc

Let's rsync the third file:

[kuri:~/tmp/random] $ rsync --progress -av --inplace --no-whole-file -i file3.iso.enc test/file.iso.enc
sending incremental file list
>f.st...... file3.iso.enc
472,062,932 100% 53.22MB/s 0:00:08 (xfr#1, to-chk=0/1)
sent 367,307,827 bytes received 152,188 bytes 34,996,191.90 bytes/sec
total size is 472,062,932 speedup is 1.28

So it copied 350MB of a 450MB file that only had 50MB changed. Let's see with the unencrypted files:

[kuri:~/tmp/random] $ rsync --progress -av --inplace --no-whole-file -i file3.iso test/file.iso
sending incremental file list
>f..t...... file3.iso
471,859,200 100% 135.29MB/s 0:00:03 (xfr#1, to-chk=0/1)
sent 52,543,947 bytes received 152,118 bytes 9,581,102.73 bytes/sec
total size is 471,859,200 speedup is 8.95

So it is working properly if the files are not encrypted. Is it possible that, due to having different file sizes, the rsync algorithm fails? Do you have any hints?

The only thing I can see is that between file1.iso.enc and file2.iso.enc the file size dropped a little, and between file2.iso.enc and file3.iso.enc it is higher, but I have no idea if this can be related...

Checking the data of each encrypted file, I can see that the last 300MB are exactly the same:

[kuri:~/tmp/random] $ tail -c 314572800 file1.iso.enc | sha1sum
ee0c8bb19a620f7cdd44705b1293df461af389bc -
[kuri:~/tmp/random] $ tail -c 314572800 file2.iso.enc | sha1sum
ee0c8bb19a620f7cdd44705b1293df461af389bc -
[kuri:~/tmp/random] $ tail -c 314572800 file3.iso.enc | sha1sum
ee0c8bb19a620f7cdd44705b1293df461af389bc -

But the first 100MB are not:

[kuri:~/tmp/random] $ head -c 104857600 file1.iso.enc | sha1sum
d86fa953b25e1a01a53409f567cc845535525dc1 -
[kuri:~/tmp/random] $ head -c 104857600 file2.iso.enc | sha1sum
0c10309cf8fe0bb349b05081c782469e4c2fb0e2 -
[kuri:~/tmp/random] $ head -c 104857600 file3.iso.enc | sha1sum
338ba6c1a58dde8c334092986e5ce20e3b8114df -

Any help would be greatly appreciated. I would like to back up even bigger files (some GBs), where over 90% of the file gets transferred if encrypted with rsyncrypto while only 2-4MB would be transferred otherwise.
From: Zurd <zu...@gm...> - 2014-12-30 05:28:20

The original rsync has an option --backup that works with --backup-dir. When they are used, all the modified and/or deleted files are put in the --backup-dir argument.

When using the --delete option at the same time, the destination folder is then always kept as a perfect copy of the source folder. And the --backup-dir folder has all the modified/deleted files in it.

An example of that command would be:

rsync --archive --delete --backup --backup-dir=/home/user/backup-2014-12-30/ /home/user/backup-source/ /home/user/backup-destination/

Does rsyncrypto have an option like that?

Cheers,
From: Frederico R. A. <dev...@gm...> - 2014-03-29 14:18:59
|
I'm sure. I even put a print together with the changed lines, and it appears normally. Here is the diff between the original source and the patched file in my directory:

[bacon:/local2/users/fabraham/pessoal/backup/rsyncryptodebug/rsyncrypto-1.12] diff autommap.h ../rsyncrypto-1.12.patched/autommap.h
33c33
<     autommap() : ptr(reinterpret_cast<void *>(-1)), size(0)
---
>     autommap() : ptr(reinterpret_cast<void *>(-1l)), size(0)
39c39
<         if( ptr==reinterpret_cast<void *>(-1) ) {
---
>         if( ptr==reinterpret_cast<void *>(-1l) ) {
47c47
<     autommap(file_t fd, int prot) : ptr(reinterpret_cast<void *>(-1)), size(0)
---
>     autommap(file_t fd, int prot) : ptr(reinterpret_cast<void *>(-1l)), size(0)
79c79
<         that.ptr=reinterpret_cast<void *>(-1);
---
>         that.ptr=reinterpret_cast<void *>(-1l);
86c86
<         if( ptr!=reinterpret_cast<void *>(-1) ) {
---
>         if( ptr!=reinterpret_cast<void *>(-1l) ) {
89c89
<         ptr=reinterpret_cast<void *>(-1);
---
>         ptr=reinterpret_cast<void *>(-1l);

On Fri, Mar 28, 2014 at 9:46 AM, Shachar Shemesh <sh...@sh...> wrote:

> On 22/03/14 14:12, Frederico Rodrigues Abraham wrote:
> > I patched the source, didn't seem to make a difference:
>
> I'm not sure how to continue from here. The source says there is no way
> for that value to reach that point in the code. I'll try to come up with
> a version with debug logs and see if that helps.
>
> In the meantime, are you sure you ran the version compiled with the
> patch?
>
> Thanks,
> Shachar
>
> ------------------------------------------------------------------------------
> _______________________________________________
> Rsyncrypto-devel mailing list
> Rsy...@li...
> https://lists.sourceforge.net/lists/listinfo/rsyncrypto-devel

--
Fred
|
From: Shachar S. <sh...@sh...> - 2014-03-28 12:47:04
|
On 22/03/14 14:12, Frederico Rodrigues Abraham wrote:
> I patched the source, didn't seem to make a difference:

I'm not sure how to continue from here. The source says there is no way for that value to reach that point in the code. I'll try to come up with a version with debug logs and see if that helps.

In the meantime, are you sure you ran the version compiled with the patch?

Thanks,
Shachar
|
From: Frederico R. A. <dev...@gm...> - 2014-03-22 12:13:09
|
I patched the source, didn't seem to make a difference:

Program received signal SIGSEGV, Segmentation fault.
key::read_key (buffer=buffer@entry=0xffffffffffffffff <Address 0xffffffffffffffff out of bounds>) at crypt_key.cpp:44
44          if( buff->version!=htonl(VERSION_MAGIC_1) )
(gdb) bt
#0  key::read_key (buffer=buffer@entry=0xffffffffffffffff <Address 0xffffffffffffffff out of bounds>) at crypt_key.cpp:44
#1  0x00000000004047f0 in read_header (headfd=...) at crypto.cpp:102
#2  0x0000000000408a3a in file_decrypt (src_file=0x9357158 "filesencrypted/files/40AEBACBA5170D57464965CA52861A2F",
    dst_file=0x9357238 "../ugah/files/work/tecgraf/lib/visnew/include/old/sg/strat/render/slrender.h",
    key_file=0x93572d8 "filesencrypted/keys/files/40AEBACBA5170D57464965CA52861A2F",
    rsa_key=rsa_key@entry=0x6296a0, stat=stat@entry=0x7fffffffb660) at file.cpp:445
#3  0x0000000000409b80 in recurse_dir_enc (src_dir=src_dir@entry=0x7fffffffc588 "filesencrypted/files",
    dst_dir=dst_dir@entry=0x7fffffffc59d "..", key_dir=key_dir@entry=0x7fffffffc5a0 "filesencrypted/keys",
    rsa_key=rsa_key@entry=0x6296a0, op=op@entry=0x408990 <file_decrypt(char const*, char const*, char const*, rsa_st*, stat const*)>,
    src_offset=src_offset@entry=15, op_handle_dir=op_handle_dir@entry=false, opname=opname@entry=0x410569 "Decrypting",
    dstname=dstname@entry=0x40c4f0 <filemap::namecat_decrypt(char const*, char const*, unsigned int)>,
    keyname=keyname@entry=0x409520 <name_concat(char const*, char const*, unsigned int)>) at file.cpp:207
#4  0x000000000040a315 in dir_encrypt (src_dir=0x7fffffffc588 "filesencrypted/files", dst_dir=0x7fffffffc59d "..",
    key_dir=0x7fffffffc5a0 "filesencrypted/keys", rsa_key=rsa_key@entry=0x6296a0,
    op=op@entry=0x408990 <file_decrypt(char const*, char const*, char const*, rsa_st*, stat const*)>,
    opname=opname@entry=0x410569 "Decrypting",
    dstname=dstname@entry=0x40c4f0 <filemap::namecat_decrypt(char const*, char const*, unsigned int)>,
    keyname=keyname@entry=0x409520 <name_concat(char const*, char const*, unsigned int)>) at file.cpp:323
#5  0x00000000004030f9 in main (argc=<optimized out>, argv=<optimized out>) at main.cpp:170
(gdb)

-- Fred

On 18-03-2014 12:07, Shachar Shemesh wrote:
> On 18/03/14 13:55, Frederico Rodrigues Abraham wrote:
> > Here is the stack trace:
>
> While we're at it, and in addition to the strace output, please try
> applying the attached patch. It should solve the actual crash, but I
> suspect rsyncrypto will still fail (just more gracefully).
>
> Just cd to the source directory, run "patch -p0 < /tmp/crash.patch", and
> then run "make" again. Let me know whether it solves the crash.
>
> Shachar
|
From: Shachar S. <sh...@sh...> - 2014-03-21 19:48:35
|
On 21/03/14 13:43, Frederico Rodrigues Abraham wrote:
> It's still crashing after the patch. I'll run with gdb tonight to give
> you the stack trace.

Please send all non-private correspondence to the list. Rsyncrypto is an open source project. Whenever you reply to me in private, you deny others who might be having the same problem the benefit of the archive and Google, and it falls squarely on me to personally handle each and every such problem. I find that unfair to me.

As such, please respect my request that, barring a really good reason (such as sending a huge trace file with sensitive information), all support for rsyncrypto go through the mailing list.

Thank you,
Shachar
|