Re: Some questions about rsyncrypto
From: Michal 'h. S. <hra...@ce...> - 2005-07-26 14:49:54
On Thu, Jul 21, 2005 at 12:35:41PM +0300, Shachar Shemesh wrote:
> Michal 'hramrach' Suchanek wrote:
>
> > Hello
> >
> > First I wonder if rsyncrypto really solves the problem of efficiently
> > syncing a file into which a single byte was inserted in the middle.
> >
> If we said we do, and you don't believe us, then why would asking again
> change anything? :-)
>
> > I myself would not hope any software would solve that for encrypted
> > files. But I do not understand cryptography, and a person who studied
> > it has already assured me that it is likely possible to solve the
> > problem.
> >
> Rsyncrypto will eventually reset the encryption stream back to what it
> looked like before, so only the area around the inserted byte has to be
> resynced. Don't take my word for it, however. Feel free to test it out.
> We now offer a 50% discount over our usual free-of-charge price tag for
> users of rsyncrypto who only want to test it out.

This, plus the explanation below, makes me believe it could work. Testing
with a small 26M file shows it works, which is great :)

> > Second, I do not understand why a new key is generated for each file.
>
> Common industry practice.
>
> > Is the encryption so severely weakened that one cannot afford to
> > encrypt larger amounts of data?
>
> No, but if two files started the same, this would show up if they were
> encrypted using the same key and IV. Since generating a symmetric key is
> very easy, there is really no reason not to use different keys for
> different files.
>
> > What about files that are already gigabytes or tens of gigabytes long?
>
> If standard CBC were used, then the IV would not repeat itself (except
> by chance), which means that the attacker cannot deduce anything from
> ciphertext repetitions.
>
> In rsyncrypto things are a little less simple. Long enough (about 16KB
> after compression, IIRC) repetitions inside the same file will result in
> repetitions in the ciphertext.
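Shachar's point about per-file keys and IVs can be illustrated with a toy
CBC construction. This is only a sketch: the "block cipher" below is a keyed
hash truncated to 16 bytes, not AES, and all names and constants are
illustrative, not anything rsyncrypto actually uses.

```python
# Toy CBC sketch (NOT real crypto): the block cipher is replaced by a
# keyed SHA-256 truncated to the block size, purely to illustrate why a
# fresh key/IV per file hides identical file prefixes.
import hashlib
import os

BLOCK = 16  # block size in bytes

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: keyed hash, truncated.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Zero-pad to a whole number of blocks (toy padding, not PKCS#7).
    if len(plaintext) % BLOCK:
        plaintext += b"\x00" * (BLOCK - len(plaintext) % BLOCK)
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        # CBC chaining: XOR each plaintext block with the previous
        # ciphertext block (or the IV for the first block).
        chunk = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_encrypt_block(key, chunk)
        out.append(prev)
    return b"".join(out)

key = os.urandom(16)
msg = b"identical start of two files" + b"A" * 100

# Same key and IV: two files with the same content encrypt identically,
# so a shared prefix would be visible to an attacker.
iv = os.urandom(16)
assert cbc_encrypt(key, iv, msg) == cbc_encrypt(key, iv, msg)

# A fresh IV (or key) per file makes the common prefix invisible.
assert cbc_encrypt(key, os.urandom(16), msg) != cbc_encrypt(key, iv, msg)
```

The same reasoning explains why generating a cheap fresh symmetric key per
file is the easy fix: it costs almost nothing and removes the cross-file
leak entirely.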
> There is really no way to implement what rsyncrypto is trying to do
> without having this effect. If this is a major problem for you, I can
> only suggest that you not use rsyncrypto. If the principle is OK, but
> you need larger repetitions before ciphertext patterns appear, use
> higher "--roll-min" values. Bear in mind that this will reduce the
> efficiency of rsync later. Then again, if you are using files gigabytes
> long, rsync will likely choose rather large blocks to work with, which
> means that it will not matter much.

The fact that some parts of the ciphertext are the same (and reflect
similarities in the plaintext) does not sound too dangerous to me. But
revealing the names of the files and their lengths could be a problem.

So I would rather have a solution where the files are split into blocks
and stuffed into some block pool on the remote side. That probably means
I could not use standard rsync: I need to be able to fetch some blocks
unconditionally (those which describe the pool layout) and rsync the
others (and they would have to be concatenated before syncing to get
anything efficient).

But splitting the rsyncrypto output would probably get a bit tricky.
When syncing a large file I would not like to store it encrypted
somewhere while rsync works on it, and getting some random part of the
output without calculating the cipher from the beginning is probably not
easy, especially since the file is compressed.

Also, I was thinking I could use something like the ext2 filesystem to
store the blocks, but there is a problem with the inodes. If I simply
encrypted them, it would enlarge them and the whole thing would not fit
together. On the other hand, I can just append some garbage to the data
blocks so that everything is the same size.

Thanks

Michal Suchanek
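The block-pool splitting described above can be sketched with
content-defined chunking: cut the file wherever a rolling checksum over a
small window hits a trigger value, so block boundaries depend only on
local content and an inserted byte disturbs only the nearby blocks. This
is an assumption-laden toy (a windowed byte-sum trigger, not rsyncrypto's
actual rolling function; all constants are illustrative).

```python
# Content-defined chunking sketch for the block-pool idea.
# Assumption: a simple windowed byte-sum trigger stands in for a real
# rolling checksum; WINDOW/DIVISOR/MIN_CHUNK are illustrative values.
import random

WINDOW = 32      # rolling window size in bytes
DIVISOR = 64     # cut when the windowed sum is divisible by DIVISOR
MIN_CHUNK = 64   # never cut sooner than this after the previous cut

def chunk(data: bytes):
    """Split data at boundaries determined by the local content."""
    chunks, start, total = [], 0, 0
    for i, b in enumerate(data):
        total += b
        if i - start >= WINDOW:
            # Slide the window: drop the byte that just left it.
            total -= data[i - WINDOW]
        if i - start + 1 >= MIN_CHUNK and total % DIVISOR == 0:
            chunks.append(data[start:i + 1])
            start, total = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks

random.seed(1)
data = bytes(random.randrange(256) for _ in range(20000))
edited = data[:10000] + b"X" + data[10000:]  # insert one byte mid-file

# Chunking is lossless, and chunks before the insertion point are
# bit-identical, so only the blocks near the edit would need to be
# re-uploaded to the pool.
assert b"".join(chunk(data)) == data
assert chunk(data)[0] == chunk(edited)[0]
```

Because boundaries are content-defined rather than offset-defined, the
chunk stream re-aligns shortly after the insertion point, which is the
same resynchronisation property rsyncrypto relies on inside a single
encrypted file.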